The Global Grid: Where Trust is Not Fool Proof
In today’s modern society, it is hard to imagine a time when computers were not linked to one another, sending massive quantities of information and data back and forth around the globe. Memories of a time before computer grids, however, have all but faded away. It is somewhat frightening to realize that grid failure is a real possibility, given how many components must work correctly to keep the grid running. We rely on computers for nearly every piece of information we receive on a daily basis, but what would happen if we suddenly could not receive our precious texts, emails, news, or social media alerts? What happens to the world when the electricity goes out and the land is put into darkness? I believe that these organized technological grids are an important part of our society; however, I also believe that in order to create a failsafe against a total shutdown, we must moderate our daily usage.

Grid computing began in the early 1980s, according to Fran Berman. Many researchers dedicated that decade to creating software that could manage and monitor connections between multiple computer processors, and several virtual machine systems were built to stabilize communication between those processors. “Parallel Virtual Machine (PVM), Message Passing Interface (MPI), High Performance Fortran (HPF), and OpenMP were developed to support communication for scalable applications.” These tools were created to let computers build and sustain nearly limitless complexity and continuity in grid architecture. Years later, the same machines and principles are still being used for our modern-day grids and grid projects.

According to Jonathan Strickland, there are several types of grid computing. One of the earliest examples was a project known as SETI, whose primary objective was to scan the sky using multiple radio telescopes. A problem arose when the telescopes gathered too much data for any one computer to process, so the project’s creators built a program called SETI@home. SETI@home’s main objective was to establish a network of interlinked computers, effectively creating a virtual supercomputer. Today, SETI@home has over 1,379,657 current user computers to rely on to process and store information (Infronet). This is, of course, just one of many examples of where grid computing can be very useful, especially when collecting data at one central point. Currently, the world’s largest computer grid project serves the Large Hadron Collider; it stores the data from machines that smash particles together and manages more than 15 million gigabytes of data every year across 140 computer centers in 33 countries (Phys).

According to Strickland, the process of creating a computer grid is usually the same for every grid computing project. A user who would like to contribute a computer to the project (usually a computer owned by the project manager) downloads an application from the project leader’s website. After installation, the central control computer sends data back and forth to that computer, either over data transfer cables or through the internet, and the computer processes the data with the spare CPU power it is not otherwise using. Once a computer has completed a task and relayed the results back to the central control computer, it usually receives another task, and so on.
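To make that fetch-process-return cycle concrete, here is a minimal sketch of a volunteer client written in Python. The server address, endpoint names, and the shape of a “work unit” are assumptions invented for illustration; they are not the actual protocol of SETI@home or any real grid project.

    # A minimal sketch of a volunteer-computing client loop, in the spirit of
    # projects like SETI@home.  The server address, endpoint names, and the
    # format of a work unit are hypothetical, used only for illustration.
    import json
    import time
    import urllib.request

    SERVER = "https://example-grid-project.org"   # hypothetical project server

    def fetch_work_unit():
        """Ask the central control computer for the next chunk of data."""
        with urllib.request.urlopen(f"{SERVER}/next-work-unit") as resp:
            return json.loads(resp.read())

    def process(work_unit):
        """Do the computation with spare CPU cycles.
        Summing the numbers in the work unit stands in for the real science task."""
        return sum(work_unit["samples"])

    def report_result(unit_id, result):
        """Send the finished result back so the server can hand out a new task."""
        payload = json.dumps({"id": unit_id, "result": result}).encode()
        req = urllib.request.Request(f"{SERVER}/submit-result", data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        while True:
            unit = fetch_work_unit()           # 1. get a task from the project
            result = process(unit)             # 2. crunch it locally
            report_result(unit["id"], result)  # 3. return the answer
            time.sleep(1)                      # 4. pause briefly, then repeat

In a real project, the processing step would be the actual scientific computation, and the client would also throttle itself so that it only consumes CPU time the owner is not using.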
The applications of a practical grid help many businesses and corporations operate at a pace that only computers have made possible.
According to Berman, when the grid was first conceived, most theories about how it would be used held that its most popular application would be removing the need for computers to be in the same physical location. Not only did the grid accomplish this, it also allowed computers to free up their available capacity and power by sharing the processing cycles they were not otherwise using.