
Solving the 1D Bin Packing Problem Using a Parallel Genetic Algorithm: A Benchmark Test

The past few decades have witnessed the introduction of a wide range of technological innovations that have had an enormous impact on consumers, businesses, and governmental agencies. Computer-based applications in particular have been key in facilitating the delivery of a wide range of services and information, and computer processing speeds have increased steadily. Processing speeds, though, have a natural limit: electrical signals cannot travel faster than the speed of light. Even the best processing speeds attainable in the future will therefore remain constrained in this regard, but there are alternative approaches to computer processing that can further increase the functionality of computers, including parallel computing and genetic algorithms, which are discussed further below.

Parallel Computing

In computing, the term "parallelism" describes a system's architecture, in other words, "The organization and interconnection of components of computer systems" (Faulkner, Senker & Velho, 1999, p. 135). Although processing speeds have continued to double roughly every 18 months in line with Moore's Law, computers that use sequential architectures remain constrained in several ways in their ability to perform calculations. Irrespective of how fast a computer with a sequential architecture runs, each instruction or processing operation must be completed before the next can begin (Faulkner et al., 1999). Increases in processing speed can make sequential processing more efficient, but even here there is a fundamental constraint. As Faulkner and his associates emphasize, "This approach has its barriers, not least the speed of light since electrical signals cannot travel faster than light. Another way to increase the performance of computers is to find an alternative architecture" (Faulkner et al., 1999, p. 135). In response to the need for faster and more efficient computer processing, a number of machines have been introduced over the past half century that employ numerous processors operating in parallel to accomplish a single task (Faulkner et al., 1999). According to Faulkner and his associates, "Most of this development has involved the gradual introduction of parallelism into supercomputers. There are six main forms: concurrent input-output operations, pipelining, memory interleaving and hierarchy, parallel functional units, vector processing, and multiple central processors. However, none of these approaches completely abandons the sequential architecture" (1999, p. 136).
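To make the contrast concrete, the following is a minimal sketch, written in Python as my own illustration rather than anything drawn from Faulkner et al.: the same summation job is performed sequentially and then divided among several worker processes. The function names and the four-worker split are assumptions for illustration only.

    # A minimal, hypothetical sketch: the same job run sequentially and
    # then divided into discrete portions that workers process in parallel.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker completes one discrete portion of the job.
        return sum(chunk)

    def sequential_total(data):
        # One processor: each operation completes before the next begins.
        return sum(data)

    def parallel_total(data, workers=4):
        # Split the data into roughly equal chunks, process them
        # simultaneously, then combine the partial results.
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with Pool(workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        data = list(range(1_000_000))
        assert sequential_total(data) == parallel_total(data)

Note that the parallel version still pays the coordination costs discussed later in this section: the data must be split, shipped to workers, and the partial results combined.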

Parallel systems comprise a number of processors, each of which assumes responsibility for completing a discrete portion of a processing job simultaneously with the others. Although attempts to produce truly parallel processing began during the late 1960s, it was not until the 1980s that the approach became commercially viable, with the introduction of so-called "field effect chips" (Faulkner et al., 1999). According to MacKenzie, "In field effect chips, current flows only in the surface plane of the microchip; in a bipolar chip, the current flows perpendicular to the chip as well as along it" (1998, p. 106). Because it could be mass-produced with relative ease, highly integrated field effect technology opened a number of opportunities for configuring large numbers of field effect chips in parallel architectures (MacKenzie, 1998). As Faulkner and his colleagues point out, "Although slower than the bipolar chips conventionally used in supercomputers, large numbers of [field effect chips] could be arrayed or configured in highly parallel architectures to achieve competitive processing speeds" (1999, p. 137). Beginning in the 1980s, parallel architectures became increasingly competitive with so-called supercomputers. In this regard, MacKenzie notes that, "Until the very end of the 1980s, these did not claim to rival mainstream supercomputing in absolute floating-point performance, promising instead a superior price-performance ratio. However, by the start of the 1990s, with the most advanced field effect chips (such as the million-transistor Intel i860) being claimed to offer on a single chip a floating-point processing performance approaching that of a 1976 Cray 1, rivalry in absolute performance was growing" (1998, p. 107).

More recently, parallel computers have in fact begun to compare favorably with the performance levels achieved by supercomputers as their degree of parallelism has been scaled up; from the start, however, their primary advantage has been a superior price-performance ratio, achieved by avoiding the costs associated with attaining the fastest processing speeds in a single processor (Faulkner et al., 1999). Parallel computers differ from sequential processing methods in three basic ways related to their design, as described in Table 1 below.

Table 1

Basic differences between parallel and sequential processing

Difference: The type and number of processors.
Description: The system may be either 'coarse-grain' -- composed of a small number of sophisticated processors -- or 'fine-grain' -- employing a large number of simple processors.

Difference: The kind of instruction given to the different processors.
Description: Parallel-data computers, also known as single instruction stream/multiple data stream (SIMD), have all the processors simultaneously executing the same operation on different parts of the data. Parallel-process computers, or multiple instruction stream/multiple data stream (MIMD), divide large programs into many smaller ones that separate processors execute simultaneously. (A short sketch of both styles follows the table.)

Difference: The way the processors are connected to one another and to the memory units.
Description: In 'shared memory' machines, communication between the processors is via a common memory accessed through a network of connections or a memory bus; this approach is hampered by the potential for 'traffic jams.' 'Distributed memory' machines attach some amount of memory to each processor, forming nodes or processor-memory pairs, and communication is via a message-passing system. The problem here is how to connect the nodes so that communication between them is fast without any node having to support too many connections. Various geometries have been devised, including the simple rectangular grid used by the DAP (distributed array processor) machine, developed in the UK, and specialized geometries such as the Butterfly computer and the n-cube or hypercube, both developed in the U.S. In the latter, the nodes are placed at the corners of a multidimensional cube whose edges correspond to the links between processors: in the sixteen-processor architecture, for instance, each processor is linked to four others. A more advanced approach is found in the British-designed transputer, in which the connections between processors are 'virtual' rather than hardwired. The transputer is a microprocessor that 'can be used both as a building block for parallel processing machines and as a single very high performance microprocessor for computers and embedded applications.' In principle, different geometries can be designed -- or (in the case of computers using transputers) reconfigured -- so as to 'match' the application problem to be solved.

Source: Faulkner et al., 1999, p. 138
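The parallel-data versus parallel-process distinction in Table 1 can be illustrated with a small sketch. This is my own hypothetical Python example, not code from the source: the function double stands in for a single shared instruction stream applied to many data elements (the SIMD style), while task_a and task_b stand in for independent programs running simultaneously (the MIMD style).

    # Hypothetical illustration of the two instruction styles in Table 1.
    from multiprocessing import Pool

    def double(x):
        # Parallel-data (SIMD): every processor executes this same
        # operation on a different part of the data.
        return 2 * x

    def task_a(data):
        return max(data)   # one independent sub-program

    def task_b(data):
        return min(data)   # another independent sub-program

    if __name__ == "__main__":
        data = list(range(16))
        with Pool(4) as pool:
            # Single instruction stream, multiple data streams.
            doubled = pool.map(double, data)
            # Multiple instruction streams: different programs run
            # simultaneously on separate processors.
            pending = [pool.apply_async(f, (data,)) for f in (task_a, task_b)]
            print(doubled[:4], [r.get() for r in pending])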

It is important to note, though, that parallel computing is not a straightforward substitute for sequential processing. In this regard, Benkler emphasizes that, "One must break up the problem into well-designed pieces that can run simultaneously, rather than having one processor wait for the other to finish, and there is overhead in managing the information input and output flows to and from the processors, both to each other and to the system memory" (2004, p. 273). Analyzing the price-performance ratio of parallel versus sequential processing therefore requires weighing the benefits of integrating numerous separate processors against the costs of building those capacities into a single machine (Benkler, 2004). Because parallel processing offers clear advantages over the "all-in-one" approach, its popularity has continued to increase in recent years. In this regard, Benkler concludes that, "By the end of the twentieth century, most supercomputers were made of thousands of simple processors lashed together and parallelization became an important field of computer science" (Benkler, 2004, p. 274). These trends in alternative processing approaches have been augmented by other innovations, including evolutionary computing, which is discussed further below.
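Benkler's point about overhead can be made quantitative with Amdahl's law, a standard result that the source does not cite by name: if a fraction p of a job can be parallelized across n processors, the best achievable speedup is 1 / ((1 - p) + p/n). A small worked example:

    # Amdahl's law (a standard result, not named in the source): the
    # serial fraction of a job bounds the speedup parallelism can deliver.
    def amdahl_speedup(p, n):
        # p: fraction of the work that can run in parallel (0..1)
        # n: number of processors
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 1,000 processors, a job that is 90% parallelizable
    # speeds up by less than a factor of 10.
    print(round(amdahl_speedup(0.90, 1000), 2))  # 9.91

The serial portions of a job -- and the coordination overhead Benkler describes -- therefore set a ceiling on what adding processors can accomplish.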

Evolutionary Computing

As the term suggests, evolutionary computing is founded on the same principles that Charles Darwin proposed in On the Origin of Species. For instance, Pennock notes that, "Computer scientists and engineers, inspired by the workings of evolution in nature, realized that they could apply the same powerful Darwinian mechanism in computers for practical purposes, such as for complex industrial design" (2007, p. 212). In recent years, evolutionary computing techniques have been used to develop algorithms for online decision-making applications (Karr, 2007).
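Because this study's subject is a genetic algorithm for the 1D bin packing problem, a minimal serial sketch may help fix the Darwinian vocabulary of populations, selection, crossover, and mutation. Everything below is an illustrative assumption rather than the author's implementation: chromosomes are permutations of item indices, each permutation is decoded with a first-fit heuristic, and fitness is the number of bins used (fewer is better).

    # A hypothetical, minimal genetic algorithm for 1D bin packing.
    import random

    def first_fit_bins(order, sizes, capacity):
        # Decode a permutation of item indices into bins using first-fit.
        bins = []
        for i in order:
            for b in bins:
                if sum(b) + sizes[i] <= capacity:
                    b.append(sizes[i])
                    break
            else:
                bins.append([sizes[i]])
        return len(bins)

    def evolve(sizes, capacity, pop_size=30, generations=100):
        n = len(sizes)
        population = [random.sample(range(n), n) for _ in range(pop_size)]

        def fitness(ind):
            return first_fit_bins(ind, sizes, capacity)

        for _ in range(generations):
            # Tournament selection: keep the better of two random individuals.
            parents = [min(random.sample(population, 2), key=fitness)
                       for _ in range(pop_size)]
            children = []
            for mom, dad in zip(parents[::2], parents[1::2]):
                for first, second in ((mom, dad), (dad, mom)):
                    # Order-based crossover: copy a slice from one parent,
                    # fill in the rest in the other parent's order.
                    a, b = sorted(random.sample(range(n), 2))
                    head = first[a:b]
                    child = head + [g for g in second if g not in head]
                    # Swap mutation with small probability.
                    if random.random() < 0.2:
                        i, j = random.sample(range(n), 2)
                        child[i], child[j] = child[j], child[i]
                    children.append(child)
            population = children[:pop_size]

        best = min(population, key=fitness)
        return best, fitness(best)

    if __name__ == "__main__":
        random.seed(1)
        sizes = [random.randint(1, 10) for _ in range(20)]
        order, bins_used = evolve(sizes, capacity=15)
        print(bins_used, "bins for items", sizes)

In parallel genetic algorithms of the kind this study examines, one common arrangement (the island model) splits the population into sub-populations that evolve on separate processors, with occasional migration of strong individuals between them; the fitness evaluations in the loop above are also naturally parallelizable.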

In fact, evolutionary computing has already been used to good effect by the National Aeronautics and Space Administration (NASA), which has developed evolutionary algorithms that facilitate design and development. For example, a research team at NASA's Ames Research Center used evolutionary computing techniques to automatically create the design of an antenna for use on a satellite in Earth orbit. According to Plotkin, "The space antenna, which looks like an unwound paper clip, violated conventional wisdom about antenna design and confounded the human antenna engineers who first saw it. The NASA antenna was created using 'evolutionary computation' programs, so named because they solve problems…

References

Anderson-Cook, C.M. (2005). Practical genetic algorithms. Journal of the American Statistical Association, 100(471), 1099.

Benkler, Y. (2004). Sharing nicely: On shareable goods and the emergence of sharing as a modality of economic production. Yale Law Journal, 114(2), 273-274.

Blacklight. (2010, October 11). Pittsburgh Supercomputing Center. Retrieved from http://www.

Dissecting IBM Watson's Jeopardy! Game. (2011, February 15). RedEvo Aphelion. Retrieved from http://latestinfolive.dl4down.com/?tag=ibm-watson-system-specs.

Research Walloon. Retrieved from http://beams.ulb.ac.be/beams/research/projects/past/46.html.

Foster, I. (2011). Parallelism and computing. Argonne National Laboratory: Mathematics and Computer Science Division. Retrieved from http://www.mcs.anl.gov/~itf/dbpp/text/node41.html.

IBM Watson. (2011). IBM. Retrieved from http://www-03.ibm.com/innovation/us/watson/watson-for-a-smarter-planet/watson-schematic.html.

Retrieved from http://www.eetimes.com/electronics-news/4210875/Sandia-upgrades-supercomputer-benchmarks.

Mearian, L. (2011, February 11). IBM Watson supercomputer has 80% of human brain power. ComputerWorldUK. Retrieved from http://www.computerworlduk.com/news/infrastructure/3262121/ibm-watson-supercomputer-has-80-percent-of-human-brain-power/.

InsideHPC. Retrieved from http://insidehpc.com/2010/12/17/graph500-benchmark-a-truer-test-of-supercomputing/.

University of California at Irvine: Information and Computer Sciences. Retrieved from http://www.ics.uci.edu/~bic/messengers/papers/NOVA-DSC.pdf.

Retrieved from http://www.psc.edu/general/software/software.php?form_submitted=

HPCwire. Retrieved from http://www.hpcwire.com/blogs/Pittsburgh-Supercomputing-Center-Kicking-It-Old-School-104821334.html.

http://embarcaderos.net/2011/01/23/parallel-processing-and-multi-core-utilization-with-java/