Unbounded opportunities

In ways large and not so large, researchers are leveraging high-performance computing.

From mapping sunspots to mapping genomes to optimizing search engines, high-performance computing is uniquely positioned to illuminate the inner workings of natural phenomena and of human endeavors.

High-performance computing makes it possible to take systems more complex than we can imagine, model them mathematically, and analyze or improve these systems, often by solving countless equations in a second’s time.

HPC, as it’s called, lets car manufacturers run crash tests virtually, reliably and cheaply. It helps biologists simulate the activities of a cell, and it enables physicists to model the flow of plasma in a nuclear fusion reactor. UPS and FedEx use HPC to select the best way, among millions of options, of routing thousands of drivers to their destinations.

HPC also brings rocket science to everyday life. One manufacturer of household appliances upgraded its computer cluster to meet the intertwined demands of product safety, supply chain management and protective packaging. A coffee producer used finite element analysis to model and solve the problems caused by gas buildup when it switched from metal to plastic containers.

Because of its very nature, says Ted Ralphs, associate professor of industrial and systems engineering, HPC is becoming more accessible. HPC tackles a large task by dividing it into subtasks and assigning these to processors that work in parallel to solve them. As these processors grow smaller and more affordable, says Ralphs, every workstation and desktop PC becomes a potential contributor to an HPC infrastructure.

“One big trend in HPC is that hardware is becoming more commoditized,” says Ralphs, who led Lehigh’s HPC steering committee for eight years. “It used to be that your PC was good only for basic functions and that you had to switch to a large machine to do a big job.

“Today, you can buy a group of PCs off the shelf, link them with a fast network connection and do perfectly acceptable parallel computing. Special processors are required less and less. Off-the-shelf equipment has enough power and memory for many tasks.”
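The divide-into-subtasks idea Ralphs describes can be sketched in a few lines of code. The example below is purely illustrative and not drawn from any Lehigh system; the function name and the numbers are made up. It uses Python’s standard multiprocessing module to split one numerical job, approximating an integral, into chunks that run in parallel on whatever processor cores an off-the-shelf machine happens to have.

    from multiprocessing import Pool, cpu_count

    def integrate_chunk(bounds):
        """One subtask: approximate the integral of x**2 over a sub-interval."""
        a, b = bounds
        steps = 1_000_000
        width = (b - a) / steps
        return sum(((a + (i + 0.5) * width) ** 2) * width for i in range(steps))

    if __name__ == "__main__":
        # Divide the interval [0, 1] into one sub-interval per available core.
        n = cpu_count()
        chunks = [(i / n, (i + 1) / n) for i in range(n)]

        # Each chunk goes to a different worker process, just as a cluster
        # hands subtasks to different processors working in parallel.
        with Pool(processes=n) as pool:
            partial_results = pool.map(integrate_chunk, chunks)

        print("Integral of x^2 on [0, 1]:", sum(partial_results))  # exact answer is 1/3

The same pattern scales from the cores inside one PC to the machines in a linked cluster; only the plumbing that distributes the chunks changes.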

In the past four years, Lehigh has greatly expanded its HPC facilities with half a dozen strategic purchases of computer clusters, workstations and storage hardware. The university has also deployed Condor, a system that marshals campus computing power – the HPC facilities as well as the capacity of several thousand PCs in Lehigh’s public labs when those machines are idle – to run large tasks.

“Condor is in constant communication with the computers on campus,” says Ralphs. “It identifies machines with capacity that’s not being used, and sends tasks to them. Not many campuses have this level of opportunistic computing that taps into commodity hardware.”

Computing and HPC underlie most of Lehigh’s major research efforts. Mathematicians use HPC to search for special pairs of numbers used in security codes. Electrical engineers investigate signal processing as well as the energy costs associated with data warehousing. Biologists and bioengineers model the behavior of molecules, and geologists project patterns of climate change.

Computer scientists use HPC to investigate computer-vision and pattern-recognition technologies. Mechanical engineers and physicists model the dynamic flow of fluids, including the plasma in nuclear fusion reactors. Physicists run numerical simulations to calculate the atomic structures and vibrational properties of material defects in semiconductors.

Lehigh will examine the state of the art in HPC when it hosts a workshop Oct. 5–6 titled “Computational Engineering & Science/HPC: Enabling New Discoveries.”

The following articles showcase some of the uses HPC has found at Lehigh.

Dividing and conquering
Optimization problems, says Ted Ralphs, are tailor-made for HPC. Take the routing of delivery trucks: from a staggering number of possible ways of assigning 25 packages to each of 100 drivers, you must identify the one assignment that requires the fewest driver-miles.

“An optimization problem lends itself to a divide-and-conquer approach,” says Ralphs. “You divide a set of problems into portions and mathematically prove that certain portions will or will not yield useful information. This is a naturally parallelizable process because you give each portion of a problem to a different processor.”
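As a concrete illustration of that divide-and-conquer recipe, here is a minimal sketch, not Ralphs’s own code, of branch and bound applied to a toy knapsack problem with made-up item values and weights: each step splits a problem into two subproblems, and an optimistic bound lets the program prove that some subproblems cannot contain the best answer and can be discarded. In a parallel setting, each open subproblem could be handed to a different processor.

    # Branch and bound for a toy knapsack problem: pick items to maximize value
    # without exceeding a weight capacity. Each branch fixes one more decision
    # (take the next item or skip it), creating two smaller subproblems.

    # Items are listed in decreasing value-per-weight order so that the greedy
    # fractional fill in bound() is a valid optimistic estimate.
    values = [60, 100, 120, 40]    # hypothetical item values
    weights = [10, 20, 30, 15]     # hypothetical item weights
    capacity = 50

    best = 0  # value of the best complete solution found so far

    def bound(i, value, room):
        """Optimistic estimate: fill the remaining room with fractions of items."""
        estimate = value
        for j in range(i, len(values)):
            if weights[j] <= room:
                room -= weights[j]
                estimate += values[j]
            else:
                estimate += values[j] * room / weights[j]
                break
        return estimate

    def solve(i, value, room):
        """Explore the subproblem in which the first i take-or-skip decisions are fixed."""
        global best
        if i == len(values):
            best = max(best, value)
            return
        # Prune: if even the optimistic bound cannot beat the best known solution,
        # this portion of the problem provably holds nothing useful.
        if bound(i, value, room) <= best:
            return
        if weights[i] <= room:                  # subproblem 1: take item i
            solve(i + 1, value + values[i], room - weights[i])
        solve(i + 1, value, room)               # subproblem 2: skip item i

    solve(0, 0, capacity)
    print("Best value found:", best)            # prints 220

Run on one processor, the sketch simply works through the subproblems in order; a parallel version would instead keep a shared pool of open subproblems that idle processors draw from.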

Parallel processing, says Ralphs, thrives when all processors are busy all of the time doing productive work. Avoiding “down time,” however, is challenging when using a large number of processors. You do not know in advance how much work each portion of your problem will require. If half the portions are solved quickly, the computing capacity assigned to them will sit idle while the other portions are being solved.

“Idle time,” says Ralphs, “means your computing capacity is not paying its way.”
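The idle-time problem is easy to reproduce even on a single machine. In the hypothetical sketch below, with made-up task durations standing in for portions of a problem, handing each of four workers a fixed block of portions up front leaves the quick workers waiting, while letting workers pull the next portion from a shared queue as they finish keeps everyone busy.

    import time
    from multiprocessing import Pool

    def task(seconds):
        """Stand-in for one portion of a problem whose difficulty is unknown in advance."""
        time.sleep(seconds)
        return seconds

    if __name__ == "__main__":
        # Four workers, eight portions: some trivial, some ten times harder.
        durations = [0.1, 0.1, 0.1, 0.1, 1.0, 1.0, 1.0, 1.0]

        with Pool(4) as pool:
            # Static assignment: each worker receives a fixed block of two portions.
            start = time.time()
            pool.map(task, durations, 2)
            print(f"static blocks: {time.time() - start:.2f} s")    # roughly 2.0 s

            # Dynamic assignment: workers pull the next portion as they finish,
            # so the quick portions do not strand a worker in idleness.
            start = time.time()
            list(pool.imap_unordered(task, durations, 1))
            print(f"dynamic queue: {time.time() - start:.2f} s")    # roughly 1.1 s

The gap between the two timings is exactly the idle time Ralphs describes: under the static split, the workers that drew the easy portions finish almost immediately and then wait for the rest.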

Photo caption: Computer clusters in Lehigh’s Computing Center enable research into a variety of topics, including the imaging, mapping and targeted radiation treatment of cancer.