The answer, says Ralphs, is to shift data dynamically to underutilized processors so that no processor sits idle. This becomes more difficult as the number of processors grows, and it places heavy demands on communication speed and bandwidth. Data must be shifted constantly, especially while solving a complicated problem, and this data management must not be allowed to consume computing resources and undermine the very efficiency it is meant to achieve.
Ralphs tackles these challenges by writing “scalable” algorithms that determine how to move data around so each processor is always doing something useful to contribute to the overall computation.
“My goal is to write one algorithm with many procedures that covers the entire process no matter how many processors I’m using,” he says. “I want one strategy that can be automated. If my method of shifting data changes because of the number of processors I’m using, this change should happen automatically.”
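The idea of automatically shifting work to underutilized processors can be sketched in miniature. The toy below is my own illustration, not Ralphs's actual algorithm: work items sit in per-processor queues, and whenever the gap between the busiest and idlest queue exceeds a threshold, items migrate until the load is roughly even. The same routine works for any number of queues, echoing the goal of one strategy that adapts automatically to the processor count.

```python
# Toy sketch of dynamic load balancing (illustrative only, not Ralphs's code):
# migrate work items from the most-loaded queue to the least-loaded one
# until the imbalance falls below a threshold.

def rebalance(queues, threshold=2):
    """Move items between queues until the busiest and idlest
    differ in length by less than `threshold`. Returns items moved."""
    moved = 0
    while True:
        busiest = max(queues, key=len)
        idlest = min(queues, key=len)
        if len(busiest) - len(idlest) < threshold:
            break
        idlest.append(busiest.pop())
        moved += 1
    return moved

# Four "processors" with badly uneven workloads.
queues = [list(range(10)), [], [1, 2], [3]]
rebalance(queues)
print([len(q) for q in queues])  # queue lengths now differ by at most 1
```

Because the routine never mentions how many queues exist, the rebalancing policy needs no change when the processor count changes, which is the spirit of the automation Ralphs describes.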
Ralphs once wrote software to manage a task that required 2,000 processors. But scale is not his primary goal. Ralphs runs Computational Infrastructure for Operations Research, or COIN-OR, a repository of open-source software tools for optimization problems. People around the world have used his tools, and Ralphs takes delight in learning how his programs are applied. One of his favorite emails came from a man who said COIN-OR’s optimization tools had helped overcome a water-delivery challenge in Africa.
“I develop fundamental tools and see what people do with them,” Ralphs says. “I’m happy when I can produce something that helps someone solve a problem.”
When time scales conflict
Life for atoms can be a contradiction in time scales. Take ceramic powders, for example. Scientists estimate their atoms vibrate as many as 10¹⁴ times per second, or one million times one million times one hundred.
When the powders are heated, or sintered, to form a solid material, says Jeff Rickman, a second, more leisurely motion results. Every 10,000 atomic vibrations or so, an atom hops from one location to another in the crystal lattice.
This hopping constitutes a phenomenon called diffusion, in which a material’s molecules intermingle by randomly migrating from a region of higher concentration to one of lower concentration.
The difference between these two time scales, between fast and incomprehensibly fast, is of great consequence to Rickman, a professor of materials science and engineering who uses HPC to build computational models of diffusion.
Rickman studies the diffusion of aluminum and oxygen ions in aluminum oxide (alumina), which is used in the manufacture of aluminum and in advanced ceramics, catalysts, tools and engine parts. Diffusion plays a role in creep, in which a solid material deforms slowly over time under sustained low-level stresses.
Rickman’s goal is to learn how a tiny amount of an impurity can alter diffusion and other transport properties. He has conducted tensile loading experiments to examine creep and oxidation, and he is constructing computational models to learn how impurities affect diffusion.
Because the ions in alumina are coupled, Rickman must write equations of motion for all the pairs in a system and solve the equations together. “Everything in this system is interconnected,” he says. “Each ion exerts force on its neighbor. If one moves, both are affected.”
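The coupling Rickman describes can be seen in the simplest possible case. The sketch below is an illustration of the principle, not his model: two particles joined by a spring, so that the force on one is always matched by an equal and opposite force on the other, and their equations of motion must be advanced together (here with a velocity Verlet step).

```python
# Minimal sketch of coupled equations of motion (illustrative, not
# Rickman's alumina model): two particles joined by one spring.
# A force on one particle is felt, equal and opposite, by the other.

def spring_force(x1, x2, k=1.0, rest=1.0):
    """Force on particle 0 from the spring; particle 1 feels the negative."""
    stretch = (x2 - x1) - rest
    return k * stretch

def step(x, v, dt=0.01, m=1.0):
    """One velocity Verlet step, advancing both coupled particles together."""
    f = spring_force(x[0], x[1])
    a = [f / m, -f / m]                       # equal and opposite: coupled
    x = [x[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(2)]
    f_new = spring_force(x[0], x[1])
    a_new = [f_new / m, -f_new / m]
    v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(2)]
    return x, v

# Stretch the spring by displacing one particle; both end up moving.
x, v = [0.0, 1.2], [0.0, 0.0]
for _ in range(100):
    x, v = step(x, v)
print(x, v)   # both velocities are nonzero; total momentum stays zero
```

Even in this two-body toy, neither particle can be integrated alone, which is why Rickman must write and solve the equations for all the coupled pairs together.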
Rickman’s equations must also take into account the vastly different speeds at which atoms hop and vibrate.
“Diffusion is a slow process compared with the vibrations of atoms. We are interested in the atoms that are hopping but we must also watch the atoms that are shaking. To integrate the equations, we have to bridge time scales. To solve the equations, we have to follow the fastest thing happening even if we’re not interested in it.
“And because we’re studying transport over distance, we have to wait for many hops to occur before we can make a meaningful calculation about diffusion.”
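The cost of bridging these time scales can be estimated with back-of-the-envelope arithmetic using only the figures quoted above (10¹⁴ vibrations per second, roughly one hop per 10,000 vibrations); the steps-per-vibration figure is my own rough assumption. The integrator must resolve the fastest motion, so the step count is set by the vibrations even though the hops are what matter.

```python
# Back-of-the-envelope arithmetic for the time-scale gap described in the
# article. The 10-steps-per-vibration figure is an assumed round number.

vibration_rate = 1e14        # atomic vibrations per second (from article)
vibrations_per_hop = 1e4     # one hop per ~10,000 vibrations (from article)
steps_per_vibration = 10     # assumption: ~10 integration steps per period

hop_rate = vibration_rate / vibrations_per_hop     # hops per second
steps_per_hop = vibrations_per_hop * steps_per_vibration

print(f"hop rate: {hop_rate:.0e} per second")            # 1e+10
print(f"integration steps per hop: {steps_per_hop:.0e}") # 1e+05
```

And since diffusion statistics require many hops, the total step count multiplies again, which is what pushes this kind of simulation onto HPC systems.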
Rickman writes parallel codes to simulate the phenomena in the system he is studying. “We’re looking at a relatively large system. This implies the need to subdivide the system into parts, to use different processors and to do this in such a way that processes occurring almost independently can be modeled almost in parallel.”
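The subdivision Rickman describes is commonly called domain decomposition. The sketch below is a schematic assumption, not his actual code: a one-dimensional chain of atom positions is partitioned into contiguous slabs, each slab can be worked on independently by its own "processor," and only the boundary ("halo") atoms need to be exchanged between neighbors.

```python
# Schematic domain decomposition (an illustration, not Rickman's code):
# split sorted atom positions into slabs, one per processor, and identify
# the boundary atoms each slab must receive from its neighbors.

def decompose(positions, nparts):
    """Split a sorted list of positions into contiguous slabs."""
    size = len(positions) // nparts
    slabs = [positions[i * size:(i + 1) * size] for i in range(nparts)]
    slabs[-1].extend(positions[nparts * size:])  # remainder joins last slab
    return slabs

def halo(slabs, i):
    """Boundary atoms slab i needs from its left and right neighbors."""
    left = [slabs[i - 1][-1]] if i > 0 else []
    right = [slabs[i + 1][0]] if i < len(slabs) - 1 else []
    return left + right

positions = [0.0, 1.1, 2.0, 3.2, 4.1, 5.0, 6.3, 7.1]
slabs = decompose(positions, 4)
print(slabs)           # [[0.0, 1.1], [2.0, 3.2], [4.1, 5.0], [6.3, 7.1]]
print(halo(slabs, 1))  # [1.1, 4.1]: only edge atoms cross slab boundaries
```

Because each slab's interior depends only on nearby atoms, the slabs can be advanced almost independently and therefore almost in parallel, as Rickman puts it.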
Rickman uses Lehigh’s HPC facilities as well as those at the Pittsburgh Supercomputing Center. He collaborates with Helen Chan and Martin Harmer, professors of materials science and engineering. The group receives funding from the Office of Naval Research.
Take a second look at the cantilevered traffic signals and highway signs that you see everywhere on roads and freeways. The welded connections that support these structures are vulnerable to fatigue cracking from the cumulative effects of winds and breezes. The issue has become urgent in the U.S., especially in the West and Midwest, where signs have fallen and structures have collapsed.
Lehigh’s ATLSS (Advanced Technology for Large Structural Systems) Center is combining HPC with full-scale lab tests to develop specifications for the design and fabrication of new sign and mast structures and for the retrofitting of existing structures. The four-year project is funded by the American Association of State Highway and Transportation Officials and the Federal Highway Administration.
The ATLSS group is conducting 100 to 110 tests on 80 structures. The group has also conducted simulations of 18,000 mathematical models of the structures and the welded connections using ABAQUS, a suite of finite-element analysis software. Each model contains about 40,000 degrees of freedom, that is, independent unknowns, each of which contributes one equation that must be solved at each iteration.
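To make "degrees of freedom" concrete: each unknown in a finite-element model (here, a nodal displacement) contributes one equation to a linear system K u = f that the solver assembles and solves. The toy below is my own three-DOF example, not one of the ATLSS models: a fixed-free chain of unit-stiffness springs with a unit load on the free end, solved with plain Gaussian elimination.

```python
# Toy 3-degree-of-freedom finite-element system (illustrative, not an
# ATLSS model): solve K u = f for nodal displacements u, where each
# degree of freedom contributes one row (equation) to the system.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            b[r] -= factor * b[col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = (b[r] - s) / A[r][r]
    return u

# Stiffness matrix for a fixed-free chain of three unit springs:
K = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  1.0]]
f = [0.0, 0.0, 1.0]    # unit load on the free end
print(solve(K, f))     # approximately [1.0, 2.0, 3.0]
```

A 3-by-3 system solves instantly; at 40,000 degrees of freedom per model, times 18,000 models, the same arithmetic grows into the HPC workload the article describes.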
The project makes use of two HPC architectures in Lehigh’s Computing Center. The SMP (symmetric multiprocessor) facility contains a large number of processors sharing a single memory in one machine. The 40-machine Beowulf cluster enables parallel processing of full-scale structural systems by parceling parts of one large analysis out to many different machines.
The lab tests and computational simulations complement each other, says ATLSS senior research scientist Sougata Roy, who oversees the project with ATLSS director Richard Sause.