Charlie Catlett

By Louise Lerner, April 3, 2013

Charlie Catlett is a senior computer scientist at Argonne National Laboratory and a senior fellow in the Computation Institute, a joint institute of Argonne and the University of Chicago. He formerly served as the lab's Chief Information Officer and head of cyber security. He also served as director of TeraGrid, an open science computing infrastructure that connects scientists with high-performance computers and facilities around the world.

What got you interested in science?
I always liked taking things apart and seeing how they worked. In high school I started getting interested in electronics, trying to make my guitar amplifiers sound better. I learned about Ohm's Law because I got this really nice distortion sound by hooking two speakers up in parallel, which cut their resistance in half—it sounded really good until greenish-brown smoke started coming out of the amp.
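
The arithmetic behind that smoke is easy to sketch. A minimal illustration, assuming a typical 8-ohm guitar speaker and a nominal amplifier output voltage (both values are illustrative, not from the interview): wiring two identical speakers in parallel halves the load resistance, and by Ohm's Law that doubles the power the amplifier has to deliver.

```python
# Back-of-envelope arithmetic behind the anecdote. The speaker impedance and
# amplifier output voltage are assumed illustrative values, not figures from
# the interview.

def parallel_resistance(r1, r2):
    """Equivalent resistance of two resistors wired in parallel."""
    return (r1 * r2) / (r1 + r2)

SPEAKER_OHMS = 8.0    # assumed: a typical 8-ohm guitar speaker
OUTPUT_VOLTS = 20.0   # assumed: nominal amplifier output voltage

single = SPEAKER_OHMS
paired = parallel_resistance(SPEAKER_OHMS, SPEAKER_OHMS)  # 4 ohms

# P = V^2 / R, so halving the resistance doubles the power drawn from the amp.
print(f"one speaker:     {single:.0f} ohms -> {OUTPUT_VOLTS**2 / single:.0f} W")
print(f"two in parallel: {paired:.0f} ohms -> {OUTPUT_VOLTS**2 / paired:.0f} W")
```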

So I was interested in why that happened, took an electronics class, and thought I was going to go into electrical engineering. But then I took a programming class and really liked the idea of getting computers to do things for me by writing programs. So I didn't start out as a six-year-old wanting to be a scientist; it just sort of happened because I was curious about why things do what they do.

So how did you get into supercomputers?
I went to the University of Illinois initially to do electrical engineering. After my first programming class I switched to computer engineering, so I could learn how computers work down deep inside and how they were organized. I went to a computer conference, I think this was 1985, and saw a booth from Lawrence Livermore National Lab where they had the shell of a Cray X-MP, which was a state-of-the-art supercomputer back then. It really looked like something out of Star Trek. Later that summer the local paper in Champaign-Urbana carried a picture of a Cray X-MP being installed at the university, and the next day I sent my resume over to the folks who'd ordered it—the National Center for Supercomputing Applications (NCSA).

The Internet at the time was pretty small, mostly military sites, and NSFnet was to become one of the main networks interconnecting universities. As the NSFnet network got bigger, though, we had to figure out how to make the protocols work at greater scale, especially routing and monitoring and management. As the technology got better we tried to push what we could do—and really I'm still doing that.

I was at the supercomputer center until 1999, when I came here. Throughout my time working in the high-performance computing community I really liked figuring out what scientists—chemists or engineers or climate modeling scientists—needed, and thinking about how to match their needs to the computers we had.

Sounds like that was an exciting time to be in computing?
Yeah. We made two really major shifts in how we did high-performance computing in that time. One was structural. Early supercomputers had, say, four processors that cost $5 million each. In the late 1980s it became clear that the speed of microprocessors was increasing at such a rate that by the early '90s they would be as fast as the $5 million processors. Despite some important issues such as communication and memory speeds, we could see that the PC industry was making microprocessors that would soon catch up—and they cost a fraction of what the traditional supercomputer processors did. So the whole supercomputing field had to make the transition from building machines out of a handful of $5 million processors to designing them with hundreds of $1,000 processors. Scientists in turn had to rethink how they wrote applications for that type of system.

By the mid '90s, PCs got powerful enough that we started building clusters of them. Today this trend continues, with more and more processors as they get less expensive. The IBM Blue Gene/P supercomputer that we have here at Argonne, for instance, has over 50,000 processors.

To get to the next level of power needed—exascale computing—we have to make yet another jump: first to hundreds of thousands of processors and eventually to millions.

So you can't just scale everything up?
No, you have to rethink how you tackle the problem. There's a great book by Fred Brooks called The Mythical Man-Month, in which he says that it takes nine months to have a baby, no matter how many women are involved. Similarly, there are certain things that take a certain amount of time, and adding more resources to the job doesn't help. So if you've got an application that's meant to do four big things, you have to figure out how to turn those four tasks into 4,000 smaller tasks that can happen at once; you have to restructure how you approach the problem from an application point of view.
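
A minimal sketch of the restructuring described above, using a made-up example (numerically integrating x² in many small pieces) rather than anything from the interview: the same total work is split into thousands of small, independent tasks that a pool of processors can run at once.

```python
# Hypothetical sketch of the restructuring described above: instead of a few
# big tasks, split the work into thousands of small independent pieces that
# can be scheduled across however many processors are available.
from concurrent.futures import ProcessPoolExecutor

def integrate_chunk(bounds, steps=1_000):
    """Midpoint-rule integral of x**2 over one small sub-interval."""
    lo, hi = bounds
    dx = (hi - lo) / steps
    return sum((lo + (i + 0.5) * dx) ** 2 * dx for i in range(steps))

if __name__ == "__main__":
    n_chunks = 4_000  # many small tasks, not four big ones
    edges = [(i / n_chunks, (i + 1) / n_chunks) for i in range(n_chunks)]
    with ProcessPoolExecutor() as pool:  # spreads chunks over available cores
        total = sum(pool.map(integrate_chunk, edges, chunksize=100))
    print(f"integral of x^2 on [0, 1] ~ {total:.6f}")  # ~0.333333
```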

On the hardware side, if we just took the current Blue Gene supercomputer and tried to get to exascale, which is 2,000 times faster than today's machine, you'd need a couple of new power plants just to supply electricity to it. And that's not practical. So everything, down to the materials, has to be revisited so that the machine uses less power while still getting faster.
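
The power-plant remark is just scaling arithmetic. A rough sketch, where the current machine's power draw and the output of a single power plant are assumed order-of-magnitude figures rather than numbers from the interview:

```python
# Rough scaling arithmetic behind the power-plant remark. Both power constants
# are assumed order-of-magnitude figures for illustration, not interview data.
CURRENT_POWER_MW = 2.0     # assumed: a petascale machine draws a few megawatts
SPEEDUP_NEEDED   = 2_000   # exascale is ~2,000x today's machine (from the interview)
POWER_PLANT_MW   = 1_000   # assumed: output of one large power plant

# Scaling up without improving energy efficiency just multiplies the draw.
naive_exascale_mw = CURRENT_POWER_MW * SPEEDUP_NEEDED
print(f"naive exascale draw: {naive_exascale_mw:,.0f} MW "
      f"(~{naive_exascale_mw / POWER_PLANT_MW:.0f} power plants)")
```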

What are you working on now?
Right now we're trying to figure out what interesting things we can do as computers get smaller, as opposed to faster. So if you have an iPhone, you're carrying a computer in your pocket that is way, way more sophisticated and powerful than the $20 million supercomputer we installed at NCSA 25 years ago. Even in the past 10 years, what used to cost $100,000 now costs a few dollars, and it's better as well. When the iPhone came out, five years ago, it cost $750. Now they're $49, and more powerful than the $750 one was. They're also getting faster and smaller.

We're doing projects with sensors in what we call embedded or pervasive computing. Imagine if you could have all of the power of the iPhone—location sensors and accelerometers that tell you movement—for 25 cents in a package the size of a button. You could do all kinds of things with that.

What's an example?
We have one project with the Department of Energy to prototype an environmental sensor. What could you do with something the size of your thumbnail that could take temperature, light and humidity information and transmit it wirelessly? You could put five of them in an office and have software that controls building heating and cooling really efficiently. You could save a huge amount of energy. We're trying to figure out what you could do with small, cheap sensors that can communicate. This is the first area we're looking at: building controls for better energy management.
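
As a rough illustration of that first area, here is a hypothetical sketch (not the DOE prototype or any actual control software) of how a handful of cheap sensor readings per office could drive a simple heating-and-cooling decision:

```python
# Hypothetical sketch of the building-controls idea described above. The
# Reading fields, thresholds, and control actions are all illustrative
# assumptions, not details of the actual prototype.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float
    light_lux: float
    humidity_pct: float

def hvac_action(readings, occupied_lux=200.0, setpoint_c=21.0, band_c=1.5):
    """Pick a heating/cooling action for one office from its sensor readings."""
    avg_temp = mean(r.temperature_c for r in readings)
    occupied = any(r.light_lux > occupied_lux for r in readings)  # crude occupancy guess
    if not occupied:
        return "setback"   # nobody around: relax the setpoint and save energy
    if avg_temp > setpoint_c + band_c:
        return "cool"
    if avg_temp < setpoint_c - band_c:
        return "heat"
    return "hold"

office = [
    Reading("desk-1", 23.4, 310.0, 41.0),
    Reading("desk-2", 23.1, 280.0, 43.0),
    Reading("window", 24.0, 520.0, 39.0),
]
print(hvac_action(office))  # -> cool
```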

We also started a collaboration with the School of the Art Institute of Chicago, working with architects to explore how you would use sensors not just in buildings but across whole city blocks. Among other things, we'd like to couple sensors with computational models that would let you, say, examine the impact of putting a skyscraper here instead of a parking lot: how that would affect everything from air quality to the heating bill of the building next door once it no longer gets sunlight, what it does to traffic patterns, and so forth. As we add computing to infrastructure—buildings, roads, cars—it opens up a lot of possibilities for safer, better infrastructure.

What do you find is the most enjoyable thing about your work?
It's two things. In the abstract it's about solving puzzles, and I remember the first rush when I wrote my first program and made it work. It was just simple: I wanted the computer to do this thing, and I was able to wrestle with it and make it do that thing.

The other part, which is probably bigger for me, is being able to make things work (or work better) in order to give people something they didn't have before. I do like to solve puzzles, but in practical ways that affect real people who get up in the morning and drive to work. That's why I gravitate to things like traffic or city planning.

What's the biggest challenge in your field?
I don't know that there is one biggest challenge, but the one I think about the most is the impact of pervasive computing and mobile devices on our personal privacy and security. For instance, if you get the new iPhone later this year it will have something called near-field communication. Right now you can already go to Starbucks and pay for your coffee with the iPhone, or scan a boarding pass at the airport. But that still requires some deliberate action on your part, just like taking your credit card out of your wallet. Near-field communication means that I only have to bring my phone NEAR a reading device—I don't even have to take it out of my pocket—and it can still be read. So that right there suggests that a pickpocket on the subway wouldn't even have to touch you!

So there's the financial part of it: as devices get more and more sophisticated we're putting more and more of our lives in there, and now there's money too. Once money is involved, there's a lot of incentive to break into these things.

What do you hope to accomplish?
One of the reasons I'm trying to get a program to explore these technologies is that I'd like us to know enough about them to anticipate the security and privacy dangers while it's still practical and cost-effective to do something about them. Looking back, everyone involved in the Internet wishes that in the 1980s we'd taken security more seriously. There was a wake-up call in 1988 with the Internet worm. A student at Cornell wrote a program to seek out other computers on the Internet, transfer itself to them, and reproduce itself. It was just an experiment, a really neat idea of an application that propagates itself around a network. That could have many great applications—say, a program that moves from computer to computer updating software.

But he made a mistake in the code and the program replicated itself too quickly, so when it landed on a machine, instead of making one copy it made thousands and brought the machine down.

This was a really big deal; the Internet had been around a while already, and until then no one had breached the sort of unwritten code of trust of the Internet community. No one had done something on their machine that made bad things happen on other computers. And even though that was nearly 25 years ago, it was maybe already too late even then to think about designing the Internet so that it could be more secure. Part of the reason that people, myself included, didn't take it seriously was that we didn't realize the Internet was going to become as big as it did—this global entity. And by then, it was pretty late to make those kinds of fundamental changes.

So in terms of security for mobile devices and pervasive computing, I have this perhaps naïve idea that if we can anticipate some privacy problems, then we might change how we deal with them.
