Tuesday, September 11, 2007

Hardware vs Software

OK, call me biased, but the kind of physical science behind hardware advances seems a bit, well, harder than writing a new OS or application. In fact, if I think about the main drivers behind the information revolution of the last 20 years, I'd give much more credit to hardware advances than to the concurrent advances in software.

Think about it -- state-of-the-art OSes aren't that different from the original BSD or Unix flavors, whereas the flash memory in my iPod is the equivalent of warp drive to someone from 1985! I don't see a factor of 10^6 improvement in anything related to software, whereas we've achieved gains of that scale in processors, storage, and networking (bandwidth).
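
To put rough numbers on that claim (a back-of-envelope sketch of my own, nothing more): a gain of 10^6 over 20 years corresponds to doubling roughly every 12 months, while the canonical 18-month doubling compounds to "only" about 10^4 over the same span.

```python
import math

# Back-of-envelope check (my own sketch): the total multiplicative gain
# from steady doubling over a 20-year span, for a few doubling periods.

def total_gain(years, doubling_period):
    """Gain after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for period in (1.0, 1.5, 2.0):  # doubling every 12, 18, 24 months
    gain = total_gain(20, period)
    print(f"doubling every {period:.1f} yr -> 20-yr gain ~ 10^{math.log10(gain):.1f}")
```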

Meanwhile, funding for research in physical science has been flat in real dollars during my scientific career. Go figure!



From the Times, an article about IBM's work on "racetrack" storage, which may be key to the continued exponential growth in storage capacity.

I'm no experimentalist, but what they're doing sounds hard!

The tech world, obsessed with data density, is taking notice because Mr. Parkin has done it before. An I.B.M. research fellow largely unknown outside a small fraternity of physicists, Mr. Parkin puttered for two years in a lab in the early 1990s, trying to find a way to commercialize an odd magnetic effect of quantum mechanics he had observed at supercold temperatures. With the help of a research assistant, he was able to alter the magnetic state of tiny areas of a magnetic data storage disc, making it possible to store and retrieve information in a smaller amount of space. The huge increases in digital storage made possible by giant magnetoresistance, or GMR, made consumer audio and video iPods, as well as Google-style data centers, a reality.

Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace both solid-state flash memory chips as well as computer hard disks, making it a technology that could transform not only the storage business but the entire computing industry.

“Finally, after all these years, we’re reaching fundamental physics limits,” he said. “Racetrack says we’re going to break those scaling rules by going into the third dimension.”

His idea is to stand billions of ultrafine wire loops around the edge of a silicon chip — hence the name racetrack — and use electric current to slide infinitesimally small magnets up and down along each of the wires to be read and written as digital ones and zeros.

His research group is able to slide the tiny magnets along notched nanowires at speeds greater than 100 meters a second. Since the tiny magnetic domains have to travel only submolecular distances, it is possible to read and write magnetic regions with different polarization as quickly as a single nanosecond, or one billionth of a second — far faster than existing storage technologies.

If the racetrack idea can be made commercial, he will have done what has so far proved impossible — to take microelectronics completely into the third dimension and thus explode the two-dimensional limits of Moore’s Law, the 1965 observation by Gordon E. Moore, a co-founder of Intel, that decrees that the number of transistors on a silicon chip doubles roughly every 18 months.
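
Stripped of the hype, each racetrack wire behaves like a magnetic shift register: bits are stored as the polarization of domains along a nanowire, and current pulses march the whole pattern past a fixed read/write head (it is the domains that move, not little physical magnets). Here is a toy model of a single wire -- my own sketch, not anything from IBM:

```python
from collections import deque

class RacetrackWire:
    """Toy model of one racetrack nanowire as a magnetic shift register
    (an editorial sketch, not IBM's actual design)."""

    def __init__(self, length):
        self.domains = deque([0] * length)  # 0/1 = domain polarization
        self.head = 0                       # the read/write head never moves

    def shift(self):
        """Current pulse in one direction: every domain moves one notch."""
        self.domains.rotate(1)

    def shift_back(self):
        """Pulse of opposite polarity: shift the pattern the other way."""
        self.domains.rotate(-1)

    def read(self):
        """Sense the polarization of the domain under the head."""
        return self.domains[self.head]

    def write(self, bit):
        """Set the polarization of the domain under the head."""
        self.domains[self.head] = bit

# Store a byte by writing under the head and shifting after each bit...
wire = RacetrackWire(64)
byte = [1, 0, 1, 1, 0, 0, 1, 0]
for bit in byte:
    wire.write(bit)
    wire.shift()

# ...then shift the pattern back under the head to read it out again.
for _ in range(len(byte)):
    wire.shift_back()
readout = []
for _ in range(len(byte)):
    readout.append(wire.read())
    wire.shift()
assert readout == byte
```

The medium itself is one-dimensional; the claimed density win comes from standing many such wires vertically on the chip.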


8 comments:

Anonymous said...

Sorry, but I don't see how having a bunch of wire whiskers standing up on a chip with a Rube Goldberg arrangement of magnets whisking up and down at METERS per second constitutes the rape of dimension three. Additionally, you don't really think a modern semiconductor is a two-dimensional construct. DO YOU? The third dimension is thin but absolutely essential. That's what we mean when we speak of multi-layer circuits.

I think this guy's idea of little magnets on elevators is going to be about as commercially interesting as "bubble memories" were some decades back.

Steve Hsu said...

I don't know much about racetrack memory in particular, but my point is that the continued Moore's law advance of processor speed, storage and bandwidth requires a lot of ingenious applied physics breakthroughs that the general public (or perhaps even the computer science community) has no awareness of.

JJ2000426 said...

Steve said:

"10^6 improvement..Meanwhile, funding for research in physical science has been flat in real dollars during my scientific career. Go figure!"

I guess it is fair that physics funding in real dollars should be flat instead of increasing by 10^6. Exponential growth is just not possible. GDP, personal income, etc. all remain flat when measured in real purchasing power.

One reason the funding is flat is that some areas suck up EXPONENTIALLY growing funding, such as giant particle accelerator research. Accelerators grew from desktop devices, to a few football fields in size, to a few kilometers across, and now to more than the diameter of a big city. If they have their way, they will next build something that circles the earth, or even bigger, because they need to keep pushing the energy exponentially higher. Why would you have money left for anything else if something is already sucking away an exponentially growing amount of money?

For example, how about cold fusion? Not a penny is spent on cold fusion, although for 18 years there have been mountains of evidence that cold fusion, assisted by the metal palladium, is completely real! Each and every cold fusion researcher started as a skeptic, and became convinced only after seeing the actual experiments themselves.

Cold fusion research is being suppressed because if cold fusion is successful, there will be no need to spend billions of dollars on losers like ITER, or on money suckers like the LHC, and a lot of particle physicists will go unemployed.

Cold fusion, if successful, could solve humanity's energy crisis. On that basis alone, there should be enough funding to answer one definite question: is the cold fusion phenomenon REAL? Why do so many scientists, risking their careers and getting no funding, continue to research cold fusion on their own dime?

Anonymous said...

It warms the physics part of my heart to open up the New York Times and see that picture of a lab.

Unknown said...

Hi Steve,

I think software developers will admit that taking full advantage of the hardware is one of their larger continuing challenges--but it's also true that most developers don't have much knowledge of all the interesting materials science that has made those hardware advances possible.

wrt anonymous' comment, the NYT article botched the explanation--it's magnetic domains that move, not physical magnets. The magnetic domains are recorded and read using essentially the same techniques as on GMR disk platters, with the domains shifted along the wire via current pulses. Since the heads are stationary wrt the media when read/write operations are performed, the domains can be a lot smaller than on disk media. The 3D claims are kind of funny, since it is essentially a 1D medium coiled in 3D for packing density.

Anonymous said...

One of my favorite classes ever, up the road at OSU, was materials science - with a nice long section on semiconductors. Loooong time ago. Anyway, back to software: what you have seen is slow, steady progress in managing complexity. One of the things that strikes me today is how we have enabled a lot of folks who would have been mechanics or machinists to be productive in this field. They would never have had a chance 15 or 20 years ago - it was just too hard.

MrLoftcraft said...

I hope the "racetrack" becomes available at the same time as the quantum CPU. That would really change the world as we know it today. Imagine having almost infinite storage space and almost infinite CPU speed :)) That would surely be the next technological revolution.

James Hedman said...

This is because the industry is upside down and ass-backwards, with processor technology driven by hardware engineers rather than chips designed to support conceptual ideas about how to solve computable problems. I blame Intel. How else do we end up with most of the transistors on their CPUs devoted to out-of-order execution and "superscalar" parallel execution, just because the computer doesn't know which branch will be taken from the result of a Boolean comparison, or when a loop will end?
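
For concreteness, here is a toy simulation of the textbook two-bit saturating-counter branch predictor (a sketch of the generic scheme, not any actual Intel design) playing that guessing game:

```python
import random

def predict_branches(outcomes):
    """Run a 2-bit saturating counter over a sequence of actual branch
    outcomes (True = taken) and return the prediction accuracy."""
    counter = 2  # states 0..3; start in "weakly taken"
    correct = 0
    for taken in outcomes:
        prediction = counter >= 2  # upper half of the range = guess "taken"
        if prediction == taken:
            correct += 1
        # Saturating update toward the actual outcome.
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return correct / len(outcomes)

# A loop branch: taken 99 times, then falls through once, repeatedly.
loop = ([True] * 99 + [False]) * 100
print(f"loop branch accuracy:   {predict_branches(loop):.1%}")   # ~99%

# A data-dependent branch with random outcomes is nearly unpredictable.
random.seed(0)
noisy = [random.random() < 0.5 for _ in range(10_000)]
print(f"random branch accuracy: {predict_branches(noisy):.1%}")  # ~50%
```

Nearly perfect on a regular loop branch, a coin flip on a data-dependent one -- that's the game all those speculation transistors are built to play.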

Just because Intel is always one step ahead of everyone else in shrinking lithography, we are stuck with this ugly, awful architecture. Luckily, Moore's Law is coming to its physically limited end. Perhaps some research money can now be diverted into promising but abandoned areas of engineering research: clockless computing, optical neural networks, stack-oriented processors, non-von Neumann architectures, and even non-Turing machines such as optical networks that directly perform set operations on waves of input.



There is no evidence that the universe is a Turing machine, let alone the human brain. Why should computers necessarily be modeled that way? What we want to compute should drive hardware design, not just some blind race for faster switches.
