Tuesday, June 12, 2007

Heterogeneous cores disrupt the PCI market

The trend in CPU development is toward multiple cores on a single chip. Following Moore’s law, transistors will keep shrinking, and we will have room to fit 8, 16, or 32 cores on a single chip.

Much of today’s software will have to be rewritten to take full advantage of a large number of cores. I think the multi-core CPU will also be a disruptive force on hardware outside of the CPU.

In the early days of the x86 processor, floating point math was very slow. It could take many clock cycles to complete a floating point divide, let alone square roots or trig functions. I actually worked on a video game that did all of its 3D physics and collisions in fixed-point arithmetic simply so it could use only the integer registers of the 386 CPU.
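
To give a flavor of what that looks like (this is a modern sketch of the technique, not the original game code): a 16.16 fixed-point number stores a real value scaled by 65536 in an ordinary 32-bit integer, so multiplication and division never leave the integer unit.

#include <cstdint>

// 16.16 fixed point: the real value is raw / 65536.0
typedef int32_t fixed16;

const fixed16 FIX_ONE = 1 << 16;

fixed16 fix_from_int(int n)           { return (fixed16)(n << 16); }
fixed16 fix_mul(fixed16 a, fixed16 b) { return (fixed16)(((int64_t)a * b) >> 16); }
fixed16 fix_div(fixed16 a, fixed16 b) { return (fixed16)(((int64_t)a << 16) / b); }

// e.g. the squared length of a 3D vector, computed entirely in integer registers
fixed16 length_sq(fixed16 x, fixed16 y, fixed16 z)
{
    return fix_mul(x, x) + fix_mul(y, y) + fix_mul(z, z);
}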

Slow floating point performance created the market for the separate floating point unit, or coprocessor, which was sold as an optional add-on.

Then Intel disrupted the market by shrinking the CPU enough to fit the FPU onto the same die. This was great for customers, but not so good for companies like Cyrix and Weitek, who had been selling FPUs. Suddenly they were in the position of selling wagon wheels in Detroit just as the car came on the market.

Multi-core chips offer the opportunity for another disruption. Current Pentium-generation architectures offload several important tasks to PCI cards: network IO, graphics, and sound processing.

Each of these tasks is characterized by IO-bound processing, lots of memory management, and device-oriented integration. Each is standardized and well defined, so specialized hardware can make the processing go faster.

In parallel processing, there is always a drop in per-core utilization as the number of cores increases, because some of the task is inherently serial or requires communication and coordination.

Each additional general purpose core becomes less valuable to add to the CPU.
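
To put rough numbers on that, here is a quick Amdahl’s law sketch of my own; the 10% serial fraction is just an assumption for illustration.

#include <cstdio>

int main()
{
    // Amdahl's law: speedup = 1 / (serial + parallel / cores)
    const double serial = 0.10;   // assume 10% of the work cannot be parallelized
    for (int cores = 1; cores <= 32; cores *= 2)
    {
        double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        printf("%2d cores -> %4.1fx speedup\n", cores, speedup);
    }
    return 0;
}

With those assumptions, 32 cores deliver only about a 7.8x speedup over a single core.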

In the multi-core generation, there is more room on the chip to devote some of the cores to special tasks. A heterogeneous chip will have a mix of general purpose cores and specialized cores. Common IO bound tasks -- networking, sound, and graphics -- can be moved from the PCI card to the CPU.

For hardware system integrators and PC vendors, the number of subsystems they have to manage drops radically. The big win is in laptops and mobile devices, where there is a dollar premium attached to smallness. Simplifying away all the extra boards, drivers, dedicated memory, and cards is a real economic win.

Customers will benefit as the production cost of PC systems continues to drop radically.

Heterogeneous chips can have different mixes of cores specialized for different markets.

The losers might be the PCI card vendors who watch as the bottom end of their market gets taken away.

You can see some of the fallout already. AMD started down a heterogeneous core strategy when it acquired ATI and, with it, the graphics technology ATI had developed. Now AMD can roll that into future generations of heterogeneous core chips: some of the cores will be general purpose, and some will be built to implement DirectX 10 in hardware and support low-end Vista graphics.

Intel, on the other hand, seems to be more conservative and is currently favoring homogeneous cores.

Market pressure may change that over time. It will be interesting to watch this shake out over the next few years.

A final question to ponder: What common tasks will we dedicate hardware to once we have more cores to play with?

Monday, June 11, 2007

I want a revolution (every twenty years)

When I was in school, I was in the midst of a revolution.

Software was engineering, and they were going to do it right. Doing software “right” had meant the Top Down Software Process model, and it was Dogma. The revolutionaries wanted the Waterfall Process model.

The very first process model was the Heroic model. The “heroes” were math PhDs and electrical engineers brainstorming how to make computers work for the first time. Computers came out of the effort to win World War II. They were labor-saving tools for mathematicians working on trigonometric artillery tables. Later, computer technology began to incorporate control theory from aviation.

AT&T, the heavily regulated telephone monopoly, used computers to squeeze even more customer calls into its widely installed base of wires and call centers, driving down its costs and increasing profits without having to get an Act of Congress.

But the Heroic process model did not scale to larger problems.

By the 1960’s, computers were a big business. Software to control them was being written to solve large, complex problems, with a lot of money, equipment, and even lives at stake. There were spectacular project failures. Rockets flew off course. Phone calls were noisy or not there at all. Software was delivered late, buggy, or not at all.

A new generation of wise men of computers devised a revolution. They replaced the Heroic process model with the Top Down Process Model:

1) gather requirements from customers

2) decompose the problem into functionality to meet the requirements

3) design modular systems to meet the functionality

4) implement the design

5) test

6) document

7) maintenance

The Top Down Process scaled to larger problems than the Heroic model of just putting a bunch of smart guys in a room and hoping for the best.

As time went on, the space race drove software to new heights, with problems of complex orbital math, control, and communications to solve. Computers also started to be used in commerce in a very big way. Banks and stock markets realized that computers could record large volumes of transactions quickly, cheaply, and accurately.

The apex of the Top Down model was CASE (Computer Aided Software Engineering) in the early 1980’s. The idea was that you could have a design that was so good, so clear, and so coherent that you could capture it all in “The Tool,” push a button, and a working system would pop out the back end. This idea was very popular with large software customers like the Defense Department, which had spent hundreds of millions of dollars on systems that, in the end, didn’t really do everything that the DoD wanted.

Twenty years had passed. There were spectacular failures. Banks lost track of money. Radar systems would not talk to the alert system, which would not talk to the missile systems. Software was delivered late, buggy, or not at all.

In the 1980’s, a new generation of wise men of computers devised a revolution, the Waterfall Process model:

1) gather requirements from customers

2) decompose the problem into functionality to meet the requirements

3) design modular systems to meet the functionality

4) implement the design

5) test

6) iterate: if not done, goto 1

7) document

8) maintenance

Step 6, planned iteration, was a big deal. They were dealing with the scaling problem by admitting that problems are large and complex and you could not just be “smart enough” to design it right on the first pass. Plan to iterate. Design, build, test, repeat.

RAD (Rapid Application Development) was the vanguard of the model. If you did your software in rapid iterations, you would arrive at a working solution in an economically feasible amount of time.

Huge and important systems were developed. A personal computer with word processing, spreadsheets, and email went onto the desk of nearly every office worker.

The internet with email and the web revolutionized commerce and communication. Google organized and indexed the internet and effectively made everyone smarter.

Twenty years passed. The problems scaled larger. There were spectacular failures. The internet is full of security threats from crooks. The FBI could not get its case files into the hands of its agents and other law enforcement. The Vista operating system was delivered years late, with many of its original features removed. Software was delivered late, buggy, or not at all.

In the 2000’s, another generation of wise men of computers had a revolution, Agile Development:

1) gather requirements from customers

2) re-factor the problem into functionality to meet the requirements

3) write automatic tests

4) implement the design

5) iterate: if not done, goto 1

6) document

7) maintenance

The software design phase was compressed even further, and testing moved from step 5 to step 3. The iteration cycles are even shorter than in RAD.

Good software is being written this way. Automatic testing is a very powerful tool for writing modular code with good interfaces. It allows our projects to be robust in the face of change.
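
As a toy illustration (plain asserts, no particular framework): an automatic test pins down what a module’s interface promises, so you can change the implementation and know within seconds whether you broke it.

#include <cassert>

// a tiny module with a well-defined interface
int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

// the automatic test: rerun it after every change
int main()
{
    assert(clamp( 5, 0, 10) ==  5);   // in range, unchanged
    assert(clamp(-3, 0, 10) ==  0);   // clipped to the low bound
    assert(clamp(99, 0, 10) == 10);   // clipped to the high bound
    return 0;                         // silence means the interface still holds
}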

So every twenty years, a new generation of wise men has advanced the state of the art with a revolution:

Heroes of computers in the 1940’s

Top Down model in the 1960’s

Waterfall in the 1980’s

Agile in the 2000’s

Each revolution had to live with and accommodate the old ways of doing things, because they still worked for a given class and size of problem.

But the problems will continue to grow in size and complexity. Agile programming makes testing a core principle, and that is good.

However, I think that in fifteen to twenty years, a new generation of wise men of computing will stage a revolution. A new process model will replace Agile as we get to the limits of what it can do. Unit testing code, even with 100% code coverage, does not mean you test 100%, 50%, or even 10% of the states and interactions of the code. Multi-threading and distributed computing will make the systems larger and more diffuse, and the complexity of the interactions will dominate the complexity of a single component. We want to be able to change parts of large distributed systems while they are still running - how do you test that completely? Software will get delivered late, buggy, or not at all.

Around 2020 or so, a new generation of wise men of computing will have to have a revolution. The problems will have scaled and we will have to do something new.

Viva La Revolution!

Monday, May 28, 2007

Computer Science: Hello World

Hello, World.

This is my blog. I am your host, Karl Meissner. My web site is here.

I am a professional software developer with an interest in good, solid code.
I wanted to blog about some of my areas of interest.

These include programming video games, automation, controllers, compilers, parsers, 3D graphics, Artificial Intelligence, and mobile platforms.

I will review technologies and tools that I use when developing. Good tools make us smarter.

Finally, I wanted to share some reflections on being a software contractor in the post-dotcom world and being successful at it.

For my first topic, I wanted to talk about the computer science classic program “Hello, World.”

In most colleges and universities, fresh young Computer Science majors take some course like Programming 101. It is an informal tradition that the first program taught is called “Hello, World.” It is a short program that prints the text “Hello, World” to the screen.
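
In C++, for example, the whole program is only a few lines:

#include <iostream>

int main()
{
    std::cout << "Hello, World" << std::endl;   // print the text to the screen
    return 0;
}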

When I took the course, the programming language was Modula-2, a derivative of Pascal. These days, it could be Java. Other courses might teach C++, Simula, Ada, Lisp, COBOL, Perl, Ruby, Python, or Fortran. Here is a list of many “Hello, World” programs in many languages. Look at all the different ways of doing the same thing!
These are all different languages, all with different purposes, strengths, and weaknesses. However, “Hello, World” is usually the first thing taught.

With so many languages that work in so many different ways, why do classes start with “Hello, World”?

Any general purpose programming language has to be able to generate some output that a person can read. The early computers were text-only, on a single console. We are still rooted in that tradition even after decades of GUIs and the internet. Just about all programming languages can output the text “Hello, World.”

Programming is largely an exercise of design and craftsmanship. There is an entire publishing industry made up of Manuals, Guides, and “Introduction To’s.” But to really learn a craft, start with a simple example and then try it yourself. You become a better craftsman when you practice your skill. We start with the example, “Hello, World,” then we try to do it.

The program generates some output. If you can generate output, you can see it working. The programmer can test his work. This is one of the fundamental principles of good development:
Do some work,
Test it.
Does the program generate the output at all? Can the student make that happen? Is the text correct? Did it show up in the right place?

Inevitably, our craftsmanship falls short of the task. The world is full of exceptions, so the code has bugs. For some reason, the output is wrong: the text is wrong, the database entry is not correct, the sum is not correct, the web page does not load. There are many reasons the code can be in error. If we understood our creation perfectly and could hold it all in our minds, we could simply analyze the code and find the mistake that way.

Most programs are far too big and do too many complicated things for that. We illuminate the workings of our code with output: messages sent to the screen or to a log so we can trace the execution of the code and see what happened. It is one of the fundamental tools we have as programmers.
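
A minimal sketch of the idea (the function and its names are made up for illustration): sprinkle a few trace messages through the code, and the log tells you how far the program got and with what values.

#include <iostream>

// a hypothetical game update step, instrumented with trace output
void update_player(int id, float x, float y)
{
    std::cerr << "update_player: id=" << id
              << " pos=(" << x << ", " << y << ")" << std::endl;

    // ... the real work would happen here ...

    std::cerr << "update_player: done" << std::endl;
}

int main()
{
    update_player(1, 10.0f, 20.0f);
    return 0;
}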

So all programming courses start with teaching “Hello, World,” not because it is a silly tradition but because sending text to the screen is one of the fundamental debugging tools of any programmer.

So Hello, World!

Karl