The Naked Scientists

The Naked Scientists Forum

Author Topic: How do they keep squeezing more and more transistors into computer chips?  (Read 7831 times)

Gregg

Gregg asked the Naked Scientists:
   
How do they keep squeezing more and more transistors into computer chips?

Where are we going with this in the future?
 
Have a very happy holidays,
 
Gregg (from Montreal, Canada).

What do you think?
« Last Edit: 06/01/2010 12:30:02 by _system »


 

Offline LeeE

They get more transistors into computer chips by making them smaller.  This has two benefits: obviously, you can get more transistors into the same area of silicon, but the smaller feature sizes also mean that less energy is required and less heat is produced, which then needs to be dissipated if the chip isn't going to overheat and destroy itself.  This reduction in the heat that's generated also allows hot parts of the chip to be placed closer together than is possible with larger feature sizes and greater heat generation.
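To put rough numbers on that, here's a back-of-the-envelope sketch (Python, with purely illustrative values rather than any particular process node) of the classic constant-field scaling argument: dynamic switching power per gate goes roughly as C·V²·f, so if the switched capacitance and supply voltage both shrink with the feature size, the power per transistor falls steeply.

Code:
# Back-of-envelope dynamic power scaling; all values are illustrative only.
# Dynamic switching power per gate: P = a * C * V^2 * f
# (a = activity factor, C = switched capacitance, V = supply, f = clock)

def dynamic_power(c_farads, v_volts, f_hertz, activity=0.1):
    """Average dynamic power of one gate, in watts."""
    return activity * c_farads * v_volts ** 2 * f_hertz

p_old = dynamic_power(c_farads=2e-15, v_volts=1.2, f_hertz=2e9)

# Shrink linear dimensions by a factor k; under ideal constant-field
# scaling, capacitance and supply voltage scale down by 1/k as well.
k = 1.4
p_new = dynamic_power(c_farads=2e-15 / k, v_volts=1.2 / k, f_hertz=2e9)

print(f"power per gate, old node : {p_old * 1e6:.2f} uW")
print(f"power per gate, new node : {p_new * 1e6:.2f} uW")
print(f"reduction factor         : {p_old / p_new:.1f}x (~ k^3 = {k ** 3:.1f})")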

The current limits are due more to being able to accurately project the features (I believe that it's necessary to use UV light to project the features, as visible light has too long a wavelength for current feature sizes) and to fabrication chemistry; you need to be able to accurately etch features down to 22-32 microns atm.

If we're able to get around these issues to continue shrinking the feature sizes we'll eventually run into quantum problems due to the small number of atoms that make up the features.
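Just to give a feel for how few atoms are left at these scales, here's a crude estimate (Python; it only counts silicon lattice cells across a feature using the ~0.543 nm lattice constant, so it's an illustration rather than real device physics):

Code:
# Rough count of silicon lattice cells spanning a feature width, to show
# why "small number of atoms" becomes a real issue as features shrink.

SI_LATTICE_NM = 0.543  # silicon lattice constant, roughly 0.543 nm

for feature_nm in (90, 45, 22, 10, 5):
    cells = feature_nm / SI_LATTICE_NM
    print(f"{feature_nm:>3} nm feature ~ {cells:5.0f} lattice cells across")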

It is largely because of these factors that multi-core processors have now become mainstream.  It's far easier to increase the power of a processor by duplicating it than by trying to make it run faster.  The Intel 'Netburst' architecture of the Pentium 4 chips was the last real attempt to increase processing power just by increasing the speed at which the chip ran; it wasn't a great success.
« Last Edit: 06/01/2010 15:40:00 by LeeE »
 

Offline graham.d

All the major semiconductor companies are trying to keep up with "Moore's Law" - really Moore's observation and extrapolation (see http://en.wikipedia.org/wiki/Moore's_law ). Even Gordon Moore realises it has to stop sometime, since exponential growth must have a limit. The problem now is that, to stay in the race, the costs are becoming astronomical. Intel invest in many blue-sky programmes to try to achieve the next generation of scaling; in fact they try to optimise their return on investment, and rather than doing this continually they skip a step. Each time they do this they are unsure of which route will be successful, and the investments are truly enormous. I saw an Intel presentation recently that justified this expense and it was very convincing. Many companies (and many countries) simply cannot afford the expense and the risk, which will probably be the ultimate limitation.
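For anyone who wants to see what the observation-and-extrapolation amounts to numerically, here's a tiny sketch (Python, assuming the commonly quoted doubling of transistor count roughly every two years and taking the ~2,300-transistor Intel 4004 of 1971 as a baseline; the exact figures don't matter, only the exponential shape):

Code:
# Moore's observation as a simple exponential: transistor counts double
# roughly every two years.  Baseline and period are illustrative.

DOUBLING_PERIOD_YEARS = 2.0
BASELINE_YEAR, BASELINE_COUNT = 1971, 2300   # Intel 4004, ~2300 transistors

def projected_count(year):
    """Extrapolated transistor count for a given year."""
    return BASELINE_COUNT * 2 ** ((year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_count(year):,.0f} transistors")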

As Lee said, most photolithography is done using deep UV. Mask design is quite complicated now, with the shapes to be etched predistorted to compensate for the effects of the processing and for the interaction between nearby shapes. Most fine-geometry chips are now laid out with regular arrays because one transistor's characteristics only match another's if they have similar surroundings. Even so, the matching, which is quite important, is getting much worse as geometries shrink.

UV lithography has survived: it was thought it might be superseded by electron beam lithography (or even X-ray lithography) and/or direct-write techniques, but it has proved more successful. It would not have been so clear which horse to back in the past, and it may all change again at some point.

 

Offline Geezer

I completely agree with Graham and Lee. Just to add a little perspective:

It's interesting that many, if not all, of the complexity and size barriers that were proposed in the early days of integrated circuits have not merely been knocked down; they have been blown to smithereens. For example, not all that long ago, people were saying that "serious" CPUs (central processing units) could never be put on a single chip.

If you had a time machine, and you could take a processor or memory chip from the PC in front of you back and show it to an engineer of forty years ago, he/she would probably think you were completely insane, or from another planet!

The progress in this field has been nothing less than astounding.
 

Offline Bored chemist

LeeE, you might want to check the units on this "you need to be able to accurately etch features down to 22-32 microns atm."
 

Offline Geezer

Good catch. I think he meant nanometers.
 

Offline LeeE

Oops! - indeed.  Umm... it was because I was standing on my head when I typed that bit :D
 

Offline Nizzle

Quote from: LeeE
If we're able to get around these issues to continue shrinking the feature sizes we'll eventually run into quantum problems due to the small number of atoms that make up the features.

That's when we'll have to switch to Photonic Computing
 

Offline graham.d

The problem of increasing variation (transistor to transistor) is partly to do with feature precision (horizontal dimensions) but already more to do with the reduction in the number of atoms. The threshold voltage of a MOS transistor depends on a number of physical parameters that, at very fine geometries, show significant statistical variation because they are, in turn, dependent on numbers of atoms (say of the implanted dopant in the substrate) or on the variation in the thickness of the gate oxide, which is already only a few atoms thick. There are many other problems too, though it is interesting to look back at papers over the years predicting limits to scaling that have since been superseded. The gate oxide in fine geometries, for example, is now usually a higher-dielectric-constant material, so that it has the same electrical qualities as SiO2 but can be thicker and have a higher breakdown voltage.
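To see why "numbers of atoms" translates directly into statistical spread, here's a crude sketch (Python; the channel dimensions and doping level are made up purely for illustration). The number of dopant atoms under a tiny gate is roughly Poisson-distributed, so the relative variation grows as 1/sqrt(N) as the device shrinks:

Code:
# Crude illustration of random dopant fluctuation: the dopant count in a
# small channel volume is roughly Poisson-distributed, so its relative
# spread goes as 1/sqrt(N).  Dimensions and doping are illustrative only.
import math

DOPING_PER_CM3 = 1e18  # illustrative channel doping, atoms per cm^3

def dopant_stats(length_nm, width_nm, depth_nm):
    volume_cm3 = (length_nm * width_nm * depth_nm) * 1e-21  # nm^3 -> cm^3
    mean_atoms = DOPING_PER_CM3 * volume_cm3
    rel_sigma = 1.0 / math.sqrt(mean_atoms)  # Poisson: sigma = sqrt(N)
    return mean_atoms, rel_sigma

for size in (90, 45, 22):
    n, spread = dopant_stats(length_nm=size, width_nm=size, depth_nm=10)
    print(f"{size:>2} nm device: ~{n:6.0f} dopant atoms, ~{spread:5.1%} spread")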

I think optical (photonic) computing is a long way off challenging more conventional processes except in special applications, like Fibre Optic communications where it can be used directly. But there is so much investment in so many areas you can never be sure where the next breakthrough may come.

I do think we are getting close to the limits of scaling for conventional silicon CMOS. 25nm is a huge challenge and I can believe there could be one or two more steps beyond that with exponentially rising development costs but it will have to stop some time. One way is to build vertically as well as horizontally. This is done already, with relatively low-tech processing, in making MEMS devices and has also been done (in a limited way) for some years in making DRAMs with deep trenches to get the storage capacitors big enough. Building devices on amorphous silicon (deposited rather than grown) is a large area of research which could also lead in this direction by allowing transistors to be placed and connected over other transistors.

Generally, though, I feel there will be evolution rather than revolution.
 

Offline LeeE

I think photonics is where we're heading, but I agree with graham.d that it's quite some way off right now.  The next 'thing' has already started: MPP, or Massively Parallel Processing.  However, the idea behind MPP is almost as old as the concept of digital computing itself, so I'd agree that it's evolutionary rather than revolutionary, at least as far as the hardware is concerned.

From the software side though, something of a revolution is still needed, so that the problems that seem to be essentially serial by nature may be tackled in parallel, otherwise MPP isn't going to bring any real benefits.
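That point is essentially Amdahl's law; a quick sketch (Python) shows why even a small serial fraction caps the benefit of throwing more cores at a problem:

Code:
# Amdahl's law: the overall speedup from N cores when a fraction 's' of
# the work is inherently serial is 1 / (s + (1 - s) / N).

def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.5, 0.1, 0.01):
    row = ", ".join(f"{n} cores: {amdahl_speedup(s, n):5.1f}x"
                    for n in (2, 16, 1024))
    print(f"serial fraction {s:4.0%} -> {row}")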
 

Offline Geezer


Quote from: LeeE
From the software side though, something of a revolution is still needed, so that the problems that seem to be essentially serial by nature may be tackled in parallel, otherwise MPP isn't going to bring any real benefits.


I think it's ironic that digital hardware design is now far more like writing software than designing hardware! I've always been of the opinion that software should be designed in a graphic form, but I never could convince anyone that was the way to go.
 

Offline LeeE

Well, stuff like CPUs and ASICs is so complex now that you need software to keep track of everything and where it is (and to look after those heating and power supply issues).  As I've mentioned elsewhere recently, though, I think that all software will eventually be written by AIs that will be able to generate and compare many different algorithms to find optimum solutions, much more quickly than any human-based software development team could ever possibly do - but that's probably some way off too.
 

Offline Geezer

It certainly requires lots of computation to design a chip, so, yes, there will always be massive amounts of software involved.

I was referring to HDLs - Verilog for example.
 

Offline LeeE

Hmm... not familiar with HDLs or Verilog.  I'll look it up sometime.
 

Offline Geezer

Sorry Lee. I should have provided a bit more info. HDL stands for "Hardware Description Language."

Engineers no longer design chips by drawing a schematic with gates, flip-flops, registers and the like. They define the chip design in an HDL, which looks a lot more like a computer language than anything else.

Personally, I think it's a retrograde step, and I've seen it lead to some gigantic foul-ups, but it's the way the industry has been headed for quite a while.
 

Offline graham.d

As someone who has been in the industry a long time, I needed to be convinced regarding the use of HDLs (Hardware Description Languages). However, once engineers get used to the idea, it seems to have advantages over schematics. Newer methodologies have been developed to extend the HDL (usually specifically Verilog or VHDL) to aid verification of the written HDL against the initial specification requirements; OVM (Open Verification Methodology) is an example. Intermediate languages, which may include SystemVerilog or even C++, can be used at this stage too. An engineer can produce a written and detailed description of requirements and then verify his HDL against this. These methods extend the HDL concept into areas where many problems have arisen (e.g. where the engineer makes a complete design but has failed to translate all the detail of the initial requirements).

Except in small analog designs, schematics are now of little use except to give some visualisation at a very high level. I have to admit it helps me to see schematics at a block level, but I am also aware that >1 million gate devices are quite opaque however they are defined. Quite high-level structures, of any number of gates, can more easily (certainly just as easily) be built hierarchically in statements of how they are to work (in HDL) than constructed graphically. The completed blocks of HDL are in a form described as RTL - Register Transfer Level. Verification of the RTL against the initial specification requirements, and getting it right, is usually the longest task. [On the other hand, for analog and RF design, nothing has replaced schematics yet, except for very well defined circuits with non-challenging requirements.]
 
The design can then be turned into a gate-level description by "synthesis", based on the targeted fundamental cell library available to be used in constructing the finished chip. Simulations can verify the design, top-down, against the initial specification requirements. Parameters that guide the synthesis tool include specific timing constraints and, if chasing the fastest speed of operation, the process can also take many iterations and tweaks to optimise. The synthesis will also perform logic reduction and try to minimise redundant logic paths. It is often astounding how much the tool can improve efficiency over a human-constructed design that appears, at first sight, very efficient. The synthesised design is usually checked by "formal verification" to ensure it is logically identical to its description form. Further verification is usually carried out by simulation, which will usually be a mix of directed vectors and randomly generated vectors, at a number of stages. It often comes as a shock to designers that random vectors find bugs when they think they have thought of everything! With very large designs, time rarely allows for complete verification by simulation and there is often a situation of diminishing returns in finding bugs. Some level of bug content is likely (whatever is claimed) and workarounds have to be found, or else an expensive redesign is needed.
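As a toy illustration of the random-vectors idea (a Python sketch only, standing in for the real HDL tooling described above): a reference model of a 4-bit adder is checked against an "implementation" with a deliberately planted carry bug, and random stimulus finds the corner cases that a small hand-written test set might miss.

Code:
# Toy analogy for verification with random vectors: compare a golden
# reference model against an 'implementation' with a planted carry bug.
import random

def reference_adder(a, b):
    """Golden model: 4-bit add, 5-bit result including carry out."""
    return (a + b) & 0x1F

def buggy_adder(a, b):
    """Faulty 'implementation' that drops the carry out of the top bit."""
    return (a + b) & 0x0F

random.seed(0)
mismatches = []
for _ in range(1000):                       # randomly generated vectors
    a, b = random.randrange(16), random.randrange(16)
    if reference_adder(a, b) != buggy_adder(a, b):
        mismatches.append((a, b))

print(f"{len(mismatches)} failing vectors; first few: {mismatches[:3]}")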

When a complete design is synthesised satisfactorily, it is then placed and routed onto silicon using further sophisticated software tools (800-page instruction manuals :-) ). There are quite a few stages involved here. Balancing clock inputs (to avoid races) is done by synthesis of clock trees, and gates are sized to control the various delays involved in routing. Cross-talk between tracks is taken into account, as are voltage drops in any supply lines (which need to be minimised). This task can involve many iterations on a powerful computer, each one lasting anything from hours to days.

Physical verification of the design has then to be done, although the backend tools are fairly good about not breaking design rules.

I have missed out lots of steps here, including things like scan-path insertion to enable production testing. Generally a 1 million gate design will take anything from 1 to 2 years to first silicon (though it is hard to be prescriptive - some designs are much more challenging than others). The tools and methodologies are sufficiently good that first-time success is a realistic goal, although it is often the case that an accumulation of minor bugs will demand a re-design at some point, and often the bugs are a misunderstanding of requirements rather than a fault in the process.
 

Offline Geezer

No disagreement Graham. I just happen to have a personal preference for graphic representations. I'm sure if enough energy was put into developing the tools, it would be possible to view an ASIC design from a very high level, then "drill down" to progressively lower levels, exposing more detail at each level. However, I don't think it is perceived that there is any need for this, and it may only be an exercise in "cosmetics".

To be a bit thought provoking, I used to tell people that software should be designed in graphic form. I'd say,

"Suppose you wanted to design and build the Forth Bridge, but instead of drawing it, you decided to describe it in a special language you had invented for the purpose. How likely do you think it is the bridge would ever get built, and even if it was, how long would it be before it collapsed?"

There are some significant flaws in this argument of course, but it does make people think for a moment.
 

Offline LeeE

Unless the definition of the Forth Bridge was flawed, then it would work and be as durable as it had been designed to be.  You might not have ended up with the same aesthetics though, which is really what makes the Forth Bridge a little bit special.
 

Offline Geezer

Quote from: LeeE
Unless the definition of the Forth Bridge was flawed, then it would work and be as durable as it had been designed to be.  You might not have ended up with the same aesthetics though, which is really what makes the Forth Bridge a little bit special.

True, as long as everyone on the project fully understood the language and all its subtleties. That seems less than likely to me.
 

Offline graham.d

On the other hand, it would be a better analogy if you had to design the Forth Bridge with only a handful of different standard components, albeit extremely well characterised ones, and if the end characteristics for the bridge were already specified in a form akin to simple logic statements.

Anyway, there is a whole edifice of software (albeit extremely expensive software) designed to assist in the methodology for IC design now being used, so it is not as though there is a choice :-) Believe me, as someone with a background more in analog design, I took some convincing by the "bit jockeys". However, I do see huge improvements now in the speed of correct and debugged digital design. I know digital designers who were sceptical too but are now sold on the idea. Of course there may be a revolutionary change to some graphical input in the future, but for the moment HDLs rule the roost.
 

Offline Geezer

It is unlikely my opinions are going to make the slightest difference  :D

BTW, I once had an ASIC design engineer working for me who point-blank refused to draw NAND gates as inverted-input OR gates, or NOR gates as inverted-input AND gates. He always did a little mental gymnastics to figure out the logic of the circuit. I gave up trying to persuade him to change because his designs nearly always worked first time!
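For anyone wondering why drawing a NAND as an inverted-input OR is legitimate in the first place, it's just De Morgan's theorem; a quick check over all input combinations (Python):

Code:
# De Morgan: NAND(a, b) == OR(NOT a, NOT b) and NOR(a, b) == AND(NOT a, NOT b).
for a in (0, 1):
    for b in (0, 1):
        assert (not (a and b)) == ((not a) or (not b))  # NAND = inverted-input OR
        assert (not (a or b)) == ((not a) and (not b))  # NOR  = inverted-input AND
print("De Morgan equivalences hold for all input combinations")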

 

Offline graham.d

There seems to be a correlation between brilliance and perversity :-)

Years ago I was completely converted to using the DIN standard (used in Germany a lot) for logic gates rather than the US ones. They are MUCH more compact and, once used to them, give a better intuitive feel. Complex gates are a doddle with them too.

 
