Shee Hong asked:
Hi Dr Chris, Dr Ben,
How does a calculator work? How can it make complex calculations within nanoseconds? Also, how does it display the result on the screen?
We posed this question to maths teacher Jeffrey Zilahee from mathgurus.info...
Jeffrey - We all know that calculators are these fast little machines that can do calculations at incredible speed and have served to make humanity a more computationally exact species, but how exactly do they work? Well, whether you're talking about a scientific, financial, or graphing calculator, or even the calculator on your phone, they all work in a similar fashion. In a nutshell, calculators, just like their big brother the computer, work by understanding everything in terms of two states. We call this binary, and specifically, those two states are given as either a zero or a one.
So, when we press buttons on a calculator, those buttons are connected to sensors that send electrical currents to the integrated circuitry of the calculator. This circuitry contains transistors that build up a logical framework for solving any given calculation, and the more transistors present, the more advanced the functionality of the calculator is likely to be.
Transistors use electricity to be in an on state, indicated by a one, or an off state, indicated by a zero. So when a calculator wants to add two numbers, it first converts those numbers into binary. For example, a four would be represented as 1-0-0 and a two would be represented as 1-0. From there, the process of addition is dictated by each column summing to 0, 1, or two 1s, in which case a one carries into the next column, since a single binary digit cannot hold a 2. Once the calculator has the answer, since it is in binary, it turns on a series of lines and/or pixels to create the visual match of the number in the form we understand, which is decimal, or as mathematicians call it, base 10.
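The column-by-column carrying Jeffrey describes can be sketched in a few lines of Python. This is only an illustration of the idea, not how the hardware is actually wired; the function name `add_binary` is just for this example.

```python
# Sketch of column-by-column binary addition, as described above,
# using 4 (binary 100) plus 2 (binary 10) as the example.
def add_binary(a: str, b: str) -> str:
    """Add two binary strings the way an adder circuit carries bits."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    result = []
    carry = 0
    # Work from the rightmost column, just like pencil-and-paper addition.
    for i in range(width - 1, -1, -1):
        total = int(a[i]) + int(b[i]) + carry
        result.append(str(total % 2))  # the digit written in this column
        carry = total // 2             # a "two" becomes a carry to the next column
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('100', '10'))  # 4 + 2 gives '110', which is 6 in decimal
```

Running it shows 100 + 10 = 110, i.e. 4 + 2 = 6 once translated back to base 10.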
Part of the reason why calculators are so quick is that, at their core, they rely on electrical impulses, which travel at a sizeable fraction of the speed of light.
Diana - So, calculators, much like computers, translate everything into binary or base 2 because it allows numbers to be translated into electrical signals that are either on ‘1’ or off ‘0.’
To display an answer, the calculator then sends this information to its LCD screen. As those of you with any sort of LCD TV, monitor, or clock may know, these displays work by placing a voltage across a layer of molecules sandwiched between filters, and changing the voltage makes these liquid crystals appear opaque or transparent.
I'm drawing a blank on this one. Geezer, Tue, 13th Sep 2011
Methinks the good doctors have been resting on their carriage return a bit too much.
If I remember correctly, calculators commonly represent numbers in some form of BCD (binary coded decimal) rather than using the binary representation used in most computers. Presumably this is to avoid the conversion of inputs and outputs between decimal and binary even though it requires more bits to represent a number.
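To illustrate the difference the poster is pointing at, here is a minimal sketch of BCD encoding in Python: each decimal digit gets its own 4-bit group, rather than the whole number being encoded in plain binary. The helper name `to_bcd` is just for this example.

```python
# Sketch of binary-coded decimal (BCD): each decimal digit is stored
# in its own 4-bit nibble, instead of encoding the whole number in binary.
def to_bcd(n: int) -> str:
    return ' '.join(format(int(digit), '04b') for digit in str(n))

print(to_bcd(42))        # '0100 0010' -- one nibble per decimal digit
print(format(42, 'b'))   # '101010'    -- the plain binary form, for comparison
```

Note that BCD spends 8 bits on 42 where plain binary needs only 6, which is the "more bits" trade-off mentioned above.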
I was actually only counting from 0 to 7 (less than one decimal digit), so there would be no significant difference between what I wrote and BCD.
I would also like to note that calculators generally use series approximations to calculate each significant figure of complex calculations. So, for example, when you type in sin(2.345), the calculator evaluates the Taylor series for sine, which is sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
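As a sketch of how summing that series plays out in practice (real calculators typically use more refined methods, such as CORDIC, but the idea is the same), here the first ten terms are enough to match the built-in sine to many decimal places. The function name `taylor_sin` is just for this example.

```python
import math

# Sketch: approximate sin(x) by summing terms of its Taylor series,
# sin(x) = x - x^3/3! + x^5/5! - ...
def taylor_sin(x: float, terms: int = 10) -> float:
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

print(taylor_sin(2.345))  # agrees with math.sin(2.345) to many decimal places
print(math.sin(2.345))
```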
Sure! To be clear, a series itself is not an algorithm, but it can be part of an algorithm. Some series converge to a number while others are infinite in nature. Another clear example is
I would say that an important part of understanding what allows calculators to do what they do is that they are digital. Although "digital" and "binary" are not identical, they go hand in hand: digital means a signal takes one of a few discrete states (in the simplest case, on or off), and binary (base 2) is the lowest useful numerical representation whose values are, essentially, digital (on or off). Otherwise, a calculator would need, for example, to quantize a value (let's say a base 10 digit) as a 4 or 5 or 6, etc., every time it wanted to use it. A binary value is quantized as only one of two states. In electronics, it's virtually error-free to distinguish between a circuit presenting no voltage and a circuit presenting maximum/saturated voltage; there is virtually no mistaking the two, as the circuitry behaves as differently as possible (instead of with 10 different voltage levels, as it would with a base 10 digit).
Both the Olivetti and the Friden were desktop calculators and plugged into the wall for power.
I also have an Intel 4004 chipset from an old calculator. Originally, calculators cost thousands of dollars and were basic four-function machines. Now you can buy a cheap one for under $5 that outperforms that one and runs from a single AA cell for years. SeanB, Thu, 8th Mar 2012
Another approximation: Calculators commonly use Newton's method to calculate square roots
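Newton's method for square roots boils down to repeatedly averaging a guess with n divided by that guess. A minimal sketch in Python (the name `newton_sqrt` is just for this example, and it assumes n > 0):

```python
# Sketch of Newton's method for sqrt(n): repeatedly replace the guess
# with the average of the guess and n/guess until it stops changing.
def newton_sqrt(n: float, tolerance: float = 1e-12) -> float:
    guess = n / 2 if n > 1 else 1.0  # any positive starting guess works
    while True:
        better = (guess + n / guess) / 2
        if abs(better - guess) < tolerance:
            return better
        guess = better

print(newton_sqrt(2))  # converges to 1.41421356... in a handful of iterations
```

The iteration converges quadratically, roughly doubling the number of correct digits each step, which is part of why it suits cheap hardware.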
Part of the definition of an algorithm is that it always halts after a finite number of steps.
Converting a number between binary and decimal representations can take on the order of a hundred operations, and many decimal values have no exact binary representation at all, which is what produces "rounding" errors.
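The classic demonstration of this, in Python: 0.1 has no exact binary representation, so adding it to itself ten times does not give exactly 1.0. `decimal.Decimal` reveals the value the machine actually stores.

```python
from decimal import Decimal

# 0.1 cannot be represented exactly in binary floating point,
# so ten additions of it drift slightly away from 1.0.
total = sum([0.1] * 10)
print(total == 1.0)   # False
print(total)          # 0.9999999999999999
print(Decimal(0.1))   # the binary value nearest to decimal 0.1
```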
A test I always make when I pick up a scientific calculator is to ask it for the cube root of -1. The better ones do it, but many just go "bleep bleep, it can't be done".