Remember All. The Evolution Of Computer Memory - Alternative View

In ancient times - almost 80 years ago, at the dawn of computing - the memory of computing devices was usually divided into three types: primary, secondary, and external. No one uses this terminology anymore, although the classification itself survives to this day: primary memory is now called RAM, secondary memory means internal hard drives, and external memory takes the form of all kinds of optical discs and flash drives.

Before starting our journey into the past, let's look at the classification above and see what each type of memory is for. A computer represents information as a sequence of bits: binary digits with values of 1 or 0. The generally accepted unit of information is the byte, usually consisting of 8 bits. All data used by a computer occupies a certain number of bytes. For example, a typical music file is 40 million bits, or 5 million bytes (about 4.8 megabytes). The central processor cannot function without a memory device, because all of its work comes down to receiving data, processing it, and writing it back to memory. That is why the legendary John von Neumann (whose name we have mentioned more than once in our series of articles about mainframes) came up with an independent structure inside the computer where all the necessary data would be stored.
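The arithmetic in the music-file example is easy to check for yourself. A quick Python sketch (assuming binary megabytes, i.e. 1 MB = 1024 * 1024 bytes):

```python
# Verify the byte arithmetic from the text: a 40-million-bit file
# expressed in bytes and in binary megabytes.
bits = 40_000_000
num_bytes = bits // 8                    # 8 bits per byte
megabytes = num_bytes / (1024 * 1024)    # binary megabytes

print(num_bytes)            # 5000000
print(round(megabytes, 1))  # 4.8
```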

This classification also divides media by speed (and power) characteristics. Fast primary memory (random-access memory) is used today to store the critical information the CPU accesses most often: the operating system kernel, the executable files of running programs, intermediate results of calculations. Access time is minimal, just a few nanoseconds.

Primary memory communicates with a controller located either inside the processor (in the latest CPU models) or in a separate chip on the motherboard (the north bridge). RAM is relatively expensive, and it is also volatile: turn off the computer or accidentally pull the power cord out of the socket, and all the information is lost. That is why files are stored in secondary memory, on hard disk platters. Information there is not erased by a power outage, and the price per megabyte is very low. The only drawback of hard drives is their slow reaction time, measured in milliseconds rather than nanoseconds.

By the way, an interesting fact: at the dawn of computing, primary memory was not separated from secondary. The processing unit was so slow that memory created no bottleneck, and working data and persistent data were stored in the same components. Later, as computers grew faster, new types of storage media appeared.

Back to the past

One of the main components of the first computers was the electromagnetic switch, developed by the famous American scientist Joseph Henry back in 1835, when no one even dreamed of computers. The simple mechanism consisted of a wire-wrapped metal core, a movable iron armature, and a few contacts. Henry's development formed the basis of the electrical telegraphs of Samuel Morse and Charles Wheatstone.

The first computer based on such switches appeared in Germany in 1939. Engineer Konrad Zuse used them to build the system logic of his Z2 machine. Unfortunately, the machine did not live long, and its plans and photographs were lost in the wartime bombing. Zuse's next computing device, the Z3, was released in 1941 and was the first program-controlled computer. The machine's main functions were implemented with about 2,000 switches. Konrad intended to move the system to more modern components, but the government cut off funding, believing Zuse's ideas had no future. Like its predecessor, the Z3 was destroyed in the Allied bombing raids.

Electromagnetic switches worked very slowly, but the development of technology did not stand still. The second type of memory in early computer systems was the delay line. Information was carried by electrical pulses, which were converted into mechanical waves that moved slowly through mercury, a piezoelectric crystal, or a magnetostrictive coil. A wave present meant 1; no wave meant 0. Hundreds or thousands of pulses could be in flight through the conducting medium at once. At the end of its path, each wave was converted back into an electrical pulse and sent to the beginning: the simplest refresh operation imaginable.
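The recirculation just described can be modeled as a ring buffer. Below is a minimal Python sketch (the class and method names are illustrative, not taken from any real system): stored bits continuously travel through the "medium" and are re-injected at the input, and reading is serial, since you must wait for each bit to come around.

```python
from collections import deque

# Toy model of a recirculating delay line: bits travel through the medium
# as a fixed-length queue; whatever emerges at the far end is re-injected
# at the input, refreshing the stored pattern on every cycle.
class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)      # pulses currently "in flight"

    def tick(self):
        bit = self.line.popleft()    # a wave reaches the end of the line
        self.line.append(bit)        # ...and is sent back to the start
        return bit                   # the bit is observable only now

    def read_all(self):
        # Serial read: wait for each bit to circulate past the output.
        return [self.tick() for _ in range(len(self.line))]

line = DelayLine([1, 0, 1, 1])
print(line.read_all())   # [1, 0, 1, 1]
```

Note that a full read leaves the stored pattern unchanged, exactly because every bit read out is immediately written back in.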

The delay line was developed by the American engineer John Presper Eckert. The EDVAC computer, introduced in 1946, contained two memory blocks with 64 mercury delay lines (about 5.5 KB by modern standards). At the time, this was more than enough for its work. EDVAC also had secondary memory: the results of calculations were recorded on magnetic tape. Another system, the UNIVAC I, released in 1951, used 100 delay-line blocks and had a complex design with many physical elements for storing data.

Delay-line memory looks more like a spaceship's hyperspace engine. It is hard to imagine, but such a colossus could store only a few bits of data!

Bobeck's children

Two rather significant inventions in the field of data carriers have so far remained behind the scenes of our story. Both were made by the talented Bell Labs employee Andrew Bobeck. The first, so-called twistor memory, could have been an excellent alternative to magnetic-core memory. It largely repeated the latter, but used magnetic tape instead of ferrite rings to store data. The technology had two important advantages. First, twistor memory could read and write a whole row of twistors simultaneously. Second, its production was easy to automate. Bell Labs hoped this would significantly reduce the price of twistor memory and capture a promising market.

The development was funded by the US Air Force, and the memory was to become an important functional element of the Nike Sentinel missile program. Unfortunately, work on the twistors dragged on, and transistor-based memory came to the fore. The market was never captured.

"Unlucky the first time, lucky the second," thought Bell Labs. In the early 70s, Andrew Bobeck introduced nonvolatile bubble memory. It was based on a thin magnetic film holding small magnetized regions (bubbles) that stored binary values. Before long the first compact module with a capacity of 4096 bits appeared: a device one square centimeter in size held as much as a whole board of magnetic cores.

Many companies became interested in the invention, and by the mid-70s all the major market players had taken up development in the field of bubble memory. Its non-volatility made bubbles an ideal replacement for both primary and secondary memory. But here too Bell Labs' plans did not come true: cheap hard drives and transistor memory cut off the oxygen supply to bubble technology.

Vacuum is our everything

By the end of the 1940s, the system logic of computers had moved to vacuum tubes (also known as electron tubes or thermionic valves). Along with computers, television, sound-reproduction equipment, and analog and digital devices of all kinds received a new impetus for development.

Vacuum tubes have survived in technology to this day. They are especially loved by audiophiles, who believe that a vacuum-tube amplifier circuit is a cut above its modern counterparts in sound quality.

Behind the mysterious phrase "vacuum tube" lies a structurally simple element resembling an ordinary incandescent lamp. A filament is enclosed in an evacuated bulb; when heated, it emits electrons, which fly toward a positively charged metal plate, so that under voltage a stream of electrons flows inside the tube. The vacuum tube can either pass or block the current flowing through it (states 1 and 0), acting as an electronic switching component. Vacuum tubes get very hot in operation and must be intensively cooled, but they are much faster than the antediluvian electromagnetic switches.

Primary memory based on this technology appeared in 1946-1947, when the inventors Freddie Williams and Tom Kilburn introduced the Williams-Kilburn tube. The data storage method was quite ingenious. Under certain conditions, a dot of light appeared on the face of the tube, slightly charging the surface it occupied, while the area around the dot acquired a negative charge (it was called a "charge well"). A new dot could be placed in the well, or the well could be left alone, in which case the original dot quickly disappeared. The memory controller interpreted these states as binary 1 and 0. The technology was very popular: Williams-Kilburn tube memory was installed in the Ferranti Mark 1, IAS, UNIVAC 1103, IBM 701, IBM 702, and Standards Western Automatic Computer (SWAC).

In parallel, engineers at the Radio Corporation of America, under the direction of Vladimir Zworykin, were developing their own tube, called the selectron. According to its authors' plan, the selectron was to hold up to 4096 bits of information, four times more than the Williams-Kilburn tube. It was estimated that about 200 selectrons would be produced by the end of 1946, but production proved very expensive.

By the spring of 1948, the Radio Corporation of America had not produced a single selectron, but work on the concept continued. Engineers redesigned the tube, and a smaller 256-bit version appeared. The mini-selectrons were faster and more reliable than Williams-Kilburn tubes, but cost $500 apiece, and that in mass production! The selectrons did, however, make it into a computing machine: in 1953 the RAND Corporation released a computer with the funny name JOHNNIAC (in honor of John von Neumann). The reduced 256-bit selectrons were installed in the system, and the total memory was 32 bytes.

Along with vacuum tubes, some computers of the era used drum memory, invented by Gustav Tauschek in 1932. The simple design consisted of a large metal cylinder coated with a ferromagnetic alloy. Unlike in modern hard drives, the read heads did not move over the cylinder's surface; the memory controller simply waited for the desired information to pass under the heads on its own. Drum memory was used in the Atanasoff-Berry computer and several other systems. Unfortunately, its performance was very low.

The selectron was not destined to conquer the computer market: these neat-looking electronic components have been left gathering dust in the dustbin of history, despite their outstanding technical characteristics.

Modern trends

At the moment, the primary memory market is ruled by the DDR standard, or more precisely by its second generation. The transition to DDR3 will take place very soon; all that remains is to wait for inexpensive chipsets supporting the new standard. Widespread standardization has made the memory segment too boring to describe: manufacturers have stopped inventing new, unique products, and all the work comes down to raising operating frequencies and fitting sophisticated cooling systems.

Technological stagnation and timid evolutionary steps will continue until manufacturers reach the limits of silicon (the material from which integrated circuits are made). After all, operating frequency cannot be increased indefinitely.

However, there is one catch. The performance of existing DDR2 chips is sufficient for most computer applications (complex scientific programs aside), and installing DDR3 modules operating at 1066 MHz and higher does not produce a tangible increase in speed.

Star Trek to the Future

The main drawback of memory, and of all other components based on vacuum tubes, was heat. The tubes had to be cooled with radiators, air, and even water. In addition, constant heating significantly shortened their service life: the tubes degraded in the most natural way. Toward the end of their lives they had to be constantly retuned and eventually replaced. Can you imagine how much effort and money it cost to service such computer systems?!

The strange texture in the photo is magnetic-core memory: the visible structure of one of the arrays, with its wires and ferrite rings. Can you imagine how much time it took to find a faulty element among them?

Then came the era of arrays of closely spaced ferrite rings, an invention of the physicists An Wang and Way-Dong Woo, refined by students under the direction of Jay Forrester at the Massachusetts Institute of Technology (MIT). Connecting wires ran through the centers of the rings at a 45-degree angle (four per ring in early systems, two in more advanced ones). Under current, the wires magnetized the ferrite rings, each of which could store one bit of data (magnetized - 1, demagnetized - 0).

Jay Forrester developed a scheme in which the control signals for many cores were carried by just a few wires. In 1951, magnetic-core memory (a direct analogue of modern random-access memory) was released. It went on to take its rightful place in many computers, including the first generations of mainframes from DEC and IBM. Compared with its predecessors, the new memory had practically no drawbacks, and its reliability was sufficient for military and even spacecraft use. After the loss of the Space Shuttle Challenger, which killed its seven crew members, the data recorded in the on-board computer's magnetic-core memory remained safe and sound.
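Forrester's few-wires trick, known as coincident-current addressing, can be sketched in code. This is a toy Python model, not period hardware: the class and method names are illustrative, and the half-current physics is reduced to the rule that only the core addressed by both its X and Y wires changes state. It also models core memory's destructive read, where sensing a bit erases it and the controller must write it back.

```python
# Toy model of coincident-current core addressing: each core sits at the
# crossing of one X and one Y wire. A half-strength current on a single
# wire is not enough to flip a core; only the core receiving half-currents
# on BOTH of its wires switches state.
class CoreArray:
    def __init__(self, rows, cols):
        self.cores = [[0] * cols for _ in range(rows)]  # 0 = demagnetized

    def write(self, x, y, bit):
        # Pulse X line x and Y line y together: only core (x, y)
        # sees the full switching current.
        self.cores[x][y] = bit

    def read(self, x, y):
        # Destructive read: driving the core toward 0 induces a pulse on
        # the sense wire only if it held a 1, so after sensing, the value
        # must be restored by a rewrite cycle.
        bit = self.cores[x][y]
        self.cores[x][y] = 0
        if bit:
            self.write(x, y, 1)   # rewrite cycle restores the data
        return bit

mem = CoreArray(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3))   # 1
print(mem.read(0, 0))   # 0
```

The read-then-rewrite cycle is one reason core memory's cycle time was noticeably longer than its raw access time.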

The technology was gradually improved: the ferrite beads shrank in size and the speed of operation grew. The first samples had an access time of about 60,000 ns; by the mid-70s it had dropped to 600 ns.

Honey, I Shrunk the Memory

The next leap in the development of computer memory came with the invention of the transistor and the integrated circuit. The industry took the path of miniaturizing components while increasing their performance. In the early 1970s, the semiconductor industry mastered the production of large-scale integrated circuits: tens of thousands of transistors now fit in a relatively small area. Memory chips with a capacity of 1 Kbit (1024 bits) appeared, along with small chips for calculators and even the first microprocessors. A real revolution had happened.

Memory manufacturers these days are more concerned with the appearance of their products: all the standards and characteristics are predetermined by committees like JEDEC.

Dr. Robert Dennard of IBM made a special contribution to the development of primary memory: he developed the first memory cell based on a single transistor and a small capacitor. In 1970, the market was spurred on by Intel (founded just two years earlier) with the introduction of the 1-Kbit i1103 memory chip. Two years later, this product was the world's best-selling semiconductor memory chip.

In the days of the first Apple Macintosh, a RAM block occupied a huge board (in the photo above), yet its capacity did not exceed 64 KB.

Highly integrated microcircuits quickly displaced the older types of memory. With the transition to this next level of development, bulky mainframes gave way to desktop computers. Primary memory was at last fully separated from secondary and took the form of individual chips with capacities of 64, 128, 256, and 512 Kbit and even 1 Mbit.

Eventually, primary memory chips moved from the motherboard onto separate modules, which greatly simplified installation and the replacement of faulty components. Frequencies rose and access times fell. The first synchronous dynamic SDRAM chips appeared in 1993, introduced by Samsung; the new chips ran at 100 MHz with an access time of 10 ns.

From that moment the victorious march of SDRAM began, and by 2000 this type of memory had ousted all competitors. The JEDEC (Joint Electron Device Engineering Council) committee took charge of standards in the RAM market; its members drew up specifications uniform for all manufacturers and approved frequency and electrical characteristics.

Further evolution is less interesting. The only significant event came in 2000, when DDR SDRAM appeared on the market: it provided twice the bandwidth of conventional SDRAM and set the stage for future growth. DDR was followed in 2004 by the DDR2 standard, which remains the most popular today.
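The "twice the bandwidth" claim comes from DDR transferring data on both edges of the clock. A rough peak-bandwidth calculation in Python (the function name is illustrative, and a 64-bit module bus is assumed):

```python
# Rough peak-bandwidth arithmetic: DDR moves data on both clock edges,
# so at the same bus clock it transfers twice as many 64-bit words per
# second as plain SDRAM.
def peak_bandwidth_mb_s(clock_mhz, bus_bits=64, transfers_per_clock=1):
    bytes_per_second = clock_mhz * 1_000_000 * transfers_per_clock * bus_bits // 8
    return bytes_per_second // 1_000_000

sdram = peak_bandwidth_mb_s(100)                         # PC100 SDRAM
ddr = peak_bandwidth_mb_s(100, transfers_per_clock=2)    # DDR at the same clock
print(sdram, ddr)   # 800 1600
```

This is the origin of module names like PC1600: the number is the peak transfer rate in MB/s, not the clock frequency.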

Patent troll

In the modern IT world, the phrase "patent troll" refers to firms that make money from lawsuits, claiming that other companies have infringed their patents. The memory developer Rambus falls squarely under this definition.

Since its founding in 1990, Rambus has licensed its technology to third parties; its controllers and memory chips can be found in the Nintendo 64 and PlayStation 2, for example. Rambus's finest hour came in 1996, when it entered into an agreement with Intel on the use of RDRAM and RIMM slots in Intel products.

At first everything went according to plan. Intel got advanced technology at its disposal, and Rambus was content with a partnership with one of the largest players in the IT industry. Unfortunately, the high price of RDRAM modules and of Intel chipsets put an end to the platform's popularity, and leading motherboard makers turned to VIA chipsets and boards with slots for ordinary SDRAM.

Realizing that it had lost the memory market at this stage, Rambus began its long game with patents. The first target it hit upon was a fresh JEDEC development, DDR SDRAM memory, which Rambus attacked, accusing its creators of patent infringement. For a while the company collected royalties, but subsequent litigation involving Infineon, Micron, and Hynix put everything in its place: the court found that the technological developments behind DDR SDRAM and SDRAM do not belong to Rambus.

Since then, the total number of Rambus lawsuits against leading RAM manufacturers has exceeded all imaginable limits. And it seems this way of life suits the company quite well.