Thanks for posting this. Very cool. I bought some core memory on Amazon a couple years ago. Thankfully the ones I have still have the solder tabs around the border of the unit. They aren't as large but definitely good enough for practical experiments. Cheers!
This video was super great, but I needed a bit of clarification on one part: the reading of the bit. Maybe I am daft, but that part was still confusing due to the choice of words, and it led me down a rabbit hole before I could fully grasp the concept. I hope this helps someone else stumbling on the basics like I did. The way it was described in the video made it seem that reading a bit was always about reversing the electric current. That would have meant a one would turn into a zero and a zero would turn into a one, i.e. the bit would always flip, which confused me greatly, since then there would be no way to actually retrieve the data. In actuality they always send the current in the "zero" direction, so reading is not "reversing the polarity" but "driving toward zero". Reading, therefore, is setting the core to 0 and watching to see if it changed. Right? A more apt way of describing it might be: "reading the bit is a matter of sending the current representing the '0' polarity, so a 1 is flipped but a 0 is not, and sensing whether this change occurred".
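In case it helps anyone else, here is the same idea as a tiny Python sketch, a toy model of a single core rather than any particular machine:

```python
# Minimal sketch: destructive read modeled as "drive toward 0 and watch for a flip",
# then write the value back so the data survives the read.

class Core:
    def __init__(self, bit=0):
        self.bit = bit  # remanent magnetization, 0 or 1

    def drive(self, target):
        """Apply full select current toward `target`; return True if the core flipped."""
        flipped = self.bit != target
        self.bit = target
        return flipped

def read(core):
    # Reading = forcing the core to 0; a sense pulse (a flip) means it held a 1.
    value = 1 if core.drive(0) else 0
    # The read was destructive, so restore the original contents.
    if value:
        core.drive(1)
    return value

c = Core(1)
assert read(c) == 1 and c.bit == 1   # the 1 is sensed and rewritten
assert read(Core(0)) == 0            # a 0 produces no sense pulse
```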
I've got the Core Memory card out of a DEC PDP-8E. 12 bit "word", so 12 mats on the card with 8,192 bits per mat. Each mat has 128 rows by 64 columns of ferrite beads. Read time of 1.2 uSec, write time of 1.4uSec. Part # 30-10654-2, DEC P/N H-212 might have been manufactured in Nov of 1974.
One of these days I'd like to make my own core memory module. I have a lot to learn about them before I can do that, though. This is definitely a good start.
Consider, if you will: core memory used magnetic properties, so the cores were in essence inductors. In modern RAM chips the memory is essentially stored in an electric field, i.e. capacitors, plus refresh circuitry. And recall static vs. dynamic RAM. And if you really want to talk about old-school memory, Bell Labs used flying spot, barrier grid, etc.
I wonder if in theory core memory could be massively scaled down using modern lithographic processes. I suppose the key would be finding an appropriate material with a suitable B-H curve that can both be deposited and etched. Would be a fun experiment for someone with access to a foundry.
Thin film core memory was already invented by the early 60's, and commercially available as early as 1962: www.computerhistory.org/storageengine/thin-film-memory-commercially-available/ . But it was not cost competitive, particularly when silicon memory became available. Like many other memory technology attempts, some magnetic - bubble memory comes to mind - finding new memories that can compete with silicon on price/bit, density and access time has proven to be a tall order.
I'd be curious to have a more detailed explanation for the memory driver/sensor circuitry. I understand the principle of how core memory works, but how that interacts with a computer like the AGC is interesting. Presumably the change to merge the sense/inhibit wires came long after AGC, and ditto for the ability for the controller to increment the memory location as part of the reload cycle after read.
Brilliant. I have a single core "teaching tool" I've used with some special needs kids for 22 years now. I am ditching it in the FIBuckit. I will use this video instead. Many thanks.
Excellent video, thank you again! I wonder how the more recent FRAM works compared to core memory and if it took some of its inspiration from this novel technology?
I came into the video expecting RO core rope memory, and learned something new :-D this is awesome! Thanks a lot! EDIT: Still curious about core rope memory, but I'm sure you have a video about it in there somewhere!
Not yet. Rope is even more complicated. However Ken made a great article to explain it, on his blog at www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html
I would love to have a Core Memory Module to play with. These are so cool how they work. Also will look great with my J11 CPU. Courtesy of Chris from Play With Junk YT Channel. Great Intel Out-Side Demo. God Bless. 🙏
From a time when memory was memory, and not that modern stuff that loses its contents when you merely remove the supply voltage :-) What I love most is the Apple switch box and the power supply to the left of it.
In 1977 I did assembly language programming on a PDP-8 with a 4K ring-memory module that was about 3 feet square and 1 foot high. It was already EOL, but it was usable and no one else knew about it, so I had it to myself. Always make friends with lower-level staff. They know where the good stuff is hidden.
Genius! As a self-actualising geek, that would definitely have been a cherished memory! Thank you for sharing 🙂
Lower level?
FUHQ
I had a telephone auto dialler that used a core memory to store the numbers. It had rows of cores, each row storing a digit of the number to be dialled, and thus there were 11 cores in each row: the digits 1 through 0 (10 pulses in pulse dialling) and an "end of number" core.
You had a set of front-panel pushbuttons that selected the number to dial. Each button enabled a wire for that number, which was threaded through the appropriate hole in each figure-8-shaped core to dial the digits in sequence, then through the EON core, and then to a common pin. Dialling was done simply by a ring counter that ran down the columns of wires, this counter also generating the loop-disconnect pulses to the phone line. It would step across the column width until a pulse was induced in the sense wire (the programmed wire) coinciding with the correct digit, ending the dial sequence for that digit and triggering the interdigit delay. Then it would sequence again, until it either reached the end-of-number core (the first core, before the 1 digit) and terminated dialling, or ran off the end of the cores.
All transistor logic inside, with relays to seize the line for dialling, and also a line-current sense relay so that when you took the phone off the hook (it had a speaker for call-progress detection and line audio) it would connect the phone to the line, disconnect itself, and reset for the next number you wanted to dial. It was large, but designed to fit under the telephone set, thus not taking up desk space.
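If I'm reading that mechanism right, the scan-and-sense part works roughly like this little sketch; this is my interpretation only, not a schematic of the real unit:

```python
# Each stored digit is a programmed core position; a ring counter steps across the
# columns, sending one loop-disconnect pulse per step, until the sense wire for the
# selected number reports a coincidence. Then the interdigit delay follows.

def pulses_for(digit):
    # In pulse dialling, "0" is sent as 10 pulses; 1..9 as themselves.
    return 10 if digit == 0 else digit

def dial(stored_digits):
    for digit in stored_digits:
        target = pulses_for(digit)
        pulses = 0
        for column in range(1, 11):   # ring counter steps across the 10 digit cores
            pulses += 1               # each step also sends a loop-disconnect pulse
            if column == target:      # sense-wire pulse: the programmed core coincides
                break
        yield pulses                  # interdigit delay would be triggered here
    # after the last digit, the number's wire threads the end-of-number core,
    # which terminates dialling

print(list(dial([5, 5, 1, 2, 0])))    # -> [5, 5, 1, 2, 10]
```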
Fascinating!! Are there any links to such a device to see the guts of it?
I bet that was horrendously expensive when made.
@@Lucianrider Looks like someone did a video of it this year: th-cam.com/video/tPT6nIRFI_I/w-d-xo.html
I am sure it was expensive when they sold it, but parts-wise there wasn't much to it. Basically a counter and hard switch-over buttons, and you just thread the wire in or over the magnets.
Look Mum No Computer has one in his collection! Check out his videos!
There is at least one model like this that the Soviets made. I saw it on YouTube; I think it could store about 8 phone numbers.
Brings back memories of working at IBM San Jose as an assembler of the IBM 360 computer. It had a core memory of 12 planes in an assembly of about a cubic foot. While I worked wire wrapping panels for the 360 a coworker assembled the memories. He was putting a protective end cap on and the screwdriver slipped and plunged clear through the core planes. The memory assembly was worth about $25K at that point and my coworker actually broke down in tears.
Best description of core memory I've seen, many thanks! Forrester himself only died a couple of years ago at the grand old age of 98!
This really makes you appreciate how they brought down memory cycle times. The DEC PDP-1 (designed in 1959) had a cycle time of 5 microseconds, which is quite impressive given these demonstrations. By 1965, the much cheaper, "consumer grade" PDP-8 mini had it already down to 1.5 microseconds (which also made shorter word lengths feasible). Quite a feat.
The test bench I worked on in the Navy, the AN/USM-247, used core memory until the mid 2000s.
We had to keep running a schmoo plot on ours because it would not keep running; we finally found a cracked resistor in the termination of one of the two drive lines. Sense was OK, but the current would not stay stable with a bad resistor. It took a couple of weeks to find because of having to open the oven and wait for it to stabilize each time. If you worked on these, it was one of the resistors at the end of the row and column drive lines. It sure felt good finding the root cause after so much time spent adjusting and re-adjusting the row and column drive currents to get the schmoo plot correct. This was in the late 1970s.
Forgot to mention: the curve he shows in the graphic is actually the hysteresis curve of the magnetic core, which shows the core's ability to remain magnetized after the current stops. The schmoo curve is the boundary between where a current does not change the core magnetization and where it "flips the bit". Since there is an X and a Y wire through the core, one axis of the schmoo is the X current and the other is the Y current needed to flip the bit. Inside the boundary, all bits flip where the X and Y currents add. As X and/or Y is changed, some bits start to fail to flip when addressed. The middle of the "all bits flip" zone is the high-reliability zone, and the X and Y currents are set there for writing and reading. The three wires are the X, the Y, and the sense. Driven in one polarity, X and Y add and write a bit. The third wire, woven through all the bits, is the sense wire; it detects whether the bit flipped or not. The read cycle is done by writing the bits in a row to zero one at a time: if a bit was a 1, the sense wire detects the flip. Then the row is written back, since a read on its own erases the memory. Kind of brief, but a rundown of how it works. The point at which a core will flip is affected by temperature, so they were kept in a temperature-controlled oven for reliability. Core memory was often operated at about 120-150 °F.
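To make the coincident-current part above concrete, here is a toy plane model in Python; the current values are made up for illustration and are not any real plane's numbers:

```python
HALF = 0.35       # half-select drive current (illustrative, amps)
THRESHOLD = 0.5   # current needed to switch a core (illustrative)

class Plane:
    def __init__(self, rows=32, cols=32):
        self.bits = [[0] * cols for _ in range(rows)]

    def drive(self, x, y, value):
        """Energize row x and column y; a core switches only where the two
        half-select currents add, i.e. at the intersection. Returns True if the
        selected core actually flipped (what the sense wire would see)."""
        flipped = False
        for r in range(len(self.bits)):
            for c in range(len(self.bits[0])):
                i = (HALF if r == x else 0) + (HALF if c == y else 0)
                if i > THRESHOLD and self.bits[r][c] != value:
                    self.bits[r][c] = value
                    flipped = True
        return flipped

    def read(self, x, y):
        # Drive the selected core toward 0; a flip on the sense wire means a 1.
        sensed = 1 if self.drive(x, y, 0) else 0
        if sensed:
            self.drive(x, y, 1)   # write-back half of the cycle restores the data
        return sensed

p = Plane()
p.drive(3, 7, 1)          # full select writes a 1 at (3, 7)...
assert p.bits[3][8] == 0  # ...while half-selected neighbours are undisturbed
assert p.read(3, 7) == 1 and p.bits[3][7] == 1
```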
This is an absolutely brilliant video. I had heard about women hand-weaving the memory core for the Apollo AGC, but nobody ever went into detail. This is the first good explanation I've seen in many years of searching. Thank you!
That's some hard core memory...
And there it is. The first dad joke of the thread :)
"Dad joke?" Well, maybe a bit... @@TrainDriver186
@@dapje2002 Thanks for the compliment. I really dig it.
@@TrainDriver186 Stubby beat me to it. I have to go take a nap now...
HA!
I'm old enough to remember how to clear core, and why. I started working on mainframe computers back in 1965, when the computers I worked on typically had 16-32K of core memory. As a field engineer, I occasionally installed additional memory in a machine or had to diagnose memory problems in the field. We carried a tool box with spare parts in one hand and an oscilloscope in the other. Making core memory work well was like tuning a fiddle. Adjust the read-write currents and pulse rise times, adjust the sense amps, etc. The very first program I ever wrote was a memory diagnostic. I could boot it into a specific 4K memory module and run it while optimizing another module. (Honeywell H200 with two or three character address modes)
The man who invented this was a genius. I can't imagine I would be able to come up with this idea at all. :)
Peter Švančárek - The man was An Wang, although others contributed ideas pertaining to the organization of the cores.
Man.... mind blown... Just to think I hold a computer engineering MSc and none of this was a subject of study at uni.
Very interesting video!
I used a maintenance/programmer console for a Westinghouse-built AN/ALQ-119 ECM "pod".
It used a core memory in which we set a specific operational "program" for the jammer pods.
This was in 1975/1976 in Bitburg, Germany, and in 1977 at Holloman AFB, USA.
Ahhh, a fellow Bitburger!! I was there 86-88. Miss that place!!
I was at Holloman in '77.
Grew up in Alamo.
I miss those F-4s flying overhead, low and loud!
The sound of freedom.
@@improvesynthsational8466 I liked Alamogordo, and had a pretty good time there, even if it did get hot as heck in the summer.
"The hard way"... Well, it was a lot less hard (and slow) then mercury delay lines, acoustic wire delay lines, flying spot CRT memories, drums, and other things like that.
yeah but it was non volatile
True, it was a big improvement over those old technologies.
Modern multi level cell NAND flash is arguably even crazier than core under the hood with several bits being stored per cell as varying charge levels, error correction, and wear leveling algorithms being used.
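For anyone curious what "several bits per cell as charge levels" means in practice, here is a toy illustration; the voltage windows are invented, not from any real datasheet:

```python
# Toy multi-level-cell flash: one cell's threshold voltage encodes two bits by
# falling into one of four windows (values are placeholders for illustration).

LEVELS = [          # (upper bound of the Vt window in volts, 2-bit value stored)
    (1.0, 0b11),    # erased / least charge
    (2.0, 0b10),
    (3.0, 0b01),
    (4.0, 0b00),    # most charge
]

def read_cell(vt):
    for upper, bits in LEVELS:
        if vt < upper:
            return bits
    raise ValueError("threshold voltage out of range")

print(read_cell(0.4), read_cell(2.7))  # -> 3 1  (i.e. 0b11 and 0b01)
```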
*than
Ah. Yea, I guess I shoudl read what I type. Or mayeb should. Ir maybe should. Or maybe should. I have distypeia. :-) (And yes, every one of those was an accidental typo.)
"The hard *core* way"...
Wow... I've been trying to understand core memory for a few years now, on and off in my spare time, looking at websites and TH-cam videos.
But nobody has explained it as well as you did.
Now I understand! Thank you :)
Oh, and thank you for finding a use for Elevator music :P
And think: 100 years from now (if we are not extinct) they will look at our state-of-the-art computer science and hardware with the same view as we look at this. No doubt humans then will be wondering how we ever managed with things so archaic.
It's amazing how IBM managed to fabricate this ferrite core memory system so that it was able to stand up to the vibrations, wild temperature ranges, and intense radiation bombardment of a rocket flight into space.
Worth noting that the radiation aspect is usually wildly exaggerated.
@@Thisandthat8908 Depends on where you are. There are certain areas in orbit where the radiation is extremely high, and does cause reliability issues with non-radiation hardened circuitry.
This was incredibly interesting. Thanks for putting this together.
My first computer was an IBM 1620. We became great friends back in 1965 when I was attending San Diego City College.
The computer at SDCC had 20K of 6-bit core memory, although I believe it could be expanded to 60K.
That may seem impossibly small but we were able to run some very interesting programs in the available memory.
While I was still in high school I purchased a book titled "Programming the IBM 1620" and I still have it to this day.
While attending SDCC I was hired by the San Diego Unified School District as a programmer in their IBM 360 shop and I quickly moved up the ranks.
That was the beginning of a very successful career in software development that ended with my retirement in 2004 when I decided to go back to college and complete a degree.
Thanks for a very interesting video.
Robert
Great PAL!
Great stuff. "Inches, for the ones who have not evolved yet" made me smile :) @8:13
Being an old-timer (70+ yo) I've seen and heard a lot about core memory. I've always accepted at a core level (pardon the pun) that it works, but never had any idea how. Now I have a much better idea about the workings of the memory. It's amazing how much support circuitry is needed for it to work! And to imagine that reads are destructive and need to be rewritten each time. And that peoples' lives depended on core memory working, first time, every time!
Amazing video, I've always wondered how this works in practice. Thanks!
"...and one with inches for those who have not evolved yet" 😂
I guess that's where the term stack comes from. Neat.
aaa yep that makes sense now :D
10:19 - First thing I thought on that BANG was: That's one mighty and dangerous board!
I'm an engineer that works on HDDs and saw this type of memory in a video about the first space flights. I wasn't completely sure how it was written to but I figured you'd apply half the needed current to each line and flip that specific bit rather than a ton of bits at once. I couldn't figure out how it was read back though. I didn't realize they had more than 2 wires. Thanks for the video.
Best detailed explanation I've seen so far; I clearly recall being surprised when Jeri Ellsworth was the first to clearly explain to me the way core memory operated.
Great video! Thank you. Would love a follow up with a look at the AGC rope memory, and how it differed to the regular core memory.
There were some usable RAM chips in the mid '70s, e.g. the 2114: 450 ns access time, but they were OK with 1 MHz CPUs. Later variants got that down to 150 ns. The Commodore PET introduced in 1977 used them for graphics RAM, and they had already been around for a while at that point. They were old tech and therefore "cheap" by the time the ZX80 and ZX81 came around, which is why they were used in those. I designed my first computer to use them in 1982, as they were the cheapest RAM solution available.
Reminds me of the Apple II that I had with 48K RAM.. did payroll in BASIC using a 360K floppy disk.. 1983. maxed that sucker out, had to watch the array size.. print out was by disk saved files.
Wow, this video was a nostalgic walk down memory lane, thanks for posting it!! In my sophomore year of high school, 1970-71, I had hands-on experience with an old IBM mainframe (I forget the model) that some benefactor donated to my school. It had both magnetic core memory and magnetic drum memory and small (for the time, 5" x 3" x 1.5" thick, punched-paper? magnetic? I forget now) tape cartridges to load in programs. To this day I remember the error code that was most common when "something went wrong". The teletype attached to it would just start printing this line, repeatedly, until you switched off the main unit..... "w000zzy". Maybe one of you history gods can backtrace that error to the model number.
"walk down memory lane" >_>
Full respect to the genius and ingenuity of the pioneers of this old technology!
Absolutely loved this! It's amazing how complicated the core memory system was, and yet it worked reliably for decades.
Your explanations were concise and clear. I wish I could visit the Computer History Museum, if only to hear you and your colleagues make the incredibly complicated understandable.
I look forward to more on the Apollo AGC. Thank you for taking us along for the ride.
It sounds more complicated than modern stuff... a miracle to me how much electronics was needed around the memory, and that the electronics of that time were able to perform the task.
Excellent demonstration. Really interesting to see the steps necessary to get a clean signal out of it.
Great demonstration of how core memory works. It is one thing to read about it, but another thing to see traces on the scope. Very nice work.
Some Seeburg jukeboxes of that period also used core memory. Not in the planar configuration shown here, but as one core per record side. It was a perfect match. The respective core was "set" when it was selected by the user. A simple sequencer read each core in turn; when it read a "set" core, that record was loaded. The read process, as shown in this video, also erased the core, which in the jukebox case was what you wanted.
Such a good dive into how core memory works. You can read about it, but like most things, practical use makes understanding heaps easier. I spent a lot of time back in the day tweaking my RAM for the best overclocks, and this does explain, at a base level, the real challenges with memory. It really is the bottleneck; storing data is hard, and it's only from constant effort that we have what we do today. Thanks a bunch!
"and inches for those not evolved yet" ... Proceeds to reference diameters of ferrites in mils 😁. Habitual units are hard to change.
I am a metric “native”. But the US was dominating the early computer industry and the cores were specified and manufactured in round number of mils.
@@CuriousMarc Understood. Hopefully metric will take over in the US given enough time. I'm always confused when visiting Britain and seeing the MPH speed and distance and everything else metric. Maybe it's something with speaking English...
..I mean... wire is measured in mils too so... it stands to reason
@ClickThisToSubscribe A centimetre is not arbitrary. It's part of a system where everything ties together. For example 1 cubic centimetre is the same volume as 1 millilitre and (water) weighs 1 gram and takes 1 calorie to increase the temperature by 1° C. Now try that with inches, ounces, pints (Imperial or American?), BTU and ° F.
@Non,Player, Adeptus ????
No, it's not arbitrary. It's a system where the various measures are tied together in a coherent manner. On the other hand, the imperial system is very arbitrary, with the measures based on things that might once have made sense but have no relation to one another. You even have situations where the same unit has different sizes, such as the imperial and U.S. gallon. Also, while an imperial quart has 5/4 the number of ounces of the U.S. quart, the actual volume ratio is 6/5, because of the different-sized ounce. There are many other inconsistencies in the imperial system(s).
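The quart comparison checks out if you run the standard definitions (1 imperial fl oz ≈ 28.41 mL, 1 US fl oz ≈ 29.57 mL, with 40 and 32 ounces per quart respectively):

```python
# Worked check of the quart comparison above, using the standard mL definitions.

imp_quart = 40 * 28.4131   # imperial quart ≈ 1136.5 mL
us_quart = 32 * 29.5735    # US quart      ≈  946.4 mL

print(40 / 32)                         # 1.25  -> 5/4 more ounces
print(round(imp_quart / us_quart, 3))  # 1.201 -> about 6/5 the volume
```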
Great video! Finally I understand core memory. Thank you!
Worked for NCR in the 70s and 80s. The POS 255 was driven by the 726. It was about as big as a chest freezer. The backplane was hand wrapped. It used magnetic core memory. I think the power supply was 60 amps at 5V. It also had a Hitachi hard drive with a digital board and an analog board. The heads were FIXED like 256 of them per side per platter at 2 platters.....if my memory is correct.
Yep, I'm old. All the computers I've worked on are either in the museum or on the History Channel.
I was only thinking this morning how great it would be to see your current progress with the AGC! Thanks for another wonderful update.
lol and to think the Space Shuttle had magnetic core memory on the first shuttles but I believe it was upgraded in later years to solid state memory.
They used core on AP-101B since it was more proven at the time and was more resistant to cosmic rays.
It shared its architecture with the IBM 360 mainframe and was still reasonably fast for the time, maybe a little under 2x the performance of an IBM 5150.
The AP-101S went to solid-state memory and was half the size and weight.
@@Membrane556 Yes, quite true. It would be interesting to know if any AP-101B units are still in service anywhere!
@@nzoomed I think the Air Force still uses a derivative of the 4 Pi in a couple of applications.
@@Membrane556 Does not surprise me considering they are still using 8 inch floppy disks to launch missiles!
At last ! After all these years, it now makes some sense ! Bravo & many thanks for another great video Marc
If you have the space for it, you should totally include (a ruggedized version of) this setup in the museum! It would be a very educational hands-on experience.
Superb, I've never fully understood it before. I wonder if there was ever a mechanism whereby the program could signal that some particular temporary data, once read, was no longer required, so the write-back cycle time could be avoided. It would seem wasteful to be writing data back at times when the software knew full well that it was finished with it.
In every core memory based computer I have seen, the heart of the system design was a full memory cycle - read then (re)write. Generally one memory cycle consisted of some number of clock cycles, while execution of instructions typically took several memory cycles.
I am not aware of anybody jumping ahead to the end of a memory cycle when the contents of the word were to be zero (or ignored).
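As a rough illustration of that point, using the 1.5 µs PDP-8 cycle time mentioned earlier in the thread and an invented instruction mix:

```python
# Back-of-envelope timing: every memory access pays for the full read + rewrite
# cycle, and an instruction typically needs more than one such cycle.
# The cycle time comes from the PDP-8 figure quoted in this thread; the
# instruction categories are illustrative, not a real instruction set.

CYCLE_US = 1.5                      # one full core cycle: read + write-back

instructions = {                    # memory cycles per instruction (illustrative)
    "fetch only":               1,  # fetch the instruction word
    "fetch + operand":          2,  # fetch, then read/rewrite the operand
    "fetch + indirect + operand": 3,
}

for name, cycles in instructions.items():
    print(f"{name}: {cycles * CYCLE_US:.1f} µs")
```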
@@carlclaunch793 The only thing I can think of along these lines is the computer in the Lunar Module's Abort Guidance System, the AEA (Abort Electronics Assembly). For a lot of instructions, there were two forms -- a normal form (with writeback) and a zeroing form (without). For example, there's ADD (add) and ADZ (add and zero), SUB (subtract) and SUZ (subtract and zero), etc. These zeroing instructions didn't save time by skipping to the end of the memory cycle, but they did save power and so the programmers tried to use them wherever possible.
@@mikestewart8928 That is quite an interesting twist, one I hadn't seen before since the computers of the era were bound to outlets and didn't worry overmuch about power consumption.
Yeah, it's really interesting, isn't it? The AEA's ROM is quite strange as well. Unlike the AGC, the AEA's ROM was a regular core plane... except the inhibit line was deleted, and the X address lines only threaded through cores that were to hold zeroes. So with this scheme, no matter what you try to write back to the core, you'll still end up with the correct fixed data.
The problem with this was that the same read-writeback cycle was used for this ROM as for the regular core memory. So there's some amount of time where the cores of an instruction are flipped but not written back... so it is possible (through loss of power, etc.) to leave a word of ROM in a bad state. To ensure this doesn't happen, the very first thing the flight programs do at boot is read once from every ROM location, to "prime" the cores for normal operation.
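Here's my attempt at modelling that scheme, just to convince myself it works; it's a sketch, not the actual AEA hardware:

```python
# Toy model of a hardwired core-plane "ROM" word: with the inhibit line gone and
# only the cores meant to hold zeroes threaded by the drive lines, whatever gets
# "written back" can only land as the wired-in pattern.

class HardwiredRomWord:
    def __init__(self, fixed_bits):
        self.fixed = list(fixed_bits)    # the pattern wired into the plane
        self.cores = list(fixed_bits)    # current core magnetization

    def read(self):
        value = list(self.cores)
        self.cores = [0] * len(self.cores)   # destructive read clears the word...
        self.cores = list(self.fixed)        # ...and write-back can only restore
        return value                         # the wired pattern, whatever was "written"

    def interrupt_mid_cycle(self):
        # Power loss between read and write-back leaves the word cleared.
        self.cores = [0] * len(self.cores)

word = HardwiredRomWord([1, 0, 1, 1])
word.interrupt_mid_cycle()       # word is now in a bad state, as described above
word.read()                      # the boot-time "priming" pass: one read per location
assert word.cores == [1, 0, 1, 1]
```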
@@mikestewart8928 Super answer, thanks. So yes there was such a mechanism, but it was done for power saving rather than time. Thanks for that.
Thanks for this complete demo from scratch, with the problems of setting and reading the core. It is now completely clear to me how it really works, with the corrections needed to get it to work properly. I already knew the basics of course, but your video made it complete with the details. I still have to start repairing my DEC PDP-8/M, which has an 8K x 12 core memory card.
Great video. That last line says it all, “...memory done the hard way.” It amazes me people had the tenacity to make an idea like that work. One thing that really surprised me the first time I saw core memory was the size of the cores. When looking at an illustration I was imagining something at least a cm in diameter, but they’re nearly microscopic. I’d like to see how they manufactured the cores themselves. I assume they were molded, but how would the molds be produced? The other hurdle is the control circuitry to get past the destructive read. The mind boggles at the complexity. It’s hard to believe anyone would expect that to work, let alone that it was the best memory available for a decade or so. Thanks for this video. That’s the best description and demonstration of core memory I’ve ever seen.
If you look into the alternatives it replaced, you will find thornier engineering problems and often higher cost as well.
Williams-Kilburn tubes used a modified cathode ray tube, where the face of the tube was a capacitor and the electron beam deposited charge at various spots in a grid across the face to represent bits. The electrons striking the inside of the CRT face would splash off in a fountain, contaminating nearby spots. The charge would drain off, too, requiring regular refreshing of the bit pattern. Programmers had to hop around with the bits they accessed as too much locality of reference produced excessive cumulative spray onto adjacent bits. These also require many kilovolts to drive the beam yet the detected pulse when reading was down in the microvolt range, requiring enormous amplification; cumulative noise was a big big problem in the amplifier chain.
Mercury vat delay lines used acoustic signals that traveled through mercury. They had to be heated to a constant temperature to work. Reflections off the side walls of the tube would smear out the pulses, making reliable detection quite hard to accomplish. These, like the tubes above, took nearly continuous fiddling to achieve acceptable results.
Drum memories required many read/write heads to get decent capacity. The memory was not random access; instead, the program had to keep track of the current rotational position of the drum so that the next memory word needed would not require a substantial portion of a rotation to come back under the heads. Drums were both expensive and slow, limiting processor speed.
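A quick back-of-envelope on the drum point, with invented numbers, shows why word placement mattered so much:

```python
# Illustrative rotational-latency estimate (the RPM and track layout are made up,
# just to show the best-case / worst-case spread a programmer had to plan around).

RPM = 3600
WORDS_PER_TRACK = 64

rev_ms = 60_000 / RPM                 # one revolution: ~16.7 ms
word_ms = rev_ms / WORDS_PER_TRACK    # time for one word to pass under a head

print(f"best case  : ~{word_ms:.2f} ms (next word already arriving)")
print(f"worst case : ~{rev_ms:.2f} ms (just missed it, wait a full turn)")
print(f"average    : ~{rev_ms / 2:.2f} ms if placement is left to chance")
```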
This is madness. Gorgeous old technology.
Thank you Marc for sharing all this with the world. The diagrams were very appreciated.
Beautiful video guys. I found your channel after several of the AGC videos and I went back to watch from the start. You all really know your stuff. I did not then know you were from the Museum. I want to tell you how satisfying every video is to the nerd in me. Clearly there's a tremendous amount of work involved. This is brilliant. Thank you!
Merci pour vos vidéos, un travail impressionnant à travers toutes vos aventures d’électroniciens paléontologue ! Vous transmettez la passion en tout cas.
Back when I was in college (early '80s) I was given 2 working old dumb terminals. They must have weighed 60lb each, and sounded like a hurricane when powered.
Individual hall effect switches for every key. Built like tanks, and could probably have run constantly to date without a failure.
I tore them apart for the parts. Each had a 2k core memory board. I hung them from a string and called them 'memory-belia'.
Release the schmoo...
Cliff Burridge That Louis Rossmann vibe
rkan2
Louis and AvE! 😃
Any AvE fans here?
yes!
Yep big fan
And the great thing was that the content of the memory was not volatile; it remained after power was removed. So the processor was ready for work the next morning. Just switch it on and away you went, as everything was still in core memory from the day before. The good old days.............
I always like these old types of computing components; they are so much more interesting.
Wonderful video! I was born in 1971, so this is the tech that was around when this TH-cam geezer was born; fascinating to see how it evolved as I grew up, until I can sit here and watch this on my iPad. BTW, interesting engineering on that toggle switch box... I give it a "10"... I'll let myself out.....
This is the best explanation and demonstration of core memory I have seen. Someday I may have to experiment with the core module I have. I need to repair a few wires that have pulled off their contacts, though.
DEC word lengths were *never* 8 bits. They were 12 bits (PDP-8), 16 bits (PDP-11), 18 bits (multiple), 32 bits (VAX) and 36 bits (PDP-6 & PDP-10).
A great explanation and a very interesting video. Finally the column and row current causing the flip makes sense. More dinosaur DNA for anyone fiddling with their CAS and RAS overclocking. :)
even if I'm wrong I'm going to say that the core stack is where we got the computer programming term of "the stack" when referring to memory operations from.
"this was memory done the hard way": Truer words have seldomly been spoken :-). Amazing video and a great explanation!
Awesome video! I've known the basic theory of core memory for years but never really dived into the details. I never would have guessed that the current to flip a core was as high as 700mA. Outstanding work by those engineers back in the early days that did all of this without the high tech tools we have available today. Also, on the video it looked like at least one of your signal generators was producing ramps, which are still infinite bandwidth signals - did you try a low-pass filter on the pulse generators instead of or in addition to the slower rise times?
I have just built Jussi Kilpeläinen's Core Memory Shield for the Arduino. When it is not connected to the Arduino but powered anyway, the enable signal is always on and current flows through one row and one column wire constantly. I noticed some related components and my makeshift 3.3V power supply heating up due to this constant current flow. I estimated the currents from the schematic and found a value of roughly 300 mA for the row and column current. Then I started to look for confirmation of this estimate online. This video is one of the very few sources about core memory that makes comprehensive quantitative statements on the matter. And it explains the matter just beautifully, with experiments.
Well done, as always from CuriousMarc.
It depends on the size of the cores, and that's one of the reasons they kept making them smaller.
"...and inches, for those who have not evolved yet." Hahaha, well, played! Jokes aside, this was really interesting, especially how they had to blank out every core before they could read the one they were looking to read. And then how they dealt with remembering the values of the other cores and on top of that keeping at 0 the cores that should remain 0. Really great video, super educational!
That's what you get for allowing Frenchmen into the country.
Somewhat oddly, some units in the US have moved to a base-10 system, but with a "mil" being 0.001 inches. I've encountered tape measures for surveying with feet divided into tenths. Though at some point it just starts getting weird.
Those caveman freedom units put a man on the moon 50 years ago! But we're not evolved... lol
@@paulkaygmailcom That's always going to be my argument for the US Standard and Imperial systems not being inferior.
@@bryceforsyth8521 They may have put people on the moon. But what is more complicated: converting from miles to inches (what is the right multiplier?) or from kilometers to centimeters (just shift that decimal place over)? You can *measure* anything in either system, but the convenience factor when *working* with those measurements is clearly on the metric side. ;-)
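For what it's worth, a quick worked example of the convenience argument (just a sketch; the factor 63,360 is 5,280 ft per mile times 12 in per ft):

```python
# Converting the same distance in both systems.
miles = 3.7
inches = miles * 5280 * 12   # 1 mile = 5,280 ft, 1 ft = 12 in -> 63,360 in/mile
km = 3.7
cm = km * 100_000            # just shift the decimal point five places
print(inches, cm)            # 234432.0 vs 370000.0
```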
Another awesome video. Thanks Marc!
Great Vid! Really nice to see and understand these (elegant!) early electronics solutions.
It's amazing someone (or something) made that back in the '60s; you can hardly see the rings without the microscope.
"Release the Schmoo" ... said AvE, while flipping the bit in a core...
This is amazing, and it was from the '50s; it's like magic.
Absolutely incredible! Great video.
The core memory at 7:48 is 32x32 bits, or 1024 bits, if I counted correctly. In modern systems where a character is usually a byte, 8 bits (of course depending on encoding), that would mean that core could store 128 bytes or 128 characters. Less than a tweet. Of course they stored very architecturally specific computer words, of other bit sizes, and not UTF-8 or ASCII encoded characters, but I digress.
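A quick sanity check of that arithmetic (assuming the plane really is 32 by 32):

```python
rows, cols = 32, 32
bits = rows * cols    # 1,024 bits in the plane
chars = bits // 8     # 128 bytes, if packed as 8-bit characters
print(bits, chars)    # 1024 128
```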
It really is very cool stuff. Mind blowing inventions.
Superb video. I’ve always wondered how these memories actually worked. Now I have a sense of how it is done. Very cool!
Excellent explanation, nicely illustrated with tools on the bench. Great video, keep it up!
Watching this on my phone is a magic moment, because I realize how far we've come in such a short time...
The title is "Explaining ferrite core memory". But (as far as I know) the core rope memory cores used on AGC were made of permalloy. And core ropes work on a different principle of ferrite core memories.
Core memories used on IBM machines, some old flight simulators, etc. are very different from core rope memories. For one thing, core rope memories are read-only, as opposed to ferrite core memories (where the reading of data is destructive).
8:05 is priceless (or 1/25.3937 priceless)! Thank you for the laugh.
Damn! Extremely interesting stuff...The way they put sense wire and inhibit wire in conjunction with half currents! It's just genius.
Amazing! I'm wondering... if you put a magnetic field viewing paper on top of a memory core, would you actually see the magnetic field of each ferrite? I doubt that the magnetic field of such a small ferrite would show on the paper, but if it did, you would get a sort of primitive e-paper screen :-D
No. As there is no air gap in a ring core, the external field is very low.
If I can find the paper you're talking about, I'll try.
Thanks for posting this. Very cool. I bought some core memory on Amazon a couple years ago. Thankfully the ones I have still have the solder tabs around the border of the unit. They aren't as large but definitely good enough for practical experiments. Cheers!
This video was super great, but I needed a bit of clarification on one part: the reading of the bit. Maybe I am daft, but that part was still confusing due to the choice of words. This led me down a rabbit hole to be able to fully grasp the concept. I hope this helps someone else stumbling on the basics like I did.
The way it was described in the video made it seem that reading a bit was always about reversing the current. That would have meant that a one would turn into a zero and a zero into a one, i.e. the bit would always flip. This confused me greatly, since then there would be no way to actually retrieve the data. In actuality they always send the current in the "zero" direction, i.e. reading is not "reversing the polarity" but "driving the core toward zero". Reading therefore is setting the bit to 0 and watching to see if it changed.
Right?
A more apt way of describing it might be something like: "reading the bit is a matter of sending the current representing the '0' polarity, meaning a 1 would be flipped but a 0 would not, and sensing whether this change occurred or not".
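To make that read-as-write-zero idea concrete, here is a minimal software sketch of a destructive core read with write-back. It models only the logic, not the actual drive and sense electronics, and the class and method names are made up for illustration:

```python
# Hypothetical sketch of destructive core read with write-back.
class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, r, c, value):
        # Coincident X+Y half-currents set the selected core directly.
        self.bits[r][c] = value

    def read(self, r, c):
        # Drive the selected core toward 0; the sense wire only sees a
        # pulse if the core actually flipped, i.e. if it held a 1.
        sensed_flip = (self.bits[r][c] == 1)
        self.bits[r][c] = 0            # the read destroyed the data...
        value = 1 if sensed_flip else 0
        self.write(r, c, value)        # ...so the controller writes it back
        return value

plane = CorePlane(32, 32)
plane.write(3, 7, 1)
print(plane.read(3, 7))  # 1, and still 1 afterwards thanks to the write-back
print(plane.read(3, 7))  # 1 again
```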
I've got the Core Memory card out of a DEC PDP-8E. 12 bit "word", so 12 mats on the card with 8,192 bits per mat. Each mat has 128 rows by 64 columns of ferrite beads. Read time of 1.2 uSec, write time of 1.4uSec. Part # 30-10654-2, DEC P/N H-212 might have been manufactured in Nov of 1974.
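Running the numbers on that card (assuming each of the 12 mats stores one bit position of every word):

```python
rows, cols, mats = 128, 64, 12
words = rows * cols          # 8,192 twelve-bit words (8K) on the card
total_cores = words * mats   # 98,304 ferrite cores in all
print(words, total_cores)
```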
Wow I never understood core memory until now, great explanation!
One of these days I'd like to make my own core memory module. I have a lot to learn about them before I can do that, though. This is definitely a good start.
And you thought your Threadripper had a lot of CORES.... BTW, that is the BEST use for iPhone boxes I have ever seen!
Excellent presentation, and loved the gear seen in the museum.
Consider, if you will: core memory used magnetic properties, so the cores were in essence inductors. In modern RAM chips, memory is essentially stored in an electric field, i.e. capacitors, plus refresh circuitry. And recall static vs. dynamic RAM. And if you really want to talk about old-school memory, Bell Labs used flying spot, barrier grid, etc.
I wonder if in theory core memory could be massively scaled down using modern lithographic processes. I suppose the key would be finding an appropriate material with a suitable B-H curve that can both be deposited and etched. Would be a fun experiment for someone with access to a foundry.
Thin film core memory was already invented by the early 60's, and commercially available as early as 1962: www.computerhistory.org/storageengine/thin-film-memory-commercially-available/ . But it was not cost competitive, particularly when silicon memory became available. Like many other memory technology attempts, some magnetic - bubble memory comes to mind - finding new memories that can compete with silicon on price/bit, density and access time has proven to be a tall order.
WAY TO GO! Excellent presentation, and wow, it really teaches an understanding of how computer memory works.
Thank you for this great video. It definitely helped me understand this historic memory technology better. 👍🏻
Watching this was an absolute blast! Thank you for this video, it was amazing!
I'd be curious to have a more detailed explanation for the memory driver/sensor circuitry. I understand the principle of how core memory works, but how that interacts with a computer like the AGC is interesting. Presumably the change to merge the sense/inhibit wires came long after AGC, and ditto for the ability for the controller to increment the memory location as part of the reload cycle after read.
Brilliant. I have a single core "teaching tool" I've used with some special needs kids for 22 years now. I am ditching it in the FIBuckit. I will use this video instead. Many thanks.
The Clickspring of electrocity is back! :oD
Ace! ^^)
How could I not have known about this awesome channel before? Subscribed!
Excellent video, thank you again!
I wonder how the more recent FRAM works compared to core memory and if it took some of its inspiration from this novel technology?
Amazing stuff, well done guys.
There is a guy from Finland named Jussi Kilpelainen that made a Core Memory Shield for Arduino. Really impressive.
Oh wow! I will have to look at it because this is mighty impressive!
A brilliant production, thank you!
Excellent video. The channel is spectacular. Thanks for these videos and this content! 👏🏻👏🏻👏🏻
I came into the video expecting RO core rope memory, and learned something new :-D this is awesome! Thanks a lot!
EDIT: Still curious about core rope memory, but I'm sure you have a video about it in there somewhere!
Not yet. Rope is even more complicated. However Ken made a great article to explain it, on his blog at www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html
SRAM uses sense amplifiers.
I would love to have a Core Memory Module to play with. These are so cool how they work.
Also will look great with my J11 CPU. Courtesy of Chris from Play With Junk YT Channel.
Great Intel Out-Side Demo.
God Bless.
🙏
From a time when memory was memory, and not that modern stuff that loses its contents when you merely remove the supply voltage :-)
What I love most is the Apple switch box and the power supply to the left of it.