The first ssd I ever bought back in 2010 is still functioning perfectly. I have another newer ssd with around 1PB of writes and it reports no problems. I've had one ssd die and it happened in the first week or two, got a free warranty replacement a week later. Overall ssd's are crazy robust. Edited for clarity
So close... you almost covered the caveat. RAM can still fail due to wear-out like SSDs, however, this usually only occurs in datacenter settings where the DRAM modules are kept at a high utilization. In these cases, the failure rate is still quite low, but it is a problem that companies like Google and Meta are currently trying to overcome (and no, they are not letting their RAM modules get too hot or overvolting them). For the general consumer or enthusiast, the chance of them encountering such a failure is very low. Also, you can wear out a SSD in read-only mode as well. There is a phenomena called read disturbance, where after continually reading from adjacent cells (floating gates), the voltage levels of that gate can diverge from the expected thresholds. To combat this, if a page is read too many times, the SSD controller will typically either relocate the page to another location in the flash or rewrite the current page (effectively refreshing it).
@@user-rm5ye5xc7f I'm not sure if there is a good method - it would likely be vendor dependent. Samsung for example has a health utility (program) that I believe can communicate with the firmware on the SSD. Any decent SSD (that probably even includes the cheaper brands like ADATA) should internally manage health, so what you will see is that the SSD will decrease in size as it ages. If you ensure that you always have free space (which might mean slowly removing files as the size shrinks), then it should be okay (if you keep pushing it though, it may only be a 2GB drive in the end, but those 2GB should still be good). The problem will occur when it runs out of space and then also tries to shrink. That said, if there is anything important on it, you should always have a backup somewhere else (on another drive, or in the cloud). Also, don't defragment it. That's only meaningful for spinning drives that need to seek between "fragments". SSDs have a fixed read cost regardless of where the file is located in the drive (the sequential read speedup happens because of the drive cache and not because of how the files are stored).
Just to add a thing, there's two basic types of RAM, the DRAM described on the video, and SRAM, that is a type of memory that use 6-8 transistors to logically store the bits. SRAM is insanely fast as it's basically made out of logic, but it use 6-8 times the space of a DRAM per bit, which makes it 6-8 times more expensive. Most modern computers use SRAM for caches like the L1/L2/L3 etc.. and DRAM for everything else.
@@Rudxain Nope. SRAM is just faster because its pure digital logic. it's 8 nand gates arranged in a form that store data. it's the thing that caches and CPU registers etc are made of. DRAM on the other hand are microscopic capacitors that store the bits by charging and discharging em. so it takes a bit more time than pure logic gating. But a dram capacitor take the space of one transistor (or even less), and with some grids etc.. it's basically what you get with DRAM, an entire byte stored on the space of a single bit in SRAM. SRAM chips sold separately are probably slower than the DRAM chips however because it's not very common to have this kind of memory outside of a chip nowadays.
I keep 500 gb on the main drive, a 1 TB SSD on the side for games and other large things that benefit from fast load, and a 2 tb HDD for large stuff like OBS files. I like the 500 gb limitation it keeps me diligent in pruning my storage space.
2 ปีที่แล้ว +39
Pfft I write 30TB in a day.
2 ปีที่แล้ว +60
Now that's not to be taken seriously, there was just a gap for a wanker style comment and I had to fill it.
SSD's wear level depends greatly on data movement rather then keeping data. movement means not by moving stuff around but actually putting new data in, deleting and replacing it.
2 ปีที่แล้ว +7
@@m-w-y7325 So what you're saying is a good big one is always better than a good little one.
my very first SSD was a Samsung 840 EVO in 2013. I remember being so worried at that time about it wearing out (they were expensive AF that time) that I actually calculated its estimated life/cost vs. just getting another HDD. Said SSD is still in use in my current PC as a cache drive for my game HDD and is the only thing i've carried forward from that old build. it's lasted three whole system rebuilds (Q8400, 3770k, 3700x) and i'll probably still bring it to my next one.
2 ปีที่แล้ว +11
I did the same with a 30gb SSD i got from my brother-in-law. Due to its storage size i end up using it as a cache drive to keep my primary SSD last longer. I couldn't think of a better use for it with that 30gb. Still running with no issues.😊
My 840 Evo just died on me a few days ago out of the blue. It was my boot drive, luckily I noticed something weird and backed it up last minute. It definitely didn't do the read-only lock thing mentioned in the video, it just died.
M.2 is a form factor NAND is a type of flash memory used in SSDs NVMe is the connection protocol used to transfer data (PCIe bus rather than SATA bus) The M.2 form factor you are referring to is still an SSD, just a smaller size using the above two technologies TOGETHER which makes it superior (a new generation of SSD) when compared to the old ones you are referencing (using SATA). They are lightning quick due to the way data is transported, this also allows larger amounts of data to pass through and be processed, so naturally when something is faster and more efficient we call it an upgrade so yeah this would be the "pinnacle" of SSD technology as we know so far until it gets improved even more.
wrong, the way to workaround errors has improved and there is a technique called multicell that can make it impossible to access the data after the flash gone dead. They are more fragile than you think.
@@Jameslawz Well if we're discussing the "pinnacle" of ssd's then rather than NAND, 3D XPoint in the form of Intel's Optane would definitely take the crown.
thats actually not accurate, single layer (SLC) was most reliable, but as the companies increase the amount of layers to increase density and therefore capacity, it reduces overall writes. A common example would be comparing the total writes available on something like a Samsung pro vs something like a QVO that uses quad bit.
I worked as laptop service engineer and one customer asked why all laptops parts cover replacement warrenty except RAM I told RAM is the last dying part in laptop
2 ปีที่แล้ว +9
It's a terrible job really, surrounded by disaster and failure, occasionally having to tell people their business is dead.
@@Lunamana With Laptops, you have all of that circuitry crammed into a tiny little box thinner than an average book. All of the warmth from that circuitry is nearly pressed up against each other, and even with fans and stuff to keep it cool, the RAM is probably warmer than the RAM in, say, a PC that is out in the open air with tons of air flowing over it, and modern RAM also usually comes with metal covers with small heatsinks attached to it to keep it cool. You don't get that with a laptop and I would assume that's why laptop RAM can sometimes fail faster.
@Brian-Fong's--comment/post "SSDs have electricity "water dams" and the electricity breaks down the "water dams" over time. Ram has no dams.": Damn :-(
I love when Professor James makes a visit. Funny as much as I feel like I’ve learned I still don’t really understand how it all works. But I get the basic concept SSDs die whereas RAM doesn’t for obvious reasons.
@@MasterGeekMX Nah,like I said it would just go over my head anyway. I mean I appreciate it but the minutia of the ins and outs of tech don’t really do much for me it’s more than practical applications. Which I’m sure they probably do go into somewhere in those videos but still just not my cup of tea.
@@JohnDoe-kv3kd Except, both do die. And with RAM chips it's not just from overclocking/over-voltage. Depends entirely on usage patterns. My main computers are running 24/7, and I've lost tons of RAM to chip failures over the decades, it's something I come to expect. The RAM is in constant use, and just like everything else in the computer, sooner or later it will wear out and fail.
I had my boot SSD die on me about a year ago, and as I was recovering the data, I noticed it was read-only, but I had no idea why... until now! Thanks, James!
@Telleva except if/when the controller dies then you are screwed. Early SSD's had cheap controllers that were notorious for dying. I had a Crucial SSD die after 3 months because of the controller chip.
@@Josiuh it was an HP EX900. It had good reviews when I bought it, but later when it died on me, I checked back and a surprising number of people had the exact same experience with it.
@@Josiuh yep! If you're looking for reliable SSDs, you can't go wrong with Samsung, Intel, or WD. Been rocking an Intel 660p drive for almost four years and a WD SN750 since the HP drive died, and both have been fantastic. Both extremely fast and reliable, even for gen 3 drives.
Hello guys, I love Techquickie and it was refreshing and fun to see James all studious for once. Suggestion for a future TQ : could you go over positive/negative air pressure in a case? I believe you covered this topic several times in the past but I still get confused regularly. I just wish wasn't so puzzled when trying to figure out how many fans I need or how I should place them. Great work and thanks!
Both LMG and Jay made videos on the topic aaages ago, and LTT has a video of year long experiment with different setups. In short, it usually does not even matter, unless you run something incredibly imbalanced, especially with obstructed airflow (SFF or huge tower cooler). After that, not much you can even do - you might be able to cut down a couple degrees from the neutral configuration, but it's almost too much bother at this point. Besides, there's no single answer, and most surely not a quick one. The case itself is a huge determining factor, and if they've screwed up, no fans would ever amend it. Case in point: GN's "12 fans, 0 airflow" review. Frankly, if your case is truly bad, it's probably worth it to bite the bullet and get a better one over getting 4x case fans for half the price of a new case.
You also neglected to mention that most SSD failures come from its controller (most times due to overheating) and not the flash chips themselves. That's got nothing to do with degrading but bad manufacturing.
That tends to happen with click-bait videos like this one. Over the years I've had dozens of sticks of RAM fail on me, but so far haven't lost a single SSD.
@@looneyburgmusic Yeah, you're absolutely doing things wrong if you were using DDR RAM in the mid-80s :P RAM from the 80s is nothing comparable to RAM of the last 20 or so years, it's like saying cars are unreliable because you can't ride a bike, two entirely different things. If you're breaking multiple DDR RAM sticks (DDR1-5) you're also doing something extremely wrong, especially if they run 24/7 meaning you're not touching them and they still break. You either are using extremely cheap power supplies, constantly touching the RAM without any ESD protection, hitting the RAM with a hammer or mis-diagnosing broken RAM. Either way, time to figure out why you keep breaking RAM (or why you think it's the RAM that is broken)...
This is so timely, I recently helped my mom out with her laptop slowing down. I found quickly that her SSD was running dangerously low for an extended period of time so I suggested some well overdue cleanup. I gave her a high-level about SSDs wearing faster due to low space, but now I'll be sharing this video to her as well :P
Also, the capacitors in RAM lose data within milliseconds and have to be refreshed many times a second. They are like flash memory that's extremely degraded. Reading from (dynamic) RAM clears the data as well, so it has to rewrite it each time the CPU reads it. It's just that this is expected behavior, and the constant refreshing doesn't degrade it appreciably.
@@paulmichaelfreedman8334 not just "transistors" but triggers/flip-flops/latches, because DRAM also uses transistors as capacitors. "In electronics, flip-flops and latches are circuits that have two stable states that can store state information - a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems."
Sadly dataprocesing in cpus is way way more complex... We have registers holding address of procesed stuff, stacks and much much more things. Might be wiser to watch basic PC creations in minecraft, couse you can distance yourself from electric details and understand the logic behind it. (The usage of edges of signal to change memory. etc) Then add real live components.
There's a kid in... I want to say New Jersey? Who's been playing around with making his own processors specifically to better learn these concepts. I guess he's probably not a kid anymore x'D Time flies. I think he was over 400Mhz, which is pretty awesome for a home made processor.
2 ปีที่แล้ว +4
You could do say a 6502 or Z80 level CPU but it's like a log cabin compared to New York city when you look at current CPUs. They're not just bigger there are whole ecosystems (techosystems?) of extra stuff. To use the city analogy again you don't really need an underground or bus system in a log cabin. I'm saying there's irreducible complexity.
Overclocking ram has to be the most pointless thing I've experienced, at the end of the day tightening the timings worked far better and didn't require me to increase the voltage at all.
Generally I've had the same experience and I just go with tightest timings on normal voltage. In the past, however, any time that I've used an integrated GPU that shared system RAM I always got best performance by using highest RAM clock speed.
@@RCmaniac667 for me i bought what i wanted and then just pushed the multiplier till it stopped working correctly and dropped it and have been running the same settings going on 4 years and not had a memory related crash outside of my initial mess around
@@RCmaniac667 Yeah, I don't know how different it is for Ryzen, but from my experience overclocking ram gains are mostly negated by the need to increase the timings just to keep the system stable. So you end up forcing your RAM with higher voltage for very diminishing returns in actual performance.
This is one of the most helpful Techquickie videos I've ever watched. I never understood why SSDs failed and RAM doesn't, but this was very easy to understand.
3:18 You probably don't know what "lifetime warranty" means. If the manufacturer produces exactly the same model of the device, the warranty never ends, but as soon as the production ends, you have, for example, 12 months of warranty.
Also notable, if you have a smaller 120GB or 240GB SSD as your main drive you should consider disabling pagefile on it altogether and/or moving the pagefile space to a secondary HDD. I've killed a few value branded ADATA/Kingston drives and suspect this was the cause as the Pagefile is taking all of the limited write-life these budget drives have.
Weird that the video never uses the terms "NAND", "DRAM" and "endurance". As far as I know, DRAM endurance is orders of magnitude higher than that of NAND, but then again, DRAM is constantly being written to when it's being used.
Ram is a rabbit hole.... Just look for stuff like FPM. Zip-modules and so on. But even worse, check out 8-Bit IDE, RLL and MFM, or going deeper and read up on every SCSI-1 busses that were ever created. Now THAT is some solid bedtime reading. 😁
It was a few years ago that I thought I understood RAM better, as in "retains data as long as power is there" but then read about refresh cycles. I think this would make a useful follow-up topic. Also what's the deal with PMEM or Optane, in this explanation?
Intel's brand name Optane has two different technologies. One is just a fast normal SSD. However, the Optane memory plugs into a DIMM socket is interesting. It works like DRAM. It is a ten times slower DRAM but with ten times density. Neither Intel or Micron reveals details of the Optane DIMM. Some very basic information is that it is not floating gate as usual flash and thus SSD.
He didn't even explain it well since he talked about capacitors (which are in no way required for RAM or other volatile memory) So he doesn't get at the underlying reason why volatile memory is volatile. Here's a better explanation: The most basic RAM is nothing more than multiplexed latches. These latches can be designed many ways, so we will keep it very basic with a set/reset latch. Even this could be designed to work slightly differently but here's the basic truth table: S | R | Q | 'Q 0 | 0 | hold | hold 1 | 0 | 1 | 0 0 | 1 | 0 | 1 1 | 1 | undefined So if we have set, we set output to 1, reset sets output to 0, both set and reset at 0 results in "saving" our state, and trying to set and reset at the same time is undefined. So now we must ask, "how do we know if a line is 0 or 1?" The answer is that the transistor is designed such that when power is applied, it is able to rudimentary conparison of voltages comparing them to the ground and high lines providing power to the transistor. This can only work when the transistor is powered sufficiently, so once power is removed it is no longer able to properly and predictably compare these states. The end result is that when we apply power to a circuit, we do not know what state those transistors will be in. It takes using a different medium (like a magnetic disk) or engineering specific solutions to this unknown-state issue (like a flash EEPROM) to overcome this problem. But it is ultimately caused by the technology we are using to build the memory, the transistors. It is NOT because of capacitors. Edit: In case my explanation was not clear enough, it is the transistors saving the state information. E.g. if we keep power and set the set bit to 1 and then back to 0. As long as we don't set the reset bit to 1 or lose power, then the output Q will remain at 1.
@@kiraPh1234k I think capacitors were an analogy. Anyway, as a non-EE your truth tables and letters went way over my head. +1 again to James and this video.
@@spotted0wl. The truth tables are supplementary. The main point is simply that its the storage medium (transistors) that is responsible for the volatility of RAM, and the specially engineered storage medium of an SSD that exposes it to more issues with wear/deteriorating to combat that issue with volatility.
@@meatbleed Ram takes a small slice of time between regular system clocks to "Recharge" any capacitors that represent a 1. Essentially, the capacitor's current value is read, and if it is a 1, the capacitor is fed electrical power to bring it back to full. You can think of a capacitor at these timescales like a leaky bucket. A 1 is represented by a bucket that is more than half full, a 0 by a mostly empty bucket. Before the bucket has time to leak from more than half full to mostly empty, the ram refreshes the value, topping up the bucket.
I have worn out quite a few SSDs. At least to the point of SMART warning triggers. Wear leveling won't stand a chance if you: 1. Run software that regularly writes/overwrites/deletes *large* numbers of *tiny* files. Like for example scheduled metadata refreshing on database/media servers. AND 2. Run on SSDs filled almost to capacity (92%+) over extended periods of time (days or weeks).
A little extra manual overprovisioning might expand the life span of an SSD. I usually create an empty partition of 10% of total size for this. This also prevents the performance degradation of an almost full SSD. th-cam.com/video/Q15wN8JC2L4/w-d-xo.html
You're covering very important and interesting topics on this channel. Maybe you don't do it in an enitrely scientific fashion, but then again, that's the point, so that most people understand it. In these terms, you did a perfect job here.
I'd like to see tech quickie do a series or a companion piece to an LTT deep dive on data recovery options for various storage mediums and perhaps a rundown on which solutions are best for certain needs like accessibility, longevity, durability, speed etc.
Got a 500 gb Samsung plus ssd. Nearly 1.5 years later, only 10 tbw out of 300. Using a ram disk is very useful for short term tasks if you got the ram. Then you can store projects you wanna keep.
I love topics like this, and the level of explanation was perfect. If you already know about electron hole combination you get it, but people who don't still learned something. More videos covering semiconductor applications and manufacturing news please!
Great video. I’m a mental void as soon as it comes to hardware and circuits. It always just goes in one ear and out the other. Kudos for making it understandable, even for me.
2:50 - It actually does do this during Shut Down to a degree, which is why "Restart" is recommended if you're having driver issues. "Shut Down" still saves the system state to disk. In other words, "Shut Down" in Windows is a fancy label for "Hibernate, but close all of my apps first".
This is due to Fast Startup being enabled in Windows 10 (don't know if 11 has it), thus certain drivers issues come back when booting the system again, if I remember correctly. If you disable Fast Startup, shutting down would be similar to restart, not writing on disk to come back to a previous state, but of course instead of restarting it would actually shut down.
@@Tokisaki8815 Exactly! I thought about including that in my original comment, but I didn't want to drag on and on. I usually have Fast Startup disabled. Mainly because I like to have a fresh boot up each time.
The thing with SSDs is that to increase capacities, they increase data per cell in recent years which reduces write life per data which is a worrisome trend. While you can't really wear down a SLC drive, a QLC drive can test those limits with intense data writing over years. Like a 1TB QLC SSD having about 200TB write limit which is just 200 total capacity writes. While it's still mostly safe for the average user, if they try to cram in even more over the same area with this trend, this will be problematic. They're already working on PLC designs, so..
Eventually they're just going to have the make the drives bigger and put more cells on the drive if they want to get higher capacity drives out. Of course, this means they will be more expensive, but I'd rather pay $300 for a 4TB drive that has double the circuitry than pay $150 for a 4TB that's going to konk out after a few years because the write limits are so low. But, if you're using the drive to store stuff long-term, and it's not going to be a system drive, like, you wanna put all of your high filesize games that aren't updated often on it, or store videos or what-not on it, then that would be perfectly fine to go with the limited write version.
Exactly, no one is mentioning this. In a market full of competition they will cram more density in each cell because it's cheaper and lower cost at the expense of lifespan. The majority of consumers won't know any better. Eventually they might end up failing as fast as HDDs would fail.
I static damaged a ram chip years ago. I sparked to it like you do a door knob in the winter while it was laying on the case. Case was plugged up so it was grounded. That chip would now stop the computer at the post. It was working too before that almost inaudible SNAP,
I remember reading how sketchy the early MLC SSDs were. Basically a roulette wheel, get some speed, lose your data. Then the X-25M came out and that was all over instantly. It still works. I recorded video to it. It has weird caching issues where it hangs for a second though, always did. Then I picked up some Samsung disks. About 30TB on them now, they're fine. Now to be fair, IIRC I've never had a hard drive fail. Of course they will wear out - I just never wore one out in spite of years of use. So FWIW I don't necessarily find SSDs more or less reliable than mechanical drives.
That first part was a great way to explain quantum tunneling, the same thing was said but without intimidating monikers. As always, i appreciate your content and thank you.
I'll have to watch this later as I'm rather tired right now, but to be fair, one of my RAM sticks did end up going faulty and I had to have it replaced under warranty. That was the second stick I put in my original build (each at 4GB, and I started with 4GB to save money as it was $44 for Kingston ValueRAM DDR3 sticks around that time) that I got for Christmas 2014, so once it failed I was down to 12GB. It was really fun diagnosing the issue with Memtest86+ and swapping sticks into each of the slots individually /s, but at least it only occasionally gave me issues when using a lot of memory before noticing it was a memory issue. That's the last time I had major Bluescreens that weren't my fault (so unlike when I didn't uninstall ASUS and Intel stuff before moving to an ASROCK and Ryzen setup and had to clean up in Safe Mode). Funny enough, one of those BSoD's happened when I was watching an LTT video and Linus was saying something about crashing.
I've only ever seen one SSD fail personally and it was an Intel 240GB 530 series SATA unit back in 2015 when 240GB was a reasonably high end SSD capacity and Intel was the reliable drive standard before Samsung took over the market. The rest of my SSDs, even the cheap Kingston/PNY ones have been flawless but I havent had any of them long enough or used them hard enough to comment on their long term reliability (with the exception of my 960 EVO 250GB that was just now replaced after 5 years of service when I finally decided I needed a boot SSD big enough to hold a few games.
Great video. I wished you would have mentioned to never defragment an SSD. You've discussed it before and it amazes me that some people still haven't got the memo. Keep up the great work.
not really an issue....once its defragged then what???? When you defrag a hard drive, what happens the second time??? A message pops up and says it isnt needed..... Also Windows 10 and up wont allow you to defrag an SSD anyway
thanks, for explaining the basics again... I'm an IT Dinosaur (first computer was a Apple IIe) being in the business for 30 years... but now I got a reference for newbies to point to
For history sake, I still have this Creative Labs Zen Stone 1GB mp3 pod. Whatever tech they used put them out of business because I've had the same files on it for over 10 years, not used it, and the files are still there with no degradation. That little thing is amazing.
Capacitors can hold a charge for nearly forever if there is no load, the reason they can't hold a charge forever in RAM is that the RAM is loading it down which will discharge the cap. Capacitors can be very dangerous and even lethal if they are holding large voltages and are not discharged before touching them, so I think this distinction is important. This video is well done, nice work.
Depends on type of capacitor. I haven't seen details, but the caps embedded in a RAM chip are designed to be refreshed every few milliseconds, and thus don't have to be engineered for low leakage current.
Also, the dangerous capacitors typically hold more voltage than the ones in RAM sticks. You're comparing caps that have a diameter of a pencil to caps smaller than the eye can see, that hold a tiny fraction of the voltage.
2:43 "take away power and the capacitors can't hold a charge anymore." I know you're trying to keep things simple here, but that is a very misleading statement. When connected to a transistor inside a memory cell for a RAM chip - yes, the capacitor will leak. But if a capacitor (especially a large one) is truly disconnected from power, it can hold its charge for days or years. That's why you have to be careful around power supplies even when they're disconnected and it's why you have to leave your router unplugged for 30 seconds before plugging it back in again. capacitors DO hold charge.
Huh, I actually never asked myself that question, so it's nice to hear both an interesting question and the answer at once. Though I should've guessed that the non-volatility was the main reason, being the main stand-out difference between RAM and SSDs.
Yeah so as u asked for a feedback about what content you can make next. I am a PC enthusiast and have been building PCs lately and with experience I understood this that with Ryzen APUs there lies a slight problem with gen 4 ssds because even though the motherboards may support gen 4 ssd the Processor I guess allots the PCIE gen 4 to the intregrated graphics and thus disabling PCIE gen 4 support for any components. This problem especially arises in Laptops which are ought to be an APU for efficiency and do compromise on gen 4 ssd. Also in case of Ryzen APUs the performance of the Integrated Graphics depend hugely on the frequency of RAM and latency... thus most of the times in a laptop that completely depends on the IGPU and doesn't have a dedicated Graphics it makes a lot of sense to use a DDR4 instead of the newer DDR5. A lot of people are unaware of this and thus end up blaming the manufacturers for not providing gen 4 ssds and DDR5 RAMs. I request you to make this clear to the incognizant people.
This year I've had to deal with 4 separate hard drive failures with friends, family, and at work, and 2 of them were SSDs that were less than 5 years old. None of them had come close to their TBW estimates.
Even with the best manufacturing and quality assurance procedures, manufacturing defects can happen. When it comes to solid state components, the only way to detect them before use is to destroy them in testing.
I had one of the RAM sticks in my computer fail a week ago. I guess I shouldn't be surprised I got recommended this considering I've been doing a lot of searches related to that as a result.
I retired a Crucial M500 960 GB SSD in Dec that was purchased for my daily driver back in Feb 2014. According to Trim Enabler, the SSD health was still 100% with less than 20% of its expected lifetime spent. Another daily driver has an Intel 320 Series 600 GB SSD still in service. That SSD was bought in 2012 and, yes, the Intel SSD Toolkit still shows the drive at 100% health. That Intel drive cost roughly $860 back in the day, but a decade of daily driving has, IMO, definitely gotten a reasonable ROI.
RAM does die sometimes, just generally not as often as SSDs. Although frankly I have a lot of SSDs and flash drives that I've had for years and so far I haven't had one die on me yet.
SRAM uses flip-flops, DRAM uses capacitance that gets passively drained by the read/write circuitry (and has to be refreshed to maintain its state while powered)
DRAM uses capacitance and yes, the data is kept for a while after the power loss but not for long, so using DRAM as storage does not make sense. Wikipedia: "DIMM memory modules gradually lose data over time as they lose power, but do not immediately lose all data when power is lost. Depending on temperature and environmental conditions, memory modules can potentially retain, at least, some data for up to 90 minutes after power loss."
@@ShinyQuagsire You are right DRAM uses capacitance . The video implies that capacitors need constant power to hold their charge which is not true (2:34) The reason DRAM cells require refreshing is current leakage which is partly due to the very small distance between capacitor plates. Normal capacitors can hold charge without needing constant power.
@@metty7528 The DRAM cells aren't supplied with constant power to hold the charge. The power is only needed to refresh the cells (i.e. to prevent the charge from leaking beyond the point of no return). The recommended interval is around 64ms for DDR4. Charge will remain within the DRAM cells for a longer period of time (it decays exponentially), but once it falls below the threshold that the sense amplifiers can detect, it might as well be 0 volts. Larger capacitors can have longer decay times, but you have to consider how miniscule the DRAM capacitance is (If I recall correctly, it's on the order of pico-F).
@@ShinyQuagsire SRAM does not use flip-flops. SRAM uses pairs of inverters (the most common structure is the 6T cell). Registers within a CPU typically use flip-flops.
The animation you used for the flash cell was awesome and informative, why didn't y'all use such an animation for the DRAM? I'm having a hard time picturing it without such an animation.
RAM doesn't lose all of the information at once, and it loses it in a predictable pattern. The data is lost slower at lower temperatures, so it's possible for someone with direct access to a computer to cool the ram with an air duster, load a special OS that doesn't write to a ton of RAM, and recover enormous chunks of data. It's rare but possible, and is referred to as a "cold boot attack"
Though very interesting attacks, these are just theoretical as in real world scenario's, any computer that has data worth that much effort (and the hacker can get physical access) will have a TPM and locked BIOS/uEFI preventing the computer from booting anything else. And while the TPM can be read, or at least the LPC bus or SPI bus that most TPMs connect over, that requires at least one full reboot of the system, clearing the DRAM allready.
Electrons, capacitors, floating gates, 1 and 0. All of this computer stuff is literally magic. I will never understand how its possible for things like my smartphone to work as it does. Too much for my brain. How did they even discover how to make them?
Absolutely, trading investment is a safer and surest way to make more money now because I have made up to $132,000 recently from investment trading and management
I have SSDs from 2007 that are still working. I use them in portable drives and don't use them that often but every time I boot them they boot and run just fine.
Thanks for this! I'm an old guy, have always dabbled in electronics, short wave radio, computers, etc. In 1963 and 1964 I took a HS vocational electronics class. Vacuum tubes, I easily got. Simple transistors, not even. Wait, electrons move this way (no different from tubes, filament to plate,) but then we have "holes" moving the other way? Couldn't get my head around that. Still can't.
The only SSD I've ever had failures of was OCZ brand, two of them died within hours. I replaced with Intel in 2008 and no further problems. Since then, I've installed 3 dozen SSD drives. None have failed.
Yeah, same here, though it was a few months, I think. I bet yours also used the infamous SandForce controller which was incredibly faulty and made the drives more and more unusable... ☠️ Funnily enough OCZ RAM was also the only one that failed for me, so I swore to NEVER ever get anything from them ever again! 😤
@@Gaboou the thing about OCZ was their tech support attitude was akin to if a 10 year old was running it. When I complained about their drives, they sent an emoticon of a toungue sticking out. How childish. I never will do business with them ever.
@@basspig I feel for you... 😔 I was lucky enough to get replacements from my vendor - and was able to request an SSD from a different company (Crucial) so that this wouldn't repeat...
What I have seen with SSDs is the following: 1. A lot of consumer SSDs will die with age. So say your 'cheapy' consumer SSD is 6 years old and has a wear level count of 19. I have seen a number of consumer SSDs die like this even though that wear level count is really low. What I have been seeing with enterprise class SSDs and some of the higher end prosumer class SSDs is they seem to be basically invincible. Lesson being these more costly SSDs get you a whole lot in terms of endurance and reliability. They say all SSDs age, but from what I have seen is some age a lot better than others. On this note I have seen some DRAM less SSDs fail after just 3 years. That DRAM is important. 2. A lot of consumer SSDs will lose your data when they fail. I have seen things like a chip goes bad and the data is striped across the chips, so you have a swiss cheese drive, which is basically unusable. Hopefully you have a backup because that SSD is going to be extremely hard to recover anything from even with just one flash chip dead. Another thing I see a lot is the data just won't read back the same as it was written. Talk about leaking electrons, well what I have seen is the sectors don't get marked as bad, it is just the bits don't stay the same and you get garbage out. It is important to note that computers are highly reliant on perfection, so these kinds of problems quickly turn into if you don't have a backup, you are in a world of hurt if you actually cared about that data even if the drive is technically only partly bad. At this, it can get into bit-rot where there is silent corruption and then you backup that corrupt data, so then your backups are corrupt. 3. This gets into another point RAID and ECC RAM. If you have data you care about, RAID, ECC RAM, and backups are a must. The data might corrupt on the drive. The data might corrupt in RAM silently and then get written to the drive. The backup reads off of the drive, so if it is corrupted by one of the above, it goes to your backup corrupted, the thing I was talking about as bit-rot. There is some nuance here I am not getting into, but basically you need the integrity in RAM and the integrity on your main storage device, or else what that backup can do to preserve your data is limited. At this most ways to do RAID does not properly support TRIM or TRIM at all, which is a key factor in your SSD's life in terms of wear and RAIDs are hard on SSDs as it is, so most people doing RAID really need enterprise class drives in order for the drives to not get destroyed to quickly with too much writing. The only RAID I know of that properly supports TRIM in real world usage is ZFS RAIDZ. ZFS is a dream come true for anyone who cares about their data and an important file system to consider when storing data you care about on SSDs. However ZFS is something that exists in the Solaris, Linux, and BSD UNIX realm, not in the Windows realm. So for an SSD rundown based on what kind of user you are: 1. Gamer - Especially if you primarily play through Steam, probably not too concerned about your SSD as if the one you have fails, you stick in a new one, go through the annoyance of re-installing and getting Windows up to patch, and re-download the games you are actively playing on Steam. So in a couple of days you are back up and running. If you are a really avid gamer, you may have a bunch of customizations that make this harder to do quickly and may think more about a quality SSD in your computer to reduce the chance of this annoyance. 2. Student - You may game some, but if you are a successful student you have the discipline to get your work done. At this you don't want to lose that term paper a week before finals. You also don't have a lot of money. So what you probably need is a reasonable quality SSD, a good virus scanner as schools are breading grounds for viruses, and either a cloud backup of your school laptop or an external USB hard drive you backup to. Really, you probably want the cloud backup of your data or at least stick it in Onedrive or something. Especially with Onedrive, it will remap the places where you usually stick your files to be in Onedrive. Most places you go is going to have WiFi, but you can also get a cellular hotspot or may tether to your cell phone, so cloud is definitely an option for that term paper. 3. Family computer - Basically the same guidelines as the student. Maybe you also run a NAS device to stick your files on and use that NAS device for other things. The NAS device is not a backup and so you also have to have a backup solution for that. I have seen a number of NAS devices lose everything over the years, often times due to some glitch where you wouldn't think the data should be lost, but it did get lost, so these are not really all that great of a place to stick data and backups of these are very important if you have one and data you care about. You may also see some bit-rot with these NAS devices as the hardware in them tends to be on the cheaped out side. Not to say these are a bad solution, but instead it is not the highest grade solution. 4. Business - The first thing is you don't want a bunch of data on desktops and laptops. Either it is on a server with ECC RAM, RAID, and a good backup solution inside the company or going out to the cloud somewhere, which can take a number of different forms. Then you have a good deployment scheme to quickly image systems to corporate standards. So this reduces the need to have a quality SSD in the desktops and laptops, but you still don't want the headache of the cheapy SSDs failing, so it is a good idea to invest a little to get above the bare bones SSDs. You also really want backups of everything, especially in the days of ransomware and at this BMR (bare metal recovery) is rarer than you might think, but is possible. Any internal servers are going to have enterprise class SSDs of course and depending on the shop, you may have access to ZFS or you may only be able to do hardware RAID. Often times in a company the data stored on a laptop or desktop ends up gone when an employee leaves and the hardware is recycled to be used by another employee, but then somebody wants / needs to access the data by the ex-employee, but it is nowhere to be found, so another reason to not stick data on a desktop or laptop. 5. Enthusiast / Power user - When you get into this realm, you are talking workstation hardware most probably. You will beat on SSDs a lot, so they need to be enterprise grade or at least high end consumer as it does matter. You may try software RAID, but this is a bad idea unless of course you are using ZFS, but usually you will first try some other software / firmware RAID and end up regretting it. So then you either end up with ZFS or a hardware RAID controller. At this a hardware RAID controller will need to be modified to have a fan on it as these are designed for high airflow servers while a workstation does not meet this metric in most cases. If you have lots of data, you may think about a SAS expander, but really SSDs need a direct attachment to a RAID controller for performance, though mechanical drives not so much. You also need to consider what backup solution you are going to use. This gets more constrained as if you have a lot of data, some of the options can get to be really expensive while other options are a lot more cost effective. You probably also want a BMR (bare metal recovery) solution or however close you can get to it in order to minimize down time.
I imagine there’s kids out there frying their computers tinkering with overclocking, and a “basic things to avoid” video may help avoid costly mistakes.
I have personally had ram go bad during a memory test. Testing used modules for a friend, passed on first cycle/pass, let it continue to run then forgot about it. For a month. (It was in an unused office at work). 122 passes in memory addresses starting failing and kept failing. Shut down are restarted the test, failed at same place on first pass and every subsequent pass. When I first saw the failed address I got excited that I had seen a cosmic rays flip a bit, but if it was a cosmic ray, it didn't just flip the bit, it trashed the whole stick of ram.
Also note the mechanism of CMOS (complimentary metal oxide semiconductor) or any sort of MOS. They actually are moving parts however at a sub-atomic level when the electrons pass through. Also remember that many SSD's and RAM are a MOS where they need to remember that the "oxide" is actually "rust". Yes rust is in your circuitry however these integrated circuits or chips are hermetically sealed however over time oxygen does get in and they then increase that oxide layer therefore making the transistor useless for charging and discharging. Also static discharge is a common occurrence which leads to faulty storage devices including RAM.
I had to RMA my RAM twice on my current PC. Once because it was DoA, and once because my prior mobo was bad and caused electrical damage to the replacement RAM. Diagnostics to determine a bad mobo are pretty much impossible when you don't have a replacement ready to test. So I actually had to take it to a PC and phone repair shop...and it took them two weeks to determine the failure after they ran a 12 hour test, told me everything was fine, and I immediately got repeated BSODs again...and it was the second time when they told me that my RAM was dead because of a physical defect in two of the RAM slots (both slots in the same channel).
I'm still using my crucial SSD that I got when I build a computer for Witcher 3. After all these years that SSD still works fine, I use it to store my games for my newer builds.
FYI to those tearing apart PSUs and other things, Capacitors DO hold their charge when unplugged! Just that over time they will lose their charge, please don't poke at mains filter caps
I have several SSDs I bought about 8 years ago (The 180GB Mushkin I have as my boot/Apps drive is the oldest), and have been in multiple PCs all are still about 90% health.
I understood the main differences, but this fills in the missing gaps in an easy to comprehend manner. Thanks for this outstanding video. Great work.
Yea. A nice video even for people who kinda know a lot about tech.
I knew the reason -I was just watching the video for a friend.
Really love the company Linus made and what it does and means for the people who watch!
You had no idea lol. Who you trying to impress??
As goofy as these guys are, they have a way to make outstanding videos the greatly aid this community! And I agree.
The first ssd I ever bought back in 2010 is still functioning perfectly. I have another newer ssd with around 1PB of writes and it reports no problems. I've had one ssd die and it happened in the first week or two, got a free warranty replacement a week later. Overall ssd's are crazy robust.
Edited for clarity
The first SSD's were notorious for failing. It happened three times to me
I have an OCZ SSD (the most notorious for failing) that I've been using as my boot drive since 2011 that's still going.
Good to know!
What's the capacity of your 1PB Writes SSD?
@@kristmadsen You got lucky for sure!
So close... you almost covered the caveat. RAM can still fail due to wear-out like SSDs, however, this usually only occurs in datacenter settings where the DRAM modules are kept at a high utilization. In these cases, the failure rate is still quite low, but it is a problem that companies like Google and Meta are currently trying to overcome (and no, they are not letting their RAM modules get too hot or overvolting them). For the general consumer or enthusiast, the chance of them encountering such a failure is very low.
Also, you can wear out a SSD in read-only mode as well. There is a phenomena called read disturbance, where after continually reading from adjacent cells (floating gates), the voltage levels of that gate can diverge from the expected thresholds. To combat this, if a page is read too many times, the SSD controller will typically either relocate the page to another location in the flash or rewrite the current page (effectively refreshing it).
thanks for explaining. 👍🏻
I'm wondering what is the "best" method to know if my SSD is close to its death? (Asking as normal user)
@@user-rm5ye5xc7f I'm not sure if there is a good method - it would likely be vendor dependent. Samsung for example has a health utility (program) that I believe can communicate with the firmware on the SSD.
Any decent SSD (that probably even includes the cheaper brands like ADATA) should internally manage health, so what you will see is that the SSD will decrease in size as it ages. If you ensure that you always have free space (which might mean slowly removing files as the size shrinks), then it should be okay (if you keep pushing it though, it may only be a 2GB drive in the end, but those 2GB should still be good). The problem will occur when it runs out of space and then also tries to shrink.
That said, if there is anything important on it, you should always have a backup somewhere else (on another drive, or in the cloud).
Also, don't defragment it. That's only meaningful for spinning drives that need to seek between "fragments". SSDs have a fixed read cost regardless of where the file is located in the drive (the sequential read speedup happens because of the drive cache and not because of how the files are stored).
@@user-rm5ye5xc7f Defragment it ;)
@@user-rm5ye5xc7f you can use harddisk sentinel to see the health
Just to add a thing, there's two basic types of RAM, the DRAM described on the video, and SRAM, that is a type of memory that use 6-8 transistors to logically store the bits.
SRAM is insanely fast as it's basically made out of logic, but it use 6-8 times the space of a DRAM per bit, which makes it 6-8 times more expensive.
Most modern computers use SRAM for caches like the L1/L2/L3 etc.. and DRAM for everything else.
does SRAM use latching gates?
@@__8120 yes
I prefer Shimano over SRAM
I thought DRAM was faster than SRAM. Did you mean SRAM is faster than NAND-flash?
@@Rudxain Nope. SRAM is just faster because its pure digital logic. it's 8 nand gates arranged in a form that store data. it's the thing that caches and CPU registers etc are made of. DRAM on the other hand are microscopic capacitors that store the bits by charging and discharging em. so it takes a bit more time than pure logic gating.
But a dram capacitor take the space of one transistor (or even less), and with some grids etc.. it's basically what you get with DRAM, an entire byte stored on the space of a single bit in SRAM.
SRAM chips sold separately are probably slower than the DRAM chips however because it's not very common to have this kind of memory outside of a chip nowadays.
I've written 30TB on my main SSD over like, 3-4 years. Doing pretty good so far, no need to replace it yet.
I keep 500 gb on the main drive, a 1 TB SSD on the side for games and other large things that benefit from fast load, and a 2 tb HDD for large stuff like OBS files. I like the 500 gb limitation it keeps me diligent in pruning my storage space.
Pfft I write 30TB in a day.
Now that's not to be taken seriously, there was just a gap for a wanker style comment and I had to fill it.
SSD's wear level depends greatly on data movement rather then keeping data.
movement means not by moving stuff around but actually putting new data in, deleting and replacing it.
@@m-w-y7325 So what you're saying is a good big one is always better than a good little one.
my very first SSD was a Samsung 840 EVO in 2013. I remember being so worried at that time about it wearing out (they were expensive AF that time) that I actually calculated its estimated life/cost vs. just getting another HDD.
Said SSD is still in use in my current PC as a cache drive for my game HDD and is the only thing i've carried forward from that old build. it's lasted three whole system rebuilds (Q8400, 3770k, 3700x) and i'll probably still bring it to my next one.
I did the same with a 30gb SSD i got from my brother-in-law. Due to its storage size i end up using it as a cache drive to keep my primary SSD last longer. I couldn't think of a better use for it with that 30gb. Still running with no issues.😊
I got an 850 Evo in 2015. It spent a year in my PS4 and is now one half of a RAID array I keep my games on. Stripe by the way.
I still have a Q8400 and I7 3770 :) what a coincidence
My 840 Evo just died on me a few days ago out of the blue. It was my boot drive, luckily I noticed something weird and backed it up last minute. It definitely didn't do the read-only lock thing mentioned in the video, it just died.
I got my Samsung 950 Pro 256GB M.2 6 years ago for 177.99. Still running as my windows drive after all this time.
SSDs have been improved over time too since the early days, they were known to be a lot more fragile and easy to break down, and long before came M.2
M.2 is a form factor not a ssd type
M.2 is a form factor
NAND is a type of flash memory used in SSDs
NVMe is the connection protocol used to transfer data (PCIe bus rather than SATA bus)
The M.2 form factor you are referring to is still an SSD, just a smaller size using the above two technologies TOGETHER which makes it superior (a new generation of SSD) when compared to the old ones you are referencing (using SATA).
They are lightning quick due to the way data is transported, this also allows larger amounts of data to pass through and be processed, so naturally when something is faster and more efficient we call it an upgrade so yeah this would be the "pinnacle" of SSD technology as we know so far until it gets improved even more.
wrong, the way to workaround errors has improved and there is a technique called multicell that can make it impossible to access the data after the flash gone dead. They are more fragile than you think.
@@Jameslawz Well if we're discussing the "pinnacle" of ssd's then rather than NAND, 3D XPoint in the form of Intel's Optane would definitely take the crown.
thats actually not accurate, single layer (SLC) was most reliable, but as the companies increase the amount of layers to increase density and therefore capacity, it reduces overall writes. A common example would be comparing the total writes available on something like a Samsung pro vs something like a QVO that uses quad bit.
I worked as laptop service engineer and one customer asked why all laptops parts cover replacement warrenty except RAM I told RAM is the last dying part in laptop
It's a terrible job really, surrounded by disaster and failure, occasionally having to tell people their business is dead.
My SSD is still alive but my RAM died last year, i feel like people are overrating RAM durability by a lot :o
@@Lunamana pepole are also underestimating ssds lifespan a lot. You really have to write a lot of data to kill your ssd.
@@KrolPawi i write terabytes of data everyday
@@Lunamana With Laptops, you have all of that circuitry crammed into a tiny little box thinner than an average book. All of the warmth from that circuitry is nearly pressed up against each other, and even with fans and stuff to keep it cool, the RAM is probably warmer than the RAM in, say, a PC that is out in the open air with tons of air flowing over it, and modern RAM also usually comes with metal covers with small heatsinks attached to it to keep it cool. You don't get that with a laptop and I would assume that's why laptop RAM can sometimes fail faster.
SSDs have electricity "water dams" and the electricity breaks down the "water dams" over time.
Ram has no dams.
Dam, that's nice
RAM doesn’t give a dam
nvSRAM
@Brian-Fong's--comment/post "SSDs have electricity "water dams" and the electricity breaks down the "water dams" over time. Ram has no dams.":
Damn :-(
Well this dam comment section become a dam pun.
That was a great. Not only did you answer initial question of the video, but you somewhat described how SSDs and RAM work. Good stuff
I love when Professor James makes a visit. Funny as much as I feel like I’ve learned I still don’t really understand how it all works. But I get the basic concept SSDs die whereas RAM doesn’t for obvious reasons.
If you want a deep dive: here the first episode of a series of 4 videos about how an SSD works: th-cam.com/video/5Mh3o886qpg/w-d-xo.html
@@MasterGeekMX
Nah,like I said it would just go over my head anyway.
I mean I appreciate it but the minutia of the ins and outs of tech don’t really do much for me it’s more than practical applications. Which I’m sure they probably do go into somewhere in those videos but still just not my cup of tea.
@@JohnDoe-kv3kd Except, both do die. And with RAM chips it's not just from overclocking/over-voltage. Depends entirely on usage patterns. My main computers are running 24/7, and I've lost tons of RAM to chip failures over the decades, it's something I come to expect. The RAM is in constant use, and just like everything else in the computer, sooner or later it will wear out and fail.
I had my boot SSD die on me about a year ago, and as I was recovering the data, I noticed it was read-only, but I had no idea why... until now! Thanks, James!
@Telleva except if/when the controller dies then you are screwed. Early SSD's had cheap controllers that were notorious for dying. I had a Crucial SSD die after 3 months because of the controller chip.
@Pulsar Gaming what was the make and model of the SSD you had that died?
@@Josiuh it was an HP EX900. It had good reviews when I bought it, but later when it died on me, I checked back and a surprising number of people had the exact same experience with it.
@@phiwolgast noted. Thank you
@@Josiuh yep! If you're looking for reliable SSDs, you can't go wrong with Samsung, Intel, or WD. Been rocking an Intel 660p drive for almost four years and a WD SN750 since the HP drive died, and both have been fantastic. Both extremely fast and reliable, even for gen 3 drives.
James is getting ever more professional in these videos. It's a joy to see him progress. The glasses are a nice touch too!
Hello guys, I love Techquickie and it was refreshing and fun to see James all studious for once.
Suggestion for a future TQ : could you go over positive/negative air pressure in a case? I believe you covered this topic several times in the past but I still get confused regularly. I just wish wasn't so puzzled when trying to figure out how many fans I need or how I should place them.
Great work and thanks!
Both LMG and Jay made videos on the topic aaages ago, and LTT has a video of year long experiment with different setups.
In short, it usually does not even matter, unless you run something incredibly imbalanced, especially with obstructed airflow (SFF or huge tower cooler). After that, not much you can even do - you might be able to cut down a couple degrees from the neutral configuration, but it's almost too much bother at this point.
Besides, there's no single answer, and most surely not a quick one. The case itself is a huge determining factor, and if they've screwed up, no fans would ever amend it. Case in point: GN's "12 fans, 0 airflow" review. Frankly, if your case is truly bad, it's probably worth it to bite the bullet and get a better one over getting 4x case fans for half the price of a new case.
You also neglected to mention that most SSD failures come from its controller (most times due to overheating) and not the flash chips themselves. That's got nothing to do with degrading but bad manufacturing.
That tends to happen with click-bait videos like this one. Over the years I've had dozens of sticks of RAM fail on me, but so far haven't lost a single SSD.
OK go make your own educational video then
@@looneyburgmusic If you're losing dozens of sticks of RAM, you're doing something very wrong.
@@someguy4915 not with the number of systems I run 24/7 I'm not.
And by "years", I mean since the mid-80s
@@looneyburgmusic Yeah, you're absolutely doing things wrong if you were using DDR RAM in the mid-80s :P
RAM from the 80s is nothing comparable to RAM of the last 20 or so years, it's like saying cars are unreliable because you can't ride a bike, two entirely different things.
If you're breaking multiple DDR RAM sticks (DDR1-5) you're also doing something extremely wrong, especially if they run 24/7 meaning you're not touching them and they still break.
You either are using extremely cheap power supplies, constantly touching the RAM without any ESD protection, hitting the RAM with a hammer or mis-diagnosing broken RAM.
Either way, time to figure out why you keep breaking RAM (or why you think it's the RAM that is broken)...
This is so timely, I recently helped my mom out with her laptop slowing down. I found quickly that her SSD was running dangerously low for an extended period of time so I suggested some well overdue cleanup. I gave her a high-level about SSDs wearing faster due to low space, but now I'll be sharing this video to her as well :P
15 to 20 percent free space is the threshold before performance takes a hit
@@joshuaguenin9507 Yeah, it was hovering around 5% for not sure how long, but I guess at least a month or so.
Also, the capacitors in RAM lose data within milliseconds and have to be refreshed many times a second. They are like flash memory that's extremely degraded. Reading from (dynamic) RAM clears the data as well, so it has to rewrite it each time the CPU reads it. It's just that this is expected behavior, and the constant refreshing doesn't degrade it appreciably.
this is an absolutely must for understanding how DRAM works and totally missed in the video.
In extension, SRAM uses transistors to hold the bits (actual flip-flop circuits), and never need refreshing.
@@paulmichaelfreedman8334 not just "transistors" but triggers/flip-flops/latches, because DRAM also uses transistors as capacitors.
"In electronics, flip-flops and latches are circuits that have two stable states that can store state information - a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems."
@@paulmichaelfreedman8334 And this is the main reason DRAM is preferred: SRAM takes 4-6 transistors per bit, whereas DRAM takes one.
@gblargg It's much cheaper and more compact, bit for bit.
I would find it quite interesting to see a simple CPU scaled up so you can show how i processes data, if that's something you could do a video on.
Sadly dataprocesing in cpus is way way more complex... We have registers holding address of procesed stuff, stacks and much much more things. Might be wiser to watch basic PC creations in minecraft, couse you can distance yourself from electric details and understand the logic behind it. (The usage of edges of signal to change memory. etc) Then add real live components.
Funnily enough, there is a similar thing in Cambridge (GB) which is located in the Computing Museum.
There's a kid in... I want to say New Jersey? Who's been playing around with making his own processors specifically to better learn these concepts.
I guess he's probably not a kid anymore x'D Time flies.
I think he was over 400Mhz, which is pretty awesome for a home made processor.
You could do say a 6502 or Z80 level CPU but it's like a log cabin compared to New York city when you look at current CPUs. They're not just bigger there are whole ecosystems (techosystems?) of extra stuff. To use the city analogy again you don't really need an underground or bus system in a log cabin. I'm saying there's irreducible complexity.
There are minecraft redstone CPUs...
Still got my 2.5“ HDDs, some from 2003 💪🙏 they have seen ENDLESS amounts of data and still work fine as day1 💪
Thats alot of pokemon in the homework folder and mental scarring
Overclocking ram has to be the most pointless thing I've experienced, at the end of the day tightening the timings worked far better and didn't require me to increase the voltage at all.
For Intel that is correct
For Ryzen however even 50MHz extra in the RAM gives a noticeable difference
i find it pointless because no matter how many time i spend on testing and tuning it is still gonna bluescreen somewhere anyway
Generally I've had the same experience and I just go with tightest timings on normal voltage. In the past, however, any time that I've used an integrated GPU that shared system RAM I always got best performance by using highest RAM clock speed.
@@RCmaniac667 for me i bought what i wanted and then just pushed the multiplier till it stopped working correctly and dropped it and have been running the same settings going on 4 years and not had a memory related crash outside of my initial mess around
@@RCmaniac667 Yeah, I don't know how different it is for Ryzen, but from my experience overclocking ram gains are mostly negated by the need to increase the timings just to keep the system stable.
So you end up forcing your RAM with higher voltage for very diminishing returns in actual performance.
"What are the differences between PDF and PDF/A? And which one should you use?" sound like a good topic to explain in Techquickie.
This is one of the most helpful Techquickie videos I've ever watched. I never understood why SSDs failed and RAM doesn't, but this was very easy to understand.
Fantastic video, makes an extremely technical subject trivially easy to understand - great job!
3:18 You probably don't know what "lifetime warranty" means.
If the manufacturer produces exactly the same model of the device, the warranty never ends, but as soon as the production ends, you have, for example, 12 months of warranty.
Also notable, if you have a smaller 120GB or 240GB SSD as your main drive you should consider disabling pagefile on it altogether and/or moving the pagefile space to a secondary HDD. I've killed a few value branded ADATA/Kingston drives and suspect this was the cause as the Pagefile is taking all of the limited write-life these budget drives have.
Pagefile on HDD will make your computer slower than molasses in Siberia. Better to use a secondary SSD.
@@fat_pigeon or just get more RAM instead of a secondary SSD so you're a lot less likely to hit the pagefile in the first place
@@kingdom5500 or just not buy a cheap SSD in the first place
"Your family will still love you. And so will I."
That was wholesome
Always wondered about this. Thanks for the run down.
I like when James-with-glasses explains things to me. It's oddly comforting.
Weird that the video never uses the terms "NAND", "DRAM" and "endurance".
As far as I know, DRAM endurance is orders of magnitude higher than that of NAND, but then again, DRAM is constantly being written to when it's being used.
Ram is a rabbit hole.... Just look for stuff like FPM. Zip-modules and so on. But even worse, check out 8-Bit IDE, RLL and MFM, or going deeper and read up on every SCSI-1 busses that were ever created.
Now THAT is some solid bedtime reading. 😁
This guy's reviews and videos are always good. He has a knack at it. Quick witted, knowledgeable, two thumbs up! 👍👍
It was a few years ago that I thought I understood RAM better, as in "retains data as long as power is there" but then read about refresh cycles. I think this would make a useful follow-up topic.
Also what's the deal with PMEM or Optane, in this explanation?
Fantastic question. There are so many ways to do flash storage. More explanations please!
Intel's brand name Optane has two different technologies. One is just a fast normal SSD. However, the Optane memory plugs into a DIMM socket is interesting. It works like DRAM. It is a ten times slower DRAM but with ten times density. Neither Intel or Micron reveals details of the Optane DIMM. Some very basic information is that it is not floating gate as usual flash and thus SSD.
He didn't even explain it well since he talked about capacitors (which are in no way required for RAM or other volatile memory)
So he doesn't get at the underlying reason why volatile memory is volatile.
Here's a better explanation:
The most basic RAM is nothing more than multiplexed latches. These latches can be designed many ways, so we will keep it very basic with a set/reset latch. Even this could be designed to work slightly differently but here's the basic truth table:
S | R | Q | 'Q
0 | 0 | hold | hold
1 | 0 | 1 | 0
0 | 1 | 0 | 1
1 | 1 | undefined
So if we have set, we set output to 1, reset sets output to 0, both set and reset at 0 results in "saving" our state, and trying to set and reset at the same time is undefined.
So now we must ask, "how do we know if a line is 0 or 1?"
The answer is that the transistor is designed such that when power is applied, it is able to rudimentary conparison of voltages comparing them to the ground and high lines providing power to the transistor.
This can only work when the transistor is powered sufficiently, so once power is removed it is no longer able to properly and predictably compare these states.
The end result is that when we apply power to a circuit, we do not know what state those transistors will be in.
It takes using a different medium (like a magnetic disk) or engineering specific solutions to this unknown-state issue (like a flash EEPROM) to overcome this problem. But it is ultimately caused by the technology we are using to build the memory, the transistors. It is NOT because of capacitors.
Edit: In case my explanation was not clear enough, it is the transistors saving the state information. E.g. if we keep power and set the set bit to 1 and then back to 0. As long as we don't set the reset bit to 1 or lose power, then the output Q will remain at 1.
@@kiraPh1234k I think capacitors were an analogy. Anyway, as a non-EE your truth tables and letters went way over my head. +1 again to James and this video.
@@spotted0wl. The truth tables are supplementary.
The main point is simply that its the storage medium (transistors) that is responsible for the volatility of RAM, and the specially engineered storage medium of an SSD that exposes it to more issues with wear/deteriorating to combat that issue with volatility.
The very short version: storing data without power is a lot harder than with power.
The critical difference is that the capacitors in RAM (in this case: DRAM) are refreshed regularly, which doesn't happen when the PC is not powered.
what does it mean to refresh a capacitor?
@@meatbleed Ram takes a small slice of time between regular system clocks to "Recharge" any capacitors that represent a 1. Essentially, the capacitor's current value is read, and if it is a 1, the capacitor is fed electrical power to bring it back to full.
You can think of a capacitor at these timescales like a leaky bucket. A 1 is represented by a bucket that is more than half full, a 0 by a mostly empty bucket. Before the bucket has time to leak from more than half full to mostly empty, the ram refreshes the value, topping up the bucket.
@@trapfethen Oh that's awesome. Thanks for the bucket analogy, that did it for me lol
@@meatbleed It's also pretty cool that it can refresh an entire row at once, so no need to check each individual bucket but just one row of buckets.
Well, ActUALLy, there are no capacitors in RAM!
What is used is the parasitic body capacitance that is inherent to silicon on insulator transistors!
I have worn out quite a few SSDs. At least to the point of SMART warning triggers.
Wear leveling won't stand a chance if you:
1. Run software that regularly writes/overwrites/deletes *large* numbers of *tiny* files.
Like for example scheduled metadata refreshing on database/media servers.
AND
2. Run on SSDs filled almost to capacity (92%+) over extended periods of time (days or weeks).
A little extra manual overprovisioning might expand the life span of an SSD. I usually create an empty partition of 10% of total size for this. This also prevents the performance degradation of an almost full SSD.
th-cam.com/video/Q15wN8JC2L4/w-d-xo.html
You're covering very important and interesting topics on this channel. Maybe you don't do it in an enitrely scientific fashion, but then again, that's the point, so that most people understand it.
In these terms, you did a perfect job here.
@enrique amaya how much does he love me? does he masturbate thinking about me?
I'd like to see tech quickie do a series or a companion piece to an LTT deep dive on data recovery options for various storage mediums and perhaps a rundown on which solutions are best for certain needs like accessibility, longevity, durability, speed etc.
Cool, I always wondered how SSDs store data even while powered off. Glad to know it's not something silly like a really tiny battery in every cell.
now you know baby girl now you know🤣🤣🤣
@@SaraMorgan-ym6ue hmm, weird comment
Feels good to know that James Strieb still loves me even without my overclocked ram
I would love an episode deep diving into ram sub timings.
Got a 500 gb Samsung plus ssd. Nearly 1.5 years later, only 10 tbw out of 300. Using a ram disk is very useful for short term tasks if you got the ram. Then you can store projects you wanna keep.
I love topics like this, and the level of explanation was perfect. If you already know about electron hole combination you get it, but people who don't still learned something. More videos covering semiconductor applications and manufacturing news please!
Nooo! This guy is a liar!
My SSD will NEVER die OK!
Great video. I’m a mental void as soon as it comes to hardware and circuits. It always just goes in one ear and out the other. Kudos for making it understandable, even for me.
2:50 - It actually does do this during Shut Down to a degree, which is why "Restart" is recommended if you're having driver issues. "Shut Down" still saves the system state to disk. In other words, "Shut Down" in Windows is a fancy label for "Hibernate, but close all of my apps first".
This is due to Fast Startup being enabled in Windows 10 (don't know if 11 has it), thus certain drivers issues come back when booting the system again, if I remember correctly. If you disable Fast Startup, shutting down would be similar to restart, not writing on disk to come back to a previous state, but of course instead of restarting it would actually shut down.
@@Tokisaki8815 Exactly! I thought about including that in my original comment, but I didn't want to drag on and on. I usually have Fast Startup disabled. Mainly because I like to have a fresh boot up each time.
This channel is way chiller than others of yours.
The thing with SSDs is that to increase capacities, they increase data per cell in recent years which reduces write life per data which is a worrisome trend. While you can't really wear down a SLC drive, a QLC drive can test those limits with intense data writing over years. Like a 1TB QLC SSD having about 200TB write limit which is just 200 total capacity writes. While it's still mostly safe for the average user, if they try to cram in even more over the same area with this trend, this will be problematic. They're already working on PLC designs, so..
Eventually they're just going to have the make the drives bigger and put more cells on the drive if they want to get higher capacity drives out. Of course, this means they will be more expensive, but I'd rather pay $300 for a 4TB drive that has double the circuitry than pay $150 for a 4TB that's going to konk out after a few years because the write limits are so low. But, if you're using the drive to store stuff long-term, and it's not going to be a system drive, like, you wanna put all of your high filesize games that aren't updated often on it, or store videos or what-not on it, then that would be perfectly fine to go with the limited write version.
Exactly, no one is mentioning this.
In a market full of competition they will cram more density in each cell because it's cheaper and lower cost at the expense of lifespan.
The majority of consumers won't know any better.
Eventually they might end up failing as fast as HDDs would fail.
I static damaged a ram chip years ago. I sparked to it like you do a door knob in the winter while it was laying on the case. Case was plugged up so it was grounded. That chip would now stop the computer at the post. It was working too before that almost inaudible SNAP,
Well, afaik capacitors do have an insulating layer, it's just that electrons are not forced through it. Thta's why capacitors in RAM don't degrade.
I remember reading how sketchy the early MLC SSDs were. Basically a roulette wheel, get some speed, lose your data. Then the X-25M came out and that was all over instantly. It still works. I recorded video to it. It has weird caching issues where it hangs for a second though, always did.
Then I picked up some Samsung disks. About 30TB on them now, they're fine.
Now to be fair, IIRC I've never had a hard drive fail. Of course they will wear out - I just never wore one out in spite of years of use. So FWIW I don't necessarily find SSDs more or less reliable than mechanical drives.
That first part was a great way to explain quantum tunneling, the same thing was said but without intimidating monikers. As always, i appreciate your content and thank you.
I'll have to watch this later as I'm rather tired right now, but to be fair, one of my RAM sticks did end up going faulty and I had to have it replaced under warranty. That was the second stick I put in my original build (each at 4GB, and I started with 4GB to save money as it was $44 for Kingston ValueRAM DDR3 sticks around that time) that I got for Christmas 2014, so once it failed I was down to 12GB. It was really fun diagnosing the issue with Memtest86+ and swapping sticks into each of the slots individually /s, but at least it only occasionally gave me issues when using a lot of memory before noticing it was a memory issue. That's the last time I had major Bluescreens that weren't my fault (so unlike when I didn't uninstall ASUS and Intel stuff before moving to an ASROCK and Ryzen setup and had to clean up in Safe Mode). Funny enough, one of those BSoD's happened when I was watching an LTT video and Linus was saying something about crashing.
I've only ever seen one SSD fail personally and it was an Intel 240GB 530 series SATA unit back in 2015 when 240GB was a reasonably high end SSD capacity and Intel was the reliable drive standard before Samsung took over the market. The rest of my SSDs, even the cheap Kingston/PNY ones have been flawless but I havent had any of them long enough or used them hard enough to comment on their long term reliability (with the exception of my 960 EVO 250GB that was just now replaced after 5 years of service when I finally decided I needed a boot SSD big enough to hold a few games.
Samsung 970 EVO here , 1TB NVMe, been running nearly 3 years non-stop and health is at 98%. So it should be worn out by 2150😂
Great video. I wished you would have mentioned to never defragment an SSD. You've discussed it before and it amazes me that some people still haven't got the memo. Keep up the great work.
not really an issue....once its defragged then what???? When you defrag a hard drive, what happens the second time??? A message pops up and says it isnt needed..... Also Windows 10 and up wont allow you to defrag an SSD anyway
@@joshuaguenin9507 exactly
thanks, for explaining the basics again... I'm an IT Dinosaur (first computer was a Apple IIe) being in the business for 30 years... but now I got a reference for newbies to point to
For history sake, I still have this Creative Labs Zen Stone 1GB mp3 pod. Whatever tech they used put them out of business because I've had the same files on it for over 10 years, not used it, and the files are still there with no degradation. That little thing is amazing.
Capacitors can hold a charge for nearly forever if there is no load, the reason they can't hold a charge forever in RAM is that the RAM is loading it down which will discharge the cap. Capacitors can be very dangerous and even lethal if they are holding large voltages and are not discharged before touching them, so I think this distinction is important. This video is well done, nice work.
Depends on type of capacitor. I haven't seen details, but the caps embedded in a RAM chip are designed to be refreshed every few milliseconds, and thus don't have to be engineered for low leakage current.
Also, the dangerous capacitors typically hold more voltage than the ones in RAM sticks. You're comparing caps that have a diameter of a pencil to caps smaller than the eye can see, that hold a tiny fraction of the voltage.
In a perfect world that is true, in the real world no capacitor is perfect and thus always has a leakage current.
Veritasium did a video about a company using floating gate transistors to create analog computers and chips. Kinda revolutionary.
how about a ram stick with a battery?
Then your weakest link is now the battery.. those wear out too
I mean like isn't ram of 16gb cost as much as 1tb ssd
2:43 "take away power and the capacitors can't hold a charge anymore." I know you're trying to keep things simple here, but that is a very misleading statement. When connected to a transistor inside a memory cell for a RAM chip - yes, the capacitor will leak. But if a capacitor (especially a large one) is truly disconnected from power, it can hold its charge for days or years.
That's why you have to be careful around power supplies even when they're disconnected and it's why you have to leave your router unplugged for 30 seconds before plugging it back in again. capacitors DO hold charge.
Huh, I actually never asked myself that question, so it's nice to hear both an interesting question and the answer at once. Though I should've guessed that the non-volatility was the main reason, being the main stand-out difference between RAM and SSDs.
Yeah so as u asked for a feedback about what content you can make next.
I am a PC enthusiast and have been building PCs lately and with experience I understood this that with Ryzen APUs there lies a slight problem with gen 4 ssds because even though the motherboards may support gen 4 ssd the Processor I guess allots the PCIE gen 4 to the intregrated graphics and thus disabling PCIE gen 4 support for any components.
This problem especially arises in Laptops which are ought to be an APU for efficiency and do compromise on gen 4 ssd.
Also in case of Ryzen APUs the performance of the Integrated Graphics depend hugely on the frequency of RAM and latency... thus most of the times in a laptop that completely depends on the IGPU and doesn't have a dedicated Graphics it makes a lot of sense to use a DDR4 instead of the newer DDR5.
A lot of people are unaware of this and thus end up blaming the manufacturers for not providing gen 4 ssds and DDR5 RAMs.
I request you to make this clear to the incognizant people.
This year I've had to deal with 4 separate hard drive failures with friends, family, and at work, and 2 of them were SSDs that were less than 5 years old. None of them had come close to their TBW estimates.
Even with the best manufacturing and quality assurance procedures, manufacturing defects can happen. When it comes to solid state components, the only way to detect them before use is to destroy them in testing.
I had one of the RAM sticks in my computer fail a week ago. I guess I shouldn't be surprised I got recommended this considering I've been doing a lot of searches related to that as a result.
Great question that I had never thought about much. Thanks for the free knowledge
I retired a Crucial M500 960 GB SSD in Dec that was purchased for my daily driver back in Feb 2014. According to Trim Enabler, the SSD health was still 100% with less than 20% of its expected lifetime spent. Another daily driver has an Intel 320 Series 600 GB SSD still in service. That SSD was bought in 2012 and, yes, the Intel SSD Toolkit still shows the drive at 100% health. That Intel drive cost roughly $860 back in the day, but a decade of daily driving has, IMO, definitely gotten a reasonable ROI.
RAM does die sometimes, just generally not as often as SSDs. Although frankly I have a lot of SSDs and flash drives that I've had for years and so far I haven't had one die on me yet.
Anything can fail, but the cause of failure is important.
In future. Hopefully there will be drives that lives (works) forever
Capacitors actually can hold charge when no voltage is applied.
RAM uses flip-flops to store bits.
SRAM uses flip-flops, DRAM uses capacitance that gets passively drained by the read/write circuitry (and has to be refreshed to maintain its state while powered)
DRAM uses capacitance and yes, the data is kept for a while after the power loss but not for long, so using DRAM as storage does not make sense.
Wikipedia:
"DIMM memory modules gradually lose data over time as they lose power, but do not immediately lose all data when power is lost. Depending on temperature and environmental conditions, memory modules can potentially retain, at least, some data for up to 90 minutes after power loss."
@@ShinyQuagsire You are right DRAM uses capacitance . The video implies that capacitors need constant power to hold their charge which is not true (2:34) The reason DRAM cells require refreshing is current leakage which is partly due to the very small distance between capacitor plates. Normal capacitors can hold charge without needing constant power.
@@metty7528 The DRAM cells aren't supplied with constant power to hold the charge. The power is only needed to refresh the cells (i.e. to prevent the charge from leaking beyond the point of no return). The recommended interval is around 64ms for DDR4.
Charge will remain within the DRAM cells for a longer period of time (it decays exponentially), but once it falls below the threshold that the sense amplifiers can detect, it might as well be 0 volts. Larger capacitors can have longer decay times, but you have to consider how miniscule the DRAM capacitance is (If I recall correctly, it's on the order of pico-F).
@@ShinyQuagsire SRAM does not use flip-flops. SRAM uses pairs of inverters (the most common structure is the 6T cell). Registers within a CPU typically use flip-flops.
fantastic illustration, but please also cover 2 more techs in the same space, 1 intel optane and 2 not so much famous memristors
The animation you used for the flash cell was awesome and informative, why didn't y'all use such an animation for the DRAM? I'm having a hard time picturing it without such an animation.
RAM doesn't lose all of the information at once, and it loses it in a predictable pattern. The data is lost slower at lower temperatures, so it's possible for someone with direct access to a computer to cool the ram with an air duster, load a special OS that doesn't write to a ton of RAM, and recover enormous chunks of data. It's rare but possible, and is referred to as a "cold boot attack"
Though very interesting attacks, these are just theoretical as in real world scenario's, any computer that has data worth that much effort (and the hacker can get physical access) will have a TPM and locked BIOS/uEFI preventing the computer from booting anything else.
And while the TPM can be read, or at least the LPC bus or SPI bus that most TPMs connect over, that requires at least one full reboot of the system, clearing the DRAM allready.
I want a liquid state drive
Is Florida the liquid state?
interesting, I never heard before that SSDs block to read-only before they fail completely, that's very cool and useful feature
Electrons, capacitors, floating gates, 1 and 0.
All of this computer stuff is literally magic.
I will never understand how its possible for things like my smartphone to work as it does. Too much for my brain. How did they even discover how to make them?
Another informative entry in a great series. Thank you.
Despite the economic downturn,I'm so happy☺️. I have been earning $ 60,000 returns from my $7,000 investment every 13days
Absolutely, trading investment is a safer and surest way to make more money now because I have made up to $132,000 recently from investment trading and management
This is the kind of content we need!.. Getting tired of the commercial crap.
I have SSDs from 2007 that are still working. I use them in portable drives and don't use them that often but every time I boot them they boot and run just fine.
Thanks for this!
I'm an old guy, have always dabbled in electronics, short wave radio, computers, etc. In 1963 and 1964 I took a HS vocational electronics class. Vacuum tubes, I easily got. Simple transistors, not even. Wait, electrons move this way (no different from tubes, filament to plate,) but then we have "holes" moving the other way?
Couldn't get my head around that. Still can't.
In short: SSD keeps a charge when PC is off, RAM doesn't. Charge = hardware degradation.
The only SSD I've ever had failures of was OCZ brand, two of them died within hours. I replaced with Intel in 2008 and no further problems. Since then, I've installed 3 dozen SSD drives. None have failed.
Yeah, same here, though it was a few months, I think.
I bet yours also used the infamous SandForce controller which was incredibly faulty and made the drives more and more unusable... ☠️
Funnily enough OCZ RAM was also the only one that failed for me, so I swore to NEVER ever get anything from them ever again! 😤
@@Gaboou the thing about OCZ was their tech support attitude was akin to if a 10 year old was running it. When I complained about their drives, they sent an emoticon of a toungue sticking out. How childish. I never will do business with them ever.
@@basspig I feel for you... 😔
I was lucky enough to get replacements from my vendor - and was able to request an SSD from a different company (Crucial) so that this wouldn't repeat...
@@Gaboou I returned mine to B&H and bought different brands. It all worked out and it was educational for me.
What I have seen with SSDs is the following:
1. A lot of consumer SSDs will die with age. So say your 'cheapy' consumer SSD is 6 years old and has a wear level count of 19. I have seen a number of consumer SSDs die like this even though that wear level count is really low. What I have been seeing with enterprise class SSDs and some of the higher end prosumer class SSDs is they seem to be basically invincible. Lesson being these more costly SSDs get you a whole lot in terms of endurance and reliability. They say all SSDs age, but from what I have seen is some age a lot better than others. On this note I have seen some DRAM less SSDs fail after just 3 years. That DRAM is important.
2. A lot of consumer SSDs will lose your data when they fail. I have seen things like a chip goes bad and the data is striped across the chips, so you have a swiss cheese drive, which is basically unusable. Hopefully you have a backup because that SSD is going to be extremely hard to recover anything from even with just one flash chip dead. Another thing I see a lot is the data just won't read back the same as it was written. Talk about leaking electrons, well what I have seen is the sectors don't get marked as bad, it is just the bits don't stay the same and you get garbage out. It is important to note that computers are highly reliant on perfection, so these kinds of problems quickly turn into if you don't have a backup, you are in a world of hurt if you actually cared about that data even if the drive is technically only partly bad. At this, it can get into bit-rot where there is silent corruption and then you backup that corrupt data, so then your backups are corrupt.
3. This gets into another point RAID and ECC RAM. If you have data you care about, RAID, ECC RAM, and backups are a must. The data might corrupt on the drive. The data might corrupt in RAM silently and then get written to the drive. The backup reads off of the drive, so if it is corrupted by one of the above, it goes to your backup corrupted, the thing I was talking about as bit-rot. There is some nuance here I am not getting into, but basically you need the integrity in RAM and the integrity on your main storage device, or else what that backup can do to preserve your data is limited. At this most ways to do RAID does not properly support TRIM or TRIM at all, which is a key factor in your SSD's life in terms of wear and RAIDs are hard on SSDs as it is, so most people doing RAID really need enterprise class drives in order for the drives to not get destroyed to quickly with too much writing. The only RAID I know of that properly supports TRIM in real world usage is ZFS RAIDZ. ZFS is a dream come true for anyone who cares about their data and an important file system to consider when storing data you care about on SSDs. However ZFS is something that exists in the Solaris, Linux, and BSD UNIX realm, not in the Windows realm.
So for an SSD rundown based on what kind of user you are:
1. Gamer - Especially if you primarily play through Steam, probably not too concerned about your SSD as if the one you have fails, you stick in a new one, go through the annoyance of re-installing and getting Windows up to patch, and re-download the games you are actively playing on Steam. So in a couple of days you are back up and running. If you are a really avid gamer, you may have a bunch of customizations that make this harder to do quickly and may think more about a quality SSD in your computer to reduce the chance of this annoyance.
2. Student - You may game some, but if you are a successful student you have the discipline to get your work done. At this you don't want to lose that term paper a week before finals. You also don't have a lot of money. So what you probably need is a reasonable quality SSD, a good virus scanner as schools are breading grounds for viruses, and either a cloud backup of your school laptop or an external USB hard drive you backup to. Really, you probably want the cloud backup of your data or at least stick it in Onedrive or something. Especially with Onedrive, it will remap the places where you usually stick your files to be in Onedrive. Most places you go is going to have WiFi, but you can also get a cellular hotspot or may tether to your cell phone, so cloud is definitely an option for that term paper.
3. Family computer - Basically the same guidelines as the student. Maybe you also run a NAS device to stick your files on and use that NAS device for other things. The NAS device is not a backup and so you also have to have a backup solution for that. I have seen a number of NAS devices lose everything over the years, often times due to some glitch where you wouldn't think the data should be lost, but it did get lost, so these are not really all that great of a place to stick data and backups of these are very important if you have one and data you care about. You may also see some bit-rot with these NAS devices as the hardware in them tends to be on the cheaped out side. Not to say these are a bad solution, but instead it is not the highest grade solution.
4. Business - The first thing is you don't want a bunch of data on desktops and laptops. Either it is on a server with ECC RAM, RAID, and a good backup solution inside the company or going out to the cloud somewhere, which can take a number of different forms. Then you have a good deployment scheme to quickly image systems to corporate standards. So this reduces the need to have a quality SSD in the desktops and laptops, but you still don't want the headache of the cheapy SSDs failing, so it is a good idea to invest a little to get above the bare bones SSDs. You also really want backups of everything, especially in the days of ransomware and at this BMR (bare metal recovery) is rarer than you might think, but is possible. Any internal servers are going to have enterprise class SSDs of course and depending on the shop, you may have access to ZFS or you may only be able to do hardware RAID. Often times in a company the data stored on a laptop or desktop ends up gone when an employee leaves and the hardware is recycled to be used by another employee, but then somebody wants / needs to access the data by the ex-employee, but it is nowhere to be found, so another reason to not stick data on a desktop or laptop.
5. Enthusiast / Power user - When you get into this realm, you are talking workstation hardware most probably. You will beat on SSDs a lot, so they need to be enterprise grade or at least high end consumer as it does matter. You may try software RAID, but this is a bad idea unless of course you are using ZFS, but usually you will first try some other software / firmware RAID and end up regretting it. So then you either end up with ZFS or a hardware RAID controller. At this a hardware RAID controller will need to be modified to have a fan on it as these are designed for high airflow servers while a workstation does not meet this metric in most cases. If you have lots of data, you may think about a SAS expander, but really SSDs need a direct attachment to a RAID controller for performance, though mechanical drives not so much. You also need to consider what backup solution you are going to use. This gets more constrained as if you have a lot of data, some of the options can get to be really expensive while other options are a lot more cost effective. You probably also want a BMR (bare metal recovery) solution or however close you can get to it in order to minimize down time.
Cool info Thank you.
Long ago I watched a video that said SSD memory could not be reused. It is cool that you can erase and rewrite on them now.
I imagine there’s kids out there frying their computers tinkering with overclocking, and a “basic things to avoid” video may help avoid costly mistakes.
I have personally had ram go bad during a memory test. Testing used modules for a friend, passed on first cycle/pass, let it continue to run then forgot about it. For a month. (It was in an unused office at work). 122 passes in memory addresses starting failing and kept failing. Shut down are restarted the test, failed at same place on first pass and every subsequent pass. When I first saw the failed address I got excited that I had seen a cosmic rays flip a bit, but if it was a cosmic ray, it didn't just flip the bit, it trashed the whole stick of ram.
Also note the mechanism of CMOS (complimentary metal oxide semiconductor) or any sort of MOS. They actually are moving parts however at a sub-atomic level when the electrons pass through.
Also remember that many SSD's and RAM are a MOS where they need to remember that the "oxide" is actually "rust". Yes rust is in your circuitry however these integrated circuits or chips are hermetically sealed however over time oxygen does get in and they then increase that oxide layer therefore making the transistor useless for charging and discharging.
Also static discharge is a common occurrence which leads to faulty storage devices including RAM.
4:05 "You family will still love you, and so will I"
Lol true
This is something I always wanted more details on, thanks guys!
I had to RMA my RAM twice on my current PC. Once because it was DoA, and once because my prior mobo was bad and caused electrical damage to the replacement RAM. Diagnostics to determine a bad mobo are pretty much impossible when you don't have a replacement ready to test. So I actually had to take it to a PC and phone repair shop...and it took them two weeks to determine the failure after they ran a 12 hour test, told me everything was fine, and I immediately got repeated BSODs again...and it was the second time when they told me that my RAM was dead because of a physical defect in two of the RAM slots (both slots in the same channel).
I'm still using my crucial SSD that I got when I build a computer for Witcher 3. After all these years that SSD still works fine, I use it to store my games for my newer builds.
FYI to those tearing apart PSUs and other things, Capacitors DO hold their charge when unplugged! Just that over time they will lose their charge, please don't poke at mains filter caps
I love 5 minute videos with 1 minute sponsors 😃
Great question & simple answer...thank you🙏🙏
Idk, over the past 15 years of building PCs I’ve only had one SSD die on me, while I have had many sticks of ram die on me
I generally knew this, but the deeper detail info is very nice. Thank you.
You guys are making this channel absolutely great :)
I've wondered this for soooooo long. Thank you for making this video!
I have several SSDs I bought about 8 years ago (The 180GB Mushkin I have as my boot/Apps drive is the oldest), and have been in multiple PCs all are still about 90% health.
I bought an SSD in 2012, its still in use today, albeit as a backup drive that doesn't get written to often
Gloriously more informative than any of my computer science professors!
I’d never even thought of this topic, but now I’m glad I know it!