I didn’t get a new camera, actually just got some new lights, improved the lighting arrangement, and messed around with some camera settings. And of course cranked up the sharpen effect in premiere pro ( ͡° ͜ʖ ͡°)
What lighting can do
Nice
Yeah i saw that...
Where is the member emoji???????
this is epik
"Have you ever wondered what _actually_ is going on" YES! ABOUT EVERYTHING! KEEP MAKING THESE VIDEOS!
I agree
@Omri Hermon manufacturers are still making a computer
How come every time you format you lose more and more total disc space
@@Adrien_broner what do you mean?
One of my friends is always amused at my random knowledge.
Sector size is a function of read/write head size.
Block size can affect read/write speeds. In other words, it's faster to read ten 10 byte blocks than it is to read one hundred 1 byte blocks.
Head size has nothing to do with sector size. Heads fly over 1 bit at a time (their size is related to bit density only). The higher-level electronics (microcontroller) divide those bits into logical sector sizes. For each "sector", it also has to store headers (which are factory written, mentioned in video) and ECC data (error checking and correction), which all take space. Raising sector size from 512 to 4K means eliminating the ECC and headers of 7 sectors, while slightly increasing the ECC data for the 8th (rough numbers sketched below). But it creates compatibility problems (mainly slowing down) with partitioning by old SW (up to and including XP).
PS: those 3.5" 1.44MB (1,440KB) floppies were also advertised as 2MB unformatted. Wikipedia mentions the formatted size for the Amiga as 1,760KB.
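To put rough numbers on that overhead point, here is a tiny Python sketch. The per-sector header and ECC byte counts are ballpark assumptions (the sort of figures quoted for Advanced Format), not specs for any particular drive.

# Why 4K ("Advanced Format") sectors waste less of the track than 512-byte sectors.
# Overhead figures below are illustrative assumptions, not measured values.
DATA_512, OVERHEAD_512 = 512, 15 + 50      # ~15 B header/gap + ~50 B ECC per sector (assumed)
DATA_4K,  OVERHEAD_4K  = 4096, 15 + 100    # one header + a somewhat larger ECC field (assumed)

def efficiency(data, overhead):
    return data / (data + overhead)

print(f"512-byte sectors: {efficiency(DATA_512, OVERHEAD_512):.1%} of the track is user data")
print(f"4K sectors:       {efficiency(DATA_4K, OVERHEAD_4K):.1%} of the track is user data")
# Eight 512-byte sectors spend 8 overheads on 4096 B of data; one 4K sector spends just one.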
That would be the reason why the file transfer dialogue box drops in bytes per second when you are transferring multiple folders rather than videos: the folders may have a lot of small files that don't fill the allocation unit size, but videos would.
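A quick back-of-the-envelope sketch of that "small files don't fill the allocation unit" point; the file sizes below are made up purely for illustration.

import math

# Every file occupies a whole number of allocation units (clusters), so small files
# waste the tail of their last cluster ("slack"), while one big file barely wastes anything.
def on_disk_size(file_size, cluster=4096):
    return math.ceil(file_size / cluster) * cluster if file_size else 0

small_files = [700, 1200, 3000, 150]      # bytes each, e.g. tiny text/config files (made up)
big_file    = 2_000_000_000               # one 2 GB video

wasted_small = sum(on_disk_size(f) - f for f in small_files)
wasted_big   = on_disk_size(big_file) - big_file
print(f"4 small files waste {wasted_small} bytes of slack")   # 11334
print(f"the 2 GB video wastes only {wasted_big} bytes")       # 3072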
What's the ideal sector size?
@@Anonymous-cm8jy
Depends on the use case, hopefully someone can provide some more insight on this as I’m not really sure
@Anonymous-cm8jy if you're doing mostly big files like games, use big; if it's mostly small stuff, use small
Hi Joe! You might remember me as the author of Task Manager, but I'm also the author of the Format dialog! I have a video about its history on my channel (Dave's Garage). Would have been a good topic to do a collab on! Cheers, and the new Aputure lights (I can only assume) look good!
I come from that video!
Why aren't you verified
God bless you sir.
LOL
@@Anonymous-cm8jy He IS GOD, he can bless himself :-)
Please never stop making these informative videos.
thiozombie
If he could talk a fraction slower it would be fantastic. Not a criticism, just a suggestion.
@@gregdowle8031 you are somewhat right.
A trim command (known as TRIM in the ATA command set, and UNMAP in the SCSI command set) allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer considered in use and can be wiped internally. Trim was introduced soon after SSDs were introduced.
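A conceptual sketch of what a drive could do internally when it receives a trim for a range of logical blocks; the class and method names are invented for illustration, this is not real firmware or a real driver API.

class ToyFTL:
    def __init__(self):
        self.mapping = {}                  # logical block address -> physical flash page

    def write(self, lba, phys_page):
        self.mapping[lba] = phys_page

    def trim(self, start_lba, count):
        # The OS says "these blocks no longer hold live data", so the drive can drop the
        # mappings and erase the physical pages later during garbage collection.
        for lba in range(start_lba, start_lba + count):
            self.mapping.pop(lba, None)

ftl = ToyFTL()
ftl.write(100, phys_page=7)
ftl.trim(100, 1)   # after a file delete, LBA 100 is free to be wiped internally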
Not sure about everyone else but I really appreciate the extra details you provide in your videos. Thanks for not "dumbing it down" too much!
Got the notification while formatting a 30 gig drive.
LoL
XD
Lol 5
Lol
@@daringcuteseal Lol 6
Yes, I've been using IBM PCs since the '80s. At that time the original hard drive controllers actually sat in a plug-in card attached to the original ISA bus (great-grandfather of all of the current generation of PCI buses). The control logic sat outside the hard drives themselves, on this plug-in card. Now back then there used to be two different types of hard drive formats available, MFM & RLL, and so you had to get the appropriate type of controller card for the type of HDD you were getting. The low-level formatter was a piece of code within the controller card that you could execute by using a special program in DOS that initiated that portion of the controller's logic. That all went away once the IDE HDDs came: the controller logic all lay inside the HDDs themselves, and the smart "controller cards" just became simpler dumb interface cards. Low-level formatting also became inaccessible by that point, as the LLF was done at the factory and never needed refreshing.
Perfect timing! I’m actually in the middle of formatting a 2tb drive to clean it of things I regret seeing
@@Hope_Upstairs ?
Porn
@@electronichaircut8801 😂😂😂
Has it completed yet?
Bruh
Before Vista, a "full" format was just a deep scan over the drive for bad sectors, not actually writing zeros. Starting with Vista, a full format does do the zero write.
Back in the day there used to be a format-without-erase command. This would read a sector, then format that sector, then re-write the data back into that sector. It took hours to do, but was helpful in recovering drives that were starting to have issues.
Drink every time he says „I’m not gonna get into that“
And double for "actually" and "basically".
@@LiveeviL6969 I don't have a deathwish, thanks
I have drank 20 beers so far
*finally finds a video where someone goes into detail about something no one ever goes into detail about so I can finally learn 100%*
YouTuber: I’m not going to get into that, I’m going to keep it simple for this video
GOD DAMMIT 😫
I swear I had a stroke reading your comment
LL formatting was back in the day when a 10 megabyte drive was _something..._ back when most personal computers ran on floppies or tape and the user was the device driver for everything.
"..........the user was the device driver for everything." 🤔😳😁👍😊👍
This is why i subbed to this channel
Some stuff u wonder what they do
This is a pretty good piece, I was already familiar with a lot of the stuff in isolation but you did a nice job putting it all together into an easily understandable package.
I had almost forgotten about low-level formatting. Seems that it took forever to low-level format my 30MB hard disk. I had the option to upgrade my Compaq 286 to the 30MB and all my friends thought I was nuts for paying extra for such a huge drive. "You will never be able to use 30MB on a home computer even 30 or 40 years from now."
Wow lol. I remember when I had the MASSIVE 40MB Hard Drive installed in a Tandy 1000. Crazy to think that only about 25 1.44 MB Floppy disks would fill the entire drive. Yet that was considered more than enough space at the time.
Bro you need to like assemble the stuff into like a playlist to teach everything about computers, because you explain everything so well.
You're literally a villain turned hero.
Thank you so much for what you do
As a tech guy who helps people with their computer and is constantly into tech stuff, I was always curious about this and even though you said you went extra, always remember there are people out there who understand everything you said and I learned a lot from the video! Thanks for the education and I always love your content. 🤘🏼🤙🏼
I'm surprised you explained all that shit and didn't bother with what NTFS and FAT32 are XD
english vs spanish
Oh right, I totally forgot, I was kinda looking forward to that part... :C
@DEEJMASTER 333 🤣🤣
He only covered NTFS for format/quick format. A FAT filesystem doesn't require those particular files.
@DEEJMASTER 333 exFAT
I would love to know more about SSDs! You left us a bit short on them :D
@15:25 I would like a video on the NVME drive please regarding that specific process. I think it would be informative and interesting to the audience.
Some (if not many) Defrag programs will show the existence of the metadata files. Some will even display a name for each of them.
Yea, I remember seeing $MFT and thinking why I can't move it somewhere else...
@@phs125 I've seen that as well.
For future reference,
If you get swapfile.sys or pagefile.sys
And need to move it to make partitions,
Just disable paging, reboot, and defrag...
ThioJoe reminds me of myself when I was younger - except he's really cool and has a youtube channel.
The way you've completely turned the channel around is beautiful.
It's nice when a YouTuber explains somewhat complex technical stuff in terms I can understand. Though with "zone allocation" it might be irrelevant, but I am curious how big "big files" are (like, "text document" big, "mp3" big, "CD ISO" big, "DVD ISO" big, or such).
Thanks for the explanation of hard disc and SSD style storage. The NVMe on both the motherboard and on the SSD looks like a better strategy. Brilliant.
This was the fastest, easiest 16 min video I've watched. Thank you.
Now I know what is the meaning of: *Size of actual file and size of file in the disk* while seeing properties of a file or folder😎
@@shadycopilot Aww man
@@Custmzir so we back in the mine
Back in the day, low-level formatting wasn't just an "apparently you could" thing. It was a "you must do this first" thing.
My memory is fading a bit, but in MS-DOS, you had to run the DEBUG command (built into the OS) to access a piece of microcode to do the low-level format. The code was likely on the controller. Once running, you would put in the drive geometry which was printed on the drive from the factory. It said how many heads, cylinders, sectors, etc. there were on the drive. In addition to that, the drive often came with a defect map you had to key into the low-level formatting utility to remove those parts of the drive from visibility from the operating system. Isn't that something, a drive coming from the factory with known defects you had to work around. I don't miss those days.
Once IDE drives came around, the controller was on the drive itself and the computer had a "host bus adapter" to be able to connect to them. I remember back in the day wondering why I'd want all these things integrated on the motherboard. What if one component fails? I'd need to replace the whole board! That's expensive and inconvenient! Now, it's not a big deal. The reliability of components is really good at this point with all the mistakes being made decades ago and manufacturers learning from them. With the advent of IDE drives, Zone Bit Recording became a thing so your drive geometry changed from a physical feature to a logical one. When the operating system asks for some data on this (cylinder, head, sector), the drive knows where it actually is and reports it back. Most spinning drives today have only one physical platter with one or two heads. Not like back in the day when it was common to have a drive with many platters and two heads each.
My first hard drive was an MFM, 5.25", full-height (3") drive holding 30MB. I would next upgrade to an RLL, 3.5", half-height (1.5") drive holding 40MB, which just happened to be a slower drive than the first one.
Very informative. I enjoyed it, however, the part that got me more glued was where you stopped. I'm using more SSDs now to create content and I was interested in the Open Channel SSD thing about NVME drives.
First time I watch one of ThioJoe's videos in the first 10 minutes.
Interesting thing: floppies and hard drives also have a sort of translation layer. It’s called sector skew. Sectors are not laid out in numeric order but so that, ideally, the next sector to be read is directly under the head when the previous action is done. This is set during low level formatting.
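A little Python sketch of the skew idea from the comment above; the skew value and geometry are arbitrary examples, just to show how the sector numbering ends up offset from track to track.

# After stepping to the next track the platter has kept spinning, so logical sector 1
# of that track is laid out a few physical slots later. A skew of 3 is an arbitrary choice.
def skewed_layout(sectors_per_track, tracks, skew=3):
    layout = []
    for t in range(tracks):
        offset = (t * skew) % sectors_per_track
        # physical slot -> logical sector number for this track
        layout.append([(slot - offset) % sectors_per_track + 1
                       for slot in range(sectors_per_track)])
    return layout

for t, track in enumerate(skewed_layout(sectors_per_track=9, tracks=3)):
    print(f"track {t}: {track}")   # sector 1 shifts 3 slots further on each track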
Do you know if it was possible to low level format a floppy disk outside of the factory?
@@luhgarlicbread All floppies were low level formatted by the user. That's why you could select the different capacity for your needs based on your hardware.
@@PabSungenis
Oh, cool
I think you forgot to mention one very important thing about non-quick formatting: it will check which cells/sectors are bad while writing 0's to them, then it will mark them and isolate them in order to prevent the OS from using them.
I'm addicted to these videos! Very well explained👍
The way I recall "Low level formatting" was in the pre ATA interface days. The PC (usually DOS based Disk File System) actually completely controlled the hard disk. This also only allowed for logical drives of 32MB (yes MB). The Hard Disk Controller was plugged into an expansion slot on your computer and then you would use the dos debug tool to access a certain memory address to invoke the setup tools to partition and low level format your drives then allocate them as logical drives.
I don't recall ever having to re-do a low level format on a drive on a PC XT or AT; you merely set up the drive tracks, sectors and partitions once. Then you could return to DOS and format the drives.
The ATA drives introduced the actual hardware control to the drives themselves and a simplified interface for the computers. So while you may use the DOS/Windows format command afterwards, what physically happened on the disk was out of the computer's hands, as the ATA on-drive controller actually took control and allowed for automatic management of bad sectors by swapping bad sectors with a reserve of alternative sectors.
Nice one, u bring to this platform a genuine approach, sharing ur knowledge with others and creating a supportive community. So long as u tell it as it is I'll continue to tune in. YES from me too
Thanks very very much. Now I understand. You present these subjects in a way that my poor old brain absorbs what you are teaching. Thanks.
This video was better than I thought it would be. Thank you.
Your Videos are very good and helped many people including me. Keep up the good work!
This man's explanations are all over the place. Reminds me of how a child would explain things.
No structure, no pacing or pauses, and lots of 'I'm not going to get into that'.
That's the clearest explanation I've ever heard
Another way of looking at low-level formatting is that the original write head is positioned by physical location, then it writes the address that defines the location. Simply put, at the location for track 1, sector 1, it writes "track 1, sector 1", then "track 1, sector 2", then "track 1, sector 3", etc. The higher-level software of the OS then relies less on the physical head position and more on the written data to confirm its position just before reading or writing subsequent data. If it didn't do it this way, it would be all too easy to make a positioning mistake and store data in the wrong place, overwriting something else.
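A toy model of that idea, purely for illustration (the structures and names are invented): the low-level format stamps every sector with its own address, and later reads check that stamp before trusting the head position.

def low_level_format(tracks, sectors_per_track):
    disk = {}
    for t in range(tracks):
        for s in range(1, sectors_per_track + 1):
            # each sector carries an ID field written once at format time
            disk[(t, s)] = {"id_field": (t, s), "data": b"\x00" * 512}
    return disk

def read_sector(disk, track, sector):
    cell = disk[(track, sector)]
    # the drive trusts the written ID field, not just where it thinks the head is
    assert cell["id_field"] == (track, sector), "seek error: wrong sector under head"
    return cell["data"]

disk = low_level_format(tracks=2, sectors_per_track=9)
read_sector(disk, 1, 5)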
Superb explanation, thanks!
Very interesting and informative! Thanks!
Very interesting, especially the bit about choosing big blocks (7:40) for big files, but you forgot to tell us the important stuff, like how/why to choose FAT formatting, or FAT32, or NTFS, or exFAT, whatever all those are. Now THAT info seems more useful than knowing about sectors and cells and pages and whatnot.
I learn a lot on this channel please keep these videos coming.
Ah, those days when every floppy disk had to be formatted before it could be used. And then came pre-formatted floppies. And Norton Utilities had a Disk Editor that could recover a deleted file - mostly manually. Something really got better.
Dude these videos are so informativ! I don't know any other channel that has tech content that is so unique as this one. Most channels just have the standard top 10 best gaming laptops or something. Not ThioJoe, this video and the System32 hidden programs video are so unique and high quality. The reason I am subscribed to your channel!
Keep up the good work man!
You mean informative
One reason why low level formatting is no longer available is because it actually needs to be very precise, especially with higher data density. The formatting still fades today, but that is offset by it being much stronger, and also by better manufacturing. You used to access sectors on a hard disc using a CHS tuple (cylinder, head, sector) back with IDE. In fact, on very old devices you actually had to tell the BIOS how many sectors, cylinders and heads (usually platters * 2) the disk actually had. Nowadays with SATA (and NVMe as well) we exclusively use LBAs, which basically just give each sector a number. One optimisation an HDD might do is switch between heads for every sector, which further improves sequential read/write performance. Fun fact: the MBR (to the extent it is still being used) still contains CHS tuples for each partition. And yes, even GPT disks usually contain an MBR, though it's mostly useless.
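For reference, the classic CHS-to-LBA conversion fits in a few lines of Python; the geometry numbers in the example are just made up.

def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    # sectors are 1-based in CHS, LBAs are 0-based
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# e.g. an old-school fake geometry of 16 heads and 63 sectors per track
print(chs_to_lba(c=2, h=3, s=10, heads_per_cyl=16, sectors_per_track=63))   # -> 2214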
Okay this video was more useful than the classic "how to change the oil in your computer" from back in the day. Thank you very much.
I remember LL Formatting (LLF) and that the read arm mechanism was large and heavy.
That also meant the seek time was longer due to the read arm's weight, and that speed would wear out the read arm.
Thus, drives would fail and computer speed was limited, from sluggish to not very fast.
Therefore, when LLF became strictly a manufacturing process, read arms became lighter and could move faster, which reduced seek time, and thus computers got a speed boost.
If I had a dollar for every time ThioJoe said: "I'm not gonna get into that," I'd have enough money to buy an SSD! 👍😂
Three
@@user-zs8eg4mu8t Would you care to elucidate on your one word reply? ... Three SSD's?
@@marcse7en ssds
1 rtx 3090
Thanks a lot for the knowledge you have been impacting into us.😌
"impacting into us"
The detailed info about the drives makes the video very educational but still interesting. Keep it up! :)
I think formatting codes lol (Because I see the notification)
Edit : Yes, got like from ThioJoe again woohoo! (Wait the likes disappear lol)
Hey guys, if you're really serious about formatting the entire disk drive, don't forget to back up first. 😇👍🏻
great tip.. sometimes i totally forget 🤣
My computer told me that the backup utility is no longer functioning. I tried to reload the original OS and it does not recognize it. Would the repair DVD for Windows XP from Amazon be able to help me? More and more things on my computer are looking stranger and stranger. Files are ever increasingly missing and more and more control options are missing. What should I do?
This helps so much! I want to get an SSD and want to know what to do with the old hard drive, and now I know
Computer Master who saved my computer's life🔥🔥
Omg the computer Master 😍 tysm🥺
Awesome job explaining things in detail for better understanding, for all types of mindsets. Your calling must have been as a teacher or guide for others. Technical thinking is extremely important and very hard to find, especially these days. Never lose that moral spark you carry, big dog. I definitely learned a lot from your channel and that's what it's about. GREAT JOB 👍
I needed this! I just completed my NTLite Windows 10 version and wanted to store it on my USB drive for my new build. Thank you!
Edit: Do I need to non-quick format to use my flash drive for installing windows? What about BIOS updates?
dude your videos are awesome, i honestly didn't know about metadata files on disk
Just stumbled upon your channel and I started to get all the answers to my questions!
this video reminded me of info tech class where I was given a computer to troubleshoot. I can't count how many times I had to manually tell the bios how many cylinders, clusters, sectors there were on the hdd before I realized the cmos battery was dead lol.
I love these videos so much..
Simply explained for everyone!
Found this video a bit late but one thing that should be noted about SSDs is that they’re organized in pretty much the exact same way that RAM also is these days. Just as SSDs are organized into pages, so too is RAM and most modern CPUs have recursive page tables to map memory addresses the same way that an SSD’s mapping table is able to translate virtual LBAs to physical addresses of SSD pages. Since operating systems always have to design their own heap allocators to map the RAM, it should go without saying that mapping an SSD should in theory be just as straightforward. I say this as someone who has studied both the OSDev Wiki and Philipp Oppermann’s tutorial on writing kernels from scratch extensively.
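A minimal sketch of the mapping-table analogy from the comment above: like a page table, the table translates a logical address to whichever physical page currently holds it, and a rewrite just points the entry somewhere new. All names are invented for the example; real FTLs are far more involved.

class MappingTable:
    def __init__(self, total_pages):
        self.free_pages = list(range(total_pages))   # physical flash pages
        self.table = {}                              # logical page -> physical page

    def write(self, logical_page, data):
        phys = self.free_pages.pop(0)    # flash can't overwrite in place,
        self.table[logical_page] = phys  # so a write always lands on a fresh page
        return phys                      # (the old page is left stale for garbage collection)

m = MappingTable(total_pages=8)
print(m.write(0, b"old"))   # logical page 0 -> physical page 0
print(m.write(0, b"new"))   # same logical page now lives on physical page 1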
Good video Joe. An excellent crash course on how drives work.
If I were wearing a hat, it would acknowledge the fact that this short is awesome. My discs are cleaner when I format them and it takes more bytes, which is what I aim for. Thank you for sharing.
THIO you are amazing! Thank you, for your incredibly informative tutorials.
I had been looking for this video since ages. Very informative! Thanks 👍☺️
Very educational! And surprisingly clearly explained! I'll be checking out more of your videos. Thanks!
Thiojoe went from a troll to creating actual education videos. Times have really changed
Avoid doing a non-quick format on flash devices. It shortens their life.
Now show us how to Unformat a Quick format! Keep up the good work ThioJoe. I am a long time fan!
No mention of the different file systems or selecting allocation unit sizes, all important factors that can affect your drive's performance and data handling. So many important facts omitted.
I come from the era when dinosaurs roamed the Earth and some hard drives used hydraulics to move the heads in and out. Also, the drives weren't sealed and heads were replaceable; did it myself quite a few times. So much of this is nostalgia for me. Now the discs are smaller, and the capacities are larger. We used tools similar to OSForensics to patch deleted files back into existence if they hadn't been overwritten. Good to know that down deep that hidden world is still there, not that I doubted it.
I, distantly, remember having to Low Level Format drives - 30 years ago!
This is also the reason you should use file or disk encryption. When you delete an encrypted file's pointer, and the encryption key is not compromised, then it is functionally the same as being raw formatted.
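A small illustration of that "crypto-erase" idea, using the third-party cryptography package (pip install cryptography); the file contents are made up. Once the key is gone, the ciphertext left on the drive is just noise, whether or not the sectors were ever overwritten.

from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"contents of some private document")

del key   # losing/destroying the key means the data is effectively erased
# The bytes still physically exist on the drive, but nothing can turn them back into the file:
print(ciphertext[:20], "... unreadable without the key")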
I remember the days when you had to low level format a drive. I had a program that would allow me to inspect the data on the drive, and by looking at the file names you could see which files were deleted, because the extension had been changed; by editing the extension back to its original extension, the file would be restored if it hadn't been overwritten. The program was PFM. No, it's not what you're thinking; if I recall, it stands for Personal File Manager.
Thank you for making this video. I learned a lot. And also I am now
motivated to format one of my drives 😄
Also, when you were talking about directly communicating with the SSD about storing data, I literally was thinking: isn't that what the NVMe protocol should do?
You made every single second of this video worth . You constantly do on all of your videos though tbh . Cheers mate 👍
Great video, I never knew what quick format does...
the original low level format was mainly to pair the drive with the controller. you couldn't just put a used hard disk in another machine and read the data from it, or read it reliably.
this was with the old MFM / RLL / SCSI drives.
when IDE drives were invented, the term "Low level format" stuck, but the low level format on these just updated the bad block allocation table in the firmware on the drive, hiding it from the OS and disk checking utilities. the same is true with modern "Low level format utilities" like SeaTools etc, they just perform a write / verify / write / verify on each track / sector and mark the ones that fail as not used in the drive's firmware. the advantage of the IDE was that instead of the sector / track / bad / not used table being on the controller card, it was moved to the drive. so you could then move the drive to any controller card without the need to low level format the disk and lose all the data on the drive.
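A toy version of that write / verify / remap pass, just to show the shape of it. raw_write and raw_read stand in for whatever a real tool uses to touch the drive; here they are faked so the sketch runs on its own.

import random

def raw_write(sector, pattern): pass                            # pretend hardware access
def raw_read(sector, pattern):  return random.random() > 0.01   # ~1% simulated failures

def surface_scan(total_sectors, pattern=b"\x00" * 512):
    remap_table = []
    for sector in range(total_sectors):
        raw_write(sector, pattern)
        if not raw_read(sector, pattern):      # verify what was just written
            remap_table.append(sector)         # mark it as not-used / remapped in "firmware"
    return remap_table

print("sectors remapped:", surface_scan(10_000))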
Thanks for adding the time stamp.
Well whenever I sell a computer with a hard drive I'm definitely going to not quick format it lol
Yeah, obliterate those bits
Another couple of items on old versus new low level formatting:
The old ones used to let you change what they called the intersector gap and the interleave, mostly because of speed differences between old computers and newer computers while still being able to use the same drives. The intersector gap was the distance between sectors on the platter in a track. The interleave had to do with how many sectors you skipped before the numbering continued; nowadays that numbering is pretty much consecutive. So for instance you would get sector 1, skip a sector, sector 2, skip a sector, and when it got around to the beginning again the numbering continued in the slots that were skipped (there's a small sketch of this numbering below). That was with a single interleave gap. The reasoning for this was that some computers were so slow they could not read two consecutive sectors reliably, so they used one sector or more of spacing to give the computer time to catch up.
The other thing about low level formatting was that they used to put all the tracking information on one side of one platter. Nowadays they spread that low-level tracking information out across multiple platters, so as not to have all the data wiped out just because the tracking information got scrubbed off by a head crash.
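Here is the small interleave sketch mentioned above; the sector count and interleave factor are arbitrary examples.

# With an interleave of 2, logically consecutive sectors sit every other physical slot,
# giving a slow controller one sector's worth of rotation to catch up between reads.
def interleave_track(sectors_per_track, interleave):
    physical = [0] * sectors_per_track      # physical slot -> logical sector number
    slot = 0
    for logical in range(1, sectors_per_track + 1):
        while physical[slot]:               # skip slots already assigned
            slot = (slot + 1) % sectors_per_track
        physical[slot] = logical
        slot = (slot + interleave) % sectors_per_track
    return physical

print(interleave_track(9, 1))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]  - consecutive, no gap
print(interleave_track(9, 2))   # [1, 6, 2, 7, 3, 8, 4, 9, 5]  - one skipped slot between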
By the way, the zone bit recording idea of putting more tracks on longer tracks was actually implemented on the Commodore 64 floppy drive (the 1541). IIRC there were four zones from inner to outer tracks. It was also done that way because the medium itself could not be read reliably if the magnetic bits were packed too closely. Kind of like the difference between an audio cassette recording and the helical scan recording a VCR does.
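If I remember the 1541 zone layout right (worth double-checking), it looked something like this; the point is just that outer, longer tracks hold more sectors per track.

# The four speed zones of the Commodore 1541, as far as I recall them (may be off):
ZONES = [
    (range(1, 18),  21),   # tracks 1-17: 21 sectors per track
    (range(18, 25), 19),   # tracks 18-24: 19 sectors per track
    (range(25, 31), 18),   # tracks 25-30: 18 sectors per track
    (range(31, 36), 17),   # tracks 31-35: 17 sectors per track
]

def sectors_on_track(track):
    for tracks, sectors in ZONES:
        if track in tracks:
            return sectors
    raise ValueError("track out of range")

total = sum(sectors_on_track(t) for t in range(1, 36))
print(total, "sectors per disk side")   # 683, if the table above is right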
Which makes you wonder how much they could have recorded on a standard cassette tape if they used the helical recording method.
A DAT tape drive gives you an idea, since I've been able to record about 2 GB on one, e.g. a 2-hour movie.
Sorry, I meant to put more sectors on longer tracks. I think they referred to it as variable bit rate.
Back in the day, I found some software to low level format floppy disks. I made disks that were several MB in size; however, the reliability was awful. Hahaha. I also made disks that were only a few KB, which worked well; however, because the standard format of a disk was already quite reliable, I didn't see much of an advantage in reliability. The largest stable format I got, from memory, was something like 2MB or so. It was a lot of fun!
Very helpful information and indepth thank you
I use Red Key every time I either get a new Drive or if I throw away or sell an old drive. Red key has helped me also recover bad sectors.
I remember at one time you could create blocks etc, can't remember the software I used but when we moved computers to another department we had to do this.
Oh, I thought you were gonna talk about file formats. This is gonna be interesting anyways, as always!
Yes we used to always low-level format hard drives. To everyone out there just remember to always back up your data from any hard drive.
Nice informative video, I now have a better understanding of how and what my drives do … thanks 👍
On the older hard drives that could be low level formatted, the sectors not only could magnetically "fade" over time or become "bad sectors"; the sectors could also 'move' due to wear in the platter spindle bearing over time, making the drive head re-seek or re-write the data more times until it is found or written. So the old MS-DOS would cover up this mess with the error correction code (ECC) in each sector, until a "sector not found" error appears that kills your data, because the head seek/write retry limits are reached.
Wonderfully technical! Thank you so much!
Good stuff! Up next: _What Does _*_Defragmentation_*_ Actually Do, Anyway?_
I still LLF drives from Maxtor and Seagate and Conner, as I have that LLF software.
Informative video. Your videos are suitable for computer class lectures; if a teacher is watching them, they can definitely play or recommend them to the students. SmartTJ 😎
Thank you, I finally know what defrag does after all this time. I used to think it just fixed broken stuff; now I understand it as organising data on a disc drive to make it quicker for the computer to locate information 😅 And I never have to defragment again now that all my drives are SSDs.
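A toy picture of that reorganising idea; the block numbers are invented purely for the example, and real defragmenters work on extents in the file system's allocation structures.

fragmented = {"movie.mp4": [12, 97, 45, 310]}    # a file's pieces scattered around the disk

def defragment(layout):
    # lay each file's blocks out one after another so the head can read it in one sweep
    next_free = 0
    compacted = {}
    for name, blocks in layout.items():
        compacted[name] = list(range(next_free, next_free + len(blocks)))
        next_free += len(blocks)
    return compacted

print(defragment(fragmented))   # {'movie.mp4': [0, 1, 2, 3]} - one contiguous run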