This video has an oversimplified version of a NAS. High-end NASes have fault tolerance so that when one NAS machine is offline, the other NAS can take over automatically. For Synology NASes this is called High Availability. Synology NAS also offers iSCSI and other connectivity technologies that speed up data transfer. Basically, if you want certain SAN features, you don't have to shell out the big bucks for a SAN solution. A high-end NAS can satisfy that just as well at a fraction of the cost. No, when a NAS goes down, it does not explode with shrapnel flying all over.
This video was pretty good at explaining the traditional differences between SANs and NASs. However, there are many modern devices that blur the lines between a SAN and a NAS. This blurring of lines confuses a lot of people and makes it necessary to have videos like this.
However, as soon as the solution scales up and more performance is required, dedicated switches and network adapters for storage traffic become a necessity anyway. That's where Fibre Channel comes into play and guess what people are choosing? I still hope to see a working iSCSI SAN one day, but I haven't seen any just yet. The need for a dedicated NAS is probably disputable - a file server on a VM does a pretty good job, no additional hardware needed. Set up a cluster for fault tolerance and you're good to go.
I saved this video mostly because the COMMENTS are correct... the video is SEVERELY lacking and should take a note from the comments. Further, the way the connections are illustrated is just incomplete and gives a false idea.
I run 2 SANs in my basement where my vSphere HA cluster lives :) - 2 or 3 HPE DL380p G8 with SFP+ FC and a bare-metal TrueNAS install isn't that expensive these days. For sure, I connect my LUNs through iSCSI Multipath I/O, and the SAN network is my only network with dual 10 GbE connected to 2 MikroTik CRS309-1G-8S+IN for failover. In production I do round-robin balancing over both 10 GbE connections - runs like a charm! (Pay special attention to your HBA controller; the built-in P420i from HP is not recommended for ZFS.)
Okay, I have just 1 question to ask: Why can't other YouTubers make and explain videos like yours? Your explanation is just so easy to follow. You don't seem to go on and on. Believe me, I value your videos. Thank you ever so kindly. 🙏🏻
@@danielwoosewicz6556 hello Sir. I have been looking for your other comments for a while but I can't find them. I even Ctrl+F'd the webpage for your name. I think there are too many comments to load them all. Could you please kindly tag me in the comment where you explain the misinformation? Thanks 🙏🙏
To be fair, if it's QSFP+ connectors for the fibre and you put everything close enough together, you won't need to go fibre - just buy 1 m direct-attach QSFP+ cables.
Thanks mate, your animation for this topic is easy to understand. It's a very, very basic definition of NAS vs SAN, and all kinds of IT techs erupted in war against you. It's not a PhD thesis but a simple, easy-to-understand video for those who have no clue at all. It's not a lecture for storage experts deploying assets for banks or stock exchanges etc. - get a life. Before watching this animation I had no idea about SAN; now I do have SOME info. Many, many thanks for taking a very, very small amount of my time and giving me what I wanted.
Of course you can have redundant components in a NAS, as you can in any other type of computer. But that doesn't make the NAS redundant. If you have a kernel panic, for instance, then it doesn't matter how many power supplies you have; your clients won't be receiving any data.
Not a bad video, but important to note that things like redundant power supplies and online capacity upgrades are standard features of large SMB-grade NAS devices, and don’t necessarily require a costly SAN.
There are lots of NAS products out there - for example Isilon (an EMC product) - that support redundancy: even if a node goes down there won't be any effect on the NAS, and even if the whole cluster goes down you can use Superna Eyeglass for disaster recovery purposes to replicate/mirror data.
It's a good idea. I once built a cluster that used eight gigabit ports per node in a link aggregation scheme. Was blazingly fast for the money... not flat-shared-everything fast of course, but still... it flew on its reclaimed ProCurve 48G switch fabric ; ) 'Beowulf cluster' is the answer to the question... WTF do we do with all this pulled ex-enterprise gear?
Man, you're absolutely wrong. A NAS sure can have more than one power supply. And there can be more NASes in the network, and there sure can be more than one switch in an Ethernet network. And a SAN can also be non-redundant.
This is actually not true. NAS is network-attached storage. Redundancy depends on the actual system. Speed also depends on the actual system. The main point here is that a NAS uses the CIFS/NFS protocols; it's file-based storage. It has its own file system and can be read/written by different operating systems, which is why people call NAS unified storage. SAN uses the FC or iSCSI protocols, which are block-type protocols: after you mount an iSCSI or FC volume, you need to format it with the given filesystem. Performance and redundancy also depend on the actual system. Depending on the system, both can do more than 'just storing data'. NAS can provide unified services, deduplication, data reduction, and encryption; there are also NAS-capable SAN storages on the market, like Dell EMC Unity, which can provide NAS services on a selected volume.
>> that's why people call NAS as Unified Storage I think it's a misconception. "Unified" historically has been a term for SAN storage which incorporated file-based access at some point in time. See, for instance, IBM Storwize V7000 Unified. EMC VNX became "Unified" as soon as it incorporated Celerra NAS. I've never heard this term being applied to NAS systems. NetApp never calls its FAS storage "Unified". Neither does Dell EMC with its Isilon. In this sense, iSCSI support doesn't make a NAS "unified storage". iSCSI does have its place in small and even some medium environments. However, as environments grow, a separate storage network becomes a must anyway so why not use a protocol which was designed for this purpose in the first place? Even if you stick to iSCSI, you still have to buy additional switches and expansion cards. Even though iSCSI NAS is a "unified storage" from a purely technical standpoint, corporate customers (80% of the SAN market) will not take such claims seriously. iSCSI was a pretty big thing in early 2000s. In 2007, Gartner even predicted that iSCSI will become a dominating protocol for SANs, but that never happened. Your other points are pretty solid still.
@@alexrogov7186 You are absolutely right - file & block capability makes a system unified. This is why I wrote "people call NAS storages unified". I've heard it soooo many times, and you can imagine, as a storage specialist, how triggering it is when you hear it from people around you :) In regards to iSCSI: a lot of people don't know that the best practice is to buy iSCSI-offload-capable network cards and build a separate SAN network. "We will use it on the production network, no problem" or "1GBASE-T is OK for iSCSI, right?" - this is another trigger point for me. I remember when companies like Dell, IBM, and Fujitsu introduced their iSCSI platforms in the early 2000s. Everyone hyped it and you're right: it never beat the good old FCP. Most high-end storage doesn't even support replication over iSCSI, which makes FCP even stronger on the market. What really surprises me is how strong CI & HCI platforms have become in the past two years. Also, Dell Technologies announced a new storage system last year which is unified, can also run virtual machines, and has a very good data reduction rate (4:1++, except with binary or host-side-encrypted files/media). I feel we are entering a new era in data management.
That's what I was thinking when he said in the video that NAS doesn't have redundancy. We had a NAS with two different power sources, 2 power cables to 2 power supplies; it had 2 stacked switches connected to it, which were connected to the network through 4 other switches, RAID 50, etc. Can't have more redundancy than what we had, unless there were a second NAS - which we actually got later.
One place I worked had a room full of EMC fridge-sized cabinets. When we had to prepare for a storm, we called a number and a dozen fuel trucks showed up and parked outside the building in a designated parking pad reserved for them, and the first truck connected to our generator. In our case we added storage by the room with more EMC fridges, 12 at a time; they came ready for drives, with pallets of them. We had pallets of spare drives on hand. We also had a destruction machine: load the drive in, and chunks of metal and bits and pieces came out after horrendous noises and suffering on the part of the sacrificed drives. Those pieces were securely transported to a blast furnace to be turned into slag by the pallet-sized bin. I have no idea what the data was. I was just an application tester and never got close to anything secure. I even had to have security escort me to the washroom and back.
I keep a SAN at home. XtreemFS, 2 nodes at mom's house and 3 nodes at my own; both sites include the front end. Both can do file-level and block-level access btw.
Love the Videos, you make more sense than my tutor haha.. But seriously, these are helping so much with my understanding. Now to find good references for my assignments :)
NAS devices can be stacked similarly to your SAN example. I would also say both setups are still dependent on your network, as the data still needs a path to local clients to retrieve and send data.
My answer to this video: "Wow, you talk about old NAS server technology. Today you have NAS with dual controllers, with iSCSI at 10, 25, 40 Gb/s; there are NAS with block-level iSCSI and NAS with the FC protocol. This video would be great if we were in the 2000s."
Great video, but at least today, two parts of this video aren't quite correct anymore. A NAS is not just a data store; they are typically extremely multi-purpose, with tons of capabilities out of the box in their included OS, especially from QNAP and Synology. There are enterprise NASes from QNAP and Synology with dual controllers, dual power supplies, with both RAID 6 and RAID 60 support, along with backup chassis failover.
@@Fabio-gc1xf you must be unaware of their enterprise DC models with dual controllers, power supplies, boot ROMs, etc., not to mention dual-chassis failover available on even their consumer-grade products. They make great stuff.
@@Fabio-gc1xf So is SAN. It is how you set things up that makes them fault tolerant, not the type of system. If you only have one drive behind a SAN then it is not fault tolerant. If you only have one disk shelf behind a SAN... I hope you get my drift. You can set up NAS to be fault tolerant too.
Your information is misleading and inaccurate. There are very robust NAS storage systems out there for data centers and enterprises. The main difference between a SAN and NAS is that a SAN is block-level storage and a NAS is file-level storage. And in some instances a SAN will host a NAS "head" unit to share out a disk volume as a network file share. This is a poor and incomplete explanation of the differences between a NAS and SAN.
This explanation is indeed very inaccurate; examples of NAS with great scalability are the NetApp FAS storage platforms. For anything SAN you would need fibre switches, and I just worked on a large-scale Hitachi F1500 build-out with 32Gb FC on Brocade switches. By the way, all our blades connect through FIC into the SAN network; we assign LUNs to the environments and can increase or decrease those on the fly. Anyway, this explanation is way too simplified - storage pros will know ;-)
Al van der Laan, you are wrong, because you can have a SAN without FC. iSCSI can be used as a block-level protocol, and a SAN can be built on Ethernet switches.
Here is an example of a NAS (affiliate) amzn.to/2VgnRgD
Thank you so much for this video, I am about to have an exam within 2 hours and I managed to understand this topic within minutes. Keep it up, stay blessed!
@@hamiltonfungula63 lol, YouTube for a cram session? When all else fails, remember ABACADABA ;) I am enjoying his animated videos though - nice smooth minimalistic animations instead of a constant view of some crazy tweaker's face taking himself way too seriously.
That's very expensive for a 4-bay system.
@@lachiu1 , do you know any in amazon that more cheaper?
@@kanzai12 just diy
I am a manager of a data center and love to ask this question in interviews. Enterprise NAS appliances are designed to be highly available, with redundant controllers, dual power supplies, and dual network links (often arranged in LACP teams). NAS serves file systems (in the form of shared folders) to endpoints (typically end-user devices) on a corporate network (the LAN). Now, if you want to use NAS for a non-traditional workload (such as an NFS export for VMware data stores), you'd typically devote switches to that use case to avoid the contention with typical end-user LAN traffic (when using the same switches as your LAN) described in the video.

The reason SAN is marketed as a faster technology has less to do with the bandwidth of the different networks (there are 25, 40 and 100 Gb/s Ethernet links as well) and more to do with the fundamental differences between NAS and SAN. SANs share raw disk "blocks" (from a storage "target") with endpoints (typically other servers, or storage initiators), commonly in the form of logical disks (LUNs). Block transactions are faster than filesystem transactions because they operate at a lower level and are simpler overall, so block transactions require less overhead. Filesystem transactions occur "on top of" block transactions, meaning a simple file copy must be processed at the filesystem level as well as at the block level.

The filesystem (which is what the NAS serves) in a SAN environment is handled at each independent endpoint, so the SAN target doesn't have to do any of the filesystem overhead processing, such as tracking and updating filesystem metadata. The filesystem processing overhead is thereby distributed across each endpoint/server. But if you have a single storage target, you could be bottlenecked at the block level if you have several busy servers attached.

If you are trying to compare a NAS to a SAN, realize you are comparing apples to oranges. If you want to make apple pie, the apples are probably the better ingredient. But if you want to make orange juice, you'll want to go with oranges. Similarly, NAS is designed for some use cases, and SAN for others. Generally speaking: are you trying to share files with end users (NAS), or are you trying to share disks with servers (SAN)?
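To make the "filesystem transactions sit on top of block transactions" point concrete, here is a minimal Python toy - not a real filesystem; the allocator and metadata layout are invented purely for illustration. A bare block write is one operation, while a file write through a naive file layer also triggers metadata writes:

    # Toy model of file-level vs block-level overhead (illustrative only).
    class BlockTarget:                        # what a SAN exposes: raw blocks
        def __init__(self, nblocks, bs=4096):
            self.blocks = [bytes(bs)] * nblocks
            self.ops = 0                      # count block transactions

        def write_block(self, lba, data):
            self.blocks[lba] = data
            self.ops += 1

    class FileServer:                         # the layer a NAS adds on top
        def __init__(self, target):
            self.target = target
            self.inodes = {}                  # filename -> list of LBAs

        def write_file(self, name, data, bs=4096):
            lbas = []
            for off in range(0, len(data), bs):
                lba = 10 + len(lbas)          # naive allocator
                self.target.write_block(lba, data[off:off + bs])
                lbas.append(lba)
            self.inodes[name] = lbas
            self.target.write_block(0, repr(self.inodes).encode())  # inode table
            self.target.write_block(1, b"allocation bitmap")        # more metadata

    san = BlockTarget(64)
    nas = FileServer(BlockTarget(64))
    san.write_block(10, b"x" * 4096)              # 1 block op
    nas.write_file("report.txt", b"x" * 4096)     # 1 data op + 2 metadata ops
    print(san.ops, nas.target.ops)                # -> 1 3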
Great explanation.
The typical examples of tech use:
NAS - homes and offices, for file sharing.
SAN and other distributed file systems operating at the block level - data centers. For example, virtualization stacks like OpenStack, Hadoop clusters, and Kubernetes use such technology to provision volumes from the available "disk" pool.
There's no way to compare these two technologies, as they were created for different purposes.
This is a long reply, but I read it all and it was helpful so just wanted to say thanks.
Pretty spot on - I'm currently supporting one of those RAID array/SAN/NAS companies. I'd add that a SAN can utilize Fibre Channel as well as 10/25/40/100 GbE to host access to the LUNs. In addition, the LUNs can be provided over multiple channels; with multipathing, a given client/server can have a much more robust connection to the block-level storage than the typical single connection to a NAS over a specific protocol. Of course, if you have FC or iSCSI you really need to think about proper security; there are a large number of shops that don't set up isolated fabrics or two-way CHAP.
Generally, your best client performance is going to be a SAN, but the cost of your workstations and isolated network switches is going to be higher than just plopping in a device that serves up some NAS protocol like SMB/NFS. Unless someone needs that kind of performance, such as for 4K or 8K video editing, a NAS is usually sufficient.
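A rough sketch of what multipathing buys you - a toy round-robin path selector, not dm-multipath or any vendor stack, with made-up path names:

    from itertools import cycle

    class Path:
        def __init__(self, name):
            self.name, self.alive = name, True

        def send(self, op):
            if not self.alive:
                raise IOError(self.name + " is down")
            return op + " via " + self.name

    paths = [Path("hba0->controllerA"), Path("hba1->controllerB")]
    rr = cycle(paths)                       # round-robin over all paths

    def do_io(op):
        for _ in range(len(paths)):         # skip dead paths, retry on survivors
            try:
                return next(rr).send(op)
            except IOError:
                continue
        raise IOError("all paths down")

    print(do_io("READ lba=42"))             # load-balanced across both paths
    paths[1].alive = False                  # a link, switch, or controller fails
    print(do_io("READ lba=43"))             # transparently served by the other path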
Thank you
I love how the simple nature of these videos never fails to attract the "Well, actually" tech-geniuses, that then proceed to spew a couple paragraphs of tech-jargon in order to show how smart they are.
Not their fault that the video is wrong in multiple spots and has several misconceptions; for most people it's hard to step out of their specialization and explain something so a novice would understand. For example, saying that iSCSI is slower than Fibre Channel, when most FC implementations today are between 8 and 16 Gbps with the highest tier available being 32 Gbps, while Ethernet (iSCSI) is baseline 10 or 25 Gbps, with more than a few 40 Gbps and many higher-end implementations of 100 Gbps.
@@Torsin2000 😆🤣😂😭😅
@@janX9 I'm glad someone saw what I did there.
@@Torsin2000 yeah, that was good :)
Dude, this is the BEST video I can find on the Internet that explains NAS and SAN clearly for a beginner. So many vids focus on all those terminologies, which is a nightmare for the audience. Yours is straightforward and comes from the real world.
Exactly! Even Professor Messer, with all due respect to him, I could not understand a single thing he was saying in his video. Now this one with animation is much more clear and understandable!
I hope not because this video is not correct.
I just started working for Pure Storage and I am 100% sure that the trainer used this video to write her training material, literally word by word
Lol😂
dang 😂
I don't know how you manage to create videos about the exact things I need to know about. I am extremely thankful for the time you take to make your videos; the animations are fabulous and I can see a lot of time behind them. Thank you very much, keep being awesome.
Dude I can’t tell you enough how much I love these animated videos. You helped me learn binary math
Currently studying for my Network+ and this video was very helpful in explaining the differences between the two! Thanks for putting this kind of content out!
Have you taken the test?
I'm sorry to say this, but this is an absolute misconception of these two technologies. In enterprise solutions NAS can be, and usually is, implemented in a fault-tolerant manner. Solutions like EMC Isilon or NetApp cDOT clusters with redundant network connections, for instance, are as reliable as SAN devices.
The main difference is the file system and the data transfer protocol. In NAS, the file system is already on the device, optimized for data sharing across a large number of clients/servers, while SAN devices present only raw blocks of storage space to the hosts. Because of that, SAN is very often considered the faster, lower-latency solution. Which finally brings us to the protocols transporting the data: in NAS it's IP, which has a significant amount of metadata overhead compared to the FCP used in SAN (also different acknowledgment algorithms).
Protocols like FCoE, FCIP, or iSCSI are intentionally skipped here, because we are talking about the traditional implementation of SAN, which is FCP.
Well, there are also a lot of other differences, like the main purpose of use, replication methods and distances, backup techniques and granularity of restore, antivirus implementation and so on, but this is not the topic for a YouTube comment ;)
Take care and stay "storage" focused.
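For a feel of the protocol-overhead point, here is a back-of-the-envelope framing calculation. Header sizes are the standard ones (Ethernet preamble+IFG 20 B, header 14 B, FCS 4 B; IP 20 B; TCP 20 B; FC SOF/EOF 8 B, header 24 B, CRC 4 B); iSCSI PDU headers would shave a bit more off the Ethernet figures:

    def iscsi_efficiency(mtu):
        payload = mtu - 20 - 20            # strip IP and TCP headers
        wire = mtu + 14 + 4 + 20           # Ethernet header + FCS + preamble/IFG
        return payload / wire

    def fc_efficiency(payload=2112):       # max FC frame payload
        return payload / (payload + 24 + 4 + 8)

    print(f"iSCSI, 1500 MTU: {iscsi_efficiency(1500):.1%}")   # ~94.9%
    print(f"iSCSI, 9000 MTU: {iscsi_efficiency(9000):.1%}")   # ~99.1%
    print(f"FC full frame:   {fc_efficiency():.1%}")          # ~98.3%

With jumbo frames the raw framing gap nearly disappears; the remaining FCP advantages lie in the acknowledgment and flow-control behaviour mentioned above.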
Out of the plenty of IT training videos I have seen or watched, your training videos, Sir, are right on point, and your teaching style, animations, pictures, and explanations are comprehensive enough for a six-year-old to assimilate and understand! Please do not stop making these videos, keep 'em coming! Thank you Sir! ~Respectfully - Anthony
Much appreciated
You have the greatest videos: simple, yet not primitive, informative, yet not overwhelming.
Your videos are easier to understand than anyone else's on the web. I really wish you would do an A+ or Network+ course.
Many are not very accurate.
For visual learners, this channel is a godsend!
SAN is fun to play with. Bought some old IBM equipment, PCIe SAS controller cards, and cables from eBay. Used two Debian servers with 40 Gbps InfiniBand. 60 hours spent. Worth it :D
I learn much more from videos than from a book. And this channel has some high-quality videos.
And faster.
You are making some incorrect assumptions about these technologies. NAS is a single system dependent upon a traditional layer-3 IP packet-switched network, and it requires a client implementation for presentation. A SAN is a layer-2+ technology that involves a multitude of systems providing connectivity/switching across the network, and presentation appears native to the OS due to encapsulated SCSI payloads. A NAS can be extremely fault tolerant and redundant, and a SAN can be poorly implemented and unreliable. A SAN can be Ethernet or Fibre Channel (note Fibre Channel is NOT fibre optics; it's a protocol for encapsulation of SCSI frames). While this video IS simple, it is also misleading and neglects some finer details that are worth noting to actually clear confusion and get rid of propagating assumptions.
Absolutely the best explanations on IT concepts on TH-cam. Thank you for your dedicated work to provide easy to understand concepts on complex issues.
So many inaccuracies in this video, and there are NAS systems that offer very high levels of redundancy. Remember a NAS can offer either block or file access depending on the applied protocol, such as iSCSI, CIFS, or NFS.
The best differentiator is that a SAN offers access to storage arrays using purpose-specific protocols such as Fibre Channel, while NAS systems utilize TCP/IP networks to either serve files or encapsulate drive commands. The true differentiator is the use of TCP/IP versus purpose-specific protocols.
You can certainly discuss the performance and efficiency aspects of the solutions offered, but this explanation doesn't define them properly to start with.
I'd say that in the end the main difference is where the filesystem is managed.
If in the server, NAS: you access files
If in the client, SAN: you access blocks
Nowadays both SAN and NAS systems (at their core) provide both NAS and SAN capabilities.
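A minimal sketch of that distinction from the client's point of view - the paths are hypothetical, and the raw-device read needs root:

    import os

    # NAS: the server owns the filesystem; the client just names a file.
    with open("/mnt/nas_share/report.txt", "rb") as f:   # e.g. an NFS/CIFS mount
        head = f.read(4096)

    # SAN: the client owns the filesystem; the device is just raw blocks.
    fd = os.open("/dev/sdb", os.O_RDONLY)   # /dev/sdb standing in for a LUN
    os.lseek(fd, 42 * 4096, os.SEEK_SET)    # seek to block 42
    block = os.read(fd, 4096)               # read it directly, no file names involved
    os.close(fd)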
@ bro can you just give me a simple explanation with an example of what san is because I'm doing an assessment on it and I'm confused
@@hassii6803 I'd say that SAN/NAS are no longer differentiated by the way they're attached to the network, nor by their redundancy, fault tolerance, or whatever. There are storage networks with both modes mixed.
Native SAN systems can act, and define themselves, as NAS as well, and vice versa.
What I observe is that, by convention, when a storage system keeps the filesystem and serves files (NFS, CIFS, ...), it's called a NAS.
When it serves blocks, so that the client builds and manages its own filesystem on top, it's called a SAN.
Of course, for serving in SAN mode, fault tolerance and robustness in the network are more critical, but nothing that an enterprise-grade NAS cannot provide.
Absolutely the best IT materials in the whole universe. There is nothing better than you guys!
I have one word for you: Synology. I’ve got three, which do so much more than simple storage. I have two in a HA configuration with bonded network links, and a third which runs DHCP, LACP and Home Assistant on Docker. It’s practically an entire server back end.
Exactly how I wanted to know. Simply great explanation in just 4 minutes !
Please never stop what you're doing. Your videos are so simple to understand that it really helps a lot of people. Keep up the great work!!!
Thanks.
I'm in the middle of studying for my MCSA for Windows 10 and Server 2012 R2, so this really does help!!!
Can't explain simpler than this. Great video for anyone who wants to understand the basics.
I came back to this video after several years. I appreciate how well it condenses so much good and correct information into just 4 minutes.
Thank you, PowerCert Animations 👍
I'd love more on this because, again, YOU are THE FIRST to explain the DIFFERENCE: that SANs are architecturally designed around physical redundancy of each hardware element of which a NAS is otherwise comprised.
This is literally the FIRST EXPLANATION that ever explained the DIFFERENCE. Thank you !!
But, if multiple people use the same "content" ... how doesn't it slow down..?
Obviously they aren't using 1 array per client... as there has to be economy of scale over provisioning local data. So HOW do you keep it from slowing down based on the number of users? And if the users can write data - that must be synced across all the discrete arrays too, which would be doubly taxing: 1. for the writes (while reading) and 2. because there are likely permission issues... that's assuming "multiple arrays that're essentially mirrored".
I get that FC can handle the bandwidth with minimal latency, but the drives..?
And parity is a lot of IO overhead... no..?
Also, are SANs always block-level..?
Or can they be object-level..?
Again, thank you! Now though, as I mentioned, I'm interested in how it manages keeping the data sync'd. :)
My NAS is architecturally designed around physical redundancy of each hardware element. I have dual PS, dual CPU, dual banks of RAM, mirrored (RAID1) OS drives, RAID6 remaining drives, even fans are redundant and hot swappable. That's a POOR description of the difference of NAS vs SAN. The redundancy of a SAN is the connection to the data not the hardware. The storage type and communication channels are the difference.
@@ClimberMel So what will happen if a capacitor on your Motherboard blows and takes the whole MB with it? Not so redundant anymore, is it?
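For what the RAID layouts in that NAS actually buy, the arithmetic is short (drive counts and sizes below are made-up examples):

    def raid1(n_drives, size_tb):          # mirroring: capacity of one drive
        return size_tb, n_drives - 1       # (usable TB, drive failures survived)

    def raid6(n_drives, size_tb):          # double parity: lose two drives' worth
        return (n_drives - 2) * size_tb, 2

    print(raid1(2, 1))    # mirrored OS pair      -> (1, 1)
    print(raid6(8, 10))   # 8 x 10 TB data drives -> (60, 2)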
Thanks for your concise, tidy video, which at the same time has enough information to grasp the essence of the difference between the systems.
Dude. This channel is a very good find for me. It explains many tech-related topics and the animation style is amazing.
If I look at the word NAS in the mirror, it will say SAN. Same goes for the word SAN.
I shall put this on my resume and apply to a big tech firm, with confidence I will be immediately hired as genius-level lead engineer.
Good luck with that.👍
I agree with you guys - not his best video, sadly, but other videos from his YouTube series are just amazing.
The key difference between NAS and SAN is that the first one uses file-level and the second one block-level storage. Both can be redundant, btw.
PS: I know you put a lot of effort into doing these videos, but try to edit this one soon so other people can keep trusting your channel ;)
Cheers!!!
Agreed. That is the main difference. Redundancy, scale etc. can be matched by either NAS or SAN.
I love how direct and to the point this channel's explanations are. Another great video. Thanks!
Explained very well, simple to understand. Thank you.
This video is focused on secondary aspects (and some of them are not even correct). The main difference between the two is that a NAS is efficient for one-to-many topologies (many clients per server, with only the server directly connected to the disks and the clients using slower but cheaper links), while a SAN is much more efficient (and expensive) for cases where each "client" needs full storage bandwidth and/or direct access (for example in the backbone, to share the disk arrays between the servers).
In the second situation a NAS could become a bottleneck with just one or two clients connected to it, but in a SAN no server is in the middle between the "clients" and the disks.
However, they are not competing technologies but complementary ones, as the most common setup is a hybrid of both: NAS servers acting as frontends for networked clients, and the SAN as a backbone to share the storage between multiple servers (or a few privileged frontends in need of maximum bandwidth).
Also, the SAN's "direct access to the disks" from multiple computers is only the ideal world. In reality each disk array has a RAID controller, which is just a specialised Linux server typically embedded in the disk array (except for some SDS solutions), so even in a SAN there is indeed a "server" in the middle. But it is still much more efficient, as it basically acts as a SCSI or SAS switch, and it can easily be made redundant thanks to SAS dual connections.
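The bottleneck argument is easy to put numbers on; the link speeds below are illustrative, not measurements:

    nas_link_gbps = 10            # single front-end link on a NAS head
    clients = 20
    print(nas_link_gbps / clients, "Gb/s per client at full contention")  # 0.5

    fc_per_server_gbps = 16       # each server gets its own fabric path to storage
    print(fc_per_server_gbps, "Gb/s per server, no shared front end")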
Real enterprise NAS has no bottleneck, as they have multiple 10/40/100 Gbps network adapters and sit behind very capable distribution switches that can handle any load thrown at them. SAN is more a thing of the past now, kept alive because many bought them and are now stuck maintaining them - just like IBM mainframes were.
Some parts are true, but some parts are not. If we are talking about entry-level devices, most of the comments are true. But in enterprise business, with mid-range or high-end devices, it can all get mixed up. Enterprise scale-out NAS devices have a similar redundancy level to SAN devices, can be expanded without interruption, and can reach hundreds of PB of capacity. Of course the target learners are important, so as a basic explanation the content is good. : )
This FREE Video: Gives crystal clear explanation of what they are.
My Cyber Security class: SANS are multiple file servers that are more secure than NAS.
Very nice, clear explanation. Good pace and clear voice. Nicely done and much appreciated!
Bull's eye! Right on the money 💰. Very crisp and clear in explanation.
Very nice explanation with animation
Finally I understand the difference between NAS and SAN. Thanks a lot.
I love the attention to detail when the NAS blows up and shoots a drive on to the pc while leaving a mark. Amazing videos!
My teacher left us to study on our own, and I somehow managed to learn this topic by myself.
You should be my teacher then.
Thank you
PowerCert Animated Videos your explanation is always best. TBH better than my teachers too
A NAS can be seen by the OS as a local drive with iSCSI.
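A hedged sketch of how that looks in practice with open-iscsi on Linux - the portal address and IQN are placeholders, the device may not enumerate as /dev/sdb, and everything needs root:

    import subprocess

    portal = "192.168.1.50"                               # hypothetical target IP
    target = "iqn.2000-01.com.example:storage.lun0"       # hypothetical IQN

    # Discover targets on the portal, then log in to attach the LUN.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", portal], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "-T", target,
                    "-p", portal, "--login"], check=True)

    # A new block device appears; since it is raw blocks, the client
    # formats and mounts it like any local disk.
    subprocess.run(["mkfs.ext4", "/dev/sdb"], check=True)
    subprocess.run(["mount", "/dev/sdb", "/mnt/lun0"], check=True)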
Your video saved me tons of time. Thank you so much.
I used to stand up server systems that contained a large storage device. The weird thing was that it was the form factor of a NAS, but we connected it with fiber channel and it served as a local hard drive just like a SAN.
What form factor are you talking about? From the outside, SAN and NAS devices will most likely look exactly the same: a rack-mounted appliance with a bunch of drives or connected drive boxes plus optical cables on the back. Heck, you won't even be able to tell a rack-mounted server from a NAS if you don't know exactly which of them does what.
While not as robust as a SAN, the redundancy on most NAS systems can equal the redundancy of SANs. In the example given, if the power supply goes out the NAS is (presumably) gone and not available - but the same is true of a SAN if there is only one power supply... the point being that both can have two power supplies. Both can use RAID for data resiliency. The SAN can have two Fibre Channel connections to remove a single point of failure. This is where the NAS is somewhat exposed, as it's a little tougher to provide a redundant connection - but not impossible.
While I understand that this is just a brief snapshot of the technologies, I believe the NAS wasn't fairly represented in this regard.
You however still do have a single point of failure in the form of the NAS's operating system.
Multiple PSUs, redundant network links, or expanded JBODs don't change that fundamental SPOF risk that is OS corruption, or a single motherboard + memory.
It's poorly phrased, but in truth the best way of describing the difference is that a NAS is a standalone device, whereas a SAN is fundamentally a data storage server cluster.
@@bengrogan9710 even SANs have an OS... And you can cluster NAS as well. You can be just as redundant with both technologies.
@@rockymarquiss8327 You miss the point - SANs don't have a singular host OS in the same way as a NAS, even when clustered.
You cannot cluster NAS in the same way that you do in a SAN.
In a clustered NAS you have 2 standalone servers that are replicating data on a schedule in an Active/Passive relationship: what is normally known as a "hot standby".
Should the active NAS fail, the secondary will take over, but there will be a break in service - the SPOF mentioned prior - while the passive partner recognises the failure and the data disparity since the last replication cycle.
This is fine for most intermittent-access uses such as file servers.
In a SAN the controllers are Active/Active using the same data source. Should 1 controller fail, the others are already serving, and only the direct connections to the failed controller are interrupted, resulting in a retry which immediately hits the other controllers. For certain business uses - such as those requiring guaranteed data integrity, like live stock databases, or those which would be heavily affected by the delay in the failover of the NAS cluster - this is key.
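A toy timeline contrasting the two failover models described above; the delays are illustrative, not vendor figures:

    def downtime(model, fail_at=10, promote_delay=30, n_requests=60):
        lost = 0
        for t in range(n_requests):                  # one request per second
            if t < fail_at:
                continue                             # controller serving normally
            if model == "active/passive" and t < fail_at + promote_delay:
                lost += 1                            # standby still promoting itself
            # active/active: the retry lands on the surviving controller at once
        return lost

    for m in ("active/passive", "active/active"):
        print(m, "->", downtime(m), "seconds of failed requests")
    # active/passive -> 30, active/active -> 0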
@@bengrogan9710 NAS and SAN serve different purposes and have different functionality. However, the redundancy exists. NAS isn't HA to the same degree as SAN as you point out... But they still have redundancy.
@@rockymarquiss8327 That last comment goes against your original claim that NAS can equal the redundancy of SANs
Funny that these videos taught me more than my systems analysis degree, where the professor, a frustrated nerd, took a year over a simple explanation that he complicated as much as possible, and spent the rest of the time talking about his personal life. Congratulations!
Good video. :) HBA cards with Fibre Channel cables are used in a SAN. Tape library devices are designed to use a SAN, but they are really expensive.
A SAN can be a network solution for backup purposes or for applications storing data (Exchange or SQL databases).
High-end NAS has the same features you mentioned for SAN.
Clear explanations. Nice video on SAN & NAS, thanks for sharing.
Fast, specific, organized. Astonishing job!
Perfectly explained. Exactly how I wanted to learn about SAN vs NAS. Can you do a video on VSAN?
Fibre Channel is NOT Fiber Optics! This is how misconceptions start that some industry professionals carry with them for years or even DECADES!
Fibre channel is a PROTOCOL and while it does use fibre optics in MANY implementations, it does also have COPPER implementations!
Someone tells these beginning techs that fibre channel uses fibre optics and they latch on to it because of some stupid inference like, "Hey! It's in the name! They wouldn't name it fibre channel if it used copper!" Except they did!
There are many things wrong in this video, but the Fibre Channel one is the type of misconception that propagates and infects for years, because beginning techs think they have the right understanding; it sounds right, but they never look any deeper into what it is or does. SAN does NOT equal fibre optics. Many implementations use fibre optics, but there are a large number that don't!
Also, the inference that Fibre Channel is the only SAN implementation or, at the very least, the industry default is again misleading. It's as if iSCSI doesn't even exist! iSCSI is NOT necessarily slower; Ethernet switches are capable of far greater speeds than Fibre Channel, and performance will vary based on implementation.
Thanks for explaining that fibre channel isn't necessarily fiber optics. I was not aware of that.
"[...] ethernet switches are capable of far greater speeds than fibre channel"
What is the max speed of Ethernet? Isn't it 10 Gb/s? What is the max speed of FC? Isn't it 128 Gb/s?
@@whitenite007 Current Ethernet switches top out around 400 Gb/s and are easy to find (www.cisco.com/c/en/us/solutions/data-center/high-capacity-400g-data-center-networking/index.html#~products). The 800 Gb/s standard is already in the works. 10 Gb/s is simply what average consumers are exposed to, not what is available in the commercial comms world.
As a very simple explanation that will be true enough for anyone who needs one, you can think of a SAN as a networked hard-disk controller, whereas a NAS is more like a Google Drive. You send and receive files to/from your NAS, but when you add a "disk" to your SAN, it appears on your PC as if you had connected it through USB or something.
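To make that file-versus-block distinction concrete, here is a minimal sketch, assuming a Linux client where /mnt/nas is a hypothetical SMB/NFS mount and /dev/sdb is a hypothetical SAN-presented LUN (both names are made up for illustration):

```python
# File-level (NAS): the client talks to a filesystem the NAS owns.
# The NAS handles all filesystem metadata for this write.
with open("/mnt/nas/report.txt", "w") as f:
    f.write("the NAS does the filesystem work for us\n")

# Block-level (SAN): the client sees a raw disk and owns the filesystem.
# Reading raw blocks requires root and bypasses any filesystem entirely.
with open("/dev/sdb", "rb") as disk:
    first_sector = disk.read(512)  # read the first 512-byte block
print(first_sector[:16].hex())
```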
"Videos for school children and housewives" is a little harsh, but yes the video is skipping over a fair bit of detail.
A NAS is nothing more than a file server (SMB/CIFS/NFS etc.) running on dedicated hardware. So they are useful for those situations where a file server would be appropriate (serving files to SMB/CIFS/NFS clients - typically workstations, not servers).
The discussion around the "Network" in NAS is not well differentiated from the "Network" in SAN. Both use this term. The key is that a NAS uses an ethernet - TCP/IP network, whereas a SAN uses a dedicated (typically Fibre Channel or FCAL) network. The two types of network are incompatible and cannot be interconnected. The exception is iSCSI, where the lines get blurred. iSCSI is a SAN running on a TCP/IP network and may even share that network with general TCP/IP (LAN) traffic. iSCSI gets you some of the SAN benefits at a fraction of the cost of an FCAL SAN.
Note that an FCAL or iSCSI SAN presents itself to the server as one or more SCSI devices and all I/O operations are block level SCSI commands. And yes, a switched FCAL SAN is a very expensive solution, unless you are large enough to justify it and benefit from its advantages. These advantages tend to come in the form of reduced total cost of ownership, due to shared infrastructure, where you have a large, diverse and dynamic server farm (think eBay/Amazon/Google etc.). You get the ability to add, remove and re-use storage quickly and easily (even automatically), reducing total storage needs.
The key differences between NAS and SAN are really only important to those that can afford a SAN (typically large organisations). As a NAS shares files, it is ideal where that is the requirement. It is not so good where you really want block-level access, as is the case for relational database systems (RDBMS). An RDBMS really should be on block-level storage, as you really don't want the overhead, latency or bottlenecks of a NAS system between your RDBMS and your storage. Also boot devices, swap devices, and other applications sensitive to latency and/or bandwidth are likely to be on SCSI devices, not NAS. In these cases, if you can't afford a SAN, you might need to consider direct attached storage (DAS), rather than a NAS or iSCSI option.
For home use however, I believe only those who need to study SAN technology (probably ICT professionals) would be likely to go anywhere near a SAN and only with really old (i.e. cheap) gear. Most of us would be far more likely to stick with NAS/iSCSI/DAS at home.
And no, this won't become obsolete because of the cloud. The cloud inherently uses NAS, SAN and iSCSI under the covers in order to provide its services. The "Cloud" is nothing more than a shared infrastructure technology for the masses (using modern interfaces). Bear in mind that we were doing shared infrastructure 30 - 50 years ago, just for banks, governments and others with deep pockets. And we didn't call it the "Cloud".
Very useful, I'll research more to understand your explanation, thanks!
Looks like Shakespeare is in love.
This video presents an oversimplified version of a NAS. High-end NASes have fault tolerance, so that when one NAS machine goes offline, the other can take over automatically; Synology calls this High Availability. Synology NAS also offers iSCSI and other connectivity technologies that speed up data transfer. Basically, if you want certain SAN features, you don't have to shell out the big bucks for a SAN solution; a high-end NAS can satisfy that just as well at a fraction of the cost.
No, when a NAS goes down, it does not explode with shrapnel flying all over.
This video was pretty good at explaining the traditional differences between SANs and NASs. However, there are many modern devices that blur the lines between a SAN and a NAS. This blurring of lines confuses a lot of people and makes it necessary to have videos like this.
However, as soon as the solution scales up and more performance is required, dedicated switches and network adapters for storage traffic become a necessity anyway. That's where Fibre Channel comes into play and guess what people are choosing? I still hope to see a working iSCSI SAN one day, but I haven't seen any just yet. The need for a dedicated NAS is probably disputable - a file server on a VM does a pretty good job, no additional hardware needed. Set up a cluster for fault tolerance and you're good to go.
@@alexrogov7186 Just make one.. icicimov.github.io/blog/high-availability/Highly-Available-iSCSI-Storage-with-SCST-Pacemaker-DRBD-and-OCFS2/
Thank you so much for your videos. They’ve helped me pass the A+ 901 Test. Keep up the good work.
Simple, great explanation of this difference. Thanks! 👍
I saved this video mostly because the COMMENTS are correct... the video is SEVERELY lacking and should take notes from the comments. Further, the way the connections are illustrated is incomplete and gives a false idea.
I run 2 SANs in my basement where my vSphere HA cluster lives :) - 2 or 3 HPE DL380p G8 with SFP+ FC and a bare-metal TrueNAS install isn't that expensive anymore these days. For sure, I connect my LUNs through iSCSI with Multipath IO, and the SAN network is my only network with dual 10 GbE, connected to 2 MikroTik CRS309-1G-8S+IN for failover. In production I do round-robin balancing over both 10 GbE connections - runs like a charm! (Pay special attention to your HBA controller; the built-in P420i from HP is not recommended for ZFS.)
I don't understand anything you've written 😂
@@vdc-1375 - no matter, it's just a lot of standard datacenter terminology ;)
Excellent. Very well illustrated with clear, simple explanations. Thank you.
Okay, I have just 1 question to ask: why can't other YouTubers make and explain videos like yours? Your explanation is just so easy to follow. You don't seem to be going on and on. Believe me, I value your videos. Thank you ever so kindly. 🙏🏻
Unfortunately, this video is not good, and in fact it's very misleading. Please refer to my other comments under this video.
@@danielwoosewicz6556 Hello Sir. I have been looking for your other comments for a while but I can't find them. I even Ctrl+F'd the webpage for your name. I think there are too many comments to load them all. Could you please kindly tag me in the comment where you explain the misinformation? Thanks 🙏🙏
Very well illustrated with clear, simple explanations.
Thank u
To be fair, if it's QSFP+ connectors for the fibre and you put everything close enough together, you won't need to go fibre; just buy 1 m direct-attach QSFP+ cables.
Good animation & information
Very good nice and clear explanation, love the animation as well :)
Thanks mate, your animation for this topic is easy to understand. It's a very basic explanation of NAS vs SAN, and all kinds of IT techs erupted in war against you. It's not a PhD thesis, but a simple, easy-to-understand video for those who have no clue at all. It's not a lecture for data storage experts deploying assets for banks or stock exchanges, etc. Get a life.
Before watching this animation, I had no idea about SAN. Now I do have SOME info. Many, many thanks for taking only a very small amount of my time and giving me what I wanted.
Thank you, helped a lot. Diagrams make it really easy to understand.
Very good accent. I'm French and I understood everything, thanks a lot!!!! I do understand very well now, and these explanations are very nice!
We're in this together.
You guys really are the best of all the cert training videos. I wish you did more! Always clear and simple explanations, well done.
Thank you.
A NAS can have redundant power supplies and other redundant components, and can run RAID arrays.
Of course you can have redundant components in a NAS, as you can in any other type of computer. But that doesn't make the NAS redundant. If you have a kernel panic, for instance, then it doesn't matter how many power supplies you have; your clients won't be receiving any data.
Not a bad video, but important to note that things like redundant power supplies and online capacity upgrades are standard features of large SMB-grade NAS devices, and don’t necessarily require a costly SAN.
Enjoyed this video, simple and easily understood. Could you do one on Hyperconvergence? :)
Hey man, you are the best, congratulations. I would enjoy it so much if you made more videos. Thanks.
There are lots of NAS products out there, for example Isilon (an EMC product), that support redundancy: even if a node goes down there won't be any effect on the NAS, and even if the whole cluster goes down you can use Superna Eyeglass for disaster recovery to replicate/mirror the data.
Really love your teaching and animation, man...
Please make a video on VSAN as well.
There is one SAN vendor, Coraid, that repurposes Ethernet to serve as the substrate for its network in order to reduce cost.
It's a good idea. I once built a cluster that used eight gigabit ports per node in a link aggregation scheme. Was blazingly fast for the money... not flat-shared-everything fast, of course, but still... it flew on its reclaimed ProCurve 48G switch fabric ; )
A "Beowulf cluster" is the answer to the question: WTF do we do with all this pulled ex-enterprise gear?
Man, you're absolutely wrong. A NAS sure can have more than one power supply. And there can be more NASs in the network, and there sure can be more than one switch in an Ethernet network. And a SAN can also be non-redundant.
Also, Fibre Channel is not fiber optics; you can run the Fibre Channel protocol over copper.
this is actually not true
NAS means network-attached storage. Redundancy depends on the actual system. Speed also depends on the actual system. The main point here is that a NAS uses the CIFS/NFS protocols; it's file-based storage. It has its own file system and can be read from and written to by different operating systems, which is why people call NAS "unified storage".
A SAN uses the FC or iSCSI protocol, which are block-type protocols. After you attach an iSCSI or FC volume, you need to format it with a filesystem yourself before you can mount it (see the sketch below). Performance and redundancy also depend on the actual system.
Depending on the system, both can do more than "just storing data". A NAS can provide unified services, deduplication, data reduction, and encryption.
Also, there are NAS-capable SAN storage systems on the market, like Dell EMC Unity, which can provide NAS services on a selected volume.
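As a rough sketch of that format-it-yourself step (all names hypothetical: it assumes a Linux host where the SAN LUN has shown up as /dev/sdb), the host, not the array, lays down the filesystem:

```python
import os
import subprocess

DEVICE = "/dev/sdb"         # hypothetical LUN presented by the SAN
MOUNTPOINT = "/mnt/sanvol"  # hypothetical mount point

# The LUN arrives as a raw block device: the host itself creates the
# filesystem, which is exactly the work a NAS does on behalf of its clients.
subprocess.run(["mkfs.ext4", DEVICE], check=True)
os.makedirs(MOUNTPOINT, exist_ok=True)
subprocess.run(["mount", DEVICE, MOUNTPOINT], check=True)
```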
>> that's why people call NAS as Unified Storage
I think it's a misconception. "Unified" historically has been a term for SAN storage which incorporated file-based access at some point in time. See, for instance, IBM Storwize V7000 Unified. EMC VNX became "Unified" as soon as it incorporated Celerra NAS. I've never heard this term being applied to NAS systems. NetApp never calls its FAS storage "Unified". Neither does Dell EMC with its Isilon. In this sense, iSCSI support doesn't make a NAS "unified storage".
iSCSI does have its place in small and even some medium environments. However, as environments grow, a separate storage network becomes a must anyway so why not use a protocol which was designed for this purpose in the first place? Even if you stick to iSCSI, you still have to buy additional switches and expansion cards. Even though iSCSI NAS is a "unified storage" from a purely technical standpoint, corporate customers (80% of the SAN market) will not take such claims seriously. iSCSI was a pretty big thing in early 2000s. In 2007, Gartner even predicted that iSCSI will become a dominating protocol for SANs, but that never happened.
Your other points are pretty solid still.
@@alexrogov7186 You are absolutely right, file & block capability makes a system unified. This is why I wrote "people call NAS storages unified". I've heard it soooo many times, and you can imagine, as a storage specialist, how triggering it is when you hear it from the people around you :)
Regarding iSCSI: a lot of people don't know that the best practice is to buy iSCSI-offload-capable network cards and build a separate SAN network. "We will use it on the production network, no problem" or "1GBASE-T is OK for iSCSI, right?" - another trigger point for me. I remember when companies like Dell, IBM, and Fujitsu introduced their iSCSI platforms in the early 2000s. Everyone hyped it and, as you say, it never beat good old FCP. Most high-end storage doesn't even support replication over iSCSI, which makes FCP even stronger on the market.
What really surprises me is how strong CI & HCI platforms have become in the past two years. Also, Dell Technologies announced a new storage system last year which is unified, can also run virtual machines, and has a very good data reduction rate (4:1++, except with binary files, host-side-encrypted data, or media). I feel we are entering a new era in data management.
That's what I was thinking when he said in the video that NAS doesn't have redundancy.
We had a NAS with two different power sources, 2 power cables to 2 power supplies; it had 2 stacked switches connected to it, which were connected to the network through 4 other switches, RAID 50, etc.
Can't have more redundancy than what we had, unless there was a second NAS, which we actually got later.
Extremely well explained! Thank you!
One place I worked had a room full of EMC fridge-sized cabinets. When we had to prepare for a storm, we called a number, and a dozen fuel trucks showed up and parked outside the building on a designated pad reserved for them, and the first truck connected to our generator. In our case we added storage by the room, with more EMC fridges, 12 at a time; they came ready for drives, delivered by the pallet. We kept pallets of spare drives on hand. We also had a destruction machine: load a drive in, and chunks of metal and bits and pieces came out after horrendous noises and suffering on the part of the sacrificed drive. Those pieces were securely transported, by the pallet-sized bin, to a blast furnace to be turned into slag. I have no idea what the data was. I was just an application tester and never got close to anything secure. I even had to have security escort me to the washroom and back.
thanks for the video, you did a great job explaining it in a very simple way :-)
What a wonderful layman's explanation; you really do get the concept of a SAN from this video 👍👍
All your videos are awesome.
Thank you so much for uploading such great content on YouTube.
God bless you
Simple and clear; thank you so much for sharing the video.
You guys are fast becoming my favourite channel - thanks a million!
I keep a SAN at home: XtreemFS, 2 nodes at my mom's house and 3 nodes at my own; both sites include the front end.
Both can do file-level and block-level access, btw.
Love the Videos, you make more sense than my tutor haha.. But seriously, these are helping so much with my understanding. Now to find good references for my assignments :)
That's exactly what I need to watch. Thanks for sharing!
NAS devices can be stacked similarly to your SAN example. I would also say both setups still depend on your network, as the data still needs a path to and from local clients.
My answer to this video: "Wow, you're talking about old NAS server technology. Today you have NAS with dual controllers and 10/25/40 Gb/s iSCSI; there are NAS devices with block-level iSCSI and NAS with the FC protocol. This video would be great if we were still in the 2000s."
To the point explanation. Thank you very much.
I was just wondering about this. You're the bomb, man.
Good explanation! Thank you, guys!
Welcome.
Great video, but at least today, two parts of it aren't quite correct anymore. A NAS is not just a data store; they are typically extremely multi-purpose, with tons of capabilities out of the box from their included OS, especially from QNAP and Synology. There are enterprise NASes from QNAP and Synology with dual controllers, dual power supplies, both RAID 6 and RAID 60 support, and backup chassis failover.
But still a single box and a single point of failure even with the redundant parts.
@@Fabio-gc1xf you must be unaware of their enterprise DC models with dual controllers, power supplies, boot ROMs, etc., not to mention the dual-chassis failover available on even their consumer-grade products. They make great stuff.
@@stevey500 you said there were two parts of the video that were incorrect. What's the other?
@@Fabio-gc1xf So is a SAN. It is how you set things up that makes them fault-tolerant, not the type of system. If you only have one drive behind a SAN then it is not fault-tolerant. If you only have one disk shelf behind a SAN... I hope you get my drift. You can set a NAS up to be fault-tolerant too.
@@RealityCheck6T9 Most of it is actually not correct.
I expected you to talk about the differences between file-level and block-level storage.
Modern NAS can do a lot more than just store data on the network!! It can host websites, serve content, and run Docker containers.
Wrong. NAS is always just storage. Synology and other vendors call their home servers "NAS" which is misleading.
@@hansi4308 Lol okay. 😂
Here's the simplest way to think about NAS vs. SAN. A NAS serves files, and a SAN serves blocks.
I think of SAN as "sand". There is such a thing as a "sand block" but no such thing as a "NAS block". From there, I remember that NAS is only for files.
Exactly! SAN is a block-level system and can show up as separate drives, but NAS is a file-level system (shared folders).
I knew the answer even before you spoke.
NAS is the reverse of SAN
Mr Toxic 🤣🤣
1:24 why did it explode? 😭😭
Power Surges
Your information is misleading and inaccurate. There are very robust NAS storage systems out there for data centers and enterprises. The main difference between a SAN and a NAS is that a SAN is block-level storage and a NAS is file-level storage. And in some instances a SAN will host a NAS "head" unit to share out a disk volume as a network file share. This is a poor and incomplete explanation of the differences between a NAS and a SAN.
This explanation is indeed very inaccurate; examples of NAS with great scalability are the NetApp FAS storage platforms. For anything SAN you would need fibre switches, and I just worked on a large-scale Hitachi F1500 build-out with 32Gb FC on Brocade switches. By the way, all our blades connect through FIC into the SAN network; we assign LUNs to the environments and can increase or decrease them on the fly. Anyway, this explanation is way too simplified; storage pros will know ;-)
Al van der Laan, you are wrong, because you can have a SAN without FC. iSCSI can be used as the block-level protocol and a SAN can be built on Ethernet switches.
Haha, what a chain reaction.
NASes can serve block-level storage via the iSCSI and FC protocols. They are not exclusively file-level. This is even more true when using ZFS.
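For the ZFS case, a minimal sketch (assuming a host with ZFS installed; the pool name "tank" and volume name "blockvol" are hypothetical): a zvol is a block device carved out of the pool, which an iSCSI or FC target can then export as a LUN:

```python
import subprocess

# Create a 10 GiB ZFS volume (zvol); -V makes it a block device
# rather than a filesystem dataset. "tank" is a hypothetical pool.
subprocess.run(["zfs", "create", "-V", "10G", "tank/blockvol"], check=True)

# The zvol now appears as /dev/zvol/tank/blockvol and can be handed
# to an iSCSI target daemon as backing storage for a LUN.
```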
Great job, as always, on these videos. Keep up the good work!
I was planning on building a SAN for my home network storage, but the price doesn't make SANs.