This thing's awesome! Just very unfortunate that they don't support BGP EVPN 😭. But shouldn't RoCEv2 just work? I thought only v1 had those requirements.
afaik RoCE uses UDP and relies on the switches knowing this is RoCE, so that packets are not reordered and congestion control is done. v2 does not change this. If you want something that "just works" and does not require special support in the switches, you have to use iWARP, which uses TCP instead and has no other special requirements.
@@marcogenovesi8570 More recent ConnectX cards (I think CX-5 and newer, but don't quote me on that) have built-in congestion control with RoCEv2, but having ECN support in your network is recommended. Some people prefer to still have PFC, but PFC is not without its problems (flow control can be very annoying in bigger networks, even if you limit it to just RDMA traffic using PFC). Also, I don't know how cross-vendor this built-in congestion control would be.
I have a quick question. My new wired router has 3 firewall settings, and I need to know if I should enable them for home use: FTP ALG, PPTP ALG, SIP ALG. Thanks in advance.
It's that I already invested in a QNAP M2108-2C back when they used to be a lot more affordable (they cost €180 more now), and that it is usually not practical to lay fiber in houses over here (stone walls, mostly), but otherwise this would be great...
Just a comment about laying fibre in homes: I found a really interesting product recently on Alibaba. It is a transparent-jacket single mode fibre cable. You put it up using little transparent stickers, and it is almost invisible along walls.
Some n00b questions. What kind of uplink router is it reasonable to attach this to? 2.5/10/25/100G? How would that square with the fact that 2G is the best internet uplink I can get in my location? Is it reasonable to split the rest into 12x 25G? Can one go even lower than that? 25G is over the speed limit of Gen3 SSDs, isn't it? How would one saturate such a link without breaking the bank?
Typically you would use this in an environment where you have storage or other servers on the local network. For example, you could have a NAS with SSDs and then use fiber to connect that to your workstation and use the other ports with breakout cables to connect to servers or other nodes so they have access, albeit slower, to the storage as well.
Do you know whether RDMA support is a hardware or software thing? If the switch ASIC itself can support PFC and other RDMA features, then we are just some RouterOS update away, and we can definitely expect it to come out in the future. In the other case, we just reach a dead end though.
Why wouldn't RDMA / RoCE work? From the switch's view, it's just another Layer 2 / Layer 3 packet (depending on implementation) like any other packet... The switch is "transparent" to RDMA traffic.
I got a 100GbE network between 2 workstations and a NAS, only to find out I can't get 100GbE in Windows 11 Pro for Workstations, as QNAP does not support SMB Direct or RDMA. Fastest speeds I'm seeing are 55 Gb/s write and 12-14 Gb/s read -- a tenth of the potential speed! I've spent a month on this problem now and all of QNAP's tech support and sales and other IT pros cannot help. So unless I'm missing something, it seems the hardware is about a decade ahead of the software implementation? I don't get it.
I like that the stuff is coming down (you barely even get 10 gigabit on a router sold with pfSense), but what NICs would you pair with it? For basic homelab usage, say between my desktop and TrueNAS server (and maybe a Proxmox server in the future)?
Buying the product is easy; finding the cable + ends to actually USE the 100Gbps is impossible... I'd like to see a switch-to-system cable + ends, because I'd like to use that in my house. I move a lot of files and it would really help to have 100Gbps vs 1.
@@ServeTheHomeVideo You can keep the energy and heat down a bunch by sticking with copper twinax when possible. It is also quite a bit cheaper. This applies server-side too.
CRAP, that's a good price. I just bought the dual 40G / quad 10G / 48x 1G for about the same. I wish the one I just bought had 2x 100G so I could get this 4x 100G.
It's all very cool and it's definitely incredible value - but at what point are you going to saturate a 10Gb link at home, let alone a 100Gb link? Unless you are striping (RAID 0) your data across multiple NVMe Gen4 drives - which you don't want to do unless you are prepared to replace them fairly regularly, or you are running enterprise NVMe drives.
I wonder how much performance you can actually get on a single socket connection. I found our Cascade Lake and Ice Lake Xeon CPUs really struggle to push enough data to saturate 100GbE... at most we get about 54Gbps.
Very common for edge network gear. If you have the power and ports on the same side you do not need to access the rear of a rack for service. You can see it on some of the O-RAN focused servers we review as well.
1) I would LOVE to try and deploy RDMA on something like this, JUST to see if it will work. (I already have NFSoRDMA deployed over my Mellanox 36-port 100 Gbps Infiniband switch, so for me, to be able to test it, all I would need to do is just change the port type from IB to ETH, and then see if it will even run.) 2) I think that an 8-port version of this that's relatively low cost, low power, and low noise would be the PERFECT switch in a homelab setting. 4x 100 GbE ports is probably the bare minimum, but it is nice to know that if I want to connect a few systems up together, I WOULDN'T need to fire up my said Mellanox switch to be able to do something like that. Cost-wise though, because this is a new switch, on a $/Gbps switching capacity basis, or $/port, this MikroTik switch will cost more than my Mellanox switch that I was able to buy off eBay, but again, it's also a LOT lower noise (the Mellanox switch is a LOT louder and also consumes more power (~200 W range)).
$700 is pretty good for 4x100G. I paid more than 1/3 of that for a used Brocade that I had to, ahem, "lightly convince" to turn its four stacking ports into 2x40G ports and 2x4x10G ports, and its management UX is truly horrible compared to anything modern.
Apparently the base SoC supports PFC, but MikroTik still doesn't support it. (dig through the forums, there's a response from support back in 2019) That said, it'll probably work about as well as iSCSI on a switch with tiny buffers... well enough until you push real traffic across it.
I wasn't even considering 25gb LAN, much less 100gb, and here I am now considering a mix of both SMH. @CraftComputing teased an upcoming video on 100gb LAN with a switch that looks like this one so I want to see that video before I start throwing money at eBay.
I bought this switch and one used Mellanox CX455A and one CX555A NIC. Can't get them to work with my Windows 11 machines. The NICs get seen by Windows and have drivers that are "working". But they will not show up on the Ethernet network. Can anyone help?
I have that 4 port 10Gb MikroTik switch you showed and it rocks! After setting it up, I didn't have to worry about it ever again. I have owned mine for about 3 years now and I have only had to restart the switch 3 times, just because I upgraded my network and needed the switch to recognize all the new devices again. It does run a bit hot to the touch, but it has never given me any problems as far as the reliability of the connection goes.
My only gripe is that the MikroTik interface is not very intuitive and can be a bit confusing. The web interface can be hard to access if you switch the default access IP to something else. Other than that... it's a very solid 10Gb switch!
Yeah, I have the 8 port version and it's a rock solid value. I'd definitely buy another if or when needed. Ubiquiti has an 8-port SFP+ 10GbE switch (USW-AGGREGATION) for about the same price (possibly a little less) as the MikroTik, and I would still choose the MikroTik CRS309-1G-8S+IN.
definitely add some cooling
Out of my price range, but for what it offers that is some INSANE value!
I think sub-$699 and 45W makes it practical for many folks though. In Europe, some folks are spending $7/W per year. So saving 100W over a 32-port second-hand switch can, for them, pay for itself in a year or less.
@@ServeTheHomeVideo Yeah, I am in the UK so I can really feel the price savings from saving so much power.
If I actually had a use and need for 10Gb or faster network speeds, then this would be the switch that I would be looking to buy: far superior speeds for not much more money than 10Gb switches.
I'm heavily budget limited at the moment; I look at £40 Raspberry Pi SBCs and dream of a day when I can spend some money on one instead of paying out on yet more bills... Such is life, I guess.
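To put rough numbers on the payback point above, here is a quick sketch; the $7/W-year figure and the 100W saving are the assumptions from the comments here (not measurements), and the price is the approximate street price from the video:

```python
# Rough payback sketch for the numbers quoted above.
# Assumptions (from the comments, not measured): peak European energy
# prices work out to roughly $7 per watt of continuous draw per year.
COST_PER_WATT_YEAR = 7.0   # $ per watt of 24/7 draw, per year
SWITCH_PRICE = 699.0       # approximate street price of the CRS504
WATTS_SAVED = 100.0        # vs. a second-hand 32-port switch

yearly_savings = WATTS_SAVED * COST_PER_WATT_YEAR  # $700/year
payback_years = SWITCH_PRICE / yearly_savings

print(f"Yearly savings: ${yearly_savings:.0f}")
print(f"Payback period: {payback_years:.2f} years")  # ~1 year
```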
it's at the same bonkers value as the other 4x 10Gb switch was at release
@@marcogenovesi8570 I don't remember off the top of my head how much that was at release.
What I can say with confidence is that for how low power this switch is, and for a piece of hardware with a single core at 650MHz, it's actually damn impressive that it is able to handle 4x 100GbE connections simultaneously while keeping total power at around 60W (assuming each 100GbE port uses 10W while in use, plus 20W for the rest of the system).
It is also a very simple to use and clean little package, and it has loads of options to choose from for powering it, which is awesome to have alongside redundant PSUs!
@@marcogenovesi8570 Hardware accelerated routing with a few simple ACLs across VLANs is pretty much a must for me at 100GbE.
Not having that is quite limiting if this is to be the core switch in a SOHO or home lab environment. I am not giving up VLAN separation, and I am certainly not going to buy a separate 100GbE router that I attach via cable.
MikroTik is really changing the game; if they could get RouterOS to really work, they could be leaders.
It works well and is stable... but the GUI and configuration methods are weak, and the concept of a "bridge" for VLANs is hard to grasp for most people who have a networking background. MikroTik treats VLANs as "tags" and bridges as the interfaces between the tags, where the likes of Juniper and Cisco just treat a VLAN as a network on the device.
It's love and hate... we have a TON of them in the field and they're great hardware: reliable and stable.
I think they need to work with Marvell and focus on offloading as much to a consistent switch interface as possible. MikroTik's challenge with switches is that they have RouterOS built for low-cost routers, but then have to scale across architectures on different products. That is one of the reasons so many whitebox switch companies take Broadcom's base software package, add a few tweaks and use that, or just pledge support for SONiC these days.
Untangle?
@@ServeTheHomeVideo This sounds like an interesting topic. I have formed many of my RouterOS opinions off their lower end switches, which is probably not a fair assessment of the system's capabilities.
For the money their routers are fairly stable, and you can set 'em up in HA for a fraction of the price of Cisco 'edge'/service routers and _without_ the recurring license fees.
I think the intended implementation is as a core switch with downlinks to other switches that then serve the rest of the client PCs and servers, rather than a switch sitting between your server and your SAN. In the latter case you would want RDMA of some kind, but in the former case the switches generally don't care, as they are just forwarding whatever aggregate traffic arrives on their ports.
FYI, the Marvell Prestera ASICs are natively supported by the Linux switchdev driver, which makes them (along with Mellanox) particularly desirable for those who just want to set up their gear using ordinary Linux tools instead of proprietary CLIs and GUIs.
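To illustrate what "ordinary Linux tools" can look like on a switchdev-exposed ASIC, here is a minimal, hypothetical sketch driving iproute2 from Python. The port names (sw0p1...) and the VLAN ID are placeholders - real names come from the kernel driver - and this is a sketch of the generic switchdev workflow, not a MikroTik-supported procedure:

```python
import subprocess

# Hypothetical switchdev port names; on real hardware they are created
# by the kernel driver and visible via `ip link`. Run as root.
PORTS = ["sw0p1", "sw0p2", "sw0p3", "sw0p4"]

def run(cmd: str) -> None:
    """Run one iproute2 command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)

# A VLAN-aware Linux bridge; on switchdev hardware the bridge and VLAN
# state is offloaded into the ASIC instead of being handled by the CPU.
run("ip link add name br0 type bridge vlan_filtering 1")
run("ip link set dev br0 up")

for port in PORTS:
    run(f"ip link set dev {port} master br0")
    run(f"ip link set dev {port} up")
    # Untagged access port on a hypothetical VLAN 10:
    run(f"bridge vlan add dev {port} vid 10 pvid untagged")
```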
Thanks for the info; we are planning the upgrade of our production studio network from 10GbE to a new speed. This is an incredible device.
Very cool...
While you're looking at breaking the 100GbE ports down... I am actually looking at this as a cheaper way to get high speed connectivity between nodes in a cluster.
There are a couple of downsides...
1) Price. The price is all over the place: $720-$1200 depending on where you purchase it.
2) Documentation. (It did not get a good review on Amazon.)
3) The processor. As indicated, the low-end processor could be a limiting factor.
That said, the goal here is to create a higher speed network for a cluster of servers to take advantage of NVMe drives for faster response.
Definitely cool in that the Mellanox / NVIDIA switches are ~$10K (16 ports).
People always recommended to me - if you want something for networking for your home or small company, especially if you are making something professional - go with MikroTik. They might be more expensive than dirt cheap units, but they are sooo reliable and so, so much over normal specifications. So I assume it is true at least for some segment. That advice was given to me a good 15 years ago, so things might have changed.
I am unaware of anything this cheap for 4x 100GbE elsewhere, and we have been looking on AliExpress weekly.
@@ServeTheHomeVideo That is fair. Though I also got external HDDs from China - 32 to 35 TB for $35 each, with delivery and taxes.
And again, that quote was for general networking, and that particular access point that I was supposed to buy was like $150, whereas more normal ones went for $40-50.
@@jannegrey no real manufacturer has yet made any 32 TB HDD (and only a couple of those are left, WD and Seagate) and biggest SSDs around are like 8TB and cost an arm and a leg. Imho you either wrote a wrong number or it's a well-known scam of fake high capacity drives. Test the true capacity of that drive with H2testw or ChkFlsh tools before trusting those drives with any data
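For reference, those tools essentially write unique data across the whole advertised capacity and read it back; a fake drive that silently wraps or drops writes fails the comparison. A minimal Python sketch of the same idea (the path is a placeholder, the file fills the drive, and for a rigorous result you should remount or reboot between write and verify so reads come from the medium rather than the OS cache):

```python
import hashlib, os

TARGET = "/mnt/suspect_drive/testfile.bin"  # placeholder path on the drive under test
BLOCK = 1024 * 1024                          # 1 MiB per block
BLOCKS = 100 * 1024                          # ~100 GiB; raise to cover the full capacity

def block_data(i: int) -> bytes:
    # Deterministic, unique content per block: drives that wrap around
    # or discard writes will fail the read-back comparison below.
    seed = hashlib.sha256(str(i).encode()).digest()  # 32 bytes
    return seed * (BLOCK // len(seed))               # exactly 1 MiB

with open(TARGET, "wb") as f:
    for i in range(BLOCKS):
        f.write(block_data(i))
    f.flush()
    os.fsync(f.fileno())  # force data out to the device

# Ideally remount/reboot here, as H2testw's separate verify pass effectively does.
bad = 0
with open(TARGET, "rb") as f:
    for i in range(BLOCKS):
        if f.read(BLOCK) != block_data(i):
            bad += 1
print(f"{bad} corrupted blocks out of {BLOCKS}")
```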
MikroTik has been cheaper than more or less everything else in the same category for at least a decade. They make business-grade stuff, not consumer-grade. So yes, it is more expensive than a random TP-Link device you get at a big box store, but much cheaper than other business-grade product lines from other vendors.
@@marcogenovesi8570 I know. I bought it because it was dirt cheap for a 10TB Seagate disk. It was sold as such. So I went to disk manager and found out that it shows as two 5 TB volumes (so 2 different letters). But there was exactly 11,000,000 MB of unformatted space on each "half-disk" (so it is either 32 TB or 36 TB - I've started to lose count, given how many conversions of 1024 you're supposed to do). I couldn't believe it - at first they were wonky, but I had 3 external drives already plugged in. I unplugged those, formatted these as NTFS, and they work! And despite my fears, the data that I put on them didn't disappear. Also, they were specced better than they were supposed to be: they were supposed to be cheap because of USB 2.0, but they had 3.0 and they actually use that bandwidth. They came with a small installation guide for "Windows" and "XP"... it isn't very well translated.
I have pictures. If I could link them and not risk YT deleting the post, I would. If you can think of any other method of sending pictures, please tell me. I will gladly do so.
I suspect that they might be "rejects" from the Seagate plant (or whoever makes HDDs for them). 32-36 TB in a 2.5 inch, very portable, very thin chassis. I literally had no idea those existed, ffs! I always thought that internal drives were usually bigger than external. I seriously have no idea what is going on! They went on sale only in November. They come from China and it takes 1-2 months for them to get here. I ordered 10 of them now. It might be the worst financial decision of my life (well, one of the worst), because there is no guarantee that they are up to that impossible spec.
And yes - I checked their full capacity. I was able to fill them (I only went for a 95% fill, so they wouldn't slow down like old HDDs did when they were full - though that was mostly due to the page file), wait 3 days, connect them to a different computer, and copy all of the data! Not at once, of course. I do not have a 32 TB drive outside of this one, but all the drives I have combined have that storage capacity.
A piece on RDMA and requirements in multi-layer switches would be interesting.
If you have a NIC with iWARP support (Intel and Chelsio mainly) they are TCP based and don't use PFC. Flow control is still recommended, but not necessary. Some of the RoCEv2 RDMA implementations can now work over lossy fabric. They add a special header to the frame that can track packet loss and retransmit the data, but they are vendor specific last I checked.
This is pretty awesome. My home lab is already saturating my 10G networking, so this is suddenly becoming really tempting for this price.
Your wallet: NOOOOO!!!! 🥺😭
😈
@@DavidTrejo Thankfully I am sharing my lab with my roommate, so we would share the cost.
Hopefully this product release makes Ubiquiti do a competitive device. Been wanting a small quiet 100GbE switch but stuck in the Ubiquiti ecosystem.
We looked at the original USW-Leaf (link in the description) - Ubiquiti is focused on using Nephos chips (a MediaTek subsidiary) for 25/100GbE, which are at best a 3rd tier option in networking. My feedback then, and again when they were trying to re-launch that line a bit later, was that if they want to be serious about higher-speed networking, they need to use switch silicon from a major vendor.
@patrick - great advice. Hopefully they listen 😅
Mikrotik sure has nice reliable switches
Hi Patrick, thanks for mentioning RDMA. I wasn't aware it's something that might not work. Need to investigate further.
This is fantastic, and will be great for next generation stuff (if they can get the price down to under ~$400ish in the next couple of years), BUT I'm tired of all of these conflicting networking standards. It's STILL difficult to find integrated 10G onboard on motherboards (for some stupid reason lots of manufacturers are pushing 2.5G, which is both too slow and incompatible with the normal 10G stuff that has been around for a long time now, making it an overall worse solution). AND on the switch side, for some annoying reason, I still can't find a decent 8 port 10G switch for under $200, which, given that they've been on the market for some time, shouldn't be that hard.
I'm with you on this; reasonable consumer networking has been stuck on 1GbE for nearly 2 decades now. The move to nickel and dime us with 2.5 and 5GbE is clearly just a planned obsolescence strategy as they move up the ladder to eventual consumer 10GbE, once 10Gb internet service becomes more available.
Apparently the main reason NBaseT (2.5Gb and 5Gb) exists is because of twisted pair (UTP/STP) cable standards.
10GBaseT over CAT6 is limited to around 30-55 meters. You need CAT6a to support the full 100 meters.
2.5GBaseT can do 100 meters over CAT5e and 5GBaseT can do it over CAT6.
In other words, this is to allow buildings that are already wired up for CAT5e or CAT6 to upgrade beyond Gigabit without having to rip it out and replace it all with CAT6a.
I think we're hitting the limits on what we can do with UTP/STP cable anyway, as the highest you can do as of 2023 is 40Gbps with CAT8 over 30 meters (so really switch uplinks on a datacenter rack or from switch to router unless your computer is fairly close to the switch physically).
Any faster than that and you either need fiber or twinax DACs. Over longer distances and you're using fiber.
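The reach figures from this comment, collected as a small lookup (values as quoted above; real-world reach also depends on cable quality and crosstalk):

```python
# (standard, cable) -> max run in meters, per the figures quoted above.
MAX_REACH_M = {
    ("10GBASE-T",  "Cat6"):  55,   # often quoted as 30-55 m
    ("10GBASE-T",  "Cat6a"): 100,
    ("5GBASE-T",   "Cat6"):  100,
    ("2.5GBASE-T", "Cat5e"): 100,
    ("40GBASE-T",  "Cat8"):  30,
}

def reach(standard: str, cable: str) -> int:
    """Return the quoted max run in meters, 0 if the combo is unsupported."""
    return MAX_REACH_M.get((standard, cable), 0)

print(reach("10GBASE-T", "Cat6"))  # 55 -> why NBaseT exists for old wiring
```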
Well, MikroTik makes one; you can find them for sub-$200. There is also a TP-Link 8 port 10G for around that price, and also the Ubiquiti Switch Aggregation.
@@killerb255 Well, UTP is just for patch cords now, or for endpoint access. And this won't really change any time soon; this is obviously a physical limitation, not only because buildings are already wired up. And there is no good cost-wise scenario for giving end users anything beyond 2.5, for example. If you do Cat8 over long distances, that will still be very expensive. I don't think these standards will change. The increasing number of cheaper higher speed switches is only going to desaturate the extremely fast storage now available for servers in a rack. But then again, I personally just stick to Fibre Channel with FCoE for that. Anyway, I'm tired as fuck and don't know what I'm rambling about. Have a great day, y'all.
MikroTik sure does great stuff, but sometimes with a bit strange product decisions. This seems reasonable. I think this switch is targeted at connecting a campus or a couple of buildings together, not really for a datacenter. RouterOS is fun and great as well.
Very cheap for 100G, but the RDMA stuff is extremely important if you want to test / learn about it with a home lab.
Especially since I have had some very bad experience with MikroTik updates (IPTV IGMPv3 support on SwOS is basically "downgrade so it's unaware of IGMP existing", despite literally years of begging on the forums to fix it), so I highly doubt it would be included in an update.
Don't use SwOS for anything other than a basic VLAN setup. Stick to ROS.
This could be a nice backbone for an SMB: use one port as an uplink for WAN access and the other 3 to link 3 other switches that support uplinks like these, with their remaining ports being standard Ethernet up to 10Gbps or 25Gbps for the internal servers/workstations. You could maybe even chain the network further down from the 10/25Gbps links to switches with 1Gbps to 5Gbps ports and 10Gbps to 25Gbps uplinks, to provide network access to users with lower requirements.
This is indeed a great backbone for a medium sized business with higher than average network performance needs.
06:26 very important information there! On the CCR1074 we tried to use the gigabit management port to forward traffic, and at 300Mbits the router started to randomly crash. MGMT ports are really only for management, although a few years ago it was just marked "ether1" without any clear indication of how limited these ports are. So be aware.
Great point. It also messes up features like L3 HW offloading on switches since that port is off-chip.
@@ServeTheHomeVideo Yeah, that is quite clear if you look at the block diagram. But a "normal" looking copper port crashing a router at 1/3 its rated speed was kinda unexpected 🙂 I am going to take a look, but the last time I checked they were still not highlighting how limited these ports are.
Well, they could also help by not putting the stinking thing in the system (software!) bridge by default.
@@jfbeam As far as the CCR series goes, they don't come with any pre-existing default config. But agreed regarding their CRS series.
I'm still running 1GbE everywhere, but this is really interesting for how cheap it is.
Stop making videos about Mikrotik products :P. It's getting really hard to find their gear thanks to everyone shedding light on how awesome their stuff is.
As much as I am not a fan of MikroTik, this is a product I have been looking at for a while. It is tempting; the performance per price is excellent!
That's a nice toy. I just upgraded to a 2.5/10GbE QNAP, so 100GbE is a few years away for me. Still nice to see what's coming. I'm on ARM SBCs and those devices are only now starting to come with 2.5GbE. Fast enough for me... for now. Cheers.
We are going to have a big 2.5GbE switch series coming in about a month.
I bought two of these switches after watching this video.
This is a milestone - really. Hopefully we will see some more copycats and more competition; in a few years these will be commonplace for SMB. A game changer for in-house compute - 100GbE NICs are not super expensive - this is the best upgrade for SMB right now. A great upgrade path from 25G to 100G, and then add another - this kit matches up well with NVMe. You have had these for months - what took so long for a review?
Very nice review with solid technical comments. I would like to add, if RDMA is a requirement, Infiniband EDR should be considered.
Hardware accelerated routing with a few simple ACLs across VLANs is pretty much a must for me at 100GbE.
Not having that is quite limiting if this is to be the core switch in a SOHO or home lab environment.
I agree that this is a required feature, and Patrick stated that this DOES have hardware accelerated inter-VLAN routing. Or did I mishear? 🤔
@@DavidTrejo Please correct me if I am wrong, but I think the hardware acceleration is for switching (layer 1 and 2) only?
Routing between VLANs would be layer 3. See also 12:38.
We have 25gbit FTTH here in Switzerland from init7 so SFP28 is definitely something you want in your home. Too bad the SMF QSFP28 modules are still quite expensive.
^5 from Germany where I get 100mbit max :-)
Damn, that's like free for 100GbE - ~$200 per port. Thank you STH for this amazing review.
All I can say: vSAN lab or Proxmox.
MikroTik is really, really winning my heart with their products. It's very hard for me nowadays, as many companies do a copy and paste, or do something stupid like adding port multipliers when not needed, like on my ASUS B650E-E board - the B650EEEEEEEEE, why...... It has 8 built-in SATA ports; sure, some are used for other things, as they can be extra PCIe lanes, but the board has USB audio, USB RGB... Whereas MikroTik has paid attention to their design based on need and what works well - or a router on a PCIe card.
The router on a PCIe card is sitting off to the left of the screen in this video
@@ServeTheHomeVideo I would love to play with that and see if it's possible to install other software on it, like OPNsense or something like that.
My problem with 25 Gb is: even if the cost is low, what would I use it for? I rarely utilize my 10 Gb connections now - about 60 minutes per month. Plus, my Internet connection is 500 Mb per second, which rarely seems to max out. Now, if I could find a managed switch with 8 to 12 Ethernet ports for $200-$300, that would interest me. I wouldn't want QSFP28 or SFP+ because those transceivers seem very overpriced.
There are some nice 8/12-port switches from Juniper and Aruba. The current models with 10G uplinks will rarely pop up in your price range, but you can get lucky. The older models without 10G definitely can be found at that price and are as fully managed as it can get, with less brainfuck than e.g. a Cisco SG-300 or SG-350.
I would like to see a 6x to 8x 40GbE ("breakable" into 4x 10GbE each) MikroTik switch for $400-$500.
Ideally with the same switch chip and capabilities as the CRS326-24S+2Q+RM (so either 8x 40GbE, or even better, 6x 40GbE + 8x 10GbE)...
100GbE is overkill for home (last time I tested with a modern Zen 3, I wasn't able to go over ~32Gbps single threaded; I can saturate 40GbE with multiple TCP streams, though) and 40GbE NICs are really cheap these days...
Dual 25GbE and 100GbE NICs are also dropping in price very quickly these days. With Intel's launch of Sapphire Rapids that we covered a few weeks ago, high-end server NICs are now 400GbE, making 100GbE NICs two generations old. I think a lot of folks are better off using 25GbE for the higher throughput per channel operation.
afaik 40gbit is a dead end technology that cannot scale up like 25/100/400/whatever-is-next by just multiplying by four, nor be broken down into 10gbit transparently at the interface level; that's why the 40gbit NICs and switches are so cheap. So abandon all hope that anybody will ever make new 40gbit stuff; the only way forward is the 25/100/400 train.
@@ServeTheHomeVideo Still, it would be nice to have the option of a small 4~8x 40GbE efficient switch for the home lab etc.
(As said, MikroTik already uses the 98DX8332 switch chip in the CRS326-24S+2Q+RM, so it would be ideal to use that very same silicon in a cheaper 40GbE switch.)
Of course, many will choose the higher speed, but the NICs and transceivers are also significantly more expensive than the 40GbE ones, even second-hand...
@@marcogenovesi8570 "nor be broken down in 10gbit trasparently at interface level"
LOL, The very switch chip I was refering to can do port braekup 40->4x10gbe, as it is the case with Mikrotik CRS326-24S+2Q+RM, which is a quite new switch.
Most PC's today still have 1gbe, so there is a lot of room for 40gbe...
I've just upgraded to a 10GbE switch. I get about 8Gb/s when I run iperf3, and about 3.5Gb/s on actual real-life workloads. Definitely worth the upgrade from my previous 1Gbe switch, but I don't think I need anything faster for a while.
10GbE is a huge upgrade over 1GbE!
Even if I'm not running at (10Gb) line speed, just not saturating the (1Gb) link is huge. E.g., start a large transfer on 1Gb and 1-2 devices are now barely networked for anything additional.
WOW, that's "cheap"! There are several Mikrotik products that I really like, I own 1 of the CRS 309 10gb switches, bought a second for work for a cheap top of rack type of use.
Thanks for the amazing energy as always! I often watch your review videos just to put myself in a better mood; your passion and positivity absolutely radiate through the screen. Even my mother, who doesn't speak English at all, heard you and said "he sounds like he loves whatever he does!"
One question: can I stay in SwitchOS if I am using a breakout cable? Thinking of adding a 25Gb Mellanox to my TrueNAS server, or adding a 40Gb Chelsio and using a breakout to 10Gb SFP+ to connect to 4 ports on the CRS317 - could that all be achieved in SwitchOS instead of RouterOS? I just need layer 2 switching. Thanks! Please point me in the right direction for documentation too; I am not a networking person at all.
Ha! Thanks for that. I think you have to use RouterOS with these.
And I'm here at home, super excited that I'm designing a 10-gig network for my home.
Hey Patrick! Any plans to look at the Dell Precision 7865 Threadripper workstation? Hope we see Threadripper rackmount workstation offerings from these vendors soon!
I am not sure TBH. With Genoa out, it puts a lot of pressure on Threadripper 5995WX since Genoa 64C is faster. The next Dell review will be the new R760
Hey! That switch looks familiar! 🙂
Looks like a cool switch, but it's simply overkill for my homelab. I'm running everything in the rack at 40GbE/40Gb IB (Mellanox SX6036) and I can't manage to saturate the network with anything except synthetic benchmarks, so there's no real point in upgrading, especially with the other hardware being so expensive comparatively.
Overkill, but you're running Infiniband. I would have called that overkill just a couple of years ago!
That's killer for SME clients. I still like the Noah's ark style of switches for homelab customers though.
Now waiting for cheaper Mellanox ConnectX-4 cards, because even 10GbE cards are like twice as expensive as a few years ago, which is ridiculous.
Totally. The other way to look at it though is on a $/Gbps basis: usually the 100GbE gear is less expensive than 10GbE, but that assumes you need it. BTW the CX5's are totally worth it if you can get them for only a bit more. Usually Mellanox does a first gen at a new speed, then has its next gen as the good gen at that speed.
@@ServeTheHomeVideo I get all of my Dell rebranded ConnectX-4 cards off of eBay as used gear. They work great for my home machines. They're 10/25 gig capable, so I am running them through 10 gig MikroTik switches for my home lab. I'd be a prime candidate for this new switch.
It is awesome, but any company that requires that bandwidth wouldn't worry about the cost.
This will be a perfect switch for my pool house.
No RoCE or iWARP... maybe it is because MikroTik is more used to ISP environments than server environments.
It's like Cisco Nexus vs Catalyst.
maybe
What is the limit of how fast you can pull data off of the PCI Express bus, or for that matter, how fast can you pull data from a hard drive or an SSD? I'm just wondering, if you have a 100Gb Ethernet card, how much throughput can you really get out of it. I suspect it's nowhere near 100Gb. The breakout of multiple 25Gb lines going to one of those 100Gb ports might make sense, since you're talking about capacity more than speed, but the other I don't know.
PCIe Gen3 x16 can handle a 100Gbps link. A PCIe Gen5 x16 link can handle 400Gbps so it takes only a x4 link on the new generation of servers
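A back-of-the-envelope check of those numbers, counting only per-lane line rate and encoding (real NICs lose a few more percent to PCIe protocol overhead):

```python
# Per-lane transfer rates in GT/s and their line encodings.
GENS = {
    3: (8.0,  128 / 130),   # Gen3: 8 GT/s, 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def usable_gbps(gen: int, lanes: int) -> float:
    """Raw usable bandwidth of a PCIe link, before protocol overhead."""
    rate, encoding = GENS[gen]
    return rate * encoding * lanes

print(f"Gen3 x16: {usable_gbps(3, 16):.0f} Gbps")  # ~126 -> fits one 100GbE link
print(f"Gen5 x4:  {usable_gbps(5, 4):.0f} Gbps")   # ~126 -> same on a quarter the lanes
```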
The RDMA bit - I wonder if iWARP would work, but I primarily see this as a good thing for a backup network. Link it into our Mellanox via 100G and *boom*, all the backup traffic moves off the prod network, and you get a 100G link to a different room. Put a second one there, or just put a 100G NIC in the backup server and break that storage wall.
Amazing how this video has not been up for 24 hours and yet the price jumped $30 USD. But I do want one to extend my 40 gig off of my Nexus 3K :)
I like this form factor, but not necessarily as a four-port rackmount switch. It'd be awesome if Mikrotik produced another device like this, except swap out the wimpy MIPSBE CPU for their Annapurna AL73400 CPU and call it their "Cloud Router." You know, like how that Annapurna CPU is the crucial piece that turns a CRS518-16XS-2XQ into a powerful CCR2216-1G-12XS-2XQ. On a network with dedicated switching equipment, the border router should also be dedicated, and that router does not need very many physical ports. These days, it seems like we only really need to route between virtual interfaces and vlans. That's why the "Cloud Router" moniker would fit. These four ports are just enough to make a router redundantly join a switching fabric and then provide the CPU and memory oomph to provide routing to the network the switches in the fabric don't have. But alas, if I want Mikrotik's router in this 100-gigabit class, I'd have to pay one of their heftiest price tags and get 12 additional SFP28 ports local to the router itself that really should be on another CRS518 in the fabric instead.
My OCD can't take the power and data being on the front. It would mess with the feng shui of my network rack 😶
That 518-16XS-2XQ.... 🤤
It is very common for network gear. If you look at a lot of the O-RAN gear we review, power, drives, and I/O are all on the front, with only fans in the back, so that someone does not need to service the units from behind while in the field.
How do you test 100Gbps? I'd love to see the testing.
I have a 10Gbps test setup. I am planning to set up a 100G test.
Can a single PC consume 100Gbps?
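On that question: a single TCP stream usually cannot, so 100G testing typically fans out across parallel streams and processes (one iperf3 process is effectively single-threaded). A minimal sketch of that pattern; it assumes matching iperf3 servers are already listening on ports 5201-5208 on the other machine, and the hostname and port range are made up for illustration.

```python
# Launch several iperf3 clients in parallel against separate server
# ports (start the servers first: `iperf3 -s -p 5201`, `-p 5202`, ...).
import subprocess

SERVER = "192.168.88.10"   # hypothetical test server
PORTS = range(5201, 5209)  # 8 parallel iperf3 processes

procs = [
    subprocess.Popen(["iperf3", "-c", SERVER, "-p", str(port), "-t", "30"])
    for port in PORTS
]
for p in procs:
    p.wait()  # each process prints its own throughput; sum them by hand
```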
I keep forgetting that QSFP can be broken out. Otherwise I was like only 4 ports?
If you saw on the STH main site today, there is a new CRS510 with two of the QSFP28 ports exposed as 4x SFP28 each www.servethehome.com/mikrotik-crs510-8xs-2xq-in-25gbe-and-100gbe-switch-announced-marvell-prestera/
I wish I could earn so much that I could consider a $700 switch "cheap" and have a home server that could put in use all the features of a 4-port 100-gigabit switch.
I would like to see how well the Supermicro A3SSV-8C-SPLN10F runs with pfSense (running Snort to examine traffic at the packet level).
OMG, a lot of money flying around in a 20-minute video. He showed six 25-gig SFPs, two 100-gig SFPs, two of those 100-gig MikroTik switches, plus a 25-gig switch. And I just saw a 400-gig optic in the video, what the F###. Nice video. Thanks.
We are going to have the 64x 400GbE switch review on the STH main site (probably no video) in 2 weeks.
@@ServeTheHomeVideo You always show unique things. Your videos are awesome. Thanks.
CRS504-4XQ-IN is listed for $799 on the MikroTik product page. Is the $699 price mentioned in the video out of date, or is this unit currently sold below MSRP?
We show the Amazon page and link in the description to it at $692. The $799 is MSRP. Normally MikroTik switches sell 15-17% below MSRP. We actually did a price analysis in the early pandemic days to show this when it was hard to get anything to review: www.servethehome.com/mikrotik-crs-switch-cost-analysis-q2-2020/
At this price point, I wonder how this compares to just going for Fibre Channel with a Brocade FC switch and 2 cards (FC HBA and NIC) for each host, or even FCoE on one of the used Nexus 5500/5600 series. Any thoughts? Assuming that the primary use for such a switch in a home environment would be to serve storage.
This thing's awesome! Just very unfortunate that they don't support BGP EVPN 😭. But shouldn't RoCEv2 just work? I thought only v1 had those requirements.
AFAIK RoCE uses UDP and relies on the switches knowing the traffic is RoCE, so that packets are not reordered and congestion control is applied. v2 does not change this.
If you want something that "just works" and does not require special support in the switches, you have to use iWARP, which uses TCP instead and has no other special requirements.
@@marcogenovesi8570 check, thanks!
@@marcogenovesi8570 More recent ConnectX cards (I think CX-5 and newer, but don't quote me on that) have built-in congestion control with RoCEv2, but having ECN support in your network is recommended. Some people prefer to still have PFC, but PFC is not without its problems (flow control can be very annoying in bigger networks, even if you limit it to just the RDMA traffic via PFC).
Also, I don't know how cross-vendor this built-in congestion control would be.
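To make the "the switch just sees UDP" point above concrete, here is a minimal sketch with scapy. The addresses and payload are made up for illustration; the one real RoCEv2 detail is the IANA-assigned UDP destination port.

```python
# On the wire, RoCEv2 is ordinary UDP to port 4791 carrying an
# InfiniBand transport header, which is why a plain L2/L3 switch can
# forward it with no RoCE awareness; PFC/ECN only matter for making
# the fabric lossless.
from scapy.all import Ether, IP, UDP, Raw

ROCEV2_PORT = 4791  # IANA-assigned UDP port for RoCEv2

pkt = (
    Ether()
    / IP(src="192.168.88.10", dst="192.168.88.20")  # example hosts
    / UDP(sport=49152, dport=ROCEV2_PORT)
    / Raw(b"\x00" * 12)  # stand-in for the BTH (Base Transport Header)
)
pkt.show()  # to any non-RoCE-aware switch, this is just UDP
```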
I have a quick question. My new wired router has 3 firewall settings I need to know if I should enable them for home use. FTP ALG, PPTP ALG, SIP ALG. Thanks in advance.
It's just that I already invested in a QNAP M2108-2C back when they were a lot more affordable (they cost €180 more now), and it is usually not practical to lay fiber in houses over here (usually stone walls), but otherwise this would be great...
We actually have a QNAP switch in for a 2.5GbE round-up project.
Just a comment about laying fibre in homes: I found a really interesting product recently on Alibaba, a transparent-jacket single-mode fibre cable. You put it up using little transparent stickers, and it is almost invisible along walls.
Some n00b questions. What kind of uplink router is it reasonable to attach this to? 2.5/10/25/100G? How does that square with the fact that 2G is the best internet uplink I can get in my location? Is it reasonable to split the rest into 12x 25G? Can one go even lower than that? 25G is over the speed limit of Gen3 SSDs, isn't it? How would one saturate such a link without breaking the bank?
Typically you would use this in an environment where you have storage or other servers on the local network. For example, you could have a NAS with SSDs and then use fiber to connect that to your workstation and use the other ports with breakout cables to connect to servers or other nodes so they have access, albeit slower, to the storage as well.
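To put rough numbers on the "Gen3 SSD" part of the question above, here is the back-of-the-envelope arithmetic. The drive figures are typical sequential throughput numbers assumed for illustration, not measurements from the video.

```python
# Can one drive fill a given Ethernet link? (ignores protocol overhead)
links_gbps = {"10GbE": 10, "25GbE": 25, "100GbE": 100}
drives_gbps = {
    "SATA SSD (~550 MB/s)":     550 * 8 / 1000,
    "Gen3 x4 NVMe (~3.5 GB/s)": 3.5 * 8,
    "Gen4 x4 NVMe (~7 GB/s)":   7 * 8,
}
for drive, d in drives_gbps.items():
    for link, l in links_gbps.items():
        verdict = "saturates it" if d >= l else f"fills ~{d / l:.0%} of it"
        print(f"{drive} on {link}: {verdict}")
# Takeaway: a single Gen3 x4 NVMe (~28 Gbps) roughly matches 25GbE,
# so filling 100GbE takes several drives (or RAM cache) in parallel.
```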
I have a home lab and at this point I have never saturated my 100 Mb 24 port switch. For me this would be overkill.
Just got my network setup at 2.5Gbps and watched this 100Gbps ...
Do not watch this 400GbE then th-cam.com/video/1NeJEolN0N4/w-d-xo.html
Do you know whether RDMA support is a hardware or software thing? If the switch ASIC itself can support PFC and other RDMA features, then we are just a RouterOS update away, and we can definitely expect it to come out in the future. In the other case, we just reach a dead end, though.
Why wouldn't RDMA / RoCE work? From the switch's view, it's just another Layer 2 / Layer 3 packet (depending on implementation) like any other packet. The switch is "transparent" to RDMA traffic.
You wouldn't happen to do the voice for Muscle Man on the Regular show as your main gig do you?
Routing is out of the question, but does it do switching at wire speed and on all ports?
I got a 100GbE network between 2 workstations and a NAS, only to find out I can't get 100GbE in Windows 11 Pro for Workstations, as QNAP does not support SMB Direct or RDMA. The fastest speeds I'm seeing are 55 Gb/s write and 12-14 Gb/s read -- a tenth of the potential speed! I've spent a month on this problem now and all of QNAP's tech support and sales and other IT pros cannot help. So unless I'm missing something, it seems the hardware is about a decade ahead of the software implementation? I don't get it.
And here I am, just upgraded my home network to 10gbps
I like that this stuff is coming down in price (you barely even get 10 gigabit on a pfSense-sold router), but what NICs would you pair with it for basic homelab usage, say between my desktop and TrueNAS server (and maybe a Proxmox server in the future)?
I've never clicked a video so fast!
Buying the product is easy; finding the cable and ends to actually USE the 100Gbps is impossible. I'd like to see a switch-to-system cable and ends, because I'd like to use that in my house. I move a lot of files, and it would really help to have 100Gbps vs 1.
60W?? Man, now I want 100Gb. I have no need for it. I don't even use the whole 10Gb I have, but I want this.
You can keep the energy and heat down a bunch by sticking with copper twinax when possible. It is also quite a bit cheaper.
This applies to server-side too.
These really need to come in black.
Is it possible to set 2x25G + 2x10G from a single breakout cable?
CRAP, that's a good price. I just bought the dual-40G / quad-10G / 48x 1G model for about the same. I wish the one I just bought had 2x 100G so I could get this 4x 100G.
This is showing at $799 on MikroTik's site and $725 on Newegg. I don't see this model at all on Amazon. Where is it $692?
It's all very cool and definitely incredible value, but at what point are you going to saturate a 10Gb link at home, let alone a 100Gb link? Unless you are striping (RAID 0) your data across multiple NVMe Gen4 drives, which you don't want to do unless you are prepared to replace them fairly regularly or you are running enterprise NVMe drives.
I wonder how much performance you can actually get on a single socket connection.
I found our Xeon Cascade Lake and Ice Lake CPUs really struggle to push enough data to saturate 100GbE... at most we get about 54Gbps.
Why did they put the power on the same side as the ports? That seems bizarre.
Very common for edge network gear. If you have the power and ports on the same side you do not need to access the rear of a rack for service. You can see it on some of the O-RAN focused servers we review as well.
1) I would LOVE to try and deploy RDMA on something like this, JUST to see if it will work.
(I already have NFSoRDMA deployed over my Mellanox 36-port 100 Gbps Infiniband switch, so for me, to be able to test it, all I would need to do is just change the port type from IB to ETH, and then see if it will even run.)
2) I think that an 8-port version of this that's relatively low cost, low power, and low noise would be the PERFECT switch in a homelab setting.
4x 100 GbE ports is probably the bare minimum, but it is nice to know that if I want to connect a few systems up together, I WOULDN'T need to fire up my said Mellanox switch to be able to do something like that.
Cost-wise though, because this is a new switch, on a $/Gbps switching capacity basis, or $/port, this MikroTik switch will cost more than the Mellanox switch I was able to buy off eBay. But then again, the MikroTik is also a LOT quieter (the Mellanox switch is a LOT louder and also consumes more power, in the ~200 W range).
That's a good price. I have a Cisco 3650 that cost $4k.
I'd like to see how this could work with HDMI over Ethernet. HDMI 2.1 can do up to 48Gbit/s, so it could be done without compression over TCP.
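For a sense of what uncompressed video actually needs inside that 48Gbit/s, here is the raw-bitrate arithmetic (a sketch that ignores blanking intervals and link encoding; the formats are illustrative):

```python
# Uncompressed video bitrate: pixels * frames * bits per pixel.
def video_gbps(width, height, fps, bits_per_channel=10, channels=3):
    """Raw video bitrate in Gbps (no blanking, no encoding overhead)."""
    return width * height * fps * bits_per_channel * channels / 1e9

print(f"4K60  10-bit: ~{video_gbps(3840, 2160, 60):.0f} Gbps")
print(f"4K120 10-bit: ~{video_gbps(3840, 2160, 120):.0f} Gbps")
print(f"8K60  10-bit: ~{video_gbps(7680, 4320, 60):.0f} Gbps")
# ~15, ~30, and ~60 Gbps respectively: 4K120 fits inside a 100GbE
# link with room to spare, while 8K60 exceeds the 48G figure, which
# is why HDMI 2.1 itself leans on DSC compression for those modes.
```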
$700 is pretty good for 4x100G. I paid more than 1/3 of that for a used Brocade that I had to, ahem, "lightly convince" to turn its four stacking ports into 2x40G ports and 2x4x10G ports, and its management UX is truly horrible compared to anything modern.
You must have been very convincing and said exactly the right words for that!
Apparently the base SoC supports PFC, but MikroTik still doesn't support it. (dig through the forums, there's a response from support back in 2019) That said, it'll probably work about as well as iSCSI on a switch with tiny buffers... well enough until you push real traffic across it.
good point!
@@udirt Yes likely, but you don't need it with ZTR.
I wasn't even considering 25gb LAN, much less 100gb, and here I am now considering a mix of both SMH.
@CraftComputing teased an upcoming video on 100gb LAN with a switch that looks like this one so I want to see that video before I start throwing money at eBay.
It's a lot cheaper than Cisco, HP Aruba, Oracle, and other server-brand switches.
power consumption is A+
Can you turn the smiley on for the videos?
There is a bit of a flicker so it was off for this one. It should be on in the newer videos.
@@ServeTheHomeVideo 😄 tnx
I barely saturate my 10 gig network in my home lab. It's a cool switch but I don't need it...yet...
Is there anything on the market with something like 12-24 ports of 2.5GbE? All I seem to find is 8 ports, and I need more than 8 (12 at last count).
It's great and all, but it'll just bottleneck at the end user's device. I don't think my Fire Stick can utilize that speed.
Brill Channel Bro - tots lovin it! I work for a DC in the Valley and this is super insightful. Big thanks!
Patrick I love your videos!
As someone who has worked in datacenters for going on 9 years... It pains me to see you suggest breakout cables D:
I don't even have 10Gbps, and some of you out there are already upgrading to 100Gbps :D
I bought this switch plus one used Mellanox CX455A and one CX555A NIC. I can't get them to work with my Windows 11 machines. The NICs get seen by Windows and have drivers that are "working", but they will not show up on the Ethernet network. Can anyone help?
Are they set to Ethernet mode not Infiniband?
How do I set the mode? mlxup and the utility program don't give me a choice of IB or Ethernet. Thanks in advance.
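Not an answer from the thread, but for anyone who lands here: mlxup only updates firmware. On VPI-capable ConnectX cards the port protocol is normally switched with mlxconfig from the NVIDIA/Mellanox MFT tools (available for Windows too). A minimal sketch; the device name below is an example, so find yours with `mst status` first.

```python
# Flip a ConnectX VPI port from InfiniBand to Ethernet via mlxconfig
# (requires the MFT tools to be installed; reboot after changing).
import subprocess

DEVICE = "mt4115_pciconf0"  # example device name; check `mst status`

# LINK_TYPE_P1: 1 = InfiniBand, 2 = Ethernet
subprocess.run(["mlxconfig", "-d", DEVICE, "set", "LINK_TYPE_P1=2"],
               check=True)
```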
12:10 Nah man, I totally created stateless HW-offloaded ACL rules on my CRS305 that block traffic from the server to my PC (but not the other way around) 😆
Did you test this powered only by PoE?
It'd sure be nice if MikroTik made ONIE switches... :/
What 100G optical modules do you use in this switch?