New Boot SSD for my PROXMOX System

  • Published Nov 23, 2024

Comments • 112

  • @Trains-With-Shane 9 months ago +12

    Good enough for a test rig. Think I'll stick with established brands for anything that I put into production. Especially when you can get Western Digital, Crucial, and Samsung for less or just a tiny bit more money. At least you know the company will be around if and when you need to ever make a warranty claim.
    Now that being said the migration info was solid!

    • @apalrdsadventures 9 months ago +2

      The old one was a WD actually, although my Samsung drives have all been doing great in my production Proxmox system.

  • @NetBandit70 9 months ago +30

    FIKWOT: A name you can trust.

    • @JoseOcampo-g5m 9 months ago +1

      I switched the vowels in my head

    • @BoraHorzaGobuchul 9 months ago +4

      @@JoseOcampo-g5m No need for that, since "fick" in German is exactly the word you thought of. "F*k what?" is exactly what came to my mind when I heard of using a cheap Chinese SSD as a boot drive for a machine running stuff that might be important.
      Now it could work, who knows, but Proxmox is not ESXi and tends to write a bit more to the boot drive. A large overprovisioning area might help. Still wouldn't do that. I'd use a brand-name SSD for that, and preferably in a mirror.

  • @BillLambert 9 months ago +36

    I'm really not inclined to trust an unknown SSD manufacturer that doesn't even post spec sheets for their products.

    • @tendosingh5682 9 months ago

      Exactly. Why would you risk it, unless it was a hugely discounted price and for non-important use? The only things that matter are the NAND quality and the controller, not that it's new or big.

    • @cheebadigga4092 6 months ago

      yeah I just use Kingston KC3000s wherever I can

    • @philipkeeler9997 3 months ago

      Most of the bad Chinese stuff has been found to be label-stripped and replaced with flat-out fraudulence.
      Storage has become ridiculously cheap. BUT you still gotta stay vigilant.
      Used to be you could trust WD benchmarking. Now... not so much.
      And Intel? WTF's goin' on over there. That big green splash Nvidia is about to
      roll out a tsunami bigger than the other big green wave of LnxMint 22 (aka hurricane Wilma),
      and it is rapidly corroding the stupidity of MS snapshotting your every keystroke. Wow....

  • @snap_oversteer 9 months ago +8

    Recently I bought a couple of older unused U.2 800GB SK Hynix SSDs for $40 each, along with $15 AliExpress PCIe adapters. Not the fastest SSDs out there, but ~4 PBW endurance and power-loss protection are nice to have on a server.

  • @seanunderscorepry 9 months ago +24

    Very cool of Farquad to send you an NVMe drive !

    • @citypavement 4 months ago +1

      Meh, it's advertising.

  • @MikeDeVincentis 9 months ago +6

    Always feels like talking to a friend when watching your videos. Thanks for the explanation. Don't need this now but good to have for future reference.

  • @Mr.Leeroy 9 months ago +5

    The way you added the new drive to the ZFS pool is asking for headaches; add it by GPT UUID, the way the original was added.
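
    As a hedged sketch of what attach-by-ID looks like: the pool name `rpool` is the Proxmox default, and the by-id paths below are made-up placeholders for whatever `ls /dev/disk/by-id/` actually shows on your system:

    ```shell
    # List stable device names; pick the -part3 entry for each drive
    ls -l /dev/disk/by-id/

    # Attach by ID instead of /dev/nvme0n1p3 so the vdev name survives
    # reboots and device re-enumeration (names below are examples)
    zpool attach rpool \
        /dev/disk/by-id/ata-OLDDRIVE-part3 \
        /dev/disk/by-id/nvme-NEWDRIVE-part3

    # Watch the resilver progress
    zpool status rpool
    ```
    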

  • @robertopontone 9 months ago +9

    Always impressed by your deep knowledge of such niche topics 😮

  • @leescott8278 9 months ago +2

    I actually just made a similar move from my 500GB M.2 boot drive to a 2TB 980 evo pro. I booted into a live CD and used dd to clone my boot drive to the new drive; after the clone was complete, I booted into GParted to expand the local-lvm partition. Once booted back into PVE on the new drive, I expanded the local-lvm filesystem from the CLI.
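
    The dd-from-a-live-CD approach can be sketched like this; here two image files stand in for the real block devices (which must be unmounted, hence the live CD), so the snippet is safe to run anywhere:

    ```shell
    # Stand-in "drives": two image files (on real hardware these would be
    # the old and new block devices, e.g. /dev/sda and /dev/nvme0n1)
    truncate -s 4M old.img
    truncate -s 8M new.img
    printf 'bootdata' | dd of=old.img conv=notrunc status=none

    # Block-for-block clone; a large bs= speeds up real-device copies
    dd if=old.img of=new.img bs=1M conv=notrunc status=none

    # Verify the cloned region matches, then grow the partition/fs
    # with GParted / lvextend as described above
    cmp -n 4194304 old.img new.img && echo clone-ok
    ```
    
    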

    • @philipkeeler9997 3 months ago

      Dude, 'clone the boot drive' to the NvMe
      Set BIOS, boot from it... carry on! Of Course!!
      Thank you. Sometimes things really are just that easy?

  • @Chris.Wiley. 9 months ago +5

    Boy oh boy would this video have helped me a couple of months back when I had my primary drive start to fail in my Proxmox backup server. I tried and tried to use Clonezilla to duplicate the failing drive to my new drive but failed miserably. I ended up backing up some key files from /etc and just doing a complete reinstall of PBS on the new drive.

    • @apalrdsadventures 9 months ago +4

      This is mostly made possible by using zfs, so you can resilver to the new drive while the old drive is still in the system.

    • @blakecasimir 9 months ago +2

      CZ flat out refuses to back up drives with Proxmox on them. It's frustrating.

    • @Chris.Wiley. 9 months ago

      @@blakecasimir It seemed to complete OK when I cloned them, but no matter what I tried, the cloned drive would not boot. It's like the grub stuff didn't transfer over or something.

    • @Darkk6969 9 months ago +2

      One of the reasons why I have a pair of 500 gig SSDs in a ZFS mirror for boot in Proxmox. There are even special instructions on how to deal with a failed ZFS boot drive.

    • @blakecasimir 9 months ago +1

      @@Chris.Wiley. I didn't get that far, for me it failed with an error when trying to create a drive image.

  • @pedrobastos4342 9 months ago +4

    If my drive is LVM or ext4 instead of ZFS, can I use just "dd" to copy the data partition?

    • @apalrdsadventures 9 months ago +6

      Not while the partition is in use, but lvm has a similar mirror copy feature to zfs
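
      For the LVM route, the usual online migration is `pvmove` rather than a literal mirror attach; a sketch with placeholder names (the VG name `pve` is the Proxmox installer default, `/dev/sdb3` old PV, `/dev/nvme0n1p3` new partition — substitute your own):

      ```shell
      # Prepare the new partition and add it to the volume group
      pvcreate /dev/nvme0n1p3
      vgextend pve /dev/nvme0n1p3

      # Move all extents off the old PV while the system stays online
      pvmove /dev/sdb3

      # Drop the old PV from the VG and wipe its LVM label
      vgreduce pve /dev/sdb3
      pvremove /dev/sdb3
      ```
      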

  • @PlatyBZH 9 months ago +3

    Nice video, as always
    Why don't you just clone the drive with something like clonezilla, and resize the ZFS partition ?

    • @apalrdsadventures 9 months ago +9

      This keeps the system online, and keeps zfs aware of the new disk

  • @SeanPorterPDX 9 months ago +2

    I have to say, I really like the way nano handles long lines… is that the default behavior or a plug-in or setting?

  • @ruwn561 9 months ago +9

    Tri-level cell. Not layers.

  • @shephusted2714 9 months ago +2

    Prices on SSDs jumped up again; they should fall again at some point. My request for you would be to make a dual low-power NAS, but with some special qualities: mega RAM, good NVMe caching layers, and all-flash arrays, plus a 40G dual-port card. It's a little pie in the sky, but maybe you could do something with older hardware, even a Z420 board. Mega RAM for a NAS is very important, but the weakest link is probably networking for the SMB sector and homelabbers/prosumers. Having a dual NAS is worth the time and money too; you can do point-to-point with dual-port cards and also sync up NASes quickly. The 56G cards are like 40 bucks.

  • @parl-88 9 months ago +2

    Great video! Thanks for making it. Quick question: could you teach us how you power your MegaLab? What parts did you use to MacGyver your way through it? I would love to copy that from you, since a real power supply is bulky for a lab setup. Thanks!

    • @apalrdsadventures 9 months ago +3

      It's this - www.mini-box.com/picoPSU-150-XT-150W-Adapter-Power-Kit
      I don't remember if I got the 120W or 150W but it's one of those. Not very powerful.

  • @Devicesdevices-et3mi 1 month ago

    Many thanks for this; it was exactly what I was looking for. Main instructions start at 10 min 17 secs in.
    Worked very well, unlike trying to clone via various apps including Clonezilla, which always failed to boot once done.
    This method worked well, and being able to do it live was a plus. In my instance, I connected the new drive via USB due to no spare space in the tiny PC. It was quick and easy to follow. I didn't worry about the last part, i.e. mirroring the ZFS, as my VMs were all contained on the 2nd disk. Nice work.

  • @jurie_erwee 9 months ago +6

    7:15 Something disappeared 👀

  • @ewenchan1239 9 months ago +2

    Three stupid questions:
    1) Do you have a blog post with all of the commands? (Specifically, the syntax for the zpool detach command.)
    2) I am guessing that this really only works if you're going from a smaller drive to a bigger drive, but not the other way around?
    3) You mentioned that if you are using EFI, to leave the grub part out. But I thought that after the EFI loads, it will still go to the GRUB menu in Proxmox, no?
    Your help is greatly appreciated.
    Thank you.

    • @apalrdsadventures 9 months ago +4

      Answers:
      1. openzfs.github.io/openzfs-docs/man/master/8/zpool-detach.8.html is the man page. The short syntax is 'zpool detach <pool> <device>', where device is exactly what 'zpool status' shows.
      2. It works as long as the amount of space consumed by the zpool will fit on the new drive, since it's done by a zfs resilver and not by copying the block device. Similar to replacing a zfs drive with a smaller one.
      3. It depends. For legacy BIOS booting, the grub loader is in the first 1M partition, and then the grub loader loads the kernel / initrd from the EFI partition. For EFI booting without secure boot, grub isn't used at all; systemd-boot is loaded straight out of the EFI partition. For EFI booting with secure boot, grub is stored on the EFI partition. Basically, EFI stores the loader (grub or not) as a file in the FAT partition instead of a dedicated partition. In any case I would copy the 1M partition, whether it's empty or not.
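
      Answer 1 put into commands, as a sketch: `rpool` is the usual Proxmox pool name, and the vdev name is a placeholder for whatever `zpool status` actually prints:

      ```shell
      # See exactly how the old vdev is named in the pool
      zpool status rpool

      # Once resilvering is finished, drop the old drive from the mirror
      zpool detach rpool ata-OLDDRIVE-part3

      # On Proxmox, confirm the new drive's ESP is registered and synced
      proxmox-boot-tool status
      proxmox-boot-tool refresh
      ```
      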

    • @ewenchan1239 9 months ago +1

      @@apalrdsadventures
      Thank you.
      You help is greatly appreciated.

  • @LordApophis100 9 months ago +4

    Those 16GB Optane drives you can get for $5 make great boot drives; they will last forever, and they're enough to host the system with a dedicated data pool.

    • @apalrdsadventures 9 months ago +3

      RIP 3D XPoint memory

    • @eDoc2020 9 months ago +1

      IMO they're better for ZFS special and SLOG devices.

  • @EdvardasSmakovas 9 months ago +3

    Thanks, good to learn other ways to do things. Just, wouldn't it be more correct to zpool attach the NVMe disk by the disk's ID (the same way the first was attached)?

  • @rogeramoe 9 months ago +2

    How did you reclaim the extra space on the 2TB NVME after detaching from zfs pool and booting with it?

    • @apalrdsadventures 9 months ago +2

      When I created the partition table, the partitions are expanded to fill the whole drive (since sfdisk was instructed to not use the last-lba and partition 3 size from the old drive). So zfs sees the full space.
      ZFS will then limit to the space of the smallest mirror in the pool when both are attached, but as soon as I detach the smaller drive the full space is available (even without rebooting). I just rebooted to physically remove the old drive and make sure the new drive is properly bootable (it is).
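
      The partition-table step can be sketched on a sample dump. The dump below is a made-up miniature stand-in; on a real system it would come from `sfdisk -d /dev/<olddisk>` and be applied with `sfdisk /dev/<newdisk>`:

      ```shell
      # A miniature sfdisk -d style dump standing in for the old drive
      cat > parts.txt <<'EOF'
      label: gpt
      label-id: 11111111-2222-3333-4444-555555555555
      device: /dev/sda
      unit: sectors
      first-lba: 34
      last-lba: 500118158

      /dev/sda1 : start=34, size=2014, type=21686148-6449-6E6F-744E-656564454649
      /dev/sda2 : start=2048, size=2097152, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
      /dev/sda3 : start=2099200, size=498018959, type=6A898CC3-1DD2-11B2-99A6-080020736631
      EOF
      sed -i 's/^      //' parts.txt   # strip the here-doc indentation

      # Drop last-lba and partition 3's size= so sfdisk expands both to
      # fill the (larger) new drive when the dump is applied
      sed -i -e '/^last-lba:/d' -e '/sda3/s/size=[^,]*, *//' parts.txt

      cat parts.txt
      # On real hardware:  sfdisk /dev/nvme0n1 < parts.txt
      ```
      
      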

    • @rogeramoe 9 months ago +1

      Understood.
      Really appreciate your Channel. 👍
      @@apalrdsadventures

  • @hpsfresh 17 days ago

    Question. Is it (and why) better than sgdisk /dev/disk/by-id/ -R /dev/disk/by-id/ then sgdisk -G /dev/disk/by-id/ and then extend last partition?

  • @GeoffSeeley 9 months ago +3

    You didn't touch on over-provisioning. I typically leave some NVMe free space, either by namespace or partition, to give some more spare area, although I mostly use old enterprise drives with high endurance, where that isn't as important since they usually have plenty of reserved spare space.

    • @apalrdsadventures 9 months ago +5

      ZFS will properly use discard/trim, so unused space will be free for the wear leveling algorithm to use. In my case, the drive was less than half full before, so now it's less than 1/8 full, and has plenty of empty space for flash endurance.
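
      A sketch of letting ZFS hand free space back to the drive; `rpool` is the usual Proxmox pool name:

      ```shell
      # One-shot TRIM of all free space in the pool
      zpool trim rpool

      # Or let ZFS issue discards continuously as blocks are freed
      zpool set autotrim=on rpool

      # Check TRIM progress / status per vdev
      zpool status -t rpool
      ```
      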

  • @WobblycogsUk 9 months ago +1

    My home-lab and I say thanks. I'd be interested in a deep dive in boot loaders if you are looking for video ideas. I use grub (because that's what Debian installs) but I have a feeling I should really be switching to EFI.

    • @apalrdsadventures 9 months ago

      I really just go with what the installer does for boot loaders, but GRUB is pretty cool (especially theming)

  • @4megii 9 months ago +1

    I'm rather stuck.
    I don't know what to host.
    I have a NAS running TrueNAS with mismatched drives,
    and a single Proxmox node with 16GB of RAM, a 240GB normal SSD, and a 512GB SSD.
    I also have a VPS.
    The reason why I am stuck is that I can't open ports, and I don't know how I can expose things to my domain on the Internet. Some people have said to use a VPN, but I'm not sure.

  • @BGraves 9 months ago +6

    A Chinese gong rings comedically on every bootup.

  • @la3135 9 months ago

    Great video! Just a slightly off-topic question: What PSU are you using?

    • @apalrdsadventures 9 months ago

      It's this - www.mini-box.com/picoPSU-150-XT-150W-Adapter-Power-Kit

  • @MrShiffles 9 months ago

    I've used Clonezilla in the past with great success copying NVMe SSDs (dual-boot Win/Linux systems sometimes) to each other... does the ZFS partition cause problems with Clonezilla?

  • @fio_mak 9 months ago +3

    What's the point of running zpool with just one drive? Where is redundancy in that?

    • 9 months ago

      To be able to use snapshots maybe?

    • @Cynyr 9 months ago +2

      Send/receive between nodes; you still get all the "create a pool for a VM" type stuff.

    • @LordApophis100 9 months ago +2

      You still get all the other benefits of ZFS

    • @apalrdsadventures 9 months ago +3

      ZFS does 3 things - redundancy (merging disks into one), volume management (splitting the pool into sub-parts), and a filesystem. You can still use the volume manager and filesystem features and all of the benefits of zfs on a single-disk system.

  • @BrunodeSouzaLino 9 months ago +1

    But... Is MegaLab sitting on top of MegaBox? Also, there was the odd SSHD technology, where a mechanical drive had something like 8 GB of flash storage that worked as a cache.

    • @Cynyr 9 months ago

      Doesn't seagate still make those?

    • @apalrdsadventures 9 months ago

      MegaLab is on the exact box it came in. Also, Apple sold a 'Fusion Drive' for a while that did that, but for consumer stuff it's cheaper, smaller, and easier to have a flash-only drive now.

    • @BrunodeSouzaLino 9 months ago

      @@apalrdsadventures I happen to have a Seagate one which has 1TB + 8GB of SSD cache. You can still buy those new.

  • @armoredstarfish921 7 months ago

    I found that by far the biggest contributors to SSD wear were the HA services, which you can safely disable if you aren't in a cluster or don't need them - pve-ha-crm & pve-ha-lrm
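
    On a standalone node, that looks like the following (undo with `systemctl enable` if the node ever joins a cluster):

    ```shell
    # Stop and disable the HA managers on a single, non-clustered node
    systemctl disable --now pve-ha-crm.service pve-ha-lrm.service

    # Verify they are no longer running
    systemctl is-active pve-ha-crm.service pve-ha-lrm.service
    ```
    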

  • @stevegraham5494 9 months ago +2

    Rocking the shirt from Veronica Explains! 👍

    • @apalrdsadventures 9 months ago +3

      Most* of the shirts I wear in videos are from other channels that I watch

  • @zyghom 8 months ago

    man, I am just about to replace the SSD in my Proxmox; will follow your guide, let's see where we are in a few minutes ;-)
    EDIT: job done, all successful.
    I had a bit more complicated setup because:
    1- it was a replacement of a broken SATA SSD that works in a mirror with an NVMe SSD in my mini PC
    2- 3 partitions belonged to Proxmox, but the 4th one was passed through to a VM and used there as TrueNAS storage
    3- because of the replacement I had to use: zpool replace pool old_partition new_partition, rather than attach
    4- exactly the same later in TrueNAS
    5- after resilvering all is OK, and I checked booting from the new disk only - works as well
    one comment on your video: attaching "sda1" or "nvme1" to a zfs pool is not the best way - better to use the disk ID or at least the partition UUID - your life can get complicated if you just use /dev/sdaX ;-)
    perfect video, thank you a million!

    • @apalrdsadventures 8 months ago

      Glad it's working well for you! zfs isn't super particular about dev names like some other filesystems on Linux, but using uuid is the best practice still.

    • @zyghom 8 months ago

      @@apalrdsadventures yes, but sdX can be renumbered - my TrueNAS has 6 HDD and every reboot is different sdX. But if we use uuid or diskid it remains the same.

  • @sheldonkupa9120 9 months ago +5

    The wearout is a bit annoying with Proxmox. I wish they would implement a RAM disk for logs like in OpenMediaVault; I assume Proxmox is too professional 😉 Meanwhile I use 2 very cheap small SATA SSDs in a btrfs RAID0. That's quite performant for the OS, and I can replace the SSDs without regret. My VMs reside on my NVMe. I am very happy with the setup.

    • @apalrdsadventures 9 months ago +2

      I'd prefer to log via systemd, but the logs aren't that big and the systemd journal is larger than the Proxmox logs

  • @ronm6585 9 months ago +2

    Thank you.

  • @cd-stephen 9 months ago +1

    well explained - ty

  • @zyghom 8 months ago

    I tried today replacing the second disk in TrueNAS - all was OK, but when I tried to boot the system from the new disk, it failed. So if you could make a video on how to replace a boot-pool disk in TrueNAS, it would be great. Probably something with boot/EFI was not done - apparently "zpool replace..." was not enough to boot from the new disk. In Proxmox there is a command that does the job, but how to do it in TrueNAS?

    • @apalrdsadventures 8 months ago

      I believe TrueNAS has a system to add drives to the boot pool through their UI, although I haven't used TrueNAS in a few years.

    • @zyghom 8 months ago

      @@apalrdsadventures this part I am not sure about - TrueNAS deals nicely with replacing disks that are in user-created pools, but the boot-pool is created by the installer, and I was not able to find "replace disk" in the menu, though I might be wrong. I will try again, as it is good to try while everything works, not after s..t has happened already ;-)
      But I tried from the terminal and all was OK, except the new disk was not bootable

    • @zyghom 8 months ago

      @@apalrdsadventures but magic of TrueNAS is: you download the backup, install from scratch, upload backup and everything is back except ssh keys so 15min job

  • @commanderkniggens8666 8 months ago

    I am new to your channel. Wow, thanks for sharing this - it helps so much with making my IT knowledge special ;-)

  • @vidmonkey 9 months ago +2

    How long did the old SSD last? I've read some comments saying that Proxmox eats consumer-grade SATA/NVMe SSD drives. Any tips for prolonging the life of SSD drives used as a Proxmox boot drive? Any issues storing VMs and ISOs on the boot drive?

    • @apalrdsadventures 9 months ago +3

      I don't think it's any more aggressive with boot drives by itself than other server systems. It has usual system logging, which is not a massive amount of data, but add in the VM disks on top of that and it can add up to a lot of background writes.
      But generally for longer SSD life, using a larger drive and filling it less means each flash cell gets programmed/erased less frequently. The old wisdom was to overprovision (leave empty space in the partition table), but using a modern fs like zfs that supports discard/trim will let the drive know which blocks can be discarded and the free space on the fs is basically the overprovision space. Some zfs tuning can be done (like increasing the block size) as well. Enabling discard support for the VMs also means their free space passes up to the drive as well.
      I'm using this to store the VMs/CTs on my test system, so this system does see all of the use of the VMs in addition to the Proxmox system itself. It's not doing a whole lot, but the VMs do get created/destroyed often as I often walk through my tutorials on a new VM each time.
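
      Enabling discard for a guest, as mentioned above, is per-disk; VM ID 100 and the storage/disk names here are examples, not taken from the video:

      ```shell
      # Let the guest's TRIM commands pass through the virtual disk to
      # the pool (ssd=1 also presents the disk as non-rotational)
      qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1

      # Inside a Linux guest, trim periodically (or mount with -o discard)
      fstrim -av
      ```
      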

    • @BoraHorzaGobuchul 9 months ago

      @@apalrdsadventures From what I heard, Proxmox writes quite a lot to the boot drive, as opposed to, say, ESXi (RIP), which could rather safely be run from a USB flash drive. Particularly if you're using it with a couple of other Proxmox machines in an HA setup.

  • @armandoalvarez7 1 month ago

    Quick question! I am trying to migrate from 2 SATA SSDs (set up in RAID 0) to 1 NVMe SSD; would the video work for my case?

    • @apalrdsadventures 1 month ago

      If you have no RAIDZ (raidz1/raidz2/raidz3) you can follow a similar process, but it's not identical.
      First, add the new NVMe SSD, add the partition table (copy it from either of the other ones), copy grub and copy boot partitions. zpool attach it to one of the sata SSDs. Now you'll have a zfs mirror with one sata ssd + the nvme ssd. Now you can detach the first sata ssd. Make sure to set autoexpand=on to expand the pool with the new space of the NVMe SSD.
      After that, you need to run zpool remove on the second ssd. This will remap all of the data from the second drive to the first one.
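
      The sequence above as a sketch; the device names are placeholders, and `zpool remove` of a data vdev needs a reasonably recent OpenZFS:

      ```shell
      # Pair the NVMe with one stripe member to form a mirror
      zpool attach rpool ata-SATA1-part3 nvme-NEW-part3
      zpool status rpool          # wait until the resilver completes

      # Drop the old member of that new mirror
      zpool detach rpool ata-SATA1-part3

      # Let the pool grow into the NVMe's extra space
      zpool set autoexpand=on rpool

      # Evacuate the second SATA SSD; its data is remapped onto the NVMe
      zpool remove rpool ata-SATA2-part3
      ```
      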

    • @armandoalvarez7 1 month ago

      Your advice helped out amazingly, but I did it just a little bit differently: I was able to remove one of the SSDs from the striped rpool, the zpool resilvered onto the remaining drive, and then I followed your video afterwards. Thank you very much!

  • @ktraglin 5 months ago

    How can I do this without the proxmox-boot-tool, using Ubuntu?

  • @a3-82 3 months ago

    This is a great tutorial,
    many thanks sir.

  • @souk-tv 9 months ago +1

    Proxmox is such an unpolished product. Given how long solid-state drives have been around, you'd think it wouldn't be as bad as it is at destroying SSDs. It's like the SSD Terminator.

    • @apalrdsadventures 9 months ago

      Proxmox really isn't doing a lot to the root disk, it's the VMs that are doing a lot of disk IO.

  • @incandescentwithrage 4 months ago

    How to build a reliable virtualization host:
    1) Desktop board of unknown hardware & driver provenance running Proxmox.
    2) FLIGGIDII 2TB SSD without integrated power loss protection.
    K.

  • @sohail579 5 months ago

    How can I do this, but for my new boot drive I would like to install 2 new mirrored drives?

    • @apalrdsadventures 5 months ago

      you can convert single drives to/from mirrors or add more disks to a mirror using zpool attach and zpool detach.
      In this example I attach the one new drive then detach the one old drive (so it goes single -> 2-way mirror -> single), but you could just as easily prep the 2 new drives (using the same boot / efi partition process on each drive) then attach both (now in a 3-way mirror). Once resilvering is done with both drives, you can detach the first (now in a 2-way mirror).
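
      A sketch of that single → 3-way mirror → 2-way mirror sequence, with placeholder device names:

      ```shell
      # Attach both prepped new drives to the existing single-disk vdev
      zpool attach rpool old-part3 new1-part3
      zpool attach rpool old-part3 new2-part3

      # After 'zpool status' shows the resilver finished on both,
      # drop the old drive, leaving a 2-way mirror of the new drives
      zpool detach rpool old-part3
      ```
      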

    • @sohail579 5 months ago

      @@apalrdsadventures Thanks for the information. I have started to do it now, and when I write the partition file back to the first new disk (not tried the other yet) it completes, but I also get this error: "Partition 1 does not start on physical sector boundary." Is this OK?

    • @sohail579 5 months ago

      OK, also just realized my single boot drive is not in a zpool by itself. Am I screwed here?

    • @apalrdsadventures 5 months ago

      hrm I wonder if your existing drive is using 512 byte sectors and the new drive is using 4096 byte sectors? Usually we partition everything assuming 4096 byte even if the drive claims 512 byte.
      As to the zpool, is the zpool combined with more disks or is it not using zfs at all?

    • @sohail579 5 months ago

      @@apalrdsadventures Yes, it's using 512 if I recall. When I installed (just with the Proxmox installer GUI) I remember thinking "why would I use ZFS with only 1 drive", so I didn't. Now that I'm learning, I'm moving over to 2 drives in a ZFS pool. Do I have a way around this?

  • @ralphvanos9742 9 months ago +5

    Why not just clone it with Clonezilla?

    • @stonent 9 months ago +1

      That's what I would have done.

    • @NFTwizardz 9 months ago +1

      Wait, can I just clone the Proxmox boot drive and plug and play?

    • @apalrdsadventures 9 months ago +6

      Clonezilla requires me to keep the system down during the whole process, but also won't expand the partition table unless I do the same process from Clonezilla instead of the booted system.

    • @Devicesdevices-et3mi 1 month ago

      @@apalrdsadventures Yeap, and Clonezilla failed for me no matter how I tried. It does clone, BUT it fails to boot.
      I then stumbled on your method, which works a treat. I even did mine with the new drive attached via USB, as there was no spare room in the mini PC for the new disk. Once done, swapped it over and booted without an issue. Nice!

  • @diacritic8508 9 months ago

    Wot the Fik??? is how you exclaim when you lose data on an SSD... they could literally multiply their sales overnight by just rebranding it.

  • @citypavement 4 months ago

    Oh no. He's fallen to the dark side.

  • @m0les 9 months ago +1

    I love how you're a tenth dan wizzard in storage tech, but you tape down your SSD and boot by bridging header pins with a twiddler just like skrubs such as I.
    Also, I feel I need to raise the pedantry by pointing out you said "cat", but never actually ran /usr/bin/cat.

    • @eDoc2020 9 months ago

      cat is in /bin, not /usr/bin. Actually, recently the norm is a merged /bin and /usr/bin, so both work, but /bin is the traditional location. IMO if you say "cat" you should show a feline.

    • @m0les 9 months ago

      @@eDoc2020(pedantry increases)