
Install TrueNAS SCALE on a partition instead of the full disk

The TrueNAS installer doesn't offer a way to use anything less than the full device. This usually wastes resources when installing to a modern NVMe drive, which is often several hundred GB, while TrueNAS SCALE needs only a few GB for its system files. Installing to a 16GB partition leaves the rest of the disk free for data, VMs, and applications.

The easiest way to solve this is to modify the installer script before starting the installation process.

  1. Boot the TrueNAS SCALE installer from a USB stick/ISO

  2. Select shell in the first menu (instead of installing)

  3. While in the shell, run the following commands:

    sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install
    /usr/sbin/truenas-install
    

    For TrueNAS Scale 24.10+ see this comment.

    The first command modifies the installer script so that it creates a 16GiB boot-pool partition instead of using the full disk. The second command restarts the TrueNAS Scale installer.
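
    For reference, the sed changes the partition-creation line inside the installer roughly like this (a sketch; the exact surrounding flags and variable names may differ between releases, only the size argument changes):

        # before: partition 3 takes all remaining space on the disk
        sgdisk -n3:0:0 -t3:BF01 /dev/${DISK}
        # after: partition 3 is capped at 16 GiB
        sgdisk -n3:0:+16384M -t3:BF01 /dev/${DISK}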

  4. Continue installing according to the official docs.

Steps 7-12 in the deprecated guide below have instructions on how to allocate the remaining space to a partition you can use for data. If you are using a single drive, just ignore the steps that have to do with mirroring.
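
For the single-drive case, a minimal sketch of those steps (pool and device names are placeholders; use whatever end value parted reports for the boot-pool partition):

    parted /dev/nvme0n1
    (parted) print                                    # note where partition 3 ends
    (parted) mkpart data-pool <end-of-partition-3> 100%
    (parted) quit
    zpool create data-pool /dev/nvme0n1p4
    zpool export data-pool                            # then import it from the web UI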

Deprecated guide using a USB stick as intermediary

Unfortunately, this used to be possible only by using an intermediate device to act as the installation disk and later moving its data to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE to run from a mirrored 16GB partition on NVMe disks.

For an easier initial partitioning, please see this comment and the discussion that follows. It should remove the need to use a USB stick as an intermediate medium.

  1. Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.

  2. Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.

  3. Check available devices

     $ parted
     (parted) print devices
     /dev/sdb (15.4GB)  # boot device
     /dev/nvme0n1 (500GB)
     /dev/nvme1n1 (512GB)
     (parted) quit
    

If you only have one NVMe disk, just ignore the instructions that involve the second disk (nvme1n1). That disk is used to create a ZFS mirror to handle disk failures.

  4. Clone the boot device to the other devices

     $ cat /dev/sdb > /dev/nvme0n1
     $ cat /dev/sdb > /dev/nvme1n1
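
     The same clone can be done with dd if you prefer progress output (a sketch; both approaches copy the entire device byte for byte):

      $ dd if=/dev/sdb of=/dev/nvme0n1 bs=4M status=progress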
    
  5. Check the partition layout. Fix all the GPT space warning prompts that show up.

     $ parted -l
     [...]
     Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the
     space (an extra 946741296 blocks) or continue with the current setting?
     Fix/Ignore? f
     [...]
     Model:  USB  SanDisk 3.2Gen1 (scsi)
     Disk /dev/sdb: 15.4GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start   End     Size    File system  Name  Flags
      1      20.5kB  1069kB  1049kB                     bios_grub
      2      1069kB  538MB   537MB   fat32              boot, esp
      3      538MB   15.4GB  14.8GB  zfs
     [...]
    

    The other disks' partition tables should look identical to this.

  6. Remove the zfs partition (number 3 in this case) from the new devices. This is the boot-pool partition and we will recreate it later. We remove it because ZFS would otherwise recognize metadata that makes it think the partition is part of the pool when it is not.

     $ parted /dev/nvme0n1 rm
     Partition number? 3
     Information: You may need to update /etc/fstab.
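
     Alternatively, the stale ZFS label can be wiped in place before removing the partition (a sketch; this addresses the same leftover-metadata problem):

      $ zpool labelclear -f /dev/nvme0n1p3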
    
  7. Recreate the boot-pool partition as a 16GiB partition with a slightly later start than before, making sure the start is divisible by 2048 for best performance (526336 % 2048 = 0). We also do this to make sure that ZFS doesn't find any metadata from the old partition.

    Start with the smaller disk if they are not identical.
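
    A quick shell check of the alignment arithmetic used below:

     $ echo $((526336 % 2048))
     0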

     $ parted
     (parted) unit kiB
     (parted) select /dev/nvme0n1
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start    End        Size       File system  Name  Flags
      1      20.0kiB  1044kiB    1024kiB                       bios_grub
      2      1044kiB  525332kiB  524288kiB  fat32              boot, esp
    
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start      End          Size         File system  Name       Flags
      1      20.0kiB    1044kiB      1024kiB                              bios_grub
      2      1044kiB    525332kiB    524288kiB    fat32                   boot, esp
      3      526336kiB  17303552kiB  16777216kiB               boot-pool
    
  8. Now you can create a partition allocating the rest of the disk.

     (parted) mkpart pool 17303552kiB 100%
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  9. Do the same for the next device, but this time use the same values as in the printout above. We do this to make sure that the partitions are exactly the same size. In this example the disks are slightly different in size, so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk.

     (parted) select /dev/nvme1n1
     Using /dev/nvme1n1
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) mkpart pool 17303552kiB 488386560kiB
     (parted) print
     Model: TS512GMTE220S (nvme)
     Disk /dev/nvme1n1: 500107608kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  10. Make the new system partitions part of the boot-pool. This is done by attaching them to the existing pool while detaching the USB drive.

    $ zpool attach boot-pool sdb3 nvme0n1p3
    

    Wait for resilvering to complete, check progress with

    $ zpool status
    

    When resilvering is complete we can detach the USB device.

    $ zpool offline boot-pool sdb3
    $ zpool detach boot-pool sdb3
    

    Finally add the last drive to create a mirror of the boot-pool.

    $ zpool attach boot-pool nvme0n1p3 nvme1n1p3
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    

    At this point you can remove the USB device and when the machine is rebooted it will start up from the NVMe devices instead. Check BIOS boot order if it doesn't.

  11. Now that the boot-pool is mirrored we want to create a mirror pool using the remaining partitions.

    $ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    
    pool: pool1
    state: ONLINE
    config:
    
            NAME           STATE     READ WRITE CKSUM
            pool1          ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p4  ONLINE       0     0     0
                nvme1n1p4  ONLINE       0     0     0
    

    To be able to import it in the Web UI, we first need to export it.

    $ zpool export pool1
    
  12. All done! Import pool1 using the Web UI and start enjoying the additional space.

@paolomainardi

Thanks! The question was more about partitioning the two disks. Instead of letting TrueScale use all of them, I have two NVMe drives, and I want to utilize the free space, perhaps for cache.

@sthames42

> Thanks! The question was more about partitioning the two disks. Instead of letting TrueScale use all of them, I have two NVMe drives, and I want to utilize the free space, perhaps for cache.

In that case, I think all you would have to do, following step 9, is use parted to find the second NVMe drive, partition it the same as the boot device using sgdisk, and mirror the boot pool as I described above. You can then create a mirrored vdev of the remaining partitions to use as a cache.

If your boot disk is nvme0n1 and the second NVME drive is nvme0n2, you can replicate the boot drive partition table with sgdisk:

    sgdisk /dev/nvme0n1 -R /dev/nvme0n2
    sgdisk -G /dev/nvme0n2

This will make an exact copy of your boot disk's partition table with new GUIDs. You should then be able to see all the new partitions in Storage->Disks and create your mirrors: nvme0n1p3 and nvme0n2p3 would be your boot-pool mirror, and nvme0n1p4 and nvme0n2p4 the remaining partitions, which can be mirrored as well, as sketched below.
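
A minimal sketch of that follow-up, reusing the commands from steps 10-11 of the guide (the data pool name is a placeholder):

    zpool attach boot-pool nvme0n1p3 nvme0n2p3         # mirror the boot pool
    zpool create data-pool mirror nvme0n1p4 nvme0n2p4  # mirror the remaining partitions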

If you decide to do all this when installing a new NAS, no worries. But please be careful when doing this on an existing server. I have done all this on a Proxmox server but not on a TrueNAS server. Please let me know how it works.

@VaillantHassIo

Good guide, @sthames42! One question though: when I import the nvme0n1p4 (ssd-boot pool) it always comes with the dataset ACL SMB4/NFS and ACL Passthrough set. Is that correct, and if so, why? I thought it was handled like a separate disk/partition.

@sthames42

sthames42 commented Jan 15, 2025

> Good guide, @sthames42! One question though: when I import the nvme0n1p4 (ssd-boot pool) it always comes with the dataset ACL SMB4/NFS and ACL Passthrough set. Is that correct, and if so, why? I thought it was handled like a separate disk/partition.

@VaillantHassIo, I don't see those options in my TrueNAS Scale server. Where are you seeing these options?

According to the docs, these settings are applied based on the usage selected for the dataset. But if you're just importing a dataset created in the CLI, perhaps these settings are applied automatically. IDK.

@toot

toot commented Jan 30, 2025

@sthames42 thanks a lot for the detailed description.
I discovered using an SSD that I had to adjust the command in step 10:
"zpool create ssd-pool /dev/sda4" worked for me; ".../sdap4" ended in the error "cannot resolve path".

@e-minguez

Just in case, here is the source code of the installer in the truenas repository: https://github.com/truenas/truenas-installer/blob/master/truenas_installer/install.py#L81

@sthames42

@toot, I'm guessing you were not using NVMe drives. The name of the device and its partitions depends on the driver, which is the SCSI driver most of the time (e.g. sda4). The NVMe driver names the drives and partitions differently (e.g. nvme0n1p4).

@kstepien3

@Kordrad, I've never worked with L2ARC, but it looks like you don't create another pool; rather, you add a vdev to your existing pool as a cache. So you would want to remove the nvme-pool and then add a single-disk vdev with the nvme0n1p4 partition to the target pool, as sketched below. Like I said, I've never done this and gleaned it from some brief research. I suggest more thorough research before proceeding.
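
For reference, adding a partition to an existing pool as an L2ARC cache device looks like this (a sketch; tank is a placeholder pool name):

    zpool add tank cache nvme0n1p4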

Thanks!

@Kalua1001

Kalua1001 commented Feb 1, 2025

I get the following error when I try to install on a SATA drive (TrueNAS SCALE 24.10.1):

    Command sgdisk n3:0:+32768M -3:BF01 /dev/sda failed:
    Problem opening n3:0:+32768M for reading! Error is 2.
    The specified file does not exist!

Can someone help? Thanks.

EDIT: with 24.10.2 it seems to work now

@gpz1100

gpz1100 commented Apr 11, 2025

Want to thank the OP for writing this guide.

I understand it's not recommended or supported by iXsystems, but it is adequate for some situations. I'm setting up a remote replication target using an old Dell with a single drive. I could shoehorn an SSD in there, but that seems awfully wasteful. I realize that if the disk crashes I lose not only the pool but the OS as well (no mirror, literally a single disk). I think it's a risk I'm willing to take.

@maxgomez89

maxgomez89 commented Jun 14, 2025

Thanks for the write-ups. I had to tweak the instructions a bit to include additional context, as I did some head->desk to figure out why I was getting errors like mount failed to create mountpoint read-only file system or parted error unable to satisfy all constraints on the partition. Anyway, here's my write-up:

Creating a partition within your boot drive in order to use the wasted/unused/leftover space on the drive for apps, VMs, etc.

As of X(todo) version, the installer uses a Python 3 script, so don't listen to anyone who says to use /usr/sbin/truenas-install.
I got most of my information from here (homie is just a lil too vague for c&p instructions):
https://gist.github.com/gangefors/2029e26501601a99c501599f5b100aa6?permalink_comment_id=5297295#gistcomment-5297295

AMD Ryzen 5 2600X
16GB DDR4 RAM
256GB NVMe (32GB boot partition / the rest is for apps)
2x 14TB HDD, mirrored

  1. Download TrueNAS Scale and burn to USB with Rufus (DD Mode).
  2. Boot from USB and select Start TrueNAS SCALE Installation from the GRUB loader menu.
  3. Select Shell from the Console Setup menu.
  4. From @transfairs, modify the installer to create a 32GB boot partition instead of using the entire drive:
    sed -i 's/-n3:0:0/-n3:0:+32G/g' /usr/lib/python3/dist-packages/truenas_installer/install.py
  5. Type exit to return from the shell.
  6. Select Install/Upgrade from the Console Setup menu (without rebooting first) and install to the NVMe drive.
  7. Remove the USB and reboot.

Next we re-partition the NVMe. You must perform this next step on the console; SSH will not give you the correct access.

  1. Log in to the Linux shell (currently option 8; it used to be option 7).
  2. Use the parted tool to enter the parted CLI.
  3. Set the unit to KiB with unit KiB.
  4. print list to show your current NVMe layout. It will look something like this (c&p from @sthames42):
     Model: HP SSD EX900 Plus 1TB (nvme)
     Disk /dev/nvme0n1: 1024GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:

     Number  Start        End            Size          File system  Name       Flags
      1      2048kiB      3072kiB        1024kiB                               bios_grub, legacy_boot
      2      3072kiB      527360kiB      524288kiB     fat32                   boot, esp
      3      527360kiB    17304576kiB    16777216kiB   zfs
  5. For clarity, rename the boot-pool partition so you know exactly what is what:

      name 3 boot-pool
  6. Confirm the change:

      print list
      Model: HP SSD EX900 Plus 1TB (nvme)
      Disk /dev/nvme0n1: 1024GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Disk Flags:

      Number  Start        End            Size          File system  Name       Flags
       1      2048kiB      3072kiB        1024kiB                               bios_grub, legacy_boot
       2      3072kiB      527360kiB      524288kiB     fat32                   boot, esp
       3      527360kiB    17304576kiB    16777216kiB   zfs          boot-pool
  7. Reduce the size of the boot-pool partition using resizepart <number> <end>. In this example, 32GB = 31,250,000 KiB, so adding the start address of partition 3 gives 527360 + 31250000 = 31777360.

      resizepart 3 31777360kiB

     This resizes the boot-pool partition (ID #3) so that it ends 32GB after its start address.
  8. print list to confirm the resize:

      print list
      Model: HP SSD EX900 Plus 1TB (nvme)
      Disk /dev/nvme0n1: 1024GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Disk Flags:

      Number  Start        End            Size          File system  Name       Flags
       1      2048kiB      3072kiB        1024kiB                               bios_grub, legacy_boot
       2      3072kiB      527360kiB      524288kiB     fat32                   boot, esp
       3      527360kiB    31777360kiB    31250000kiB   zfs          boot-pool
  9. Now we can make the new partition. Just call mkpart and follow the prompts (a non-interactive equivalent is sketched after this list):
     for the name, use apps-pool
     for the type, I used zfs
     for the start address, I used the end of partition 3 + 2048
     for the end, I used 100%
     Note: somewhere around this step you'll get a warning about the partition not being byte-aligned. I think that's due to the end address; I ignored it.
  10. One more print list to confirm the new partition exists. I won't post an example of this one.
      Congrats, you've just created /dev/nvme0n1p4 in this example!
  11. Type exit to return to the Linux console.
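
The non-interactive equivalent of the mkpart prompts in step 9 (a sketch; the start value assumes partition 3 ends at 31777360kiB as above, plus the 2048kiB gap):

    (parted) mkpart apps-pool zfs 31779408kiB 100%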

Now we need to set up the zpool for the UI to pick it up.

  1. Let's check what zpools we have first with zpool status:

       pool: boot-pool
      state: ONLINE
     config:

             NAME         STATE     READ WRITE CKSUM
             boot-pool    ONLINE       0     0     0
               nvme0n1p3  ONLINE       0     0     0

     errors: No known data errors
  2. Let's create a new one pointing at the new partition we created, nvme0n1p4:

      zpool create apps-pool /dev/nvme0n1p4
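
     Optionally, the sector alignment can be set explicitly at creation time (a sketch; ashift=12 means 4KiB sectors, a common choice for NVMe):

      zpool create -o ashift=12 apps-pool /dev/nvme0n1p4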
  3. Another zpool status to make sure it's there:

       pool: boot-pool
      state: ONLINE
     config:

             NAME         STATE     READ WRITE CKSUM
             boot-pool    ONLINE       0     0     0
               nvme0n1p3  ONLINE       0     0     0

     errors: No known data errors

       pool: apps-pool
      state: ONLINE
     config:

             NAME         STATE     READ WRITE CKSUM
             apps-pool    ONLINE       0     0     0
               nvme0n1p4  ONLINE       0     0     0

     errors: No known data errors
  4. Finally, we export it so it can be imported from the TrueNAS UI:

      zpool export apps-pool

  5. Voilà! You're done. Just reboot for it to take effect.
