Diskless Linux Booting with ZFS and iSCSI
A guide to network-booting Linux clients from ZFS-backed iSCSI targets, centralizing root filesystems for rapid, repeatable deployment.

Tired of manually provisioning each server’s OS, only to face the inevitable drive failures and imaging headaches? You’ve landed on the right page if you’re aiming to modernize your deployment infrastructure.
Traditional server deployments often rely on local storage, leading to a decentralized, repetitive, and error-prone process. Managing updates, patching, or recovering from hardware failures becomes a manual, time-consuming ordeal. Diskless booting, particularly when leveraging robust technologies like ZFS and iSCSI, offers a powerful solution by centralizing your operating system’s root filesystem onto a network-accessible storage server.
Achieving a diskless Linux boot involves a coordinated dance between several key components: a DHCP server, a TFTP server, an iSCSI target (ideally powered by ZFS), and an iSCSI initiator on the client.
1. Network Boot (PXE/iPXE): The Ignition Sequence
The client’s Network Interface Card (NIC) initiates the process. It broadcasts a DHCP request, which your DHCP server answers not only with an IP address but also with the location of a TFTP server. This TFTP server hosts the initial bootloader, typically undionly.kpxe, a build of iPXE that reuses the NIC’s existing PXE driver (UNDI) for broad compatibility. iPXE is a far more capable network bootloader, with support for scripting and SAN boot protocols such as iSCSI.
Your DHCP configuration might look something like this (simplified for illustrative purposes):
# DHCP server configuration (e.g., ISC dhcpd.conf)
option space pxelinux;
option pxelinux.magic code 208 = string;
option pxelinux.configfile code 209 = text;
....
next-server 192.168.1.100; # TFTP server IP
# Serve undionly.kpxe to plain PXE ROMs, but hand iPXE its script directly
# so it doesn't chainload itself forever ("boot.ipxe" is an illustrative name)
if exists user-class and option user-class = "iPXE" {
    filename "boot.ipxe";
} else {
    filename "undionly.kpxe"; # Initial PXE bootloader
}
The iPXE script, delivered via TFTP, is where the magic truly begins. It instructs the client to connect to your iSCSI target:
#!ipxe
# Example iPXE script
dhcp
sanboot iscsi:<iSCSI_TARGET_IP>:::<LUN_ID>:<IQN>
For example: sanboot iscsi:192.168.1.200:::0:iqn.2026-05.com.example.storage:my-server-root
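sanboot hands the whole boot over to whatever bootloader lives on the target. Alternatively, iPXE can fetch the kernel and initramfs itself and pass the iSCSI details on the kernel command line, which is common for Linux clients. A minimal sketch, assuming an HTTP server at 192.168.1.100 hosting the boot files (all paths, addresses, and the IQN are illustrative):

```
#!ipxe
dhcp
# Fetch kernel and initramfs over HTTP, then point the kernel at the iSCSI root
kernel http://192.168.1.100/vmlinuz ip=dhcp netroot=iscsi:192.168.1.200:::0:iqn.2026-05.com.example.storage:my-server-root root=/dev/sda rw
initrd http://192.168.1.100/initrd.img
boot
```

Fetching over HTTP rather than TFTP is noticeably faster for multi-megabyte kernels and initramfs images.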
2. Centralized Storage (ZFS + iSCSI): The Root of All Boot
This is where ZFS shines. Instead of raw disks, you’ll carve out block devices for your clients from your ZFS storage pool. This can be done using ZFS zvols (zfs create -V <size> <pool>/<dataset>/<zvolname>) or by creating image files within a ZFS dataset and exposing them via iSCSI.
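On the storage server, provisioning a client can be a handful of ZFS commands, and snapshots plus clones make stamping out new roots from a golden image nearly instantaneous. A sketch, assuming a pool named tank and illustrative dataset names:

```
# Create a sparse 20 GiB zvol to back one client's root disk
zfs create -s -V 20G tank/diskless/client01-root

# Provision further clients by cloning a snapshot of a "golden" image
zfs snapshot tank/diskless/golden-root@v1
zfs clone tank/diskless/golden-root@v1 tank/diskless/client02-root
```

The -s flag makes the zvol sparse, so space is only consumed as clients actually write data.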
Your iSCSI target daemon (e.g., LIO via targetcli) will then present these zvols or image files as block devices over the network.
# Example targetcli configuration for LIO
/> /backstores/block create name=client-root dev=/dev/zvol/<pool>/<dataset>/<zvolname>
/> /iscsi create iqn.2026-05.com.example.storage:target0
/> /iscsi/iqn.2026-05.com.example.storage:target0/tpg1/luns create /backstores/block/client-root
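Before attempting a network boot, it’s worth confirming from any Linux machine with open-iscsi installed that the target is actually being exported (address illustrative):

```
# Ask the target to list what it exports
iscsiadm -m discovery -t sendtargets -p 192.168.1.200
```

If the discovery comes back empty, fix the target configuration or firewall first; no amount of iPXE debugging will help.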
3. Client Bootstrapping: The Kernel and Initramfs
The client’s Linux initramfs is the crucial component that bridges the network boot and the iSCSI storage. It must be compiled with the necessary network drivers and iSCSI initiator modules. Kernel command-line parameters are essential for telling the kernel how to find and mount its root filesystem over iSCSI.
You’ll pass parameters like these to the kernel:
root=/dev/sda # Or the device name assigned by the initiator; root=UUID=<fs_uuid> is more robust
rootdelay=10
ip=<client_ip>:<server_ip>:<gateway_ip>:<netmask>:<hostname>:<interface>:<autoconf>
netroot=iscsi:<iSCSI_TARGET_IP>:::<LUN_ID>:<IQN> # dracut-style syntax
Note: Some distributions might use different parameter names or require specific modules to be loaded in the initramfs.
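Put together, a complete command line for a dracut-based initramfs might look like this (addresses, hostname, and IQN are illustrative):

```
root=/dev/sda rw rootdelay=10 \
  ip=192.168.1.50:192.168.1.200:192.168.1.1:255.255.255.0:client01:eth0:none \
  netroot=iscsi:192.168.1.200:::0:iqn.2026-05.com.example.storage:my-server-root
```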
Crucially, you must prevent the OS from reconfiguring the network interface post-boot to avoid losing its iSCSI connection.
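With NetworkManager, for example, the boot interface can be declared unmanaged so the interface carrying the iSCSI session is left alone (filename and interface name are illustrative):

```
# /etc/NetworkManager/conf.d/99-unmanaged-iscsi.conf
[keyfile]
unmanaged-devices=interface-name:eth0
```

Other network stacks have equivalents; the point is that nothing post-boot may bounce the link the root disk depends on.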
The r/sysadmin and r/homelab communities often sing the praises of this setup for its centralized management benefits, especially in scenarios like GPU compute nodes where managing local disks would be a nightmare.
While iSCSI offers block-level access, its complexity leads some to consider NFS for diskless roots. NFS is simpler to set up but operates at the filesystem level, which can have performance implications and lacks the direct block device abstraction. Quasi-diskless setups, using a small local drive for /boot, are another compromise.
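For comparison, an NFS-root client skips the iSCSI initiator entirely and mounts its root at the filesystem level, with kernel parameters along these lines (server address and export path illustrative):

```
root=/dev/nfs nfsroot=192.168.1.200:/export/client01,vers=3 ip=dhcp rw
```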
Diskless Linux booting with ZFS and iSCSI is a powerful, albeit complex, solution for modernizing OS deployment. It enables rapid provisioning, simplifies management through ZFS snapshots and replication, and centralizes your root filesystems.
However, this is not for the faint of heart or unstable networks. The setup is intricate, and troubleshooting “inaccessible boot device” errors requires a deep understanding of DHCP, PXE, iSCSI, and the Linux boot process. iSCSI is sensitive to network disconnections, and default configurations lack encryption, necessitating VLANs or IPsec for security. Performance can be a bottleneck on 1GbE, though 10GbE significantly alleviates this. Distributions that don’t natively support direct iSCSI installation (like older Debian/Ubuntu releases) will require additional tooling.
Verdict: Use this if you demand centralized control, rapid deployment, and the data integrity features of ZFS for your root filesystems, and have a highly reliable, high-performance network infrastructure to back it up. If network stability is questionable or your team lacks deep networking and storage expertise, look elsewhere.