Cluster setup
The cluster nodes are used for message passing development work. The nodes are diskless: in contrast to the powerwall nodes there are no hard drives. Each node boots off a floppy containing just 9load and a plan9.ini, and fetches its kernel from the fileserver. The GigE nodes also have a 100bt interface connected, because 9load does not contain the GigE driver. For the 100bt attached nodes (ps1-ps8) the plan9.ini is:
distname=plan9
partition=new
ether0=type=i82557
bootfile=ether0
bootargs=il
mouseport=ps2
console=0
baud=9600

For the GigE attached nodes (p1-p18) the plan9.ini is:
distname=plan9
partition=new
ether0=type=ga620
ether1=type=i82557
bootfile=ether1
bootargs=il
mouseport=ps2
console=0
baud=9600

The boot floppies are made with the command:
pc/bootfloppy /dev/fd0disk /tmp/plan9.ini

As with the powerwall machines, the bootfile arg tells each machine that it will get its kernel using bootp/tftp over the network from the auth server. All machines use the 100bt interface to fetch the kernel, since 9load does not contain the GigE driver. The auth server knows which kernel to send from the bootf attribute in /lib/ndb/local: /386/9p for machines p1-p18 and /386/9ps for machines ps1-ps8. Both kernels are actually the same, largely default cpu server kernels using the cpurc.
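The per-node bootf attribute lives in the host entries of /lib/ndb/local. A minimal sketch of two such entries is below; the MAC and IP addresses are placeholders, not the real cluster values:

# GigE node: the auth server answers its bootp request with /386/9p
sys=p1
	ether=00105a112233
	ip=10.0.0.1
	bootf=/386/9p

# 100bt node: answered with /386/9ps
sys=ps1
	ether=00105a445566
	ip=10.0.0.21
	bootf=/386/9ps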
4th Ed. Upgrade
Upgrading to the 4th Ed. required updating the 9load on the floppies because the pci detection code had changed. In the 3rd Ed. the second i82557 interface was detected first; with the 4th Ed. they are detected in the correct order. This also meant that we had to move the ethernet cable from the secondary to the primary interface on all nodes and update /lib/ndb/local to reflect the new MAC addresses.
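Concretely, the ndb change amounts to swapping in the MAC of the interface that is now primary. A hypothetical before/after for one node, with placeholder addresses:

# 3rd Ed.: the entry carried the MAC of the secondary port, which 9load detected first
sys=p1 ether=00105a112233 ip=10.0.0.1 bootf=/386/9p
# 4th Ed.: the same entry now carries the primary port's MAC
sys=p1 ether=00105aaabbcc ip=10.0.0.1 bootf=/386/9p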
Last Modified: May 27 2002
dpx@acl.lanl.gov