Remote collectors need only a minimal disk footprint, but the appliance is deployed with 250 GB of disk space. The design of the remote collector is as follows:
- No database of its own
- Only contains some local files
- Needs space only for the installed adapters and any temporary data those adapters require.
So, why is it set at 250 GB? One of the downsides of shipping a single OVF/OVA for every deployment type is that, due to OVF limitations, the disk size cannot vary for different conditions.
If you don't want to use thin provisioning for remote collectors, or simply want a smaller disk for any other reason, follow the procedure below.
Note: This process must be done after the virtual appliance has been deployed and before it has been powered on.
There is no easy way in vSphere to reduce the size of a disk. The best alternative is to modify the disks of the vROps virtual appliance before it is started for the first time.
If you are running a remote collector and don't want or need the 250 GB data disk, the following procedure can be used. Note: the data disk should never be smaller than 60 GB, because the disk is allocated as follows:
- Core partition: 20 GB
- Log partition: 20 GB
- Data partition: the remainder (grows to fill the disk)
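The 60 GB floor follows directly from the allocation above: the two fixed partitions consume 40 GB, leaving the rest for the data partition. A small sanity-check sketch (the function name is mine, not part of vROps):

```python
# Partition allocation for a remote collector data disk (values in GB),
# as described above: two fixed partitions plus a growing data partition.
CORE_GB = 20       # core partition
LOG_GB = 20        # log partition
MIN_TOTAL_GB = 60  # documented minimum data disk size

def data_partition_gb(disk_gb: int) -> int:
    """Space left for the growing data partition after the fixed partitions."""
    if disk_gb < MIN_TOTAL_GB:
        raise ValueError(f"data disk must be at least {MIN_TOTAL_GB} GB")
    return disk_gb - CORE_GB - LOG_GB

print(data_partition_gb(60))   # -> 20
print(data_partition_gb(250))  # -> 210
```

At the 60 GB minimum, the data partition still gets 20 GB; the default 250 GB disk leaves 210 GB that a remote collector will never meaningfully use.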
This procedure should only ever be done on Remote Collectors.
The target Virtual Device Node layout is:
- Binary Disk: SCSI (0:0) – 12 GB
- Data Disk: SCSI (0:1) – 60 GB
- Boot Disk: SCSI (0:2) – 4 GB
The steps to make this happen:
- Deploy the OVF or OVA, do NOT power it on
- Edit the VM settings and remove Hard Disk 2; I also recommend choosing to delete the files from the datastore
- Add a new hard disk of at least 60 GB, and make sure its Virtual Device Node is set to SCSI (0:1)
- Note 1: If you click 'OK' to close the edit settings dialog between removing and re-adding the disk, you will also need to change the Virtual Device Node of the boot disk (4 GB) back to SCSI (0:2); in other words, do not click 'OK' between the previous two steps.
- Note 2: This changes the data disk's label from 'Hard Disk 2' to 'Hard Disk 3' (and the inverse for the boot disk). That is okay; the Virtual Device Node is what matters.
- Power on the node
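The label shift described in Note 2 can be illustrated with a small conceptual sketch (pure Python with my own names, not a vSphere API): vSphere assigns 'Hard Disk N' labels by device order, while the SCSI unit number is an independent property that survives the remove-and-re-add.

```python
from dataclasses import dataclass

@dataclass
class Disk:
    purpose: str
    scsi_unit: int  # the x in SCSI (0:x), the Virtual Device Node
    size_gb: int

# Layout as the OVA deploys it, before any edits.
disks = [
    Disk("binary", 0, 12),
    Disk("data", 1, 250),
    Disk("boot", 2, 4),
]

# Remove the 250 GB data disk, then add a 60 GB disk at SCSI (0:1).
disks = [d for d in disks if d.purpose != "data"]
disks.append(Disk("data", 1, 60))

# Labels follow device order, so the new data disk shows as
# 'Hard Disk 3' even though its device node is still SCSI (0:1).
labels = {f"Hard Disk {i + 1}": (d.purpose, d.scsi_unit)
          for i, d in enumerate(disks)}
print(labels)
# -> {'Hard Disk 1': ('binary', 0), 'Hard Disk 2': ('boot', 2),
#     'Hard Disk 3': ('data', 1)}
```

This is why the procedure keys off the Virtual Device Node rather than the disk label.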
- To verify everything worked, once the node is up and running connect to the console or SSH in, run 'df -h', and confirm that /dev/mapper/data-core, /dev/mapper/data-log, and /dev/mapper/data-db are all mounted. The output should look similar to:
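The original sample output was not captured here; an illustrative `df -h` excerpt for the three mapped devices named above (mount points and size/usage figures are my assumptions, not taken from an actual appliance) might look like:

```
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/data-core   20G  1.1G   18G   6% /storage/core
/dev/mapper/data-log    20G  256M   19G   2% /storage/log
/dev/mapper/data-db     20G  512M   19G   3% /storage/db
```

If any of the three data-* devices is missing, revisit the Virtual Device Node assignments before the node joins a cluster.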