Update:
Luckily, one of my controllers was still in a good "joined" status. I eventually fixed my problem by deleting my other two controllers and redeploying them. I noticed that when deploying a controller to one or two of the hosts, the deployment continued to report that there was no cluster to join. So I redeployed the two other controllers to a single ESXi host that wasn't giving the error. My problem is now fixed and my controllers are all showing "connected".
Worth mentioning: while troubleshooting the original controllers that were showing as disconnected, I noticed that during the boot process of the controller virtual appliances an error message appeared: [host SMBus controller not enabled]. After issuing the [show status] command, I could see that the system partitions (mount points) were not present. After redeploying the controllers a second time, everything seemed to be working.
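For anyone hitting similar symptoms on a Linux-based appliance where you have shell access, a quick way to confirm whether expected partitions are actually mounted is to check /proc/mounts. This is a generic sketch, not the NSX controller CLI; the mount point names you pass in would be whatever your appliance is supposed to have.

```shell
# Hedged sketch: report whether each given path is an active mount point.
# Works on any Linux system; the paths are illustrative placeholders.
check_mounts() {
  for mp in "$@"; do
    # /proc/mounts lists one mounted filesystem per line; field 2 is the mount point
    if awk -v m="$mp" '$2 == m { found=1 } END { exit !found }' /proc/mounts; then
      echo "OK: $mp is mounted"
    else
      echo "MISSING: $mp"
    fi
  done
}

check_mounts /
```

If a partition the appliance depends on shows up as MISSING, that lines up with the "mount points not present" state I saw in [show status].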
*****To clarify, I redeployed the controllers three times. My original problem was that upon powering on the NSX infrastructure, the controllers for some reason ran out of disk space and couldn't start up properly. After rebuilding the controllers, I ran into the issue with the host SMBus controller not loading, which led to the virtual disks not being loaded (hence the mount points not showing up, since the VM couldn't connect to them). After a second redeploy, I ran into the "no cluster to join" error, which was resolved by redeploying the controllers on a known-good host. Now that my controllers are all deployed and synchronized with each other, I'm able to vMotion the controllers across all the hosts.
Message was edited by: James__M
*added the clarification dialogue at the bottom of the post