Could you please help with DP (distribution point) setup on Ubuntu? I followed the documentation for Windows, but it doesn't work on a Linux box. Clients can't mount the SMB share.
Is it possible to add NFS support, as it was in CWDS?
Ok, I found how to make it work. Here is the background and the solution.
We are using a customized CWDS as a VM. In our case, images have to be stored on and pulled from a local NAS using the NFS protocol. This setup lets us deploy images at maximum speed (the cluster has 15 SFP+ connections), so each node can easily handle multiple deployments at full gigabit. The VM is also easy to manage because it is a light Linux machine. This setup was working perfectly with the Dell and Supermicro servers we are running.
Recently I had to build a new VR box with an Asus motherboard, based on Win 8.1. CWDS couldn't handle the NIC on this machine, which is why I decided to try CD. I know I could build a new kernel with updated drivers to use with CWDS, but... it is a good time to update.
Here are the two options I found best for my setup.
1. Use the default distribution point and mount the SMB shares from the NAS on the Linux box. Not ideal, because clients will mount the SMB share from the VM itself, so deployment speed will be limited to the VM's throughput.
2. The best option I have found so far: create an SMB share on the NAS, add rw/ro users, and set up this share as the primary distribution point for CD. The tricky part is that I still have to mount the share locally on the VM so CD has a physical path to create and manage files. But clients mount the share directly from the NAS to upload and download images.
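For option 2, the local mount on the VM can be made persistent with an /etc/fstab entry along these lines. This is only a sketch: the IP address, share name, mount point, credentials file, and ownership are placeholders, not values from this setup.

```shell
# Requires cifs-utils (sudo apt-get install cifs-utils).
# Placeholder /etc/fstab line: adjust the NAS IP, share name, mount point,
# and uid/gid to whatever user the CloneDeploy web app runs as.
//192.168.1.50/cd_share  /cd_dp  cifs  credentials=/root/.smbcreds,uid=www-data,gid=www-data,_netdev  0  0

# /root/.smbcreds (chmod 600) would contain the rw user created on the NAS:
#   username=cd_rw
#   password=secret
```

After adding the line, `sudo mkdir -p /cd_dp && sudo mount -a` mounts it without a reboot; `_netdev` delays the mount until the network is up at boot.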
The SMB workaround works fine, but it would be nice to have an NFS option available in the Linux version...
And thanks a lot for your hard work and for offering this product for free!
Option 1 is the default setup: an SMB share on the same server as CloneDeploy.
Option 2: this user changed the default distribution point to a different server, so the SMB users are created on that server, not the CloneDeploy server.
I would personally probably leave things at the defaults. Distribution points haven't really been fully developed yet; that's what I'm focusing on for the 1.3.0 release.
Yes, basically the same as what you just said, but set up the SMB share somewhere else (a NAS, etc.). Then mount that share on the CD server and update the DP's physical path and share settings.
Clients will connect directly to that share, and the CloneDeploy web interface has access to create and delete images.
Not much to say, really. Just set up a share anywhere else. It could be a Windows machine, Linux, a NAS device, DFS, a SAN, etc.; it doesn't matter. Then the SMB server on your CloneDeploy server won't be used. I can't give exact commands because they depend on where you set it up.
I ran the commands I mentioned earlier.
It *did* use the server (in between) rather than connect directly to the external storage.
I know because the file transfers were about 10 times slower (I verified this by scp-ing from /cd_dp/ vs. /mnt/external_storage).
So what is missing to make this connection direct?
The distribution point share settings are the key: they tell the client how to reach the share. Assuming /mnt/external_storage is a share, you would put the necessary IP, username, password, etc. in the distribution point to connect to that share. By default it points to CloneDeploy; to cut CloneDeploy out, you need to make those changes.
/mnt/external_storage is a local mount point on the CloneDeploy server to the external storage.
If I understand correctly, you're suggesting that I keep this mount point, keep smb.conf pointing at it, but change/add the distribution point with the address of the external storage instead of the CloneDeploy server?
Yes, keep the mount point.
smb.conf doesn't matter; it won't be used in any way.
Update the DP with the address of the external storage, the share name, etc., and change the users and passwords to match.
Change the DP's physical path to the mount point.
The CloneDeploy server uses the physical path to create and delete images.
Everything else is only used by the client
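Putting those steps together, the distribution point would end up looking roughly like the sketch below. The field names are approximate and every value is a placeholder invented for illustration, not something from this thread:

```
Server:         192.168.1.50     # the external storage / NAS, not the CloneDeploy server
Share Name:     cd_share
RW Username:    cd_rw            # must match the users created on the NAS
RW Password:    ********
RO Username:    cd_ro
RO Password:    ********
Physical Path:  /cd_dp           # local mount of that same share on the CloneDeploy server
```

Clients use the server, share name, and credentials to talk to the storage directly; only the physical path is used by the CloneDeploy server itself.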