Ubuntu Remote Distribution Point

  1. last year

    Could you please help with remote distribution point (RDP) setup on Ubuntu? I followed the documentation for Windows, but it doesn't work for a Linux box. Clients can't mount the SMB share.

    Is it possible to add NFS support, as there was in CWD?

    Thank you.

  2. Ok, I found how to make it work. Here is background and solution.

    We are using a customized CWDS as a VM. In our case, images have to be stored on and pulled from a local NAS using the NFS protocol. This setup lets us deploy images at maximum speed (the cluster has 15 SFP+ connections), so each node can easily handle multiple deployments at full gigabit. The VM is also easy to manage because it is a lightweight Linux machine. This setup was working perfectly with the Dell and Supermicro servers we are running.

    Recently I had to build a new VR box with an Asus motherboard, based on Win 8.1. CWDS couldn't handle the NIC on this machine, which is why I decided to try CD. I know I could build a new kernel with updated drivers to use with CWDS, but... it is a good time to update.

    Here are the two options I found most usable for my setup.

    1. Use the default distribution point and mount the SMB shares from the NAS on the Linux box. Not ideal, because clients will mount the SMB share from the VM itself, and deployment speed will be limited to the VM's throughput.

    2. The best option I found so far: create an SMB share on the NAS, add rw/ro users, and set up this share as the primary distribution point for CD. The tricky part is that I still have to mount the share locally on the VM so CD has a physical path to create and manage files. But clients mount the share directly from the NAS to upload and download images.
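    For anyone following along, option 2 can be sketched roughly like this. All hostnames, share names, and credentials below are placeholders, and creating the share on the NAS itself is vendor-specific:

    ```shell
    # On the NAS (vendor-specific): create an SMB share, e.g. "images",
    # with a read-write user for uploads and a read-only user for deployments.

    # On the CloneDeploy VM: mount that same share locally so CD has a
    # physical path for creating and managing image files
    sudo mkdir -p /mnt/nas_images
    sudo mount -t cifs //nas.example.local/images /mnt/nas_images \
        -o username=cd_rw,password=secret

    # In the CloneDeploy web UI, point the distribution point at the NAS,
    # so clients mount it directly instead of going through the VM:
    #   Server:        nas.example.local
    #   Share name:    images
    #   Physical path: /mnt/nas_images
    #   RW / RO users: cd_rw / cd_ro
    ```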

    The workaround with SMB works fine, but it would be nice to have an NFS option available for the Linux version...

    And thanks a lot for your hard work and for offering this product for free!

  3. clonedeploy

    21 Oct 2016 Administrator

    Your workaround is the correct way to do it:
    mount the share on the server for the CloneDeploy server to use,
    and create users directly on the share for clients to connect to.

    There is no longer an option for NFS; it was removed because it is not secure.

  4. Can you please elaborate on the difference between #1 and #2?
    And what does it mean to "create users directly on the share for clients to connect to"?

  5. clonedeploy

    6 Apr 2017 Administrator

    Option 1 is the default setup. An SMB share on the same server as CloneDeploy.

    Option 2: this user changed the default distribution point to a different server, so the SMB users are created on that server, not the CloneDeploy server.

    I would personally leave things at the default. Distribution points haven't really been fully developed yet; that's what I'm focusing on for the 1.3.0 release.

  6. So if I understand #2 correctly, it should be done like this?

    # create a local mount point and mount the NAS export over NFS
    mkdir /mnt/external_storage
    mount -t nfs external.storage.address:/designated_folder /mnt/external_storage
    # replace /cd_dp with /mnt/external_storage in /etc/samba/smb.conf
    # and in the WebUI distribution point config
  7. clonedeploy

    6 Apr 2017 Administrator

    Yes that is another way to do it.

  8. But if I'm correct, this way is "Not ideal because clients will mount smb share from VM itself and deployment speed will be limited to VM throughput".
    Is there a way to make the client connect directly to the external storage?

  9. clonedeploy

    6 Apr 2017 Administrator

    Yes, basically the same as what you just said, but set up the SMB share somewhere else (a NAS, etc.). Then mount that share on the CD server, and update the DP's physical path and share settings.

    Clients will connect directly to that share, and the CloneDeploy web interface still has access to create and delete images.

  10. What exactly do you mean by "setup the smb share somewhere else, NAS, etc."?
    Can you be more specific with command lines etc.?

  11. clonedeploy

    6 Apr 2017 Administrator

    Not much to say, really. Just set up a share anywhere else: it could be a Windows machine, Linux, a NAS device, DFS, a SAN, etc. It doesn't matter. Then the SMB server on your CloneDeploy server won't be used. I can't give exact commands because it depends on where you set it up.

  12. Edited last year by eliadl

    I ran the commands I mentioned earlier.
    It *did* go through the server (in between) rather than connect directly to the external storage.
    I know because the file transfers were about 10 times slower (verified by scp-ing from /cd_dp/ vs /mnt/external_storage).
    So what is missing to make this connection direct?

  13. clonedeploy

    6 Apr 2017 Administrator

    The distribution point share settings are the key: they tell the client how to get to the share. Assuming that /mnt/external_storage is a share, you would put the necessary IP, username, password, etc. in the distribution point settings to connect to it. By default it points to CloneDeploy; to cut CloneDeploy out, you need to make those changes.
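    A quick way to confirm that the distribution point now sends clients to the external storage is to try the share credentials directly against the NAS from another machine. The host, share, and user below are placeholders:

    ```shell
    # list the shares exposed by the NAS, authenticating as the read-only client user
    smbclient -L //nas.example.local -U cd_ro
    # open the image share directly and list its contents
    smbclient //nas.example.local/images -U cd_ro -c 'ls'
    ```

    If this works from a client's subnet, deployments should no longer be throttled by the CloneDeploy VM.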

    /mnt/external_storage is a local mount point on the CloneDeploy server to the external storage.
    If I understand correctly, you're suggesting that I keep this mount point, keep smb.conf pointing at it, but change/add the distribution point with the address of the external storage instead of the CloneDeploy server?

  15. clonedeploy

    6 Apr 2017 Administrator

    Yes, keep the mount point.
    smb.conf doesn't matter; it won't be used in any way.
    Update the DP with the address of the external storage, share name, etc., and change the users and passwords to match.
    Change the DP's physical path to the mount point.

    The CloneDeploy server uses the physical path to create and delete images.
    Everything else is only used by the client.
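    One practical note on the setup above: a manual mount won't survive a reboot of the CloneDeploy VM, which would leave the DP's physical path empty. An /etc/fstab entry along these lines (reusing the placeholder host and export from the commands earlier in the thread) keeps the mount persistent:

    ```shell
    # /etc/fstab -- remount the NAS export at boot so the DP's physical path stays valid
    external.storage.address:/designated_folder  /mnt/external_storage  nfs  defaults  0  0
    ```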

  16. Edited last year by eliadl

    It worked, thanks!

 
