I don't know if this is a bug or simply unavoidable.
I have a USB 3 512 GB SSD that I use for certain tools. While running a deployment test, I forgot to remove it from the computer I was deploying to. When the deployment finished, it turned out it had overwritten my USB drive instead of the internal drive.
Since the internal drive is a 256 GB NVMe, I am thinking it simply chose the larger drive, or maybe the first in boot order?
Is it possible to make CD choose internal drives as first priority, since I imagine accidents like this could happen? I also wonder: if you deploy a CloneDeploy image from USB, would it overwrite the CD USB drive too?
Sorry if I am being unclear.
I know that is an option and one I have considered.
However, the option of detecting a model and doing it dynamically is important for us, since we are a few people with a lot of users and a lot of models. The ability to push out files based on computer model would be very valuable to us, not just for drivers but potentially for other files.
I know that this is maybe outside the scope of what you are willing to support and outside the scope of how CD was intended, which is completely okay.
We are just glad that there are people like you who make this kind of software.
I have almost completed my setup and it is working as it should.
I am currently using the filecopy feature to move a directory of drivers to the PC after deployment, where I then use a PowerShell script to check the model and install the correct drivers.
There is one problem with this: the driver directory has to be copied in full, with drivers for all models, which adds 12 GB spread across a large number of files. This takes quite a while.
I was thinking that if the filecopy could check the model and then just copy that specific directory for that specific machine, it could cut down the time severely.
I know a little about what to do:
The model name for Lenovo laptops is located in /sys/devices/virtual/dmi/id/product_version (for many other models it would probably be located at /sys/devices/virtual/dmi/id/product_name )
This gives me the string "ThinkPad T460s". The Driver directory would only contain the "T460s" part though, so I would separate the string and only copy the directory that contains the remaining string.
The first part I think I can do with this:
IFS=' ' read -ra model <<< "$computer_model"
I can also copy a folder containing a specific string, but I am stumped as to how I can combine it with CD's filecopy process.
How do I check if one of the folders in the filecopy is called "Drivers" and then how can I make it skip that single directory and launch my own code?
As you can tell I am very new to Bash scripting (and Linux in general), so I appreciate any help you can give me.
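A minimal sketch of the idea above, assuming the driver folders are named after the model suffix (e.g. "Drivers/T460s"). The function name `copy_model_drivers`, the example paths, and the DMI file location are my own placeholders, not anything CD provides, so adjust them to your setup:

```shell
# Hypothetical helper: copy only the driver folder matching this machine's model.
# Usage: copy_model_drivers <driver_source> <destination> <dmi_file>
copy_model_drivers() {
    src=$1; dst=$2; dmi=$3
    model_full=$(cat "$dmi")        # e.g. "ThinkPad T460s"
    model=${model_full##* }         # keep only the last word: "T460s"
    if [ -d "$src/$model" ]; then
        mkdir -p "$dst"
        cp -r "$src/$model" "$dst/"
    else
        echo "No driver folder for model '$model'" >&2
        return 1
    fi
}

# Example invocation (paths are placeholders):
# copy_model_drivers /mnt/server/Drivers /mnt/target/Drivers \
#     /sys/devices/virtual/dmi/id/product_version
```

On a real Lenovo machine the third argument would be /sys/devices/virtual/dmi/id/product_version as noted above. Whether CD's filecopy step can be intercepted or replaced this way is something the developer would have to confirm.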
The first part, about the domain joining, I think is caused by trying to add the -Server parameter to the Add-Computer command. The command simply will not work if I specify a server, and I don't know if it is a bug in PowerShell or something I am doing wrong. Removing it works, though, but gives a little less control.
Okay. I was wondering if there was a specific reason and it simply wasn't possible. I will try and switch them in the scripts.
Regarding the sysprep tags: So you don't need to have it defined in sysprep tags on CD with the $computer_name variable?
I actually have my setup a little different than usual. I use CloneDeploy to copy all the files over to the machine on deployment, plus a shortcut to StartSysprep.bat in the Administrator Startup folder. I remove the account's password before upload, so when the image is deployed it logs in automatically, runs sysprep, installs drivers, and joins the domain. I know it takes a little longer, but with /reboot instead of /shutdown in the sysprep command it is all automatic, so that doesn't matter much.
The advantage is that you don't have to worry about audit mode, rearm limits, and so on, which means you only need one machine for updating. Another advantage is that since all files are copied with CD's filecopy feature, it is easy to edit everything directly on the server, though most of this could also be done if you sysprep before uploading.
Is there a disadvantage to doing it this way that you know of? I was just wondering if the 5-10 minutes is the only reason for sysprepping before uploading or if I am making a huge mistake.
Thank you for your time and answers. I am quite new to deployment so it is nice to get some help.
Very nice T3chGuy007.
I do have a question though: Are you able to make sure that you have network when trying to join domain?
Mine just skips this step even though I install the drivers (with another script) before joining. The drivers work once I get into Windows.
I would also like to know if and how you have gotten sysprep tags to work with the computer name given when deployment starts. Is it just a matter of having an unattend.xml in C:\Windows\System32\Sysprep when uploading?
A small suggestion for the CloneDeploy developer: would it be possible to run filecopy before sysprep tags? That way we could keep the unattend.xml file on the deployment server, change what we want, use filecopy to copy it to the target computer after deployment, and still use the sysprep tags.
Thank you for a great program and the service you provide.
I believe I have found a bug, and a possible workaround for others in the same situation. It has happened on two different servers.
I had version 1.0.1p3 before the update.
My image storage was moved to D:\clonedeploy\ and worked fine.
Edit: Running on Windows Server 2012R2.
I followed the instructions to the letter, and the update to 1.1 was successful.
When I tried to PXE boot into On Demand deployment, all of my images were gone.
However, in the web interface they were all still there, and seemingly nothing had changed. I tried creating a new image in the web interface and found that in the image search overview, the old images had nothing under Imaging Environment, while the newly created one said linux.
I tried updating the images with new names, but it didn't help. The drop-down menu for Client Imaging Environment was unresponsive.
The workaround for this is to create new images and then copy the files and folders from the old images into the folders of the new ones. It will then work and you can boot from them again.
In CrucibleWDS we had it working so that when selecting Upload from On-demand, we could type in a new name for a new image and it would be created. That was a nice feature which I would very much like to see in CD.
Any chance of this feature returning?
Thank you for a wonderful tool.
You hit it right on the head there.
I had of course forgotten to install the patches. I knew right away when I saw the patch notes, because they described the exact problem I was having.
This was a complete bungle on my part.
Thanks for getting back to me so quickly and thanks for an excellent program.
I am having a problem with a new model we are trying to deploy. A couple of days ago, we got the Lenovo T460s and it was the first time we had to deploy Windows 7 to these.
CrucibleWDS has usually worked for us, but in this case it did not, and I decided it was time to set up a CloneDeploy server.
Everything worked fine, and the problem with the unsupported network drivers for the new model was fixed in the new kernel.
Now I have a different problem though. I will have to write it down by hand since it is not logged anywhere.
Partclone v0.2.86 http://partclone.org
Starting to restore image (-) to device (/dev/nvme0n12)
device (/dev/nvme0n12) is mounted at
partclone fail, please check /var/log/partclone.log !
/bin/cd_global_functions: line 164: - : syntax error: operand expected (error token is "- ")
If I do a cat /var/log/partclone.log, I just get the same message up until the error exit.
I assume it has something to do with the SSD in the new model, but is it unsupported or is there something I can do?