Please use this thread to discuss the mission statement found here: http://clonedeploy.org/mission/
I look forward to your feedback.
CloneDeploy has been absolutely critical to my school district's imaging workflow, and is far more intuitive and extensible than the Clonezilla setup that was previously in place.
With ultrabooks becoming more prevalent, we can't always expect new devices to have an ethernet jack. USB ethernet adapters are definitely a possibility, and work well enough for one-off reimaging.
This summer, I had to come up with a strategy to reimage 600 new laptops as quickly as possible, and in a location without adequate network connectivity (a gymnasium). I remembered an earlier forum post asking about direct offline USB imaging, and I believe the hangup was supporting SecureBoot + OS portability + images >4GB.
I was able to resolve this for my own purposes by splitting a CloneDeploy-created image into 4 GB segments, and then cat'ing them back together during the deploy process. I had to gut and mangle a couple of the relevant CloneDeploy scripts to provide the support structure for offline imaging.
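For anyone curious, the core of the split-and-reassemble trick can be sketched in a few lines of shell. This is an illustration rather than our actual modified CloneDeploy scripts; the file names are made up, and a small scratch file stands in for the multi-gigabyte image:

```shell
# Illustrative sketch only -- not the real CloneDeploy scripts.
# A 10 MB scratch file stands in for the image, and the chunk size is
# 4M here instead of the 4G you'd use on a FAT32 USB stick.
dd if=/dev/urandom of=image.img bs=1M count=10 2>/dev/null

# Prepare side: split the image into fixed-size chunks with sortable suffixes.
split -b 4M image.img image.img.chunk.

# Deploy side: shell globbing returns the chunks in suffix order, so a
# plain cat reassembles the original stream (in practice you'd pipe it
# into the restore tool; here we write a file and verify).
cat image.img.chunk.* > restored.img
cmp -s image.img restored.img && echo "reassembled image matches original"
```

On the real sticks, the reassembly step pipes straight into the decompress/restore chain, so the full image never has to exist as a single file on the FAT32 volume.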
With some inexpensive USB 3 sticks, our small team was able to image these computers in record time, much faster than gigabit Ethernet (unicast or multicast) could hope to perform.
See a time-lapse video of our imaging process here:
I'd love to see an officially-supported offline imaging workflow formally developed. If you're interested, please let me know, and when I return to the office, I can send you (and explain) the modified files/scripts we have in use.
Thank you very much for this tool!
Thank you for your story, support, and donation. I would love to see how you managed this without any network connectivity; I imagine you would have needed to change the scripts significantly to avoid all the calls to the server.

I've been thinking about how I could implement this, and I think it should be possible. A lot of logic is performed on the server during imaging, and moving all of it into the imaging environment would be almost impossible, at least with bash; maybe with something else. Instead, I could probably have some type of form that asks ahead of time for the info that is normally sent automatically by the client. The results could then be saved to a text file and read locally by the client instead of needing to communicate with the server.

Like you said, the biggest holdups have been Secure Boot and file sizes. I can't remember for sure, but I thought a user did get it working by creating a separate NTFS partition on the device. I'll need to do some searching.
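The answer-file idea could be as simple as a key=value file the server writes ahead of time and the offline client sources locally. A minimal sketch; the keys here are placeholders I made up, not CloneDeploy's actual parameter names:

```shell
# Hypothetical answer file -- the keys are placeholders, not
# CloneDeploy's real parameter names.
cat > offline_answers.conf <<'EOF'
image_name=win10_lab
target_drive=/dev/sda
partition_count=4
EOF

# Offline client: source the file instead of querying the server.
. ./offline_answers.conf
echo "Deploying $image_name to $target_drive ($partition_count partitions)"
# prints: Deploying win10_lab to /dev/sda (4 partitions)
```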
I definitely had to "launder" some information, such as the drive schema and fdisk (gdisk?) commands which I grabbed from the log of a basic normal CloneDeploy run.
In fact, I'm noticing that the partitioning script used by CloneDeploy (at least in 1.2.x) leaves a couple of GB unused at the end of the drive on some imaging runs. In my modified offline version, I simply replaced one of the last input lines with a blank line, so the utility falls back to its own end-of-disk default.
All this is going by memory :) I'll show you my work on Monday.
For a "real" implementation, I'm thinking there would need to be a "prepare for offline mode" option for a properly sized USB stick, where the server goes through all the motions of detecting disk size and then pipes the partition downloads through the Linux split command into 4 GB chunks.
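That download step could stream each partition straight into split, so no single file on the FAT32 stick ever exceeds 4 GB. A hedged sketch: the server URL in the comment is invented, and in the runnable part a generated stream stands in for the real download (with 4 MB chunks instead of 4 GB):

```shell
# Sketch of the "prepare for offline mode" step. In real use the input
# would be the partition download, e.g. something like:
#   curl -s "$SERVER/image/part1" | split -b 4G - /mnt/usb/part1.img.
# (that URL is invented). Here a 9 MB generated stream and 4 MB chunks
# stand in so the demo runs anywhere; '-' makes split read stdin.
dd if=/dev/zero bs=1M count=9 2>/dev/null | split -b 4M - part1.img.

# 9 MB at 4 MB per chunk -> three chunk files (.aa, .ab, .ac)
ls part1.img.* | wc -l   # prints 3
```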
Yes, I like your idea: prepare for offline mode and create some type of answer file. It would require a network connection at least once, which I think is reasonable.
As far as leaving a couple of GB at the end of the drive, I think I fixed that in 1.3.x, at least for most situations.