Thank you. Option 1 seems to have worked. After forcing dynamic partitions I was able to deploy the image to a new machine, and it worked perfectly the first time.
I deployed to another machine and was able to replicate the filesystem issue. I ran three deploy tasks and have attached the log for each one.
The first log is from when I first booted it into CloneDeploy and made a workstation account for it. The first deploy task completed in less than 10 seconds. Obviously something went wrong there, so I rebooted it and ran another deploy task.
In the second log, the deploy task completed after about 2 minutes, and when I logged into the workstation it had the 99%-full hard drive issue.
In the third log everything went well, and when I logged into the workstation the hard drive had 298 GB free and was only 5.7% full.
Let me know if you want me to do any more testing.
I added that bit to the lie_deploy script and then deployed the image again. Here is the log.
This time the filesystem expanded properly.
I will deploy to a couple more machines and if I run into the issue again I will post the log.
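For anyone else hitting this: the change involved is expanding the filesystem back out after the shrunk image is restored. I can't reproduce the exact script edit here, but the usual manual fix looks something like the following sketch, assuming an ext4 root on /dev/sda1 as in the disk listing in this thread (growpart comes from the cloud-guest-utils package):

```
# Sketch only -- not the actual lie_deploy change.
sudo growpart /dev/sda 1   # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1   # grow the ext filesystem to fill the partition
df -h /                    # confirm the new size
```

On an XFS root you would use xfs_growfs instead of resize2fs.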
When I try to update the lie_deploy script I get this error:
A potentially dangerous Request.Form value was detected from the client (ctl00$ctl00$Content$SubContent$scriptEditor="#!/bin/bash
Description: HTTP 500. Error processing request.
Details: Request validation detected a potentially dangerous input value from the client and aborted the request. This might be an attemp of using cross-site scripting to compromise the security of your site. You can disable request validation using the 'validateRequest=false' attribute in your page or setting it in your machine.config or web.config configuration files. If you disable it, you're encouraged to properly check the input values you get from the client.
You can get more information on input validation here: http://www.cert.org/tech_tips/malicious_code_mitigation.html
Exception stack trace:
at System.Web.HttpRequest.ThrowValidationException (System.String name, System.String key, System.String value) [0x00041] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
at System.Web.HttpRequest.ValidateNameValueCollection (System.String name, System.Collections.Specialized.NameValueCollection coll, System.Web.Util.RequestValidationSource source) [0x00053] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
at System.Web.HttpRequest.get_Form () [0x00025] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
at System.Web.UI.Page.DeterminePostBackMode () [0x0003a] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
at System.Web.UI.Page.InternalProcessRequest () [0x0001b] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
at System.Web.UI.Page.ProcessRequest (System.Web.HttpContext context) [0x0005f] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
The account I'm logged into has the Administrator role.
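For reference, the workaround the error text itself points at is a web.config change on the CloneDeploy server. A minimal sketch of what that might look like (assuming a standard ASP.NET/Mono layout; exact element placement can vary by version):

```xml
<configuration>
  <system.web>
    <!-- Turns off request validation for all pages. It can instead be set
         per-page with ValidateRequest="false" in that page's @Page directive. -->
    <pages validateRequest="false" />
    <!-- On .NET 4.x-era runtimes, validation also needs to be dropped
         back to 2.0 mode for the pages setting to take effect. -->
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>
```

Note that disabling validation is a security trade-off: the error is triggered simply because the script editor posts a shell script in a form field, but switching validation off site-wide means every page must validate its own input.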
Sorry about the delay. Here is the log file from the image.
Hi, I set up CloneDeploy on a Linux Mint VM to deploy images on my network. I followed the instructions for setting it up on Ubuntu and everything worked fine. I uploaded an image (from another Linux Mint machine) and deployed it to another computer. It came down and booted up fine.
When I uploaded the image, it shrank the filesystem beforehand to a 15 GB image file. Both uploading it and deploying it took about 2 minutes, which was pretty fast, and I was happy with that.
After I deployed it to the new machine I noticed it had an issue with the free space available. It has a 320GB hard drive. When I look at disk information:
/dev/sda1 Linux (Bootable) 316 GB - 1.8 GB free (99.4% full)
/dev/sda2 Extended 4.1 GB
/dev/sda5 Linux Swap 4.1 GB
Do you have any idea what happened to my hard drive, and why it thinks it's 99% full?
Is there any way to fix that? It couldn't possibly be that full: it would take a lot longer than 2 minutes to write that much data, so it has to be some kind of display error or read error.
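One way to sanity-check whether this is a display error or the filesystem really is smaller than the partition: compare the block-device size (`lsblk -b /dev/sda1`) with the size the filesystem itself reports (`df -B1 /dev/sda1`). A small sketch of that comparison using the numbers above (`needs_expand` and the 1% slack are my own illustration, not anything from CloneDeploy):

```shell
#!/bin/bash
# Compare a partition's size with the filesystem's own reported size.
# part_bytes would come from `lsblk -b`, fs_bytes from `df -B1`.
needs_expand() {
  local part_bytes=$1 fs_bytes=$2
  # Allow ~1% slack for filesystem overhead.
  if (( fs_bytes * 100 < part_bytes * 99 )); then
    echo "filesystem smaller than partition: run resize2fs"
  else
    echo "filesystem already fills the partition"
  fi
}

# Example with the sizes from the listing: a 316 GB partition that still
# holds the shrunk ~15 GB image filesystem.
needs_expand $((316 * 1000**3)) $((15 * 1000**3))
# prints: filesystem smaller than partition: run resize2fs
```

If the two sizes roughly match, it really is a display error; if df reports far less than lsblk, the filesystem was simply never grown back after the shrunk image was restored, and the data itself is fine.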
Thanks for the help.