Filesystem issue after deployment

  • Hi, I set up CloneDeploy on a Linux Mint VM to deploy images on my network. I followed the Ubuntu setup instructions and everything worked fine. I uploaded an image of another Linux Mint machine and deployed it to another computer. It came down and booted up fine.

    When I uploaded the image, CloneDeploy shrank it beforehand to a 15GB image file. Both uploading and deploying took about 2 minutes, which was pretty fast, and I was happy with that.

    After I deployed it to the new machine I noticed an issue with the available free space. The machine has a 320GB hard drive. When I look at the disk information:

    /dev/sda1 Linux (Bootable) 316GB - 1.8 GB free (99.4% full)
    /dev/sda2 Extended 4.1GB
    /dev/sda5 Linux Swap 4.1GB

    Do you have any idea what happened to my hard drive, and why it thinks it's 99% full?

    Is there any way to fix that? It couldn't possibly be that full; it would take more than 2 minutes to write that much data. It has to be some kind of display or read error.

    Thanks for the help.

  • It sounds like the filesystem didn't expand during the deploy. Can you attach the deploy log?
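
    One quick way to confirm that on the deployed machine is to compare the partition size the kernel sees with the size the filesystem itself reports. A sketch (device names taken from your post; the only line that runs anywhere is the df one, included here as a stand-in against the root filesystem):

```shell
#!/bin/bash
# Sketch: if the filesystem was never grown, lsblk reports the full
# ~316 GB for sda1 while df reports only the ~15 GB image size.
# On the affected machine you would run:
#   lsblk -b -o NAME,SIZE /dev/sda   # partition size the kernel sees
#   df -B1 /                         # size the ext4 filesystem reports
#
# Runnable stand-in: read the same df size field for the root filesystem.
fs_bytes=$(df -B1 --output=size / | tail -n 1 | tr -d ' ')
echo "filesystem size: $fs_bytes bytes"
# A big gap between the lsblk and df numbers means the filesystem was
# never expanded; running resize2fs /dev/sda1 as root would grow it
# in place to fill the partition.
```
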

  • Sorry about the delay. Here is the log file from the image.

  • It looks like the filesystem is not being expanded; the strange thing is that it isn't even trying to.
    Can you do the following?
    In Admin Settings -> Core Scripts, select lie_deploy from the dropdown.
    Around line 161 you should see:
    [code]partition_size_bytes=$(parted -s $hard_drive unit b print all | grep " $this_number " -m 1 | awk -F' ' '{print $4}' | sed 's/B//g')[/code]
    On the next line can you add:
    [code]log "Partition Bytes: $partition_size_bytes"
    parted -s $hard_drive unit b print all >> "/tmp/clientlog.log"[/code]

    Then deploy again, and attach the new log
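
    To see why that logging matters: the pipeline only yields a value when parted prints a row for the partition number; with an unrecognised partition table there is no row to match, and the variable comes back empty. A stand-alone demonstration against canned sample output (the sample text below is made up to mirror the log format, not taken from a real disk):

```shell
#!/bin/bash
# Demo of the partition_size_bytes pipeline from lie_deploy, run
# against canned parted-style output instead of a real disk.
this_number=2

# Output shaped like "parted -s $hard_drive unit b print all" with a table:
with_table='Number Start End Size Type
 1 1048576B 315952726015B 315951677440B primary
 2 315953773568B 320072581119B 4118807552B extended'

# The same command, when the partition table is still "unknown",
# prints no partition rows at all:
no_table='Partition Table: unknown'

size_with=$(echo "$with_table" | grep " $this_number " -m 1 | awk -F' ' '{print $4}' | sed 's/B//g')
size_without=$(echo "$no_table" | grep " $this_number " -m 1 | awk -F' ' '{print $4}' | sed 's/B//g')

echo "with table:    $size_with"       # 4118807552
echo "without table: '$size_without'"  # empty string
```
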

  • When I try to update the lie_deploy script I get this error:

    A potentially dangerous Request.Form value was detected from the client (ctl00$ctl00$Content$SubContent$scriptEditor="#!/bin/bash

    Description: HTTP 500.Error processing request.

    Details: Request validation detected a potentially dangerous input value from the client and aborted the request. This might be an attemp of using cross-site scripting to compromise the security of your site. You can disable request validation using the 'validateRequest=false' attribute in your page or setting it in your machine.config or web.config configuration files. If you disable it, you're encouraged to properly check the input values you get from the client.<br>
    You can get more information on input validation <a href="">here</a>.

    Exception stack trace:
    at System.Web.HttpRequest.ThrowValidationException (System.String name, System.String key, System.String value) [0x00041] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
    at System.Web.HttpRequest.ValidateNameValueCollection (System.String name, System.Collections.Specialized.NameValueCollection coll, System.Web.Util.RequestValidationSource source) [0x00053] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
    at System.Web.HttpRequest.get_Form () [0x00025] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
    at System.Web.UI.Page.DeterminePostBackMode () [0x0003a] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
    at System.Web.UI.Page.InternalProcessRequest () [0x0001b] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0
    at System.Web.UI.Page.ProcessRequest (System.Web.HttpContext context) [0x0005f] in <6bd7a846f9aa4f0bae143ad0f36ee3bd>:0

    The account I'm logged into has the Administrator role.

  • That's an issue specific to Mono.

    Open Web.config in the frontend folder, not the api folder.

    [code]<httpRuntime targetFramework="4.5" />[/code]

    Change it to

    [code]<httpRuntime targetFramework="4.5" requestValidationMode="2.0" />[/code]

    Restart Apache.
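
    If editing by hand is awkward, the same change can be scripted. A sketch (the demo below applies the sed to a local sample file; on the server, point it at the real frontend Web.config, whose path varies by install):

```shell
#!/bin/bash
# Demo of the Web.config edit using sed on a local sample file.
cat > Web.config.sample <<'EOF'
<configuration>
  <system.web>
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>
EOF

sed -i 's|<httpRuntime targetFramework="4.5" />|<httpRuntime targetFramework="4.5" requestValidationMode="2.0" />|' Web.config.sample
grep httpRuntime Web.config.sample

# On the real server, follow up with an Apache restart, e.g.:
#   sudo systemctl restart apache2
```
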

  • I added that bit to the lie_deploy script and then deployed the image again. Here is the log.

    This time the filesystem expanded properly.

    I will deploy to a couple more machines and if I run into the issue again I will post the log.

  • I deployed to another machine and was able to replicate the filesystem issue. I ran three deploy tasks and have attached the log for each one.

    The first log is from when I first booted it into CloneDeploy and made a workstation account for it. That deploy task completed in less than 10 seconds. Obviously something went wrong there, so I rebooted it and ran another deploy task.

    The second deploy task completed after about 2 minutes, and when I logged into the workstation it had the 99%-full hard drive issue.

    In the third log everything went well; when I logged into the workstation the hard drive had 298 GB free and was only 5.7% full.

    Let me know if you want me to do any more testing.

  • Thanks for the info; I think I see the pattern.
    It has something to do with the extended partition not being created / restored early enough. In the additional logging I had you add, you can see the following:
    Partition Bytes:
    Model: ATA WDC WD3200BPVT-0 (scsi)
    Disk /dev/sda: 320072933376B
    Sector size (logical/physical): 512B/4096B
    Partition Table: unknown
    Disk Flags:

    The partition bytes value is empty and the partition table is unknown. As soon as the partition table (including the extended partition) is restored, those values are populated, as seen below:

    Partition Bytes: 4118807552
    Model: ATA WDC WD3200BPVT-0 (scsi)
    Disk /dev/sda: 320072933376B
    Sector size (logical/physical): 512B/4096B
    Partition Table: msdos
    Disk Flags:

    Number  Start          End            Size           Type      File system  Flags
     1      1048576B       315952726015B  315951677440B  primary   ext4         boot
     2      315953773568B  320072581119B  4118807552B    extended
     5      315953774592B  320072581119B  4118806528B    logical

    By then it's too late. Here are the options right now.

    1. In the image profile deploy options, check the box to force dynamic partitions. This will create the extended partition before the first partition is deployed and should fix the issue.
    2. In the core scripts, select lie_deploy and around line 410 find:
      [code]elif [ "$partition_method" = "original" ]; then # create partitions from original mbr / gpt
      log " ** Creating Partition Table On $hard_drive From Original MBR / GPT ** " "display"[/code]
      Immediately after it, add:
      [code]partprobe &>/dev/null[/code]

    Option 2 may or may not work. If it does, you won't need to do option 1.

  • Thank you. Option 1 seems to have worked. After forcing dynamic partitions I was able to deploy the image to a new machine and it worked perfectly the first time.