Image scan failed -No such file or directory when uploading

  1. last year

    When I try to upload an image using NetBoot, I get "Image scan failed -No such file or directory /storage/images/Test2/hd0/part3.dmg". Then I see it do what looks like an fsck and the computer reboots. I also looked on the server and there is nothing in the image folder, so I know it's not uploading any partitions to the server.
  2. It seems like the NetBoot imaging isn't saving logs either, so here are some screenshots of what I'm seeing.

  3. clonedeploy

    22 Jun 2016 Administrator

    Ok, so a lot going on here. I think we need to see the log; I'm not sure why it wouldn't be sending one. Is it on demand? Did you check the on-demand logs? If it's not there, you can grab it from /tmp/clientlog.log and put it on a USB drive or whatever. Just close the Terminal window before the task reboots the computer.

    Is it even saving the schema? If not, it could just be an SMB permission issue.
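    A minimal sketch of grabbing that log before the reboot, assuming a USB drive auto-mounted under /Volumes (the "MYUSB" volume name is a placeholder, not anything CloneDeploy defines):

    ```shell
    #!/bin/sh
    # Sketch: copy /tmp/clientlog.log (the path mentioned above) to a USB
    # drive before the task reboots the machine. "MYUSB" is a placeholder
    # volume name -- substitute whatever your drive mounts as.
    grab_log() {
        log="${1:-/tmp/clientlog.log}"
        dest="${2:-/Volumes/MYUSB}"
        [ -f "$log" ] || { echo "no log at $log"; return 1; }
        cp "$log" "$dest/clientlog-$(date +%Y%m%d-%H%M%S).log" && echo "copied"
    }
    grab_log || true   # run this before closing the Terminal window
    ```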

  4. Yep, it was On Demand. Here is the log I grabbed from an attempt I just made.

  5. clonedeploy

    22 Jun 2016 Administrator

    Yep, you can see it here:
    mkdir: /storage/images/jaystest3/hd0: Read-only file system

    Cannot write to SMB; it's read-only.
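    A quick way to confirm that from the client shell (a sketch, not CloneDeploy's own check; the /storage/images path is taken from the log above):

    ```shell
    #!/bin/sh
    # Sketch: report whether a directory (e.g. the mounted SMB share) is
    # writable by attempting to create and remove a scratch file.
    can_write() {
        d="$1"
        if touch "$d/.cd_writetest" 2>/dev/null; then
            rm -f "$d/.cd_writetest"
            echo "writable"
        else
            echo "read-only"
        fi
    }
    # On the client:
    #   mount | grep storage     # shows the mount options for the share
    #   can_write /storage/images
    ```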

  6. clonedeploy

    22 Jun 2016 Administrator

    This could actually be a bug. You can try adding the image through the web UI and uploading that way.

  7. clonedeploy

    22 Jun 2016 Administrator

    So this is what happens when you rush things. In my dev environment I was using an SMB share with only a read/write user, no read-only user, so I didn't see the issue. Simple fix, hang on.

  8. clonedeploy

    22 Jun 2016 Administrator

    Here is the culprit:

    smbInfo=$($curlAuth --data "dpId=$dp_id&task=$task" "${web}DistributionPoint" $curlEnd)

    Should be:

    smbInfo=$($curlAuth --data "dpId=$dp_id&task=$image_direction" "${web}DistributionPoint" $curlEnd)

    c:\program files (x86)\clonedeploy\web\private\clientscripts\osx_global_functions

    Or you can edit it directly in the web UI: Global->Imaging Scripts->Edit Core Scripts.
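    If you'd rather script the edit than open the file by hand, a one-line sed sketch (the .bak keeps a backup copy; the path variable is a placeholder for the install location quoted above):

    ```shell
    #!/bin/sh
    # Sketch: swap task=$task for task=$image_direction in the global
    # functions script. "$OSX_FUNCS" is a placeholder for the
    # osx_global_functions path on your server.
    fix_task_param() {
        sed -i.bak 's/task=\$task"/task=\$image_direction"/' "$1"
    }
    # fix_task_param "$OSX_FUNCS"
    ```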

  9. Ok, so that seemed to work, but now I'm having another issue. When I try to upload an image to the server, it looks like it's working when watching the client; I even see the little percentage decimals count up, like 1.096586, 2.237474, etc.

    But after it finishes, the image on the server is only 487 MB. Here is a log from when I try to image a client with the supposed image it uploaded. I can't use the same trick you gave me earlier to copy clientlog.log, as it finishes way faster and reboots before I can even attempt to close Terminal. It's strange that the server is not showing the logs of the upload, but it shows the logs of the deploys.

  10. clonedeploy

    22 Jun 2016 Administrator

    The deploy log looks fine. The problem must be the upload; I have no idea why your image would only be 400 MB. It's Apple's own utility that makes the image. The fact that it starts and stops after 400 MB is probably going to be difficult to track down. I can try to replicate this, but so far all of my attempts have been successful with El Capitan 10.11.3. Maybe something is different with 10.11.5, but I kind of doubt it.

  11. I was just able to capture this on my phone; I assume the part at the bottom may have something to do with it? Looks like it is having trouble resizing, going by the verbose output.

  12. clonedeploy

    22 Jun 2016 Administrator

    Even if it doesn't shrink, it still shouldn't stop it from imaging. If you want to keep messing with it, I would set the image profile's task completed action to exit to shell, so the computer doesn't reboot when it's done. Then you can look at logs or whatever; you can also restart the CloneDeploy process at any time, it's in the Applications folder. Otherwise, I'll keep researching to see what else I can find out.

  13. Thanks for the reply. Something else to add: while the image was uploading, I noticed that the hard drive on the server shows space being used by the image. For example, right now my drive has 48 GB free, but when I start the upload it shows only 13 GB free, meaning it looks like it's allocating the space on the server. However, once the upload process finishes and the client reboots, the drive shows the full 48 GB available again. It's almost as if it's uploading but then deleting after? I'll poke around with some things and see if I can figure out what's going on.

  14. Reading what I wrote to myself and wondering if it is something as stupid as the server running out of room... let me increase the drive size in VMware and test...

  15. Yup, looks like there wasn't enough storage. I added more to the VM and now it's fine. The funny thing is, all said and done, the image only uses 18 GB on the server, but during the imaging process it seems to take up about 50 GB... then it just "releases" the storage when it's done uploading and in the end only uses 18 GB.

    Thanks!
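    Since the upload stages far more data than the finished image (roughly 50 GB peak for an 18 GB result here, presumably because the image is staged before the final compressed file is written), it's worth checking free space on the storage volume before starting. A small sketch; the path and the 50 GB threshold are assumptions based on the numbers above:

    ```shell
    #!/bin/sh
    # Sketch: warn if a volume has less free space than the expected
    # peak usage during an upload. Sizes in KB; the 50 GB figure
    # reflects the peak observed in this thread, not a fixed rule.
    free_kb() {
        df -Pk "$1" | awk 'NR==2 {print $4}'
    }
    check_space() {
        need_kb=$((50 * 1024 * 1024))   # ~50 GB staging headroom
        if [ "$(free_kb "$1")" -lt "$need_kb" ]; then
            echo "warning: not enough free space for upload staging"
        fi
    }
    # check_space /storage/images
    ```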

  16. clonedeploy

    23 Jun 2016 Administrator

    Thanks for being the guinea pig for all of this. I'm sure we will run into some other issues along the way; obviously the OS X imaging environment has had very limited testing.

  17. No problem, I enjoy doing it. I have to say I'm surprised: NetBoot is notoriously slow compared to PXE, but between the PXE CloneDeploy server I have set up and now the second CloneDeploy NetBoot test server, imaging is pretty consistent between the two, very quick! Obviously NetBoot takes a little longer to boot, since it loads the whole GUI, but like I said, they are both about the same speed. Having the Apple diskutil/asr tools available gives the NetBoot server the advantage, in my opinion.

  18. I'm going to have about 10 interns imaging 3,000 laptops over the summer. I'll get their feedback on the two systems and see which they like better when imaging in bulk.

  19. clonedeploy

    23 Jun 2016 Administrator

    Sounds good. I will probably move most of my OS X/macOS efforts to the new process; since it's all done with Apple utilities, it should be able to handle any problem that may come up, such as blessing the drive. With the Linux environment there will always be that gap of what you can and can't do.

  20. So I had this same issue, and after replacing that line of script I think I broke something. Also, I didn't save the original global functions script, so I'm not sure what I messed up. I attached some pics. Think you can spot where I went wrong?
