choose which NIC to use on the client side



  • Hi All,

    I have machines with 10Gbit extension cards and onboard 1Gbit NICs. CloneDeploy can be reached through both, and the onboard NIC is used by default. I'd like it to try the 10Gbit ports first. How do I configure that? Searching the forum carefully, I came across the kernel parameter "net_if=eth1", but I'm not entirely sure what to do with it. I assume I should build a custom kernel and set that (with the right NIC name) in a config file somewhere before compiling? Just confirming I'm heading in the right direction before I waste a lot of time. (I'm new to Linux, so this won't be easy anyway 😉)

    All the best and thanks!



  • Are you PXE booting? If so, just specify

    net_if=eth1
    

    in the image profile's PXE boot options, under kernel arguments.



  • Yes, I am PXE booting. Will try, thanks!!

    However, it seems the NIC is not known to the boot environment. Searching the forum further, I found the right commands for the client command line; unfortunately the image upload of the output didn't work. The 10Gbit NIC (it has two ports) is not listed in ifconfig. I guess I do need to build a custom kernel to have it show up? (The NIC works out of the box on Ubuntu, though.)
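
    The commands, for anyone searching later (roughly what I ran; exact availability depends on what's in the boot environment):

    ifconfig -a           # list all interfaces the kernel has a driver for, even if down
    ip link show          # the same listing via iproute2
    lspci | grep -i eth   # show PCI Ethernet devices, whether or not a driver claimed them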





  • Have you tried the latest kernel, 5.0-rc7?



  • Thanks for the quick replies!
    net_if=eth1 works to select the other onboard port, but the two 10Gbit ports are not selectable, even with the 5.0-rc7 kernel. The driver needed is qede. I guess I should build my own kernel?
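
    (For reference, on the working Ubuntu install something like this shows which kernel driver each NIC needs:)

    lspci -k | grep -i -A 3 ethernet   # prints "Kernel driver in use: qede" for the 10Gbit ports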

    That said, I think I need to switch to WinPE to get enough performance out of the NIC when copying from an SMB share. I will open a separate question about this.



  • Yes, you would probably need your own kernel to add those drivers. By the way, the Linux environment is much faster than WinPE because of the block image format. 1Gb is typically more than enough; I can usually achieve an effective 5-7Gb over a 1Gb link when using SSDs and lz4 compression. Most likely a 10Gb link wouldn't be utilized anyway. Does the destination PC have an SSD or a spinning disk?
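
    (The arithmetic behind that: the image crosses the wire compressed and is decompressed on the client, so the effective rate is roughly the wire rate times the compression ratio. The 6:1 ratio below is only an illustrative assumption for lz4 on a mostly-empty SSD image:)

    effective rate ≈ wire rate × compression ratio
    1 Gb/s × 6 ≈ 6 Gb/s effective (in the 5-7Gb range quoted above)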



  • Ok, will try that. The server side has dual 40Gbit, very fast storage, and lots of RAM; the clients have NVMe SSDs that can easily handle 10Gbit-plus data streams. So the data can be delivered and written to disk fast enough (I don't know how much of a bottleneck decompression would be, but these are powerful brand-new machines, so hopefully it's fine). I'll stick with Linux then, unless it's really not feasible; let's see if I can get a kernel compiled with up-to-date drivers and configured for throughput. Thanks again!



  • Interesting read. I was wondering how fast it would be to image PCs? What would be the best settings for speed?



  • Hi Ricardo,

    The biggest limit on upload speed is the compression setting, and then of course NIC bandwidth. There is probably some trade-off there. Try, measure, and know.
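
    A quick way to measure that trade-off on your own hardware (a rough sketch; assumes lz4 and gzip are installed and /dev/nvme0n1 is the source disk, so adjust to your setup). dd prints its throughput when it finishes:

    dd if=/dev/nvme0n1 of=/dev/null bs=4M count=512              # raw read speed of the disk
    dd if=/dev/nvme0n1 bs=4M count=512 | lz4 -1 -c > /dev/null   # can lz4 keep up with it?
    dd if=/dev/nvme0n1 bs=4M count=512 | gzip -1 -c > /dev/null  # can gzip keep up with it?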

    I haven't managed to build the right Linux kernel yet; difficult stuff for me. I notice that the qede driver is included in the kernel, but it's an outdated version. I'm not sure how to update it; the new driver downloaded from the manufacturer's website appears to be significantly different. Will get there 😉
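
    (To compare driver versions, something like this on a full install shows what the running kernel ships; modinfo reads the module's metadata:)

    modinfo qede | grep -i version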



  • Hmm, I thought: "since the NIC works on an Ubuntu install, why don't I just transplant the working kernel and initrd image and PXE boot from those?" Nice idea (I think :p), but it doesn't work; I run into this problem. Will post back once/if I figure it out.



  • All solved! It turns out one has to embed the firmware into the kernel: with hardware support compiled directly into the kernel, devices get initialized before any file system is available, so firmware stored on a filesystem would not be accessible yet. So I compiled my own 5.1.6 kernel using the excellent info in the docs, and then used "CONFIG_EXTRA_FIRMWARE" to embed the required firmware image in the kernel.
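
    (For anyone repeating this, the relevant .config fragment looks roughly like the following. The firmware filename is a placeholder; use whatever file the qed driver requests in dmesg on a working install:)

    CONFIG_QED=y
    CONFIG_QEDE=y
    # the path is relative to CONFIG_EXTRA_FIRMWARE_DIR
    CONFIG_EXTRA_FIRMWARE="qed/qed_init_values_zipped-8.37.7.0.bin"
    CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"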

    I then used net_if=eth2 (in my case) to make sure the right NIC is selected at boot, and lastly I changed the default compression from gzip to lz4, as gzip is way too slow to support the speeds I'm after.

    It's running like a beast now! See here:
    https://imgur.com/a/tLdAvZ7
    A direct transfer through rsync does double this speed, so I guess there is still room for improvement; the receiving disk is also easily fast enough to take in double that. But this is fast enough, and returns are diminishing at this point (boot time takes over and becomes the major part of the process).

    Thanks a lot for the advice!