Welcome to the community forums for Theopenem. All responses are provided by other users who wish to help out. Theopenem will not respond to these posts. If you require more assistance than the community can provide, we offer various paid support options.

  • @theopenem_admin could you please help with multicasting. I've checked other posts, and there are various combinations of things to try. Do I need to specify any arguments for multicasting? When I attempted multicasting, it became stuck. Normal imaging via SMB is functioning correctly. Any additional documentation you can provide would be greatly appreciated.

  • @jithinpsk
    Multicast can't work with SMB: either you have Upload / Deploy Direct to SMB enabled and no multicast, or you have Upload / Deploy Direct to SMB disabled and then you can use multicasting.
    Also, my multicast config looks like this after I troubleshot its ins and outs when imaging between VLANs, BUT it's not properly tested by me, because I started using the Direct SMB feature:

  • @eruthon Thank you very much for your reply. Sorry for my ignorance. If we disable SMB, what other transmission protocol can we use? HTTP? Also, after enabling multicast on the server side, how do the client workstations know which multicast address to join?

  • @jithinpsk
    Your SMB storage can work in two modes:

    • Direct - you upload and deploy your images to/from your SMB storage without the images needing to be present on the server
    • Non-direct - your SMB acts as a replication server, i.e. you upload your image to a server, that server sends it to the SMB, and the SMB sends it to any other servers you have

    And how do clients know what address to join - you either start a multicast for a group you created, or you create an on-demand task in the Imaging Tasks menu and choose how many clients need to join, etc. Then you boot your clients into the LIE/WIE environments, choose multicast, and pick the multicast session you want 🙂

  • @eruthon Does that mean that, to make multicast work, we have to configure SMB in "Non-direct" mode? In our current setup, we followed these steps and have an SMB share on the D drive.

  • @jithinpsk
    I hope I'm not wrong here, but essentially I understand it as the SMB server not supporting multicasting, because it doesn't use UDP. At least that's the problem I had, but my SMB is not on the same server as TOEMS; it's on my TrueNAS.
    Your client PCs connect either to the HTTP server on TOEMS, which can use udp-sender.exe, or to your SMB, which (probably) doesn't support udpcast.
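
    For context, udpcast works as a sender/receiver pair over UDP. Below is a minimal sketch of such a session on the command line (the interface name, multicast address, and device path are hypothetical placeholders, and TOEMS normally builds these commands for you):

    ```shell
    # Server side: wait until 2 receivers have joined, then multicast the image.
    udp-sender --file image.wim --portbase 9028 --min-receivers 2 --ttl 32 \
      --interface eth0 --mcast-rdv-address 239.255.1.1

    # Client side: join via the rendezvous address and write the stream to disk.
    udp-receiver --portbase 9028 --mcast-rdv-address 239.255.1.1 --file /dev/sda2
    ```

    The rendezvous address is how receivers find the sender, which matches the question above about how clients know what to join.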

    I think the problem with the documentation is that not many people use multicast with TOEMS, and the documentation is still being migrated and completed. So multicast wasn't very high on the wiki creators' to-do list.

    Also some info here, specifically last response from hodgesc: https://forum.theopenem.com/topic/175/multicast

  • The concept of multicast doesn't make a lot of sense for SMB: it's a client-accessed file system, so it never actually knows what's happening, just that a system asked for a file it is authorized to receive. At the very least, I'm not finding any evidence that it's supported by Samba, the SMB server of note in Linux and BSD environments like TrueNAS (I use TrueNAS myself).

    From an architectural perspective, the goal of multicast is to let the switches do the duplication work so the sending server's link speed isn't the bottleneck. It only has to send the image out once.

    The alternative is faster links. We run 10gig out of the server to switches with several SFP+ ports and many 1gig ports. In this situation, if you had 10 clients to image that could actually sink 1gig each, the network would bottleneck at 10 clients. In my experience the receiving NIC and SSDs can do at best a little over half a gig, and many do far worse.
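
    The bottleneck arithmetic above can be sketched quickly (speeds in Mb/s; the numbers are illustrative, not measurements from this setup):

    ```shell
    # 10 Gb/s (10000 Mb/s) out of the server, ~1 Gb/s per client at best:
    echo $((10000 / 1000))   # clients needed to saturate the uplink
    # At the ~half-gig rate clients realistically sink, twice as many:
    echo $((10000 / 500))
    ```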

    In practice, since Theopenem's regular imaging process is not batch based, you'll need two or more people to reliably have enough machines pulling their images concurrently to hit the 10gig bottleneck.

    TrueNAS's ZFS ARC RAM cache makes it practical to actually serve 10gig outbound even with a disk pool that isn't close to doing that, as long as the image is already in the cache. Lastly, the readily available SFP+ cards out there generally have 2 ports, and TP-Link has an "affordably priced" 24-port switch with 4x SFP+ ports, so you could aggregate them into a 20gig link.

    With multicast, it shouldn't generally matter if the Theopenem server can only do gigabit speeds, so SMB Direct from a more capable fileserver is just unnecessary.

  • @eruthon Checked multicast functionality after disabling SMB, and set a queue size of 2. However, imaging started without waiting for the second PC and failed, as indicated in the screenshot below.

    11-21-23 12:24 Starting Multicast Session With The Following Command:
    cmd.exe /c ""C:\Program Files\Theopenem\Toec-API\\private\apps\udp-sender.exe" --file "D:\toems_local_storage\images\dev_01_MyApps_Win11_CP941\hd0\part2.winpe.wim" --portbase 9028 --min-receivers 2 --ttl 32 --interface --mcast-rdv-address --log D:\multicast.log & "C:\Program Files\Theopenem\Toec-API\\private\apps\udp-sender.exe" --file "D:\toems_local_storage\images\dev_01_MyApps_Win11_CP941\hd0\part3.winpe.wim" --portbase 9028 --min-receivers 2 --ttl 32 --interface --mcast-rdv-address --log D:\multicast.log & "C:\Program Files\Theopenem\Toec-API\\private\apps\udp-sender.exe" --file "D:\toems_local_storage\images\dev_01_MyApps_Win11_CP941\hd0\part4.winpe.wim" --portbase 9028 --min-receivers 2 --ttl 32 --interface --mcast-rdv-address --log D:\multicast.log"
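
    Note that in the logged command both --interface and --mcast-rdv-address appear with no value after them, which may be related to the session failing. For comparison, a sketch of a fully populated invocation (the interface IP and rendezvous address below are hypothetical placeholders, not values from this setup):

    ```shell
    "C:\Program Files\Theopenem\Toec-API\private\apps\udp-sender.exe" ^
      --file "D:\toems_local_storage\images\dev_01_MyApps_Win11_CP941\hd0\part2.winpe.wim" ^
      --portbase 9028 --min-receivers 2 --ttl 32 ^
      --interface 192.168.1.10 --mcast-rdv-address 239.255.1.1 ^
      --log D:\multicast.log
    ```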