Now that I had the model finished, I was able to bring it into Unreal as a representation of a physical ATEM that is on the network, or at least reachable in some way. I did end up sticking with Companion as the bridge between the real-world ATEM and the one in Unreal, and vice versa.
Luckily for me, things worked out pretty smoothly. I got most of the functionality I was after and recorded a video showcasing what it can already do. I didn't know exactly how I was going to get everything working before I started, but I knew I would use OSC to communicate between Unreal and Companion, so that's where I began.
I set up triggers inside of Companion that send OSC commands into Unreal, which then update the button materials to glow red, green, white, or not glow at all. In Unreal, any time something updates on that ATEM, I send out an OSC message that activates a button in Companion, which then updates the physical switcher. It isn't ideal having this middleman, but as of right now I don't have a better way around it; removing it is something I'd like to do eventually.
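To give a feel for the shape of that round trip, here is a minimal stand-alone sketch using python-osc that plays the role of the Unreal side: it listens for an OSC message and forwards a button press to Companion. The `/atem/program` address is made up for illustration, the ports are assumptions (Companion's OSC listen port and path format should be checked against its remote-control settings), and the page/row/column values are placeholders.

```python
# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

UNREAL_PORT = 8000       # assumed OSC listen port on the Unreal side
COMPANION_PORT = 12321   # assumed Companion OSC port; verify in its settings

companion = SimpleUDPClient("127.0.0.1", COMPANION_PORT)

def on_program_change(address: str, source: int) -> None:
    # e.g. /atem/program 2 -> the virtual ATEM switched to input 2,
    # so mirror that on the physical panel via a Companion button press.
    print(f"{address}: program is now input {source}")
    # Path format is a placeholder based on Companion's OSC remote
    # control (/location/<page>/<row>/<column>/press).
    companion.send_message(f"/location/1/0/{source}/press", 1)

dispatcher = Dispatcher()
dispatcher.map("/atem/program", on_program_change)

server = BlockingOSCUDPServer(("127.0.0.1", UNREAL_PORT), dispatcher)
server.serve_forever()  # blocks; Ctrl+C to stop
```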
As for the voice commands, I already knew how to set that up, and I used the Meta plugin for Unreal, which handles those commands. I am not too happy with it, though, and may keep exploring alternatives: as seen in the YouTube video, there is a lot of delay between when I actually give a command and when the plugin finishes processing it. I do like the control it offers and how good a job it does otherwise, which is why I went with it for now, but in a production setting I can't see it being the best solution.
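The nice part is that once speech is resolved to an intent, the rest is just the same OSC traffic as the buttons. This is not the Meta plugin's API, just a sketch of the mapping step; the intent names and addresses are hypothetical.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

unreal = SimpleUDPClient("127.0.0.1", 8000)  # assumed Unreal OSC port

# Hypothetical intent names; the voice plugin resolves speech to an
# intent inside Unreal, and the mapping step looks roughly like this.
INTENT_TO_MESSAGE = {
    "switch_camera_1": ("/atem/program", 1),
    "switch_camera_2": ("/atem/program", 2),
    "cut_to_media":    ("/atem/program", 5),
}

def handle_intent(intent: str) -> None:
    if intent in INTENT_TO_MESSAGE:
        address, value = INTENT_TO_MESSAGE[intent]
        unreal.send_message(address, value)
    else:
        print(f"unrecognized intent: {intent}")

handle_intent("switch_camera_2")  # fires the same OSC as pressing button 2
```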
I am glad I was able to get ahead of things a little bit and make this progress. There are still a few more small things I want to get going, like linking up all of the buttons, including the mics, as well as adding more voice commands to control some of those audio sources.
When I return from my trip to LA, I might start looking into adding a second ATEM model, making it voice-controlled as well, and giving it all of the same treatment as the current one. At the very least, I want voice commands and real-time visualization for the camera-switching side of the next one I do. I am likely going to do the larger ATEM Mini, or maybe a Constellation if I can get access to one for testing.