It is an understatement to say that “streaming is complicated.”
Live streaming, especially involving multiple participants, is an intricate technology dance. Each piece of the puzzle must fit together to ensure a seamless, interactive, high-quality viewer experience.
The complexity increases even more when we decide to share not just our faces or voices but our desktops too. This is the path we have chosen in our ongoing TalkSPIFFE series: the live office hours we dedicate to the SPIFFE and SPIRE community.
This article provides a comprehensive guide for designing an intricate multi-platform streaming model. I want this to be a resource for my future self and the fantastic community interested in replicating such a setup. It’s an insider look into the dynamic and complex world of multi-streaming, showing how technology, when used innovatively, can foster powerful and engaging digital experiences.
TalkSPIFFE? Is That Something New? Can I Eat It?
Before I continue, it’s a perfect moment to delve deeper into what TalkSPIFFE truly is.
TalkSPIFFE is more than just a live stream; it’s a platform designed to demystify and elucidate the intricacies of the SPIFFE (Secure Production Identity Framework for Everyone) technology. It’s an opportunity for SPIFFE/SPIRE enthusiasts, professionals, and those who want to learn more about SPIFFE and SPIRE to come together and engage in enlightening discussions, learn from each other, and help foster a community.
We’ve aimed for TalkSPIFFE to be a space that feels both welcoming and informative rather than being strictly academic or intimidatingly technical. We believe in a more dynamic interaction that allows anyone and everyone to gain knowledge and contribute unique insights.
In our sessions every other Friday, we tackle various SPIFFE-related topics, discuss the latest updates, share best practices, and do live demos.
Whether you’re just dipping your toes into the SPIFFE waters or are a seasoned professional looking to stay updated and share your wisdom, TalkSPIFFE offers you a place to do just that.
So join us in our upcoming streams, and let’s make TalkSPIFFE a thriving platform for learning, sharing, and collaborative growth in SPIFFE and SPIRE.
That was the segue 🙂 Now, back to our setup.
A Deep Dive Into the TalkSPIFFE Streaming Setup
Since my streaming setup is far from ordinary, I wanted to give you an inside look into the process.
In this article, I’ll share how we’ve used applications like Audio Hijack, Loopback, Airfoil Satellite, Farrago, Zoom, Discord, Twitch’s Guest Star feature, Open Broadcaster Software (OBS), and BetterDisplay to create a compelling and interactive live streaming experience.
Each platform plays a vital role, from integrating our desktops through Zoom and routing our audio via a Discord voice channel to broadcasting our video feed on Twitch. The added twist of background music takes the viewer experience to another level, immersing them in our conversational and instructional narrative.
Twitch’s Guest Star Feature is a Life Saver
Navigating the turbulent waters of streaming a guest’s audio and video on a live stream can be challenging. This is where Twitch’s innovative “Guest Star” feature shines.
The Guest Star tool lets you establish “guest slot” browser sources you can deftly arrange in your broadcasting software. With ease, you can swiftly add or change the guests appearing in that slot directly through Twitch.
The beauty of this feature is its simplicity: it streamlines the process of integrating a guest’s video feed into your live stream, all without needing to dabble with complex configurations or multiple third-party tools.
Using it, I can stream my guest’s camera directly into my Open Broadcaster Software (OBS) as a browser source and swap the guest appearing in that slot on the fly through Twitch.
So far, everything about Guest Star works like a charm.
A Sneak Peek at the Backstage
Here’s how we typically initiate our stream:
Before the event starts, there is a 15-minute interval where the stream is live and a “starting soon” scene with a countdown timer is displayed (the `63:46` countdown timer in the image below is just for demonstration purposes; consider it something like 11:25, counting down from 15:00).
In addition, I initiate a Discord voice chat session from which we air Eli’s voice feed.
After I ensure the guest’s camera and sound work fine, I use the Studio Mode of OBS to make sure that all the scenes and camera feeds are up and running and that no source is missing anywhere.
This step is significant because when you reboot your system or restart OBS, sometimes OBS fails to detect some of the input sources, and you’ll need to update them manually.
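If you want to automate that sanity check, it boils down to diffing the source inventory OBS currently reports against the inventory you expect. Here is a minimal, hypothetical sketch with the inventory as plain Python data; a real version could pull the live inventory through the obs-websocket plugin instead, and the scene and source names below are only examples:

```python
# Hypothetical preflight check for OBS: compare the sources each scene
# currently has against the sources it is supposed to have.
# In practice the "current" inventory could be fetched via obs-websocket;
# here it is hard-coded data purely for illustration.

EXPECTED = {
    "talking-head": {"my-camera", "twitch-guest-slot-1"},
    "Zoom": {"talking-head", "cam link4k"},
}

def missing_sources(current):
    """Return, per scene, the expected sources that are absent."""
    report = {}
    for scene, wanted in EXPECTED.items():
        gone = wanted - current.get(scene, set())
        if gone:
            report[scene] = gone
    return report

# Example: OBS "lost" the guest slot after a restart.
current = {
    "talking-head": {"my-camera"},
    "Zoom": {"talking-head", "cam link4k"},
}
print(missing_sources(current))  # {'talking-head': {'twitch-guest-slot-1'}}
```

Running something like this before going live beats discovering a black camera box mid-stream.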
I use Audio Hijack for chaining, filtering, and mixing various audio streams. And I use Loopback to create virtual audio interfaces that I can set as audio input sources for Zoom and Discord. As you’ll see shortly, the loopback interfaces I create in Loopback will find their respective places as “sinks” in Audio Hijack.
The TalkSPIFFE Audio Chain
Here follows our Audio Hijack stream setup. You can tap on the image to see an enlarged version. I will zoom in and explain each box in this layout below:
It might look a bit involved, but fear not; we’ll dissect the entire audio chain piece by piece.
Let’s zoom into the topmost browser source first:
The input source named “MUSIC IN” is Safari playing a playlist of instrumental music as background music for TalkSPIFFE. I split the audio into three parts:
- The top row is adjusted to a lower volume level and is to be used as the background music when we talk,
- The middle row has a higher volume level, and it’s to be played before the stream starts or during breaks,
- And the bottom one is a separate track that will be streamed to my headphones and those of my guest, Eli.
Then, there is an A/B switch (see the image above) where the top two rows meet, letting me toggle between two different volume levels and send the audio to the virtual output devices named “Stream Background Pass” and “Zoom Mic PassThru,” respectively.
These virtual devices are loopback audio interfaces that I created using Loopback.
- Stream Background Pass is used by OBS as a background music source for the stream,
- and Zoom Mic PassThru is sent to the Zoom video chat session that we run alongside the stream.
If you look closely, you’ll see that the top set of volume filters is set lower than the bottom ones on the second row. This way, we can have quieter background music to talk over during the stream without it getting loud enough to distract the audience. During breaks or before the stream begins, I can switch to the bottom lane to let the audience enjoy the music to its fullest.
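For a sense of what those two volume lanes mean in numbers: decibel offsets map to linear amplitude multipliers logarithmically, so even a modest-looking dB cut shrinks the signal considerably. A quick sketch (the dB figures here are illustrative, not the exact values in my chain):

```python
import math

def db_to_gain(db):
    """Convert a decibel offset to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude multiplier back to decibels."""
    return 20 * math.log10(gain)

# A "talk-over" lane ducked by 12 dB plays at about a quarter of
# the full lane's amplitude:
print(round(db_to_gain(-12), 3))  # 0.251

# Conversely, halving the amplitude is about a 6 dB cut:
print(round(gain_to_db(0.5), 1))  # -6.0
```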
Then comes the third row. Follow the arrows in the image below:
The third row is what Eli and I hear in our headphones as background music, so that we can bob our heads to the same beats during the event 🙂.
Let’s move on to our next audio source: Discord. Discord is where I capture Eli’s voice and forward it into two mixers.
You’ll also see that “Guest Star” audio is forwarded to the same mixer. I aim to use the “Guest Star” feature as a backup audio source. So, if anything happens to Discord, I can disable Discord and swap it to Guest Star instead to capture Eli’s audio.
Alternatively, I can keep both input sources enabled and toggle between them with an A/B switch:
Then, in the middle of the setup, I have two audio sources:
- ELI AUDIO IN (Discord) is a Discord voice chat that Eli and I will use. I take Eli’s audio from that source and push it into OBS (Open Broadcaster Software) as a stream source. While at it, I also clone it and send it to my headphones through a separate channel.
- GUEST STAR (Chrome) is where Eli’s Twitch Guest Star browser source is. I keep it as a backup source in case Discord’s audio fails. I’ve had minor glitches while testing it as an audio source, though, so I’m reluctant to use it. For now, it’s just a backup for emergencies.
The orange lines above carry my voice through my microphone (Shure SM7B), a general-purpose noise filter, a vocal-specialized noise filter, and an equalizer, in that order, before it reaches an A/B switch.
I use the A/B switch to quickly cut my voice from the stream. That way, I can tell Eli something behind the scenes without my voice being broadcast live.
In the image below are my equalizer settings for the curious; this setup makes my voice subtly deeper. Overdo it, and you’ll sound like Darth Vader or an accented version of Morgan Freeman. Be very careful.
Since everyone’s audio gear, vocal cords, room, and vocal modulation needs differ, your equalizer setting will likely vary significantly from mine. So, I’m leaving this just as a reference point. Trust your ears and experiment with the settings until you are satisfied.
Remember: Subtler is better. Be so subtle that the difference is almost unnoticeable to an untrained ear. If what you generate sounds considerably different from you, then you have overdone it. And, trust me, it’s very easy to overdo it with one’s own vocals. When you set it up, take a 30-minute break, do something else, and then listen to your audio again. If it sounds odd, you have overshot some settings; readjust them.
And below is my noise gate. Again, this is a representative snapshot and may not meet your needs. Adjust the configuration yourself, and always use your ears.
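Conceptually, a noise gate just mutes the signal whenever it falls below a threshold. Here is a toy sketch of that idea (real gates, including Audio Hijack’s, add attack/release smoothing and work on short windows rather than individual samples, so treat this as illustration only):

```python
def noise_gate(samples, threshold=0.05, attenuation=0.0):
    """Toy noise gate: any sample whose absolute level is below the
    threshold gets attenuated (fully muted by default)."""
    return [s if abs(s) >= threshold else s * attenuation for s in samples]

# Low-level room hiss (|s| < 0.05) is muted; speech-level samples pass.
signal = [0.01, -0.02, 0.4, -0.3, 0.03, 0.5]
print(noise_gate(signal))  # [0.0, -0.0, 0.4, -0.3, 0.0, 0.5]
```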
The green line has the same setup except for the intermediate filtering. It’s the sound coming from our dear guest, Eli. I’m sharing the image below again for your convenience:
Then my voice and Eli’s are combined, compressed, gain-adjusted, and sent to “SPIFFE Loopback Audio” as a sink (see below). SPIFFE Loopback Audio is a virtual audio device.
I created “SPIFFE Loopback Audio” using Loopback; I use it in OBS as an audio source to send to the stream.
Then the background music (arrows coming from the top) and our combined vocals (arrows coming from the bottom) are mixed into a single stream and sent to the “Zoom Mic PassThru” Loopback virtual device to be used as an audio source in Zoom (see below).
This final image above shows how my vocals are combined with the background music and pushed toward the “Volkan Just Mic” virtual audio device. This is what Eli hears.
Pinning Important Audio Controls
One helpful feature of Audio Hijack is that I can float and pin almost everything in the UI. Here are my most frequently used controls, dragged to the top corner of my widescreen.
This feature allows me to hide the convoluted chain of filters from my view during the stream and focus on the few essential elements that matter most.
Recent Changes After Our First Pilot
After our first pilot today, we decided it would be much easier to simplify things by not streaming to Zoom altogether. In addition, using Twitch’s Guest Star feature as an audio source worked better for Eli than Discord’s audio. So, for now, we’ll use Discord only to share Eli’s desktop, and for his camera and audio, we’ll use Twitch’s Guest Star feature.
Considering this, here’s our (quote) “simplified” audio pipeline:
A Dedicated Streaming Machine
To ensure a smooth streaming experience, I use two machines:
- A Mac Mini that I use as a demo/screencast machine. I’ll call it the demo machine.
- And a MacBook Pro that I use specifically for OBS and streaming, and nothing else. I make sure that nothing unnecessary runs on it. I’ll call it the streaming machine.
But why would I need a different machine to stream? Can’t it all be done on the same box? Well, the short answer: if you are using Apple devices, you’ll have to cut many corners to make streaming on a single machine work flawlessly. Using two machines provides better stream quality and makes my life much easier.
For a longer answer: One of the challenges we tackled in our TalkSPIFFE setup is managing the computational load of running demos while live streaming. Since some of our demonstrations can be quite resource-intensive, performing them on the same machine as the live stream could potentially degrade the stream’s quality, leading to lower bitrates, missing frames, and a sub-optimal viewing experience.
To set up this dual-machine configuration, I first connected the HDMI output of my demo machine to an Elgato Cam Link 4K capture card, which is plugged into my streaming machine. Using BetterDisplay, I created a 1080p virtual display on my demo machine to share the required content.
Once connected, the Cam Link 4K identifies itself as an additional display on my demo machine.
I mirrored the virtual monitor onto the Cam Link 4K to have a separate view that I can enlarge on my larger 32” monitor if I need to. In addition, the mirroring puts a copy of the demo window right in front of me, so I won’t have to tilt my head unnecessarily on the live stream.
The following set of images shows what the overall setup looks like.
The feed from the Cam Link 4K is then directed into OBS on my streaming machine. You can check the “Physical Setup” section for additional details on how things connect at the physical layer.
Once things are set up this way, the demo machine shoulders the burden of resource-intensive demos. In contrast, my streaming machine remains dedicated to delivering a smooth streaming experience without system degradation.
Making Stream Management Easier
However, this dual-machine arrangement necessitates fast switching between the two systems during a live stream. To overcome this challenge, I adopted a few strategies:
All of my connections are wired to ensure stable network connectivity. I learned the hard way long ago never to rely on WiFi for such a critical setup.
Remote Connection to the Streaming Machine
To streamline the control process, I remotely connect to my streaming machine from my demo machine. This way, I can use a single keyboard to interact with both machines.
Keyboard and Mouse that Can Connect to Multiple Devices
There can be tricky moments when I need to move my mouse pointer into the virtual monitor – a task that’s impossible when you are remotely connected. To circumvent this, I use a Logi MX Master wireless mouse, which can connect to up to three machines.
Logi MX Master’s ability to instantly switch from one machine to another allows me to directly control the cursor on the streaming machine when required. As a fail-safe, I also keep an old (but in perfect working condition) wired mouse connected to my streaming machine (the MacBook Pro). I never fully trust remote connectivity!
By implementing these strategies, I’ve crafted a resilient system that handles resource-heavy demonstrations while maintaining a high-quality live stream.
This is a testament to the idea that you can overcome the challenges and constraints inherent in live streaming with careful planning and a bit of technological creativity.
The video setup is considerably simpler than the audio one.
First of all, I only need to modify my camera’s properties. I need to change them only because I am using a green screen. Otherwise, I could very well use my camera feed as is.
I’ll also stream Eli’s camera feed as is without any modification.
Besides, I have a pretty good camera (Panasonic HC-V180K) that makes my life much easier.
I just applied some color correction and a chroma key filter to my camera.
Here is the end result:
And here are the settings for the “color correction” and “chroma-key” filters, respectively:
You remember the orange arrows in the “Audio Device Chain” section? I made them using Presentify.
The benefit of Presentify is that it’s not only for annotating static images like this. You can also use Presentify for online classes, video presentations, or tutorials: it lets you annotate your screen and highlight your cursor, even when the application you are drawing over is in full-screen mode.
It’s a great helper, especially when you are demonstrating technical content. It’s one of those tools you don’t know how much you need until you use it.
I already showed the camera setup for myself and my guest Eli.
I have two scenes in OBS that contain those cameras. Having a separate scene for each camera, instead of embedding the cameras directly into other scenes, gives me the flexibility to modify a camera once and have the change reflected in every scene that uses it.
For example, if I add a blue border around the camera view, it will be reflected everywhere the camera view is used; I won’t have to repeat adding that border around the camera to all of the scenes.
In addition, I can apply filters both to the camera and to the scene itself. This layered composition of color correction and chroma key filters gives more flexibility in configuring the sources.
Also, if I have a problem with a camera source like it is scaled incorrectly or not showing up, I can fix it in one scene instead of jumping into every scene that uses the camera and setting it one by one.
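OBS’s nested scenes behave like the classic composite pattern: edit a shared sub-scene once, and every scene that embeds it updates. Here is a toy sketch of that idea (the class, names, and the text-based “rendering” are illustrative, not OBS’s actual API):

```python
class Scene:
    """Toy model of OBS-style scene nesting: a scene's sources may be
    other scenes, so a change to a shared sub-scene shows up everywhere."""
    def __init__(self, name, sources=(), border=None):
        self.name = name
        self.sources = list(sources)
        self.border = border

    def render(self):
        parts = [s.render() for s in self.sources]
        inner = ",".join(parts) if parts else self.name
        return f"[{self.border}|{inner}]" if self.border else f"({inner})"

camera = Scene("my-camera")
talking_head = Scene("talking-head", [camera])
zoom = Scene("Zoom", [talking_head])
fireside = Scene("Fireside", [talking_head])

camera.border = "blue"   # edit the shared camera scene once...
print(zoom.render())     # (([blue|my-camera]))
print(fireside.render()) # ...and every composed scene picks it up
```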
That part clarified, let’s begin with the camera scenes and go all the way up, building larger and larger scenes from sub-scenes.
That’s the source of my camera scene, called “my-camera.”
And that’s the source of Eli’s camera view, which is a browser source from Twitch Guest Star. The view is very creatively named “twitch-guest-slot-1”.
The image source named frame is a blue image to give a distinct border around both of the camera feeds similar to the one below:
After setting up the camera primitives, I compose those “my-camera” and “twitch-guest-slot-1” scenes into larger scenes as camera feeds.
Below is the source layout named “talking-head,” containing the two camera scenes mentioned before.
Again, this “talking-head” scene is a sub-scene that will be composed into different scenes.
Here’s how the scene looks:
Side by Side
Here is the source layout of another scene named “talking-heads-lg.”
It creates a larger layout with our names and some turtle embellishments, again to be embedded into more concrete scenes.
Below you can see what the combined view looks like. Note that there are camera feeds inside the boxes. Right now, they are disabled, so the boxes look black and empty.
Demo Over Zoom
Here’s how the “talking-heads” scene is embedded into a view that shares a full-screen Zoom session (which, on my OBS Scenes panel, is very creatively named… … … “Zoom” 🙂), so that both of our camera feeds stay visible while either Eli or I demonstrate something over Zoom.
Here is the list of sources in this scene:
As you see, even the Asus|Dummy(Zoom) view (highlighted above) is a scene in itself. Here’s its source layout.
The naming suggests that the view reflects a dummy 1080p virtual monitor (created using BetterDisplay), which a real 1080p Asus monitor mirrors.
A virtual display is helpful; I can mirror one of my main displays to it and keep it as a small overview on my large central display, like a rear-view mirror of sorts, so that I don’t miss the action in my peripheral vision when I’m focused on managing scene transitions and other stream properties.
Keeping the screens bound to 1080p helps me quickly share anything on the stream at 1080p resolution. This includes full-screen windows, too: I can drag over a Zoom desktop-sharing session, a remote desktop connection to a different device, or a browser window. I can easily show the audience anything I can see on my streaming machine, in 1080p. Having a spare monitor for streaming things gives a lot of flexibility.
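Capping everything at 1080p also keeps the encoder’s workload predictable, since required bandwidth scales with pixels per second. A common back-of-the-envelope heuristic (a general rule of thumb, not a figure from this setup) multiplies resolution, frame rate, and a bits-per-pixel factor:

```python
def estimate_bitrate_kbps(width, height, fps, bits_per_pixel=0.1):
    """Rough H.264 bitrate estimate via a bits-per-pixel heuristic.
    0.1 bpp is a common rule of thumb for moderately complex content;
    treat the result as a ballpark, not a recommendation."""
    return width * height * fps * bits_per_pixel / 1000

print(round(estimate_bitrate_kbps(1920, 1080, 30)))  # 6221
print(round(estimate_bitrate_kbps(2560, 1440, 30)))  # 11059
```

Roughly 6 Mbps for 1080p30 versus 11 Mbps for 1440p30 is a meaningful difference when you are pushing a live stream upstream.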
I also have a third 1080p monitor that I pair with a dummy monitor and an Elgato Cam Link 4K so that I can share my demo machine’s desktop with ease:
Here is how to compose my desktop into any view:
And here are the details of the “(my desktop) cam link4k” view highlighted above:
It has the Elgato Cam Link 4K as its input source and nothing else. Having a view with a single source might look unnecessary; however, this setup gives me the flexibility to manage the camera, and if something happens to the camera view, I’ll only have to fix it in a single place.
The Fireside Chat
Here’s the “Fireside Chat” screen, where Eli’s camera and mine are displayed side by side. Again, the cameras are off in this screenshot, but both of our camera feeds will be there in the live stream.
Here is the source layout of the above scene.
And here is what the twitch-guest-slot-1 sub-scene looks like:
It has a teal frame and a browser source from Twitch Guest Star.
The Big Red Clock
Yes, the clock is green to fit the overall theme, but the application’s name is The Big Red Clock, and it does its job amazingly well. It’s an app I use as a countdown timer before the show starts or whenever we need a break.
To make it blend in better, I add the following color key filters:
And for extra flexibility, I manage the big red clock in its own scene as well:
We use these “break screens” during the show whenever needed.
Aside from positioning the Big Red Clock, not much configuration is needed. I’ll share them here for the sake of completeness.
Streaming from Twitch to Zoom
Sharing the Zoom session on Twitch is easy: there is a scene for that. However, sending a copy of the Twitch stream back to Zoom is not that straightforward.
Luckily, there is a quick hack: OBS Virtual Camera.
Press this big fat button, and you can send the entire stream as a camera source to any application that accepts virtual camera sources.
Here’s how the camera feed looks on a movie player that can display camera views too:
My Mini Studio
Here’s how things look from the outside:
Let me unpack the mess for you:
- I have a green screen behind me.
- A light source to my left is designed to diffuse light gently. And since I have a white, low ceiling, it creates evenly spread, natural-looking lighting on me and on the curtain without casting hard shadows. When possible, bouncing light off surfaces is always better than pointing it directly at the subject (the “subject,” in this case, being me sitting in the chair).
- There’s a warm fill light below; you can get the fixture (the ISJAKT LED floor uplighter) from IKEA.
- There are two Neewer key lights in front of me. They are dirt cheap and as effective as their expensive counterparts. But of course, you get what you pay for: there are no remote controls, you cannot change the color of the LEDs mid-stream, and you need to adjust their levels manually by pressing buttons. Other than that, though, the lights’ quality, spread, and intensity are excellent, and nothing competes with their price-to-performance ratio.
- There’s a camera directly focused on me at an angle from above. It’s my faithful Panasonic HC-V180K.
- I have one sizeable curved monitor as my main stage and two 1080p monitors to display my guest’s screen and my demo machine’s desktop, respectively.
- The large monitor is connected to my demo machine, from which I establish a remote desktop connection to the streaming machine.
- If needed, there is a button below the monitor that lets me switch the view to the streaming machine’s desktop without having to connect to it remotely, as a backup measure.
Video and Audio Capture Layout
To put things into more perspective, here’s a schematic of the stream-related hardware that I use:
I like Cam Link 4K for its small form factor, but HD60S+ offers a pass-through HDMI output, which can sometimes be helpful.
There’s also a CloudLifter as a signal booster. I use it to boost the gain of the audio signal, keep it clean, and improve the overall signal-to-noise ratio.
But why do I need that gain boost? While the Shure SM7B is known for its rich and detailed sound (and hefty price tag), it also has a relatively low output level compared to many other microphones. This means it requires more gain from the preamp in the audio interface or mixer to produce a strong, clean signal.
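To put that boost in numbers: decibels are logarithmic, so the roughly +25 dB of clean gain usually quoted for Cloudlifter-style inline preamps corresponds to multiplying the mic’s output voltage by about eighteen. A quick sketch of the conversion:

```python
def db_to_voltage_ratio(db):
    """Linear voltage (amplitude) ratio for a given dB gain."""
    return 10 ** (db / 20)

# A ~+25 dB inline booster multiplies the mic's output voltage ~18x,
# letting the interface preamp stay in its clean, low-noise range.
print(round(db_to_voltage_ratio(25), 1))  # 17.8
```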
Camera and Lighting Setup
I use a Panasonic HC-V180K camcorder to stream myself. It’s known for its superior performance under good lighting and has never failed me once so far. The Panasonic HC-V180K feeds directly into the Elgato HD60S+ capture card, ensuring a crystal-clear, high-definition video feed.
Lighting is essential to any high-quality video setup, and ours is no exception. My Neewer key lights are positioned strategically behind the camera, as you see below:
But… but… but! There’s something else. What’s this little dude?
Well, meet Perry the Platypus!
I use him as an odd trick to complement my stream.
But how, or before that, why?!
Because to connect with your audience and establish an intimate streaming experience, you have to make eye contact with the camera.
The problem is: Making eye contact with a camera feels weird; it feels like looking in the eye of a Cyclops, and it gets irritating (for me, at least).
That’s where Perry helps. Instead of the camera, I see eye-to-eye with him, and I look around the vicinity of the camera. Since the camera is far away, and Perry sits within a few degrees of my line of sight, it’s as good as looking into the camera itself.
Just look at him; how can you say no to this little troublemaker 🙂:
Aside from the key lights, I use a white oval light source to achieve a more natural and even light spread with reduced shadows.
This light bounces off the ceiling, subtly illuminating me and the green screen. It allows greater flexibility when adjusting the camera’s chroma key and color correction filters in OBS.
And finally, to add depth and warmth to the video, I use a small fill light. Emitting a warm off-white yellow light, it delicately balances any shadows on my face and adds a touch of warmth to the overall video.
Of course, this is only possible with the final piece of our video setup: the green screen. Acting as a backdrop, it allows me to overlay any desired background during the streaming process, lending an additional layer of professionalism and versatility.
For your needs, any green screen would do. I use Instahibit Retractable Green Screen Backdrop.
Smoothing the Acoustics
Sound in a room gets bouncy, especially when that room is not a professional recording studio. In the quest for crisp, clear sound, I have taken measures to address potential acoustic challenges, too. One of the essential steps in this direction is using acoustic canvas panels.
Strategically placed around the room, these panels are vital in managing the sound environment. Muffling the sounds bouncing off the walls significantly reduces any potential echo or reverb. This results in a clearer and more professional-sounding audio output for our stream.
Playing the right sound effects at the right moment during the stream is also a lot of fun. For that, I use Farrago, which, to me, is the best rapid soundboard ever: Robust, easy to use, and integrates with almost everything, including your midi keyboard.
After all, no stream is fun without a bleating goat in it!
Bonus: Game Streaming
This one is not directly related to TalkSPIFFE, but I use it when I stream games. And Twitch is not much fun if you don’t stream video games every once in a while.
Since video games use many system resources, I follow a similar strategy and use my capture card to stream my demo computer’s desktop to OBS on my streaming computer.
And for the audio, I use Audio Hijack to redirect the game audio to the capture card and then from the capture card to one of the virtual audio devices that OBS is listening to.
The audio chain below links my game audio onto the capture card:
And this chain links the sound from the capture card to the loopback audio that OBS takes as an input source.
Sending audio through the capture card does not have to be game-related. You can get clever with it. For example, I’m thinking of using the Discord audio in my demo machine instead of the streamer machine and shoving its audio through the capture card:
Then, I can use it as an audio source instead of the Guest Star audio that comes from Twitch, and disable the “guest star” Chrome audio input entirely, as shown below:
That would mean running one less application on the stream machine, which would ensure an even smoother streaming experience.
Finally, here is how the game looks in OBS; the game audio streaming through the Vocals track is shown below:
Remember that every stream is different, and every streamer’s setup, needs, and wants will differ, too. Still, I hope this article can inspire some of you and serve as a starting point in your streaming journey.
And things change. There is no perfect setup; you keep on adjusting. For example, after our first pilot we realized that streaming to Zoom caused some audio echo/feedback issues on the “guest” end of things, and we decided it’s not worth it at the moment for two reasons:
- First, since we don’t have a very large audience (yet), it’s not worth splitting it across multiple venues. We can use Twitch as the single place to stream, and the SPIFFE Slack workspace for questions. That unites the community and results in people exchanging comments and ideas in the same medium instead of spreading the creative juice across several mediums.
- Second, it makes things simpler to manage, and with fewer moving parts there’s less room for error, too.
That said, the world of live-streaming is an ever-evolving landscape, one that’s limited only by the boundaries of your imagination. Whether you’re a seasoned streamer or just dipping your toes into these waters, remember that every stream is an opportunity to learn, grow, and create unforgettable experiences for your viewers.
Keep pushing the boundaries, and may the source be with you 🦄.