My voice chat solution is a peer-to-peer embedded voice chat plugin for Unreal Engine. It has two ways to manage audio (the embedded client, or Audio Capture combined with an Audio Component or a WS Group Component) and three ways to manage connections (Connection and Channel, Group Component, or WS Group Component).
Initialize audio devices
Audio devices must be initialized for recording and playback before voice chat can start.
First, we need to construct an embedded voice chat client.
The embedded voice chat client is used to manage audio devices.
Then, call initialize on the embedded voice chat client.
After initialization succeeds, call get input / output devices to obtain managers for the input devices and output devices.
With these managers we can get the available devices, get the default device, get / set the active device, and refresh the device list.
Once an active input device is set, we can also query whether the user is talking from the embedded voice chat client.
When audio is no longer needed, call terminate to turn off the audio devices.
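The steps above can be sketched as pseudocode in Unreal-style C++. All class and function names below are assumptions derived from the descriptions in this section; the plugin's actual API may differ:

```cpp
// Hypothetical sketch of the device initialization flow described above.
UEmbeddedVoiceChatClient* Client = NewObject<UEmbeddedVoiceChatClient>();
if (Client->Initialize())
{
    // Managers for the input and output devices.
    UVoiceChatDeviceManager* Input  = Client->GetInputDevices();
    UVoiceChatDeviceManager* Output = Client->GetOutputDevices();

    TArray<FString> Mics = Input->GetAvailableDevices();
    Input->SetActiveDevice(Input->GetDefaultDevice());

    // Only meaningful once an active input device is set.
    bool bTalking = Client->IsTalking();
}
// ...later, when audio is no longer needed:
Client->Terminate();
```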
Audio capture is a newer solution for capturing players' audio input for voice chat. It is based on Unreal Audio Capture and has broader device support than the embedded voice chat client, including USB headphones.
To use audio capture, first construct an embedded voice chat audio capture.
We can then list all input devices by calling get capture devices available on the audio capture we just created.
Then, we can use a for each loop to iterate over all available devices.
We can also call get capture device info with the audio capture and a device index to get the details of an input device.
After the player selects the input device he wants to use, call open capture stream and start capturing audio with the audio capture and the device index to begin capturing the player's audio input.
When audio input is no longer needed, call stop capturing audio and close stream on the audio capture to stop capturing the player's audio.
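The capture flow above, as a pseudocode sketch in Unreal-style C++ (names are assumptions based on the prose, not the plugin's confirmed API):

```cpp
// Hypothetical sketch of the audio capture flow.
UEmbeddedVoiceChatAudioCapture* Capture = NewObject<UEmbeddedVoiceChatAudioCapture>();

TArray<FCaptureDeviceInfo> Devices = Capture->GetCaptureDevicesAvailable();
for (int32 Index = 0; Index < Devices.Num(); ++Index)
{
    FCaptureDeviceInfo Info = Capture->GetCaptureDeviceInfo(Index);
    // e.g. show Info.DeviceName in a device-selection UI
}

const int32 DeviceIndex = 0; // in practice, the player's selection
Capture->OpenCaptureStream(DeviceIndex);
Capture->StartCapturingAudio(DeviceIndex);

// When input is no longer needed:
Capture->StopCapturingAudio();
Capture->CloseStream();
```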
Audio component is a solution for audio output. It works together with the group component and uses the Unreal audio system to render audio, so we can apply attenuation, sound classes, and sound effect chains to it. By reusing the settings from our other Unreal audio components, the voice chat output can sound consistent with the environment sounds we designed.
The audio component uses Unreal's 3D engine to render positional audio based on its position; each audio component represents the audio from one player, connection, or channel. We can add an audio component to each player character to play that player's voice. If we don't need positional or proximity audio, we can simply place all the audio components on the player pawn that belongs to the client.
To use an audio component, bind it to a group component by calling bind audio component with the group component and the audio component.
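A minimal pseudocode sketch of the binding step, in Unreal-style C++ (component and function names are assumptions; the shared attenuation asset is hypothetical):

```cpp
// Hypothetical sketch: one audio component per speaking player,
// bound to the group component that manages the connection.
UAudioComponent* Voice = PlayerCharacter->FindComponentByClass<UAudioComponent>();
Voice->AttenuationSettings = SharedVoiceAttenuation; // reuse project-wide settings
GroupComponent->BindAudioComponent(Voice);
```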
Peer to peer network
This plugin uses a peer-to-peer network to connect each pair of clients.
Peer to peer means the clients connect to each other directly, but it doesn't mean that peer to peer needs no server at all.
To make peer to peer work, we need a STUN server. The STUN server figures out what kind of network each client is behind and each layer of routers in front of it; this information is expressed as a description and candidates.
Then, we need to transfer the description and candidates between the two clients to connect them.
In Unreal Engine, we can use Unreal RPCs to transfer these data.
If the STUN server finds that the two clients can't establish a direct peer-to-peer connection, we need a TURN server as a relay between them.
All of this is handled automatically by my voice chat plugin.
To establish a peer-to-peer connection between two clients, first call create connection.
Then, construct a connection handler.
Register the connection handler on the connection by calling register connection handler with the connection handler.
After that, call connect with the ICE servers, port range begin, and port range end.
The ICE servers should contain the URLs of the STUN and TURN servers. Port range begin and port range end indicate the TURN server's port range.
We can set up our STUN and TURN servers alongside our dedicated server in GameLift, so that as long as a client can reach the dedicated server, it should also be able to reach the STUN and TURN servers.
After we call connect, the client receives a description whose description type is offer via the on local description event in the connection handler.
We should override on local description to send the description to the other client.
After the second client receives the remote offer description, it needs to call create connection, construct a connection handler, register the connection handler, and connect, just like the first client.
Then, it calls set remote description with the description type and description from the first client.
The second client will also receive a description from the on local description event in its connection handler. We need to send that description to the first client and call set remote description there with the description type and description.
After the descriptions have been exchanged by both clients, the STUN server gathers candidates for both clients until a matching pair is found.
The on local candidate event in the connection handler is triggered when the client gathers a candidate. Override on local candidate to receive the candidate and send it to the other client.
Once a client receives a candidate from the other client, call add remote candidate with the mid and candidate from the other client.
We can call close to close a connection.
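The whole signaling exchange can be summarized as pseudocode in Unreal-style C++. Every name here is an assumption based on the steps above, and SendDescriptionToOtherClient / SendCandidateToOtherClient stand in for whatever transport you use (for example, an Unreal RPC):

```cpp
// Hypothetical sketch of the signaling flow on one client.
class FMyConnectionHandler : public FConnectionHandler
{
    // "offer" on the first client, "answer" on the second.
    virtual void OnLocalDescription(const FString& Type, const FString& Description) override
    {
        SendDescriptionToOtherClient(Type, Description);
    }

    virtual void OnLocalCandidate(const FString& Mid, const FString& Candidate) override
    {
        SendCandidateToOtherClient(Mid, Candidate);
    }
};

UVoiceChatConnection* Connection = CreateConnection();
Connection->RegisterConnectionHandler(MakeShared<FMyConnectionHandler>());
Connection->Connect(IceServers, PortRangeBegin, PortRangeEnd); // triggers OnLocalDescription

// When the other client's description and candidates arrive:
Connection->SetRemoteDescription(RemoteType, RemoteDescription);
Connection->AddRemoteCandidate(RemoteMid, RemoteCandidate);

// When finished:
Connection->Close();
```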
A data channel is used to transfer data between two clients inside a connection.
We can create a data channel after creating the connection on the first client.
Call create channel on a connection with a channel id.
Then, construct a channel handler.
After that, call register channel handler with the channel handler.
Once the connection status becomes connected, the on data channel event in the second client's connection handler is triggered with the channel id and channel. The second client only needs to construct a channel handler and register it.
We can also call close to close a channel.
We can call is talking to check whether the other client is talking.
To send a text message to the other client, call send with the data (the message to send).
The other client receives the message via the channel handler's on message event. We need to create a subclass of the channel handler and override on message to read the message.
We can also set / get the volume of the channel.
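A pseudocode sketch of channel creation and messaging on the first client, in Unreal-style C++ (class and function names are assumptions; Connection is the connection created earlier):

```cpp
// Hypothetical sketch of channel creation and text messaging.
class FMyChannelHandler : public FChannelHandler
{
    virtual void OnMessage(const FString& Message) override
    {
        // Handle text received from the other client.
    }
};

UVoiceChatChannel* Channel = Connection->CreateChannel(ChannelId);
Channel->RegisterChannelHandler(MakeShared<FMyChannelHandler>());

Channel->Send(TEXT("Hello!"));          // text message to the other client
bool bTalking = Channel->IsTalking();   // is the other client speaking?
Channel->SetVolume(0.8f);
```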
We can set the channel type and 3D properties of the channel.
There are three channel types: non positional, proximity, and positional.
Non positional means the voice volume stays constant and is not attenuated with distance.
Proximity means the voice is attenuated with distance; the 3D properties control how the voice is rendered.
Positional means the voice is not only attenuated with distance but also rendered directionally with a 7.1-channel effect.
If we use the proximity or positional channel type, we need to call update 3d position and update listener 3d position on the channel.
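For proximity or positional channels, the position updates above would typically run every tick. A pseudocode sketch in Unreal-style C++ (all names are assumptions based on this section):

```cpp
// Hypothetical sketch: keep positional audio up to date.
Channel->SetChannelType(EVoiceChannelType::Positional);
Channel->Set3DProperties(My3DProperties);

// Called each frame for proximity / positional channels:
Channel->Update3DPosition(RemoteCharacter->GetActorLocation());
Channel->UpdateListener3DPosition(LocalCharacter->GetActorLocation());
```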
The latest version of the voice chat plugin introduces the group component feature, which makes connecting players much easier and simpler. We can get rid of tricky RPC functions and just use join / leave group to connect and disconnect clients.
To use the group component, add an embedded voice chat group component to the player character and call set ice servers with the ICE servers, port range begin, and port range end.
Then, call join / leave group on the client's player character's embedded voice chat group component with a group name. Player characters with the same group name establish a p2p connection.
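The group component flow as a short pseudocode sketch in Unreal-style C++ (component and function names are assumptions; the group name is an example):

```cpp
// Hypothetical sketch of the group component flow.
UEmbeddedVoiceChatGroupComponent* Group =
    PlayerCharacter->FindComponentByClass<UEmbeddedVoiceChatGroupComponent>();

Group->SetIceServers(IceServers, PortRangeBegin, PortRangeEnd);
Group->JoinGroup(TEXT("Lobby-42"));   // connects to everyone in the same group
// ...
Group->LeaveGroup(TEXT("Lobby-42"));
```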
WS group component
The WS group component is similar to the group component. It handles connections and channels for us with an additional server and doesn't rely on Unreal RPCs, so we can use it without them.
When we develop a game with a big team, someone may use the possess function or other functions that break RPCs. That's why we designed the WS group component. It is a much easier, more convenient, and more reliable way to build in-game voice chat; it also means players don't even need to join the same game server to voice chat with each other.
The WS group component also handles audio output for us. Like the audio component, it uses the Unreal audio system to render audio, so we can apply attenuation, sound classes, and sound effect chains to it, and reuse the settings from our other Unreal audio components to keep the voice chat output consistent with the environment sounds we designed.
To use the WS group component, we need to set up an additional server and add the WS group component to the player pawn that belongs to the client.
We provide a CloudFormation template with auto scaling support, so the additional server can be created in a few clicks and then left alone. If we want to go further, we can add extra security settings to the voice chat server.
After setting up the server, call connect ws server with the DNS name of the load balancer from the CloudFormation stack resources we just created.
Then, call set ice servers and join / leave group, the same as with the group component, to connect with others in the same group.
Be aware that the group name is global: if we leave it as "Default", players on other servers will join the same group, and players can hear voice chat from other servers.
To get all the channels handled by the WS group component, call get channels on it.
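Putting the WS group component steps together as a pseudocode sketch in Unreal-style C++ (all names and the DNS name are assumptions / placeholders):

```cpp
// Hypothetical sketch of the WS group component flow.
UWsGroupComponent* WsGroup = PlayerPawn->FindComponentByClass<UWsGroupComponent>();

// DNS name of the load balancer from the CloudFormation stack.
WsGroup->ConnectWsServer(TEXT("my-voicechat-lb.example.com"));
WsGroup->SetIceServers(IceServers, PortRangeBegin, PortRangeEnd);

// Use a unique name, not "Default" -- group names are global.
WsGroup->JoinGroup(TEXT("Match-1234"));

// Inspect the channels the component currently manages:
TArray<UVoiceChatChannel*> Channels = WsGroup->GetChannels();
```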