openvidu-android 🔗

Check it on GitHub

A client-side only native Android application built with Java, using the official Google WebRTC library.

If this is your first time using OpenVidu, it is highly recommended to start with the openvidu-hello-world tutorial first: this is a native Android app and is a bit more complex for OpenVidu starters.

OpenVidu does not provide an Android client SDK yet, so this application directly implements OpenVidu Server RPC protocol. In other words, it internally implements what the openvidu-browser library does. Everything about this implementation is explained in section Using OpenVidu Server RPC protocol.

Understanding this tutorial 🔗

OpenVidu is composed of the three modules displayed in the image above in its insecure version.

  • openvidu-android: Android application built with Java, connected to OpenVidu through WebSocket.
  • openvidu-server: Java application that controls Kurento Media Server.
  • Kurento Media Server: Server that handles low-level operations of media flow transmissions.

Running this tutorial 🔗

This tutorial is compatible with Android >= 5.0 (API level >= 21)

To deploy the Android APK you need Android Studio, an Android device (recommended) or an Android emulator, and the Android SDK installed. You can download Android Studio here. You can also check the official Android Studio guide.

Once Android Studio is set up, continue with the following steps:

1) Clone the repo:

git clone -b v2.22.0

2) Open Android Studio and import the project (openvidu-tutorials/openvidu-android).

3) Now you need the local IP address of your PC in your LAN, which we will use in points 4) and 5) to configure OpenVidu Server and your app. On Linux/macOS you can get it by running the following command in your shell:

awk '/inet / && $2 != ""{print $2}' <(ifconfig)

4) OpenVidu Server must be up and running on your development machine. The easiest way is running this Docker container, which wraps both openvidu-server and Kurento Media Server (you will need Docker CE). Set property DOMAIN_OR_PUBLIC_IP to the IP obtained in point 3). In the example below, that means replacing YOUR_OPENVIDU_IP in -e DOMAIN_OR_PUBLIC_IP=YOUR_OPENVIDU_IP with that IP.

# WARNING: this container is not suitable for production deployments of OpenVidu Platform

docker run -p 4443:4443 --rm -e OPENVIDU_SECRET=MY_SECRET -e DOMAIN_OR_PUBLIC_IP=YOUR_OPENVIDU_IP openvidu/openvidu-server-kms:2.22.0

5) In Android Studio, you must also indicate the OpenVidu Server URL to the app. To do that, in the Project Files view open the file app/src/main/res/values/strings.xml. The value of default_openvidu_url must be the URL of your OpenVidu Server. The complete URL is https://DOMAIN_OR_PUBLIC_IP:4443/, where DOMAIN_OR_PUBLIC_IP is the IP address configured in your OpenVidu Platform service.

6) Connect the Android device to the same LAN as your PC.

7) Connect the Android device to the PC with a USB cable. You must enable USB Debugging and give permissions (check out the official Android docs).

8) Run the tutorial. In Android Studio, select the app from the run/debug configurations drop-down menu in the toolbar. In the Select Deployment Target window, select your device, and click OK. Finally, click Run.

Understanding the code 🔗

This is an Android project generated with Android Studio, and therefore you will see lots of configuration files and other stuff that doesn't really matter to us. We will focus on the following files under app/java folder:

  • SessionActivity.java: this class defines the only Android activity of the app.
  • Participant.java: holds the participants' info, such as connection information and their UI elements. This is the parent class of RemoteParticipant and LocalParticipant.
  • Session.java: manages the collection of Participant objects and the behavior of the SessionActivity layout, and takes care of the creation of PeerConnection objects.
  • CustomWebSocket.java: the negotiation with openvidu-server takes place in this class. Its responsibility is to send RPC methods and listen to openvidu-server events through a websocket connection. To sum up, it implements OpenVidu Server RPC protocol.

WebSocket address, session name and participant name 🔗

As stated above in Running this tutorial, you have to modify the value of default_openvidu_url with the IP of your PC in file res > values > strings.xml. For example:

<string name="default_openvidu_url">https://DOMAIN_OR_PUBLIC_IP:4443/</string>

Besides, you can change the default values for the local participant name (default_participant_name) and session name (default_session_name). These will appear as default values in the form to connect to a session.

<string name="default_session_name">SessionA</string>
<string name="default_participant_name">Participant</string>

Get a token from OpenVidu Server 🔗

WARNING: This is why this tutorial is an insecure application. We need to ask OpenVidu Server for a user token in order to connect to our session. This process should entirely take place in our server-side, not in our client-side. But due to the lack of an application backend in this tutorial, the Android app itself will perform the POST operations to OpenVidu Server.


private void getToken(String sessionId) {
    // See next point to learn how to connect to the session using 'token'
    ...
}

Now we need a token from OpenVidu Server. In a production environment we would perform these operations in our application backend, by making use of the REST API, OpenVidu Java Client or OpenVidu Node Client. Here we have implemented the POST requests to OpenVidu Server in a method getToken(). Without going into too much detail, this method performs two POST requests to OpenVidu Server, passing the OpenVidu Server secret to authenticate them. We use an http client we have wrapped in class CustomHttpClient.

  • The first request performs a POST to /openvidu/api/sessions (we send a customSessionId field to force the id of the session to be the value retrieved from the view's form; this way we don't need a server side to connect multiple users to the same session)
  • The second request performs a POST to /openvidu/api/sessions/<sessionId>/connection (the path requires the sessionId to assign the token to that same session)

You can inspect this method in detail in the GitHub repo.
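As a rough, hypothetical sketch of these two requests (this is not the tutorial's actual CustomHttpClient, and the URL, secret and session id below are placeholder values), they can be built with the standard java.net.http API. OpenVidu Server authenticates REST calls with HTTP Basic auth, using user OPENVIDUAPP and the configured secret as password:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class TokenRequestSketch {

    // Placeholder values: use your own OpenVidu Server URL and secret
    static final String OPENVIDU_URL = "https://192.168.1.100:4443";
    static final String OPENVIDU_SECRET = "MY_SECRET";

    // Basic auth header: user OPENVIDUAPP, password = OpenVidu secret
    static String basicAuthHeader() {
        String credentials = "OPENVIDUAPP:" + OPENVIDU_SECRET;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    // First request: POST /openvidu/api/sessions, forcing the session id
    static HttpRequest createSessionRequest(String customSessionId) {
        String body = "{\"customSessionId\": \"" + customSessionId + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(OPENVIDU_URL + "/openvidu/api/sessions"))
                .header("Authorization", basicAuthHeader())
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    // Second request: POST /openvidu/api/sessions/<sessionId>/connection,
    // whose response contains the token
    static HttpRequest createConnectionRequest(String sessionId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(OPENVIDU_URL + "/openvidu/api/sessions/" + sessionId + "/connection"))
                .header("Authorization", basicAuthHeader())
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
    }
}
```

In a production app these requests would be sent from your backend, never shipping the secret inside the client.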

When token available start the process to connect to openvidu-server 🔗

Once we have gotten the token, we can set up our session object, our camera and the websocket. We create our session, our localParticipant and capture the camera:


private void getTokenSuccess(String token, String sessionId) {
    // Initialize our session object
    session = new Session(sessionId, token, views_container, this);

    // Initialize our local participant and start local camera
    String participantName = participant_name.getText().toString();
    LocalParticipant localParticipant = new LocalParticipant(participantName, session, this.getApplicationContext(), localVideoView);
    localParticipant.startCamera();

    runOnUiThread(() -> {
        // Update local participant view
        main_participant.setPadding(20, 3, 20, 3);
    });

    // Initialize and connect the websocket to OpenVidu Server
    startWebSocket();
}

To configure the session, we are going to initialize and build the PeerConnectionFactory. This is the way to initialize WebRTC peer connections with the official Google WebRTC library for Android.


// Creating a new PeerConnectionFactory instance
PeerConnectionFactory.InitializationOptions.Builder optionsBuilder = PeerConnectionFactory.InitializationOptions.builder(activity.getApplicationContext());
PeerConnectionFactory.InitializationOptions opt = optionsBuilder.createInitializationOptions();
PeerConnectionFactory.initialize(opt);
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();

// Using software encoder and decoder
final VideoEncoderFactory encoderFactory = new SoftwareVideoEncoderFactory();
final VideoDecoderFactory decoderFactory = new SoftwareVideoDecoderFactory();

peerConnectionFactory = PeerConnectionFactory.builder()
        .setVideoEncoderFactory(encoderFactory)
        .setVideoDecoderFactory(decoderFactory)
        .setOptions(options)
        .createPeerConnectionFactory();

Capture the camera 🔗

Android provides an easy way to use the Camera API. This API includes support for the various cameras and camera features available on devices, allowing you to capture pictures and videos in your application. In the end, we need to store the video track.


public void startCamera() {

    final EglBase.Context eglBaseContext = EglBase.create().getEglBaseContext();
    PeerConnectionFactory peerConnectionFactory = this.session.getPeerConnectionFactory();

    // Create AudioSource
    AudioSource audioSource = peerConnectionFactory.createAudioSource(new MediaConstraints());
    this.audioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);

    surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext);

    // Create VideoCapturer
    VideoCapturer videoCapturer = createCameraCapturer();
    VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
    videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver());
    videoCapturer.startCapture(480, 640, 30);

    // Create VideoTrack
    this.videoTrack = peerConnectionFactory.createVideoTrack("100", videoSource);

    // Display in localView
    this.localVideoView.setMirror(true);
    this.videoTrack.addSink(this.localVideoView);
}

The Camera class was deprecated in API level 21. Android introduced Camera2 with Android 5.0 (API level 21). As the official Android documentation recommends, we should use Camera2 on supported devices.


private VideoCapturer createCameraCapturer() {
    final CameraEnumerator enumerator;
    if (Camera2Enumerator.isSupported(this.context)) {
        enumerator = new Camera2Enumerator(this.context);
    } else {
        enumerator = new Camera1Enumerator(false);
    }
    final String[] deviceNames = enumerator.getDeviceNames();
    VideoCapturer videoCapturer;

    // Try to find front facing camera
    for (String deviceName : deviceNames) {
        if (enumerator.isFrontFacing(deviceName)) {
            videoCapturer = enumerator.createCapturer(deviceName, null);
            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }

    // Front facing camera not found, try something else
    for (String deviceName : deviceNames) {
        if (!enumerator.isFrontFacing(deviceName)) {
            videoCapturer = enumerator.createCapturer(deviceName, null);
            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }
    return null;
}

We also have to think about the media permissions. You can take a look at the Android permissions under section Android specific requirements.

Connect the websocket to OpenVidu Server 🔗

At this point, we will establish a connection between openvidu-server and our Android app through the websocket. This way we will be able to consume OpenVidu Server RPC protocol to interact with the session (in the future, when an OpenVidu Android SDK is available, this won't be necessary). We do so in the background as an async task so the main execution thread is not blocked.


protected Void doInBackground(SessionActivity... sessionActivities) {
    try {
        WebSocketFactory factory = new WebSocketFactory();

        // Returns an SSLContext object that implements the specified secure socket protocol
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, trustManagers, new SecureRandom());
        factory.setSSLContext(sslContext);

        // Set the flag which indicates whether the hostname in the server's
        // certificate should be verified or not
        factory.setVerifyHostname(false);

        // Connecting the websocket to OpenVidu URL
        websocket = factory.createSocket(getWebSocketAddress(openviduUrl));
    } catch ( ... )
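The helper getWebSocketAddress() is not shown above. As a hypothetical sketch (assuming the RPC websocket endpoint hangs from the /openvidu path of the server URL, and ignoring edge cases such as query strings), it just swaps the http(s) scheme for ws(s) and appends the endpoint path:

```java
// Hypothetical sketch of getWebSocketAddress(): turn the OpenVidu Server
// https URL into the wss address of its RPC websocket endpoint.
class WebSocketAddress {
    static String getWebSocketAddress(String openviduUrl) {
        String address = openviduUrl;
        // Swap the scheme: https -> wss, http -> ws
        if (address.startsWith("https://")) {
            address = "wss://" + address.substring("https://".length());
        } else if (address.startsWith("http://")) {
            address = "ws://" + address.substring("http://".length());
        }
        // Append the websocket endpoint path
        if (!address.endsWith("/")) {
            address += "/";
        }
        return address + "openvidu";
    }
}
```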

Using OpenVidu Server RPC protocol 🔗

Taking the references from OpenVidu Server RPC protocol, we will be able to call OpenVidu Server methods and receive events from OpenVidu Server.

Listening to OpenVidu Server events 🔗

The app implements a method to handle event messages received from openvidu-server. This is essential in order to know when ICE candidates arrive, when a new user joins the session, when a user publishes a video to the session or when a participant leaves the session.


private void handleServerEvent(JSONObject json) throws JSONException {
    if (!json.has(JsonConstants.PARAMS)) {
        Log.e(TAG, "No params " + json.toString());
    } else {
        final JSONObject params = new JSONObject(json.getString(JsonConstants.PARAMS));
        String method = json.getString(JsonConstants.METHOD);
        switch (method) {
            case JsonConstants.ICE_CANDIDATE:
                iceCandidateEvent(params);
                break;
            case JsonConstants.PARTICIPANT_JOINED:
                participantJoinedEvent(params);
                break;
            case JsonConstants.PARTICIPANT_PUBLISHED:
                participantPublishedEvent(params);
                break;
            case JsonConstants.PARTICIPANT_LEFT:
                participantLeftEvent(params);
                break;
            default:
                throw new JSONException("Unknown method: " + method);
        }
    }
}
  • iceCandidate: this event brings a new ICE candidate generated in openvidu-server. We must include it in the proper PeerConnection object (we receive ICE candidates for our local PeerConnection and for each remote PeerConnection). To avoid timing problems, the application stores the received ICE candidates until that PeerConnection state is STABLE. Whenever it is reached, it processes all of them at once.
  • participantJoined: this event tells us a new participant has joined our session. We initialize a new PeerConnection object (so we may receive the new user's camera stream) and a new video element in the UI.
  • participantPublished: this event tells us a user has started sending a video to the session. We must start the ICE negotiation for receiving the new video stream over the proper and already initialized PeerConnection object. We do so by simply following WebRTC protocol: creating and setting a local SDP offer, sending it to openvidu-server with RPC method receiveVideoFrom and setting the answer received as remote SDP description of this PeerConnection.
  • participantLeft: dispatched when some user has left the session. We simply dispose the proper PeerConnection and update our view.
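The ICE-candidate buffering described above can be sketched as follows (a hypothetical, simplified illustration, not the tutorial's actual classes): candidates received before the PeerConnection reaches a STABLE signaling state are queued, then drained all at once when it does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative buffer for ICE candidates arriving before the
// PeerConnection is ready to accept them.
class IceCandidateBuffer<C> {
    private final List<C> pending = new ArrayList<>();
    private boolean stable = false;

    // Called for each iceCandidate event coming from openvidu-server
    synchronized void onCandidate(C candidate, Consumer<C> addToPeerConnection) {
        if (stable) {
            addToPeerConnection.accept(candidate); // connection ready: add directly
        } else {
            pending.add(candidate);                // not ready yet: buffer it
        }
    }

    // Called when the PeerConnection signaling state becomes STABLE
    synchronized void onStable(Consumer<C> addToPeerConnection) {
        stable = true;
        pending.forEach(addToPeerConnection);      // process all buffered candidates at once
        pending.clear();
    }
}
```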

Sending methods to OpenVidu Server 🔗

Below we list all the RPC methods that this Android app sends to OpenVidu Server. Each one of them will be answered by OpenVidu Server with a specific response. They must be properly processed and usually a new flow of method calls will follow the reception of these answers. We will not explain in detail every one of them to keep the length of this tutorial under control, but you can easily follow the flow of method calls in the source code.

Joining a session with joinRoom method 🔗

Once the websocket connection is established, we need to join the session. By sending a JSON-RPC method joinRoom with the following parameters we'll be able to connect to the session:


public void joinRoom() {
    Map<String, String> joinRoomParams = new HashMap<>();

    // Setting the joinRoom parameters
    joinRoomParams.put(JsonConstants.METADATA, "{\"clientData\": \"" + this.session.getLocalParticipant().getParticipantName() + "\"}");
    joinRoomParams.put("secret", "");
    joinRoomParams.put("session", this.session.getId());
    joinRoomParams.put("platform", "Android " + android.os.Build.VERSION.SDK_INT);
    joinRoomParams.put("token", this.session.getToken());

    //Sending JSON through websocket specifying 'joinRoom' method.
    this.ID_JOINROOM.set(this.sendJson(JsonConstants.JOINROOM_METHOD, joinRoomParams));
}
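The sendJson() call referenced above is the app's generic sender. As a hypothetical sketch of what it puts on the wire (simplified: no escaping of quotes inside values), each call wraps the method name and parameters in a JSON-RPC 2.0 envelope with an incrementing id, returned so the caller can match the server's response to the method that triggered it:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative builder for the JSON-RPC 2.0 request envelope
class RpcEnvelope {
    private final AtomicLong nextId = new AtomicLong(0);

    // Writes the JSON-RPC request text into 'out' and returns the id used
    long buildRequest(String method, Map<String, String> params, StringBuilder out) {
        long id = nextId.getAndIncrement();
        StringBuilder p = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (p.length() > 0) p.append(',');
            p.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
        }
        out.append("{\"jsonrpc\":\"2.0\",\"method\":\"").append(method)
           .append("\",\"id\":").append(id)
           .append(",\"params\":{").append(p).append("}}");
        return id;
    }
}
```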

As a response we will receive an object with all the existing participants in the session and all their published streams. We first process them as explained in events participantJoined and participantPublished in the previous section Listening to OpenVidu Server events. Then we must publish our own camera by initializing our local PeerConnection and MediaStream and calling the publishVideo RPC method (see next point).

Publishing the camera with publishVideo method 🔗

We need to send a JSON-RPC message through the websocket with the required params as shown below:


public void publishVideo(SessionDescription sessionDescription) {
    Map<String, String> publishVideoParams = new HashMap<>();

    // Setting the publishVideo parameters
    publishVideoParams.put("audioActive", "true");
    publishVideoParams.put("videoActive", "true");
    publishVideoParams.put("doLoopback", "false");
    publishVideoParams.put("frameRate", "30");
    publishVideoParams.put("hasAudio", "true");
    publishVideoParams.put("hasVideo", "true");
    publishVideoParams.put("typeOfVideo", "CAMERA");
    publishVideoParams.put("videoDimensions", "{\"width\":320, \"height\":240}");
    publishVideoParams.put("sdpOffer", sessionDescription.description);

    //Sending JSON through websocket specifying 'publishVideo' method.
    this.ID_PUBLISHVIDEO.set(this.sendJson(JsonConstants.PUBLISHVIDEO_METHOD, publishVideoParams));
}

Subscribing to a remote video with receiveVideo method 🔗

We need to send a JSON-RPC through the websocket with the required params as shown below:


public void receiveVideoFrom(SessionDescription sessionDescription, RemoteParticipant remoteParticipant, String streamId) {
    Map<String, String> receiveVideoFromParams = new HashMap<>();
    receiveVideoFromParams.put("sdpOffer", sessionDescription.description);
    receiveVideoFromParams.put("sender", streamId);

    // Sending JSON through websocket specifying 'receiveVideoFrom' method
    this.IDS_RECEIVEVIDEO.put(
        this.sendJson(JsonConstants.RECEIVEVIDEO_METHOD, receiveVideoFromParams),
        remoteParticipant.getConnectionId());
}

Leaving the session with leaveRoom method 🔗

We need to send a JSON-RPC through the websocket (empty parameters in this case):


public void leaveRoom() {
    // Sending JSON through websocket specifying 'leaveRoom' method (no parameters)
    this.ID_LEAVEROOM.set(this.sendJson(JsonConstants.LEAVEROOM_METHOD, new HashMap<>()));
}

Android specific requirements 🔗

Android apps need to actively ask for permissions in code to access the camera and microphone. By following the steps below we have been able to properly set up the permissions your app will need to work with OpenVidu. The official Android permissions guide covers this in depth.

These configurations are already included in this openvidu-android project, so if you start from here no further configurations are needed. Otherwise, if you want to start a new Android project, you should follow these simple steps:

1) Add required permissions to your manifest file

<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- rest of the manifest -->

</manifest>

2) Check if your application already has the necessary permissions. To do so, call the ContextCompat.checkSelfPermission() method. For example:

if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    // Permission for camera is not granted
}

If the app already has the permission, the method returns PackageManager.PERMISSION_GRANTED and the app can proceed with the operation. If the app does not have the permission, the method returns PackageManager.PERMISSION_DENIED, and the app has to explicitly ask the user for permission.

3) Android provides several methods to request a permission, such as requestPermissions(), as shown in the code snippet below. Calling these methods brings up a standard Android dialog so the user may accept or decline the permissions.

// Here, "this" object is the current activity
if (ContextCompat.checkSelfPermission(this,
        Manifest.permission.READ_CONTACTS)
        != PackageManager.PERMISSION_GRANTED) {

    // Permission is not granted
    // Should we show an explanation?
    if (ActivityCompat.shouldShowRequestPermissionRationale(this,
            Manifest.permission.READ_CONTACTS)) {
        // Show an explanation to the user *asynchronously* -- don't block
        // this thread waiting for the user's response! After the user
        // sees the explanation, try again to request the permission.
    } else {
        // No explanation needed. Request the permission
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.READ_CONTACTS},
                MY_PERMISSIONS_REQUEST_READ_CONTACTS);

        // MY_PERMISSIONS_REQUEST_READ_CONTACTS is an
        // app-defined int constant. The callback method gets the
        // result of the request.
    }
} else {
    // Permission has already been granted
}