Deploying OpenVidu Pro on AWS



Before deploying OpenVidu Pro you need to create an OpenVidu account to get your license key.
There's a 15-day free trial waiting for you!

Deployment instructions 🔗

1) Access the AWS CloudFormation console 🔗

Go to CloudFormation



2) Select Create Stack 🡆 With new resources 🔗


3) Option Specify template 🡆 Amazon S3 URL with the following URL 🔗

https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Pro-latest.yaml

To deploy a fixed version, including previous ones, replace latest with the desired version number.
For example: https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Pro-2.29.0.yaml
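
If you want to inspect the template before deploying (for example, to review its parameters and the IAM roles it declares), you can simply download it locally:

# Download the CloudFormation template for a specific version (2.29.0 used as an example)
curl -O https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Pro-2.29.0.yaml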



4) Specify stack details 🔗

First of all, indicate a name for your deployment. Then fill in each section of the Parameters form:

Domain and SSL certificate configuration 🔗

Configuration for your CloudFormation stack certificate. We provide 3 different scenarios: you can use the default SELF-SIGNED CERTIFICATE stored in the application (users will need to accept the browser security alert); or, if you have a custom domain, you can either let LET'S ENCRYPT automatically generate a valid and free certificate for your domain, or provide your own CUSTOM CERTIFICATE if you already have one.

Self-signed certificate:
  • Certificate Type: selfsigned

Let's Encrypt certificate:
  • Certificate Type: letsencrypt
  • AWS Elastic IP (EIP): one AWS Elastic IP you generated (check AWS Docs to generate a new one)
  • Domain Name pointing to Elastic IP: your fully qualified domain, for example openvidu.company.com
  • Email for Let's Encrypt: your choice

Custom certificate:
  • Certificate Type: owncert
  • AWS Elastic IP (EIP): one AWS Elastic IP you generated (check AWS Docs to generate a new one)
  • Domain Name pointing to Elastic IP: your fully qualified domain, for example openvidu.company.com
  • URL to the CRT file: URL to your public key file. The CloudFormation stack must have access to this URL, at least temporarily
  • URL to the key file: URL to your private key file. The CloudFormation stack must have access to this URL, at least temporarily

If you have questions about how to configure your domain and SSL certificates, you can check the Domain and SSL Configuration Examples section below.

OpenVidu configuration 🔗

  • OpenVidu Pro License key: your purchased license key from your OpenVidu account. There's a 15-day free trial waiting for you!
  • Initial number of Media Nodes in your cluster: how many Media Nodes you want on startup (one EC2 instance will be launched per Media Node). Your choice.
  • OpenVidu Secret: secret to connect to this OpenVidu platform. Cannot be empty and must contain only alphanumeric characters [a-zA-Z0-9], hyphens "-" and underscores "_". Your choice.

There are many other configuration values that can be set once the deployment has completed. Check out the Updating OpenVidu Pro configuration section.

OpenVidu Recording Configuration 🔗

Configure whether you want to enable OpenVidu recordings and what type of persistence you want.

  • OpenVidu Recording: possible values:
      • disabled: recordings will not be active.
      • local: recordings will be active and saved locally.
      • s3: recordings will be active and saved in S3.
  • S3 Bucket where recordings will be stored: name of the bucket you want to use. If empty, a new bucket will be created with the CloudFormation stack id.

Elasticsearch configuration 🔗

You have three options for configuring the deployment with Elasticsearch and Kibana:

  • 1) Using an external Elasticsearch and Kibana deployment.
  • 2) Using an Elasticsearch and Kibana deployed next to the OpenVidu Server master node.
  • 3) Not deploying Elasticsearch and Kibana at all.

The next sections will take a closer look at these three options.

Option 1: External Elasticsearch and Kibana 🔗

Requirements to use an external Elasticsearch and Kibana are:

  • A running Elasticsearch and Kibana deployment. If you don't have any Elastic Stack deployed, check this guide on how to deploy an Elastic Stack as a service in AWS or Elastic Cloud.
  • A user configured in your Elastic Stack to be used in the OpenVidu configuration. You can use a normal user with all privileges or a fine-grained one. Check this guide on how to create a fine-grained user.

After that, just fill this section of the form with these parameters:

  • Enable Elasticsearch and Kibana: parameter which enables or disables the use of Elasticsearch and Kibana by OpenVidu Pro. In this case, it must be set to true.
  • Elasticsearch URL: URL of the Elasticsearch service. For example: https://elk.example.com
  • Kibana URL: URL of the Kibana service. For example: https://elk.example.com/kibana
  • Elasticsearch and Kibana username: username of the user configured in your Elastic Stack.
  • Elasticsearch and Kibana password: password of the user configured in your Elastic Stack.
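
As a quick check (purely optional), you can verify that the user you plan to configure can actually reach your Elasticsearch service with a simple authenticated request. The URL and credentials below are placeholders for your own values:

# A JSON response with the cluster name and version means the URL and credentials are valid
curl -u ELASTIC_USER:ELASTIC_PASSWORD https://elk.example.com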


Option 2: Elasticsearch and Kibana deployed next to OpenVidu 🔗

Configuring Elasticsearch and Kibana next to OpenVidu is sometimes convenient because the CloudFormation template is prepared to deploy such services automatically. But this option has its downsides, because Elasticsearch, Kibana and OpenVidu Server Pro will be running on the same machine. These downsides are:

  • You will need to monitor disk space: OpenVidu generates events, and all logs and metrics are sent to Elasticsearch. You will need to take special care of the OPENVIDU_PRO_ELASTICSEARCH_MAX_DAYS_DELETE parameter in the /opt/openvidu/.env file of your deployment so you don't run out of disk space (see the snippet after the parameter list below).
  • Resources used by OpenVidu Server Pro are shared with Elasticsearch and Kibana: it is well known that Elasticsearch and Kibana can consume a lot of resources. If you want to keep OpenVidu Server Pro free of this resource consumption, it is recommended to deploy Elasticsearch and Kibana externally.

In this case, fill this section of the form with these parameters:

  • Enable Elasticsearch and Kibana: parameter which enables or disables the use of Elasticsearch and Kibana by OpenVidu Pro. In this case, it must be set to true.
  • Elasticsearch URL: empty. You don't want to use any external Elasticsearch service.
  • Kibana URL: empty. You don't want to use any external Kibana service.
  • Elasticsearch and Kibana username: your choice. It will be configured while deploying.
  • Elasticsearch and Kibana password: your choice. It will be configured while deploying.
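
For reference, this is how the disk-space parameter mentioned above looks in /opt/openvidu/.env on the Master Node (the value 7 is just an illustrative choice):

# Keep only the last 7 days of Elasticsearch data so the shared disk does not fill up
OPENVIDU_PRO_ELASTICSEARCH_MAX_DAYS_DELETE=7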


Option 3: No Elasticsearch and Kibana 🔗

If you don't want to use Elasticsearch and Kibana, just configure the following parameters:

  • Enable Elasticsearch and Kibana: parameter which enables or disables the use of Elasticsearch and Kibana by OpenVidu Pro. In this case, it must be set to false.
  • Elasticsearch URL: empty. You don't want to use any external Elasticsearch service.
  • Kibana URL: empty. You don't want to use any external Kibana service.
  • Elasticsearch and Kibana username: empty. You don't need to configure any username.
  • Elasticsearch and Kibana password: empty. You don't need to configure any password.


EC2 Instance configuration 🔗

These properties configure specific details of the EC2 machines that will be launched by CloudFormation.

  • Instance type for Master Node: type of EC2 instance where the Master Node will be deployed. Choose from the drop-down list.
  • Instance type for Media Nodes: type of EC2 instance where the Media Nodes will be deployed. Choose from the drop-down list.
  • SSH Key: SSH key for the EC2 instances of the cluster. Choose from the drop-down list (check AWS Docs to create a new one).

Networking configuration 🔗

  • OpenVidu VPC: dedicated VPC for the OpenVidu Pro cluster. All of the EC2 instances of the cluster will connect to this VPC. Choose from the drop-down list.
  • OpenVidu Subnet: subnet of the VPC where the OpenVidu Pro cluster will be deployed. Choose from the drop-down list.

Other configuration 🔗

These properties configure some other options of your stack.

  • Deploy OpenVidu Call application: choose whether to deploy the OpenVidu Call application alongside the OpenVidu platform. Choose from the drop-down list.
  • Deploy Coturn in Media Nodes (Experimental): the TURN/STUN (Coturn) service can now be deployed on the Media Nodes, which implies better performance and scalability for the Coturn service deployed with OpenVidu. If true, Coturn will be deployed on the Media Nodes. More info.



5) Create your stack 🔗

No extra options are necessary. Click on Next 🡆 Next 🡆 Create stack.

CREATE_IN_PROGRESS status will show up. You will now have to wait a few minutes (about 10) until it shows CREATE_COMPLETE. If the status reaches CREATE_FAILED, check out the Troubleshooting section below.

To connect to OpenVidu Inspector and the Kibana dashboard, simply access the Outputs tab after the CREATE_COMPLETE status is reached. There you will find the URLs to access both services.

To consume the OpenVidu REST API, use URL https://OPENVIDUPRO_PUBLIC_IP/. For example, that would be something like https://ec2-34-244-193-135.eu-west-1.compute.amazonaws.com/ when using the AWS-generated domain. When deploying with a custom domain name (which you should do for a production environment), you must of course use your domain name instead.
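
For example, once the stack is up you can check the REST API with a simple authenticated request. The credentials are the user OPENVIDUAPP and the OpenVidu Secret you chose in the stack parameters; the domain below is a placeholder for your own deployment:

# Returns a JSON list of active sessions (empty right after deployment)
curl -u OPENVIDUAPP:MY_SECRET https://openvidu.company.com/openvidu/api/sessions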

If you have deployed OpenVidu Call you can also access it through that same URL. You can now add your own application to your instance. To learn how, check out section Deploy OpenVidu based applications.

While deploying the stack, you will see a warning in CloudFormation with this message:

The following resource(s) require capabilities: [AWS::IAM::Role]

You need to accept it for the OpenVidu PRO deployment to work. OpenVidu PRO needs two IAM Roles:

  • The CloudformationLambdaRole, used only by a Lambda resource to copy the original OpenVidu AMIs to your account. This way, your deployment will keep working even if the original AMI is officially deprecated or removed.

    The AMIs are copied only once, and their names start with [ OpenVidu PRO Master Node AMI Copy ] and [ OpenVidu PRO/ENTERPRISE Media Node AMI Copy ]. These are the AMIs that will be used in your deployment. The CloudformationLambdaRole is also used to remove all Media Nodes when the CloudFormation stack is deleted.

  • Another role which OpenVidu PRO needs to create, remove and autodiscover the deployed Media Nodes. This role is defined in the CloudFormation template as OpenViduManageEC2Role.

You can check both roles in the CloudFormation template.

6) Administration 🔗

AWS deployments of OpenVidu Pro are internally identical to on premises deployments. This means that you can manage OpenVidu platform very easily by connecting to your instances through SSH.

  • Master Node: located at the default installation path /opt/openvidu as root user ($ sudo su), you will be able to manage the services as explained in on premises Master Node administration.
  • Media Nodes: located at the default installation path /opt/kms as root user ($ sudo su), you will be able to manage the services as explained in on premises Media Nodes administration.
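
As a quick sketch (the key path and addresses are placeholders for your own values), connecting to each type of node looks like this:

# Master Node
ssh -i ~/.ssh/my-aws-key.pem ubuntu@MASTER_NODE_DOMAIN_OR_IP
sudo su
cd /opt/openvidu    # manage the Master Node services from here

# Media Node
ssh -i ~/.ssh/my-aws-key.pem ubuntu@MEDIA_NODE_IP
sudo su
cd /opt/kms         # manage the Media Node services from here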



Domain and SSL Configuration Examples 🔗

These examples focus on the Domain and SSL certificate configuration section of the Deploying OpenVidu Pro on AWS instructions, to clarify any doubts about how to configure it.

As OpenVidu Pro is deployed with a sane default configuration, your domain and SSL certificate configuration is the most important part of deploying your stack correctly.

Let's see all different scenarios:

1) Self-signed certificate 🔗

This example should be used only for development environments. Don't use it in production.

This scenario is meant for you if you want to:

  • Deploy OpenVidu Pro quickly for testing or developing purposes.
  • Deploy OpenVidu Pro without a Fully Qualified Domain Name (FQDN).

1.1) CloudFormation parameters 🔗

Let's see an example of this scenario for the Cloudformation parameters section:

  1. Select as Certificate type: selfsigned
  2. Keep all the parameters in the Domain and SSL certificate configuration section empty, because in this scenario we don't have any Elastic IP, domain or other SSL configuration to specify.

2) Let's Encrypt certificate 🔗

This scenario is meant for you if you want to:

  • Deploy OpenVidu Pro for production or even developing purposes.
  • Deploy OpenVidu Pro with a Fully Qualified Domain Name (FQDN).
  • Use a valid SSL certificate.

For this specific scenario you will need to:

2.1) Create an Elastic IP 🔗

  1. Go to the Elastic IPs section of the EC2 AWS console.
  2. Click on Allocate Elastic IP address.
  3. This will generate an Elastic IP that you will be able to use for your OpenVidu Pro deployment with letsencrypt.

2.2) Register a FQDN pointing to the Elastic IP 🔗

This step depends on the DNS provider you use. You need to create a DNS record of type A pointing to the Elastic IP created before. For the next steps, let's suppose that our domain is example.openvidu.io.
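
Before launching the stack you can verify that the record already resolves correctly (dig ships with most Linux distributions; the domain is the example one used above):

# The output should be the Elastic IP allocated in step 2.1
dig +short A example.openvidu.io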

2.3) CloudFormation parameters 🔗

Let's see an example of this scenario for the Cloudformation parameters section:

The important fields of this section are:

  • The AWS Elastic IP (EIP) with the Elastic IP created in step 2.1
  • The Domain Name pointing to Elastic IP with the FQDN created at step 2.2
  • The Email for Let's Encrypt with the email you want to use for your Let's Encrypt certificate.

3) Custom Certificate (Commercial CA) 🔗

This scenario is meant for you if you want to:

  • Deploy OpenVidu Pro for production.
  • Deploy OpenVidu Pro with a Fully Qualified Domain Name (FQDN).
  • Use a valid SSL certificate from a Commercial CA.

For this specific scenario you will need to:

3.1) Generate certificates files 🔗

To use this kind of certificate, you need to generate two files, certificate.cert (the public certificate) and certificate.key (the private key), and upload them to an HTTP server to make them available to the CloudFormation parameters. But first, follow these steps to generate these files:

1) Create a CSR and a private key. This can be easily done by executing:

openssl req -newkey rsa:2048 -nodes -keyout certificate.key -out certificate.csr

While executing this command, you will be asked to enter some information to generate the files certificate.key and certificate.csr. Make sure all this information is entered correctly (Common Name, Organization Name, etc.). The most important parameter is the Common Name field, which should match the name that you want to use for your certificate.

For example, let's suppose that your domain is example.openvidu.io. The Common Name parameter could be example.openvidu.io or www.example.openvidu.io. If instead you're using a wildcard certificate, the Common Name parameter would be *.openvidu.io.
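
If you want to double-check the Common Name before sending the CSR to your CA, you can inspect it with openssl:

# The printed subject line should contain CN=example.openvidu.io (or your wildcard)
openssl req -in certificate.csr -noout -subject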

2) The previous command generated the certificate.key and certificate.csr files. The certificate.csr is the one that you need to provide to your CA. Depending on your CA this step may differ; check your CA's documentation on this topic.

3) Usually the files needed to build the certificate.cert can be downloaded from the CA or are sent via email. These files are:

  • The intermediate certificate. It usually contains more than one ---BEGIN CERTIFICATE--- block. This file will be called intermediate.cert in the following steps.
  • Your SSL certificate. A single certificate block starting with ---BEGIN CERTIFICATE---. This file will be called public.cert in the following steps.

4) You need to concatenate these two files into a single certificate.cert file like this:

cat public.cert intermediate.cert > certificate.cert

5) Now you have the certificate.key generated in step 1) and the certificate.cert generated in step 4).
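
As an optional sanity check, you can verify that the resulting certificate and the private key actually belong together by comparing their moduli (both commands must print the same hash):

openssl x509 -noout -modulus -in certificate.cert | openssl md5
openssl rsa -noout -modulus -in certificate.key | openssl md5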

If you still have doubts about how to generate the certificate files, you can follow this guide for further understanding.



3.2) Upload your certificate files to an HTTP server 🔗

Now that you have both certificate files, you need to make them available via HTTP for the CloudFormation template. Let's suppose that you upload both files and the URLs are:

  • http://example-http-server.io/certificate.cert
  • http://example-http-server.io/certificate.key
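
If you don't have an HTTP server at hand, one quick option (an assumption, not part of the official instructions) is to serve the files from any machine the CloudFormation stack can reach, only while the deployment runs:

cd /path/to/certs                # directory containing certificate.cert and certificate.key
sudo python3 -m http.server 80   # files become available at http://<machine-ip>/certificate.cert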

3.3) Create an Elastic IP and a FQDN pointing to it 🔗

Just follow the same steps as in the Let's Encrypt section: 2.1 and 2.2.

3.4) CloudFormation parameters 🔗

Let's see an example of this scenario for the Cloudformation parameters section:

These are the important fields of the cloudformation parameters:

  • The AWS Elastic IP (EIP) with the Elastic IP created in step 2.1
  • The Domain Name pointing to Elastic IP with the FQDN created at step 2.2
  • The URL to the CRT file (owncert) with the URL to the certificate.cert file created at step 3.1 and uploaded to an HTTP server in step 3.2.
  • The URL to the key file (owncert) with the URL to the certificate.key file created at step 3.1 and uploaded to an HTTP server in step 3.2.

3.5) Remove your certificates files from the HTTP server of step 3.2 🔗

It is very important to invalidate the URLs created at step 3.2 once the stack is successfully deployed. These files, available via HTTP, are only needed so that the CloudFormation EC2 instances can download the certificate files and configure them in the system; they are no longer necessary after the deployment process.



Scalability 🔗

Set the number of Media Nodes on startup 🔗

When filling in the CloudFormation form, simply set the desired number in the OpenVidu configuration section.

In the EC2 Instance configuration section you can choose the instance type of your Master Node and your Media Nodes.

Change the number of Media Nodes on the fly 🔗

You can launch and drop Media Nodes dynamically in two different ways:

From OpenVidu Inspector 🔗

On the Cluster page you can launch and drop Media Nodes just by pressing buttons.

With OpenVidu Pro REST API 🔗

You can programmatically launch and drop Media Nodes from your application by consuming the OpenVidu Pro REST API (a minimal sketch is shown after the warning below).

WARNING: there are some important aspects to keep in mind when launching and dropping Media Nodes in AWS deployments, especially through OpenVidu Pro REST API (OpenVidu Inspector UI is quite self-descriptive):

  • Trying to drop a Media Node which is currently hosting an OpenVidu Session will fail by default. You can manage the drop policy when calling DELETE /openvidu/api/media-nodes through parameter deletion-strategy.

  • Launching/Dropping Media Nodes in AWS OpenVidu Pro deployments will automatically start/terminate EC2 instances. The termination of an EC2 instance that was hosting a removed Media Node will be done only when it is safe. This moment is reached when OpenVidu Webhook event mediaNodeStatusChanged is triggered with value terminated.
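
As a minimal sketch (the domain and secret are placeholders, and the exact parameters these methods accept are documented in the OpenVidu Pro REST API reference):

# Launch a new Media Node (a new EC2 instance will be started)
curl -u OPENVIDUAPP:MY_SECRET -X POST https://openvidu.company.com/openvidu/api/media-nodes

# Drop a Media Node; MEDIA_NODE_ID and the deletion-strategy value are illustrative,
# check the REST API reference for the accepted strategies
curl -u OPENVIDUAPP:MY_SECRET -X DELETE "https://openvidu.company.com/openvidu/api/media-nodes/MEDIA_NODE_ID?deletion-strategy=when-no-sessions"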



Updating OpenVidu Pro configuration 🔗

You may want to change the current configuration of an existing OpenVidu Pro cluster. This configuration includes all of the parameters listed in the configuration reference pages.

Once the cluster is running, there are different ways you can update the value of the configuration parameters. Take into account that all of them require restarting your OpenVidu Server Pro process, so any active OpenVidu Session will be terminated.

1) With OpenVidu Inspector 🔗

OpenVidu Inspector allows you to restart the OpenVidu Server Pro process from the Config page just by filling in a form.
More information here.

NOTE 1: take into account that not all configuration properties are able to be updated this way
NOTE 2: new values will be stored and remembered, so they will be used when OpenVidu Server Pro is restarted in the future

2) With OpenVidu Pro REST API 🔗

You can consume REST API method POST /openvidu/api/restart to programmatically restart the OpenVidu Server Pro process and update its configuration values.

NOTE 1: take into account that not all configuration properties are able to be updated this way
NOTE 2: new values will be stored and remembered, so they will be used when OpenVidu Server Pro is restarted in the future
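
A minimal sketch of this call (the domain, secret and the property shown are placeholders; check the REST API reference for the exact set of properties this method accepts):

# OpenVidu Server Pro restarts and comes back with the new configuration values
curl -u OPENVIDUAPP:MY_SECRET -X POST https://openvidu.company.com/openvidu/api/restart \
  -H "Content-Type: application/json" \
  -d '{"OPENVIDU_RECORDING": true}'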

3) Manually connecting through SSH 🔗

The ultimate and most definitive way of updating the configuration parameters of an OpenVidu Pro cluster is connecting to the Master Node through SSH and changing the desired values:

  1. SSH to the Master Node machine using your private RSA key
  2. As root user (sudo su), go to the OpenVidu Pro installation folder (default and recommended is /opt/openvidu)
  3. Update file .env with the new configuration values
  4. Restart OpenVidu Server Pro with ./openvidu restart

Keep an eye on the OpenVidu logs that will automatically display after restarting the service to check that everything went well.
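
Put together, a typical configuration change on the Master Node looks like this (the key path and host are placeholders for your own values):

ssh -i ~/.ssh/my-aws-key.pem ubuntu@MASTER_NODE_DOMAIN_OR_IP
sudo su
cd /opt/openvidu
nano .env              # update the desired configuration values
./openvidu restart     # restart OpenVidu Server Pro and watch the logs it prints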



Troubleshooting 🔗

AWS deployments of OpenVidu Pro work under the hood in exactly the same way as on premises deployments. So everything explained in the Troubleshooting section of on premises deployments also applies to AWS deployments. There you have detailed instructions on how to debug all OpenVidu services in case some unexpected problem appears.

CREATE_FAILED CloudFormation stack 🔗

First of all, an AWS CloudFormation stack may reach CREATE_FAILED status because of a missing default VPC.

You can inspect your default VPCs like this: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#view-default-vpc
And you can create a default VPC like this: https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc

If that is not the problem, then follow these steps:

  • 1) Try to deploy again, but this time disabling option Rollback on failure (Configure stack options 🡆 Advanced Options 🡆 Stack creation options). This will prevent the instances from being terminated in case of failure, so logs can be gathered. Once you re-deploy with this option, the stack will still fail, but you'll be able to access the instances through SSH and retrieve some files to debug the problem.
  • 2) We will also need the parameters you used to deploy, to check for possible problems in their values.
  • 3) Once you have performed step 1) and the stack creation has failed, please SSH into the created instances and share with us the CloudFormation logs (see the scp sketch after this list):

    • /var/log/cloud-init.log
    • /var/log/cloud-init-output.log

  • 4) Also get the log output of all the services; check the on premises Troubleshooting section to see how to collect the logs of every service.
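
A quick way to retrieve the cloud-init logs mentioned in step 3) from a failed instance (the key path and address are placeholders for your own values):

scp -i ~/.ssh/my-aws-key.pem ubuntu@INSTANCE_PUBLIC_DNS:/var/log/cloud-init.log .
scp -i ~/.ssh/my-aws-key.pem ubuntu@INSTANCE_PUBLIC_DNS:/var/log/cloud-init-output.log .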



Kurento Media Server crash 🔗

Sometimes Kurento Media Server (the service in charge of streaming media inside the Media Nodes) may crash. If this happens on a regular basis, or better, if you have isolated a specific use case where KMS always crashes, then perform the following steps to collect a crash report that will help us fix the issue.

In AWS deployments of OpenVidu Pro, KMS crash reports are enabled by default. You can directly get them with the following steps:

1) Download the KMS crash reports 🔗

ssh -i AWS_SSH_KEY ubuntu@MEDIA_NODE_IP "sudo tar zcvfP ~/core_dumps.tar.gz /opt/openvidu/kms-crashes/*"
scp -i AWS_SSH_KEY ubuntu@MEDIA_NODE_IP:~/core_dumps.tar.gz .

Replace AWS_SSH_KEY with the path to the SSH key of your Media Node EC2 instance and MEDIA_NODE_IP with its IP address. This only applies to a single Media Node. If you have more Media Nodes experiencing KMS crashes, perform these same steps in all of them. Send us the resulting zipped report files.

2) Clean the KMS crash reports 🔗

So as not to consume too much disk space, delete the crash reports once you have downloaded them. IMPORTANT: obviously, do NOT do this before downloading the report.

ssh -i AWS_SSH_KEY ubuntu@MEDIA_NODE_IP "sudo rm /opt/openvidu/kms-crashes/* && sudo rm ~/core_dumps.tar.gz"

Replace AWS_SSH_KEY with the path to the SSH key of your Media Node EC2 instance and MEDIA_NODE_IP with its IP address. This only applies to a single Media Node and must be performed for each Media Node from which you downloaded a crash report.