Deploying OpenVidu Enterprise in AWS



Intro 🔗

OpenVidu Enterprise offers two different deployment models:

  • Single Master deployment: one Master Node, multiple Media Nodes. This is the same architecture used by OpenVidu Pro.
  • High Availability deployment: multiple Master Nodes, multiple Media Nodes. See High Availability documentation.



Single Master deployment 🔗


OpenVidu Enterprise with a single Master Node can be deployed with the same CloudFormation template used for OpenVidu Pro.

You just need to select enterprise in the OpenVidu Edition section while deploying the CloudFormation template.



High Availability deployment 🔗


OpenVidu Enterprise can be deployed with multiple Master Nodes to provide High Availability and fault tolerance. In this section we will explain, step by step, how to deploy OpenVidu Enterprise with these capabilities. If you want to read more about the OpenVidu Enterprise High Availability architecture, check it out here.

Deployment 🔗

This section gives a detailed explanation of how to deploy OpenVidu Enterprise and what is required beforehand. You can follow this guide, and if you have any doubts, you can also check this tutorial:



1) Prerequisites 🔗


To deploy OpenVidu Enterprise in AWS with High Availability you need at least:

  • A FQDN (Fully Qualified Domain Name). The domain name will be configured at the end of the instructions to point to the Load Balancer URL created by the CloudFormation Stack.
  • A valid certificate for your FQDN installed in AWS. The CloudFormation template automatically launches a Load Balancer to be used as the entry point to the OpenVidu cluster, and it needs the ARN of the certificate as a parameter (see the example command after this list).
  • (Optional) A running Elasticsearch and Kibana deployment. If you do not have any Elastic Stack deployed, check this guide on how to deploy an Elastic Stack as a service in AWS or Elastic Cloud. If you don't want to use the Elastic Stack, you just need to indicate so while deploying the CloudFormation template.
  • A user configured in your Elastic Stack to be used in the OpenVidu configuration. You can use a normal user with all privileges, or just use a fine-grained one. Check this guide on how to create a fine-grained user.
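
For the certificate requirement above, you can request a certificate for your FQDN from AWS Certificate Manager using the AWS CLI. This is a minimal sketch, not part of the original procedure: the domain is an example, and the DNS validation record must still be created in your DNS provider before the certificate is issued:

# Request an ACM certificate for the FQDN that will point to the Load Balancer.
# Use the same AWS region where you will deploy the CloudFormation stack.
aws acm request-certificate --domain-name example-multimaster.openvidu.io --validation-method DNS --region eu-west-1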

2) Access the AWS CloudFormation console 🔗

Go to CloudFormation



3) Select Create Stack ⇨ With new resources 🔗


4) Select the option Specify template ⇨ Amazon S3 URL with the following URL 🔗

https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Enterprise-latest.yaml

To deploy a fixed version, including previous ones, replace latest with the desired version number.
For example: https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Enterprise-2.29.0.yaml
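
If you want to double-check the template URL before launching the stack, you can optionally validate it with the AWS CLI. This is a minimal sketch; it assumes the AWS CLI is installed and configured with credentials:

# Verify that CloudFormation can read and parse the template before creating the stack
aws cloudformation validate-template --template-url https://s3-eu-west-1.amazonaws.com/aws.openvidu.io/CF-OpenVidu-Enterprise-latest.yaml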


While deploying the stack, you will see a warning in CloudFormation with this message:

The following resource(s) require capabilities: [AWS::IAM::Role]

You need to accept it for the OpenVidu Enterprise deployment to work. OpenVidu Enterprise needs three IAM Roles:

  • The LambdaOnCreate role is only used by a Lambda resource while CloudFormation is deploying resources. Its purpose is to let that Lambda copy the original OpenVidu Enterprise AMIs into your account. In this way, your deployment will keep working even if the official AMIs are deprecated or removed.

    The AMIs are copied once, and their names start with [ OpenVidu ENTERPRISE Master Node AMI Copy ] and [ OpenVidu PRO/ENTERPRISE Media Node AMI Copy ]. These are the AMIs that will be used in your deployment.

  • The LambdaOnDeleteRole is used by a Lambda which is executed when the CloudFormation stack is deleted. Its purpose is to configure the Autoscaling Groups so Media Nodes and Master Nodes can be deleted safely.

  • The OpenViduProMasterRole is used by the Master Nodes so they can work properly with their own S3 bucket and interact correctly with the Autoscaling Groups.

You can check all these roles in the CloudFormation template.

5) Specify stack details 🔗

First, indicate a name for your deployment. Next, fill in each section of the Parameters form. Read all parameters carefully, because all of them are important:

5.1) OpenVidu Configuration Parameters 🔗
Domain Name
This is the FQDN that will be used to access OpenVidu Enterprise. This parameter will be configured at the end of the instructions to point to the Load Balancer URL in this section: 6. Configure your domain when the stack has been created.
Your fully qualified domain
For example: example-multimaster.openvidu.io
OpenVidu Pro Cluster Id
This parameter is used by OpenVidu Pro to send indexed statistics to Elasticsearch, and can be used as a way to distinguish different clusters.
Your choice
For example: openvidu-multimaster
OpenVidu Pro License
Your purchased license key from your OpenVidu account. While in beta, you will not be charged.
Your OpenVidu Pro License key
OpenVidu Secret
Secret to connect to this OpenVidu platform. Cannot be empty and must contain only alphanumeric characters [a-zA-Z0-9], hyphens "-" and underscores "_"
Your choice
Media Server
The Media Server implementation you want to use
Possible values:
  • mediasoup
  • kurento
OpenVidu S3 bucket
S3 bucket for storing configuration and recordings
If empty, a new bucket will be created while launching the CloudFormation stack. If defined, the specified S3 bucket will be used in your deployment. If you define it, make sure it is in the same AWS region as your deployment. Do not specify the ARN or the S3 URL, just the bucket name.
For example: my-s3-bucket
Enable OpenVidu Recording
Whether to enable the OpenVidu recording module or not
Possible values:
  • true to enable recording capabilities. All recordings will be saved in the OpenVidu S3 Bucket
  • false to disable recording capabilities.
Deploy Coturn in Media Nodes. (Experimental)
The TURN/STUN (Coturn) service can now be deployed on the Media Nodes. Using Media Nodes for Coturn brings better performance and scalability for the Coturn service deployed with OpenVidu. If true, Coturn will be deployed on the Media Nodes. More info.
Choose from the drop-down button


5.2) Elasticsearch and Kibana Configuration 🔗
Enable Elasticsearch and Kibana
Parameter which enables or disables the use of Elasticsearch and Kibana by OpenVidu Enterprise. If false is selected, the rest of the parameters in this section will be ignored and Elasticsearch and Kibana will not be used. If true is selected, the rest of the parameters in this section will be used to connect to an existing Elasticsearch and Kibana service.
Elasticsearch URL
URL of the Elasticsearch service.
Your Elasticsearch URL.
For example: https://example.elasticsearch.com
Kibana URL
URL for Kibana.
Your Kibana URL.
For example: https://example.kibana.com
Elasticsearch and Kibana username
Elasticsearch username for OpenVidu
Your choice
Elasticsearch and Kibana password
Password of the previous username
Your choice


5.3) EC2 and Autoscaling configuration 🔗

All of these parameters will create two Autoscaling Groups with their corresponding parameters:


Master Nodes properties 🔗

This Autoscaling Group controls the number of Master Nodes in your deployment. Master Nodes do not autoscale automatically; they are created just by changing the desired capacity of their Autoscaling Group. These are the initial parameters that you need to set up:

Master Nodes instance type:
The type of instance you want to use for master nodes.
Your choice
For example: c5.xlarge
Minimum Master Nodes
Minimum number of Master nodes that you want to have configured.
Your choice
For example: 1
Maximum Master Nodes
Maximum number of Master nodes that you want to have configured.
Your choice
For example: 2
Desired Master Nodes
Number of Master nodes you want to run on deploy.
Your choice
For example: 2



Media Nodes properties 🔗

This Autoscaling Group controls the number of Media Nodes in your deployment. Autoscaling is enabled by default. These are the initial parameters that you need to set up:

Media Nodes instance type:
The type of instance you want to use for your Media Nodes.
Your choice
For example: c5.xlarge
Minimum Media Nodes
Minimum number of Media Nodes that you want to have configured.
Your choice
For example: 2
Maximum Media Nodes
Maximum number of Media Nodes that you want to have configured.
Your choice
For example: 4
Desired Media Nodes
Number of Media Nodes you want to run on deploy.
Your choice
For example: 2
Scale Up Media Nodes on Average CPU
Average CPU necessary to scale up Media Nodes
Your choice
For example: 70
Scale Down Media Nodes on Average CPU
Average CPU necessary to scale down Media Nodes
Your choice
For example: 30



Common properties 🔗

This is the SSH key that you want to use for your EC2 instances, both the Master Node and Media Node instances created by their respective Autoscaling Groups.

SSH Key Name
EC2 key to be used for future administrative tasks.
Your choice


5.4) Load Balancer Certificate configuration 🔗
ARN of the AWS Certificate
ARN of the Certificate imported in your AWS account for your FQDN
Your choice
For example: arn:aws:acm:<region>:<user-id>:certificate/<certificate-id>
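
If you have already imported or requested the certificate but do not remember its ARN, you can list the certificates in your account with the AWS CLI. This is a minimal sketch; use the region where you will deploy the stack:

# List the certificate ARNs available in the deployment region
aws acm list-certificates --region <region>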


5.5) Networking configuration 🔗
OpenVidu Pro VPC:
The VPC in which you want to deploy the cluster.
Your choice
OpenVidu Pro Subnets:
The subnets you want to use for the OpenVidu cluster. You need to select at least 2 subnets!
Your choice
The parameter "OpenVidu Pro Subnets" must contain two subnets , each of them from a different Availability Zone. This is because Autoscaling Groups in AWS needs at least two subnets to ensure High Availability.

6) Configure your domain when the stack has been created 🔗

When everything is deployed, you should see this in the Outputs section of CloudFormation:

Now you need to point the configured Domain Name (which was pointing to a "Dummy IP" before the stack was deployed) to the Load Balancer URL with a CNAME record in your DNS. Wait until the DNS change has propagated, and then you will be able to reach OpenVidu Enterprise using your domain name.
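
Once the CNAME record has been created, you can verify that it resolves to the Load Balancer URL before opening OpenVidu in the browser. A minimal sketch, using the same placeholder as the rest of this guide:

# Should print the Load Balancer URL configured as the CNAME target
dig +short CNAME <DOMAIN_NAME>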


Administration 🔗

Check cluster after deploy 🔗

If you want to check that everything is set up correctly in AWS after deploying the CloudFormation Stack, you can follow these subsections:

1) Check Master Nodes 🔗


1.1) Go to the Target Group whose name is the same as the deployed CloudFormation Stack. You can find Target Groups in the EC2 panel of AWS. The following image shows how the target group may look if we named the CloudFormation Stack ov-example:

1.2) If all Master Nodes are deployed correctly, you should see something like this:

You can check that master nodes are deployed correctly if:

  • There are Master Nodes deployed in different Availability Zones (only if you have configured more than one Master Node).
  • The status of the Master Nodes is "healthy"


1.3) Check OpenVidu API load balancing:

Execute a GET request to /openvidu/api/config. You can do this with curl:

curl -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/config

This request will return a JSON with all the OpenVidu Pro configuration parameters. You should look at the AWS_INSTANCE_ID parameter of the returned JSON. This parameter should be different on each request if you have multiple healthy Master Nodes, and each of the different values should be the ID of the Master Node which has received the request.
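
To quickly confirm that requests are being balanced across Master Nodes, you can repeat the request a few times and extract the AWS_INSTANCE_ID field. A minimal sketch; it assumes jq is installed and uses the same placeholders as the command above:

# Send 10 requests and print the Master Node instance ID that served each one
for i in $(seq 1 10); do
  echo "Request $i served by: $(curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/config | jq -r '.AWS_INSTANCE_ID')"
done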

2) Check Media Nodes 🔗

As Media Nodes are not attached to any Load Balancer, the health of these nodes is shown directly in the Autoscaling Group section and is managed by the Master Nodes. To check that all Media Nodes are correctly set up:

2.1) Go to the Autoscaling Groups section and check the Autoscaling Group whose name starts with your CloudFormation Stack name and ends with "ASGMediaNode".

2.2) If all Media Nodes are deployed correctly, you should see something like this:


2.3) Check OpenVidu has registered all media nodes:

Execute a GET request to /openvidu/api/media-nodes (see the REST API documentation for more information about this request). You can do this with curl:

curl -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/media-nodes

This request will return a JSON with all registered media nodes and related information.
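
If you have jq installed, you can pretty-print the response to compare the number of registered Media Nodes with the desired capacity of the Autoscaling Group. A minimal sketch, using the same placeholders as above:

# Pretty-print the list of registered Media Nodes
curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/media-nodes | jq .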

3) Check AWS Events reaching Master Nodes 🔗

OpenVidu Enterprise depends on some AWS events to register/deregister Media Nodes in the cluster and to handle autoscaling events. To check that all events are working properly, SSH into one of your Master Nodes and go to the /opt/openvidu/ directory:

sudo su
cd /opt/openvidu

Now we will check the logs of a service used by OpenVidu Enterprise called replication-manager.

NOTE: As AWS events are sent to an SQS queue, if you have more than one Master Node you need to check the logs of all Master Nodes. To confirm that these events work correctly, you must see the events mentioned below at least once in one Master Node.

3.1) Check for autoscaling events 🔗

To check for autoscaling events, just execute:

docker-compose logs -f replication-manager | grep custom.autoscaling_schedule

After your stack has been deployed for a few minutes, you should see a log trace like this one:

openvidu-replication-manager-1  | 2022-05-17 11:55:58.863  INFO 8 --- [           main] i.o.r.m.s.SQSNotificationListenerAWS     : Notification content: {"source":"custom.autoscaling_schedule","detail":{"time":"2022-05-17T11:55:14Z"}}

This means that autoscaling events are reaching master nodes, so media nodes will autoscale accordingly.


3.2) Check for media nodes Autoscaling Group events 🔗

To check for Media Node Autoscaling Group events, you need to increase/decrease the desired capacity of Media Nodes in the Autoscaling Group, or wait until CloudWatch rules modify the number of Media Nodes.


Check for new Media Node events 🔗
docker-compose logs -f replication-manager | grep 'launched in autoscaling group'

The result of the log should be:

openvidu-replication-manager-1  | 2022-05-17 12:19:56.656  INFO 8 --- [           main] i.o.r.m.s.SQSNotificationListenerAWS     : New Media Node (i-0ed87803133aaca76,172.31.41.202) launched in autoscaling group
Check for dropped Media Node events 🔗
docker-compose logs -f replication-manager | grep 'terminated in autoscaling group'

The result of the log should be something like:

openvidu-replication-manager-1  | 2022-05-17 12:23:41.002  INFO 8 --- [           main] i.o.r.m.s.SQSNotificationListenerAWS     : Media Node (i-022d3b9d2d0f42cf7,172.31.10.206) terminated in autoscaling group

Autoscaling Configuration 🔗

OpenVidu Enterprise autoscaling is managed by AWS Autoscaling Groups. As a consequence, all autoscaling parameters can be changed through CloudFormation parameters.

To change those parameters you just need to go to AWS CloudFormation Panel ⇨ Select Your Stack ⇨ Update ⇨ Use current template.

First, go to the CloudFormation panel and select your stack. Then click on the Update button.

Select Use current template and click on the Next button.

Only these parameters can be changed through CloudFormation.
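
The same kind of update can also be done from the AWS CLI. This is a minimal sketch, not the exact procedure of this guide: the stack name and the parameter key shown are assumptions, so check the real parameter keys of your stack first, and pass UsePreviousValue=true for every parameter you do not want to change:

# Change only the desired number of Media Nodes, keeping the current template
aws cloudformation update-stack \
  --stack-name ov-example \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=DesiredMediaNodes,ParameterValue=3 \
               ParameterKey=<EveryOtherParameterKey>,UsePreviousValue=true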


OpenVidu Enterprise Configuration 🔗

Technically, you can connect to any instance through SSH to change the OpenVidu Enterprise configuration, but this will lead to inconsistencies, because Master Nodes and Media Nodes are volatile objects of the infrastructure. They exist temporarily: they can be destroyed and new nodes can be created, so the configuration cannot live on any EC2 instance. For this reason, administrative tasks are done via the REST API or by changing a persisted configuration file in an S3 bucket.


1) Change the configuration via the REST API 🔗

While OpenVidu Enterprise is running, you can change some OpenVidu parameters by calling /openvidu/api/restart. All OpenVidu Master Nodes will restart automatically, and the configuration will be persisted in the S3 bucket. All modifiable parameters are documented.
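
As a hypothetical example, a call to this endpoint with curl could look like the following sketch. The parameter shown is only an illustration; check the list of modifiable parameters and the REST API reference for the exact request format:

# Restart all Master Nodes applying a new value for one modifiable parameter
curl -u OPENVIDUAPP:<OPENVIDU_SECRET> -X POST \
  -H "Content-Type: application/json" \
  -d '{"OPENVIDU_RECORDING": false}' \
  https://<DOMAIN_NAME>/openvidu/api/restart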

Take into account that not all parameters can be changed via the REST API, so if you need to change something which cannot be changed using this method, you must change the S3 configuration file.


2) Change the configuration via the S3 configuration file 🔗

2.1) Go to the S3 configuration bucket of your CloudFormation stack. You can find it as a resource in the CloudFormation panel:

2.2) Modify the .env configuration file in the S3 bucket: in this S3 bucket you will see a file named .env. Any change you want to make which is not possible via the REST API request to /openvidu/api/restart is done by modifying the content of this .env file in the S3 bucket.
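
You can edit the file from the S3 console, or from the AWS CLI as in this minimal sketch (the bucket name is a placeholder; use the bucket created or configured by your stack):

# Download the current configuration, edit it locally, and upload it back
aws s3 cp s3://<YOUR_OPENVIDU_S3_BUCKET>/.env .env
# ... edit .env with your preferred editor ...
aws s3 cp .env s3://<YOUR_OPENVIDU_S3_BUCKET>/.env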


2.3) (Optional) Restart Master Nodes via AWS: By default, OpenVidu Enterprise is configured with this parameter:

  • OPENVIDU_ENTERPRISE_S3_CONFIG_AUTORESTART=true

This means that any change to the .env file in the OpenVidu Enterprise bucket will automatically restart all Master Nodes. You need to restart your Master Nodes manually only if OPENVIDU_ENTERPRISE_S3_CONFIG_AUTORESTART=false. In this case, you must restart all your Master Nodes through the AWS EC2 panel, or terminate all Master Nodes and wait for the Autoscaling Group to recreate them. New EC2 instances created by the Autoscaling Group will download the updated .env configuration file from the S3 bucket.
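
If you prefer the AWS CLI over the EC2 panel, this minimal sketch shows both options (the instance IDs are placeholders for your Master Node instances):

# Option A: reboot the existing Master Node instances
aws ec2 reboot-instances --instance-ids <MASTER_NODE_INSTANCE_ID_1> <MASTER_NODE_INSTANCE_ID_2>
# Option B: terminate them and let the Autoscaling Group create fresh instances
# aws ec2 terminate-instances --instance-ids <MASTER_NODE_INSTANCE_ID_1> <MASTER_NODE_INSTANCE_ID_2>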


Troubleshooting 🔗

If your Master Nodes do not reach a healthy state as described here, you may need to check the logs of the running services in your Master Nodes to find out what the problem could be.

Usually the error will appear in the replication-manager service or the openvidu-server service. SSH into one of your unhealthy Master Nodes and check the logs of both services to search for possible misconfiguration errors:

sudo su
cd /opt/openvidu
docker-compose logs openvidu-server
docker-compose logs replication-manager

Also, make sure that all events are processed correctly. Check the section Check cluster after deploy to verify that the cluster is correctly set up.


Deploying an OpenVidu application 🔗

To deploy an OpenVidu application which uses the recently deployed stack, you can use any application developed for OpenVidu. You just need to point your application to the configured Domain Name and use the OpenVidu Secret set in the CloudFormation deployment. Additionally, remember that your app needs to be served with a valid certificate for WebRTC to work.
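
For example, the OpenVidu tutorials and demos typically take the OpenVidu URL and secret from environment variables; a minimal sketch (adapt the variable names to whatever your application actually expects):

# Point the application to the deployed cluster
export OPENVIDU_URL=https://<DOMAIN_NAME>
export OPENVIDU_SECRET=<OPENVIDU_SECRET>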

If you want to see an example of an application that automatically reconnects users after a node crashes, take a look at the openvidu-fault-tolerance demo.