Deploying OpenVidu Enterprise in AWS
- Regular deployment
- High Availability deployment
- 1) Previous requirements
- 2) Access to the console of AWS Cloud Formation
- 3) Select Create Stack 🠚 With new resources
- 4) Option Specify template 🠚 Amazon S3 URL with the following URL
- 5) Specify stack details
- 6) Configure your domain when the stack has been created
Regular deployment 🔗
OpenVidu Enterprise is very easy to enable from an existing OpenVidu Pro deployment. While in beta, you just need an OpenVidu Pro cluster up and running. If you don't have an OpenVidu Pro cluster yet, follow the instructions to Deploy OpenVidu Pro in AWS.
To change from OpenVidu Pro to OpenVidu Enterprise you just need to:
1) Configure the following property in the .env file at your Master Node installation path (default to /opt/openvidu):
2) Restart OpenVidu as usual:

```
sudo su
cd /opt/openvidu
./openvidu start
```
3) That's it, you are now running OpenVidu Enterprise with a single Master Node! All documentation about administration, OpenVidu configuration, etc. present in the OpenVidu Pro AWS Deployment documentation applies to OpenVidu Enterprise with a single Master Node.
High Availability deployment 🔗
OpenVidu Enterprise can be deployed with multiple Master Nodes to provide High Availability and fault tolerance. In this section we will explain, step by step, how to deploy OpenVidu Enterprise with these capabilities. If you want to read more about the OpenVidu Enterprise High Availability architecture, check it out here.
1) Previous requirements 🔗
To deploy OpenVidu Enterprise in AWS with High Availability you need at least:
- A FQDN (Fully Qualified Domain Name). The domain name will be configured at the end of the instructions to point to the Load Balancer URL created by the CloudFormation Stack.
- A valid certificate for your FQDN installed in AWS. The CloudFormation template automatically launches a Load Balancer to be used as the entry point to the OpenVidu cluster. The CloudFormation template needs the ARN of the certificate as a parameter.
- A running Elasticsearch and Kibana deployment. If you do not have any Elastic Stack deployed, check this guide on how to deploy an Elastic Stack as a service in AWS or Elastic Cloud.
- A user configured in your Elastic Stack to be used in the OpenVidu configuration. You can use a normal user with all privileges, or just use a fine-grained one. Check this guide on how to create a fine-grained user.
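If you have already imported your certificate into AWS Certificate Manager, you can look up its ARN with the AWS CLI. This is just a sketch, assuming the AWS CLI is configured; the domain and region below are examples that you must replace with your own:

```shell
# List the ARN of the ACM certificate matching your FQDN
# (domain and region are example values).
aws acm list-certificates --region eu-west-1 \
  --query "CertificateSummaryList[?DomainName=='example-multimaster.openvidu.io'].CertificateArn" \
  --output text
```

The printed ARN is the value you will need later in the Load Balancer Certificate configuration section.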
2) Access to the console of AWS Cloud Formation 🔗
3) Select Create Stack 🠚 With new resources 🔗
4) Option Specify template 🠚 Amazon S3 URL with the following URL 🔗
To deploy a fixed version, including previous ones, replace latest with the desired version number.
5) Specify stack details 🔗
First, indicate a name for your deployment. Next, fill each section of the parameters form. Read all parameters carefully, because all of them are important:
5.1) OpenVidu Configuration Parameters 🔗
| Parameter | Description | Value |
| --- | --- | --- |
| Domain Name | The FQDN that will be used to access OpenVidu Enterprise. This parameter will be configured at the end of the instructions to point to the Load Balancer URL, in section 6) Configure your domain when the stack has been created. | Your fully qualified domain. For example: example-multimaster.openvidu.io |
| OpenVidu Pro Cluster Id | This parameter is used by OpenVidu Pro to send indexed statistics to Elasticsearch, and can be used as a way to distinguish different clusters. | Your choice. For example: openvidu-multimaster |
| OpenVidu Pro License | Your purchased license key from your OpenVidu account. While in beta, you will not be charged. | Your OpenVidu Pro License key |
| OpenVidu Secret | Secret to connect to this OpenVidu Platform. Cannot be empty and must contain only alphanumeric characters [a-zA-Z0-9], hyphens "-" and underscores "_". | |
| Media Server | The Media Server implementation you want to use. | |
5.2) Elasticsearch and Kibana Configuration 🔗
| Parameter | Description | Value |
| --- | --- | --- |
| Elasticsearch URL | URL of the Elasticsearch service. It is very important to specify the port, even if it is 443. | Your Elasticsearch URL. For example: https://example.elasticsearch.com:443 |
| Kibana URL | URL of the Kibana service. It is very important to specify the port, even if it is 443. | Your Kibana URL. For example: https://example.kibana.com:443 |
| Elasticsearch and Kibana username | Elasticsearch username for OpenVidu. | |
| Elasticsearch and Kibana password | Password of the previous username. | |
5.3) EC2 and Autoscaling configuration 🔗
All of these parameters will create two Autoscaling Groups with their corresponding parameters:
Master Nodes properties 🔗
This Autoscaling Group controls the number of Master Nodes in your deployment. Master Nodes do not autoscale automatically; they are created just by changing the Desired Size of their Autoscaling Group. These are the initial parameters that you need to set up:
| Parameter | Description | Value |
| --- | --- | --- |
| Master Nodes instance type | The type of instance you want to use for Master Nodes. | Your choice. For example: c5.xlarge |
| Minimum Master Nodes | Minimum number of Master Nodes that you want to have configured. | Your choice. For example: 1 |
| Maximum Master Nodes | Maximum number of Master Nodes that you want to have configured. | Your choice. For example: 2 |
| Desired Master Nodes | Number of Master Nodes you want to run on deploy. | Your choice. For example: 2 |
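Since Master Nodes do not autoscale automatically, one way to change their Desired Size after deployment is through the AWS CLI. A sketch, assuming the AWS CLI is configured; the Autoscaling Group name below is hypothetical, so look up the real one in the EC2 console first:

```shell
# Manually scale Master Nodes by changing the Desired Capacity
# of their Autoscaling Group (the group name is an example).
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name ov-example-ASGMasterNode \
  --desired-capacity 3
```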
Media Nodes properties 🔗
This Autoscaling Group controls the number of Media Nodes in your deployment. Autoscaling is enabled by default. These are the initial parameters that you need to set up:
| Parameter | Description | Value |
| --- | --- | --- |
| Media Nodes instance type | The type of instance you want to use for your Media Nodes. | Your choice. For example: c5.xlarge |
| Minimum Media Nodes | Minimum number of Media Nodes that you want to have configured. | Your choice. For example: 2 |
| Maximum Media Nodes | Maximum number of Media Nodes that you want to have configured. | Your choice. For example: 4 |
| Desired Media Nodes | Number of Media Nodes you want to run on deploy. | Your choice. For example: 2 |
| Scale Up Media Nodes on Average CPU | Average CPU necessary to scale up Media Nodes. | Your choice. For example: 70 |
| Scale Down Media Nodes on Average CPU | Average CPU necessary to scale down Media Nodes. | Your choice. For example: 30 |
Common properties 🔗
This is the SSH key that you want to use for your EC2 instances, for both the Master Node and Media Node instances created by their respective Autoscaling Groups:

| Parameter | Description | Value |
| --- | --- | --- |
| SSH Key Name | EC2 Key to be used in the future for administrative tasks. | Your choice |
5.4) Load Balancer Certificate configuration 🔗
| Parameter | Description | Value |
| --- | --- | --- |
| ARN of the AWS Certificate | ARN of the Certificate imported in your AWS account for your FQDN. | Your choice |
5.5) Networking configuration 🔗
| Parameter | Description | Value |
| --- | --- | --- |
| OpenVidu Pro VPC | The VPC in which you want to deploy the cluster. | Your choice |
| OpenVidu Pro Subnets | The Subnets you want to use for the OpenVidu Pro Cluster. You must select at least 2 subnets! | Your choice |
6) Configure your domain when the stack has been created 🔗
When everything is deployed, you should see this in the Outputs section of CloudFormation:
Now you need to point the configured Domain Name (which was pointing to a "dummy IP" before the stack was deployed) to the Load Balancer URL with a CNAME record in your DNS. Wait until the DNS change has propagated; then you will be able to reach OpenVidu Enterprise using your Domain Name.
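To verify that the CNAME has propagated before opening the dashboard, you can query DNS directly. A sketch with dig, where the domain is an example you must replace with your own:

```shell
# The answer should be the Load Balancer URL shown in the
# Outputs section of CloudFormation.
dig +short CNAME example-multimaster.openvidu.io
```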
Check cluster after deploy 🔗
If you want to check that everything is set up correctly in AWS after deploying the CloudFormation Stack you can follow the next subsections:
1) Check Master Nodes 🔗
1.1) Go to the Target Group whose name matches the deployed CloudFormation Stack. You can find Target Groups in the EC2 panel of AWS. The following image shows how the target group may look if we named the CloudFormation Stack ov-example:
1.2) If all Master Nodes are deployed correctly, you should see something like this:
You can check that master nodes are deployed correctly if:
- There are Master Nodes deployed in different availability zones (only if you have configured more than one Master Node).
- Status of master nodes is "healthy"
1.3) Check OpenVidu API load balancing:
Execute a GET request to /openvidu/api/config. You can do this with
curl -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/config
This request will return a JSON with all the OpenVidu Pro configuration parameters. Look at the AWS_INSTANCE_ID parameter of the returned JSON. If you have multiple healthy Master Nodes, this parameter should be different on each request, and each of the different values should be the ID of the Master Node that received the request.
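The check above can be sketched as a small loop, assuming jq is installed to extract the field (replace the placeholders with your domain and secret):

```shell
# Each healthy Master Node should show up as a distinct
# AWS_INSTANCE_ID across repeated requests.
for i in 1 2 3 4; do
  curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> \
    "https://<DOMAIN_NAME>/openvidu/api/config" | jq -r '.AWS_INSTANCE_ID'
done
```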
2) Check Media Nodes 🔗
As Media Nodes are not attached to any Load Balancer, the health of these nodes is shown directly in the Autoscaling Group section and managed by Master Nodes. To check that all Media Nodes are correctly set up:
2.1) Go to the Autoscaling groups section and check the autoscaling group which starts with your CloudFormation Stack name and ends with "ASGMediaNode"
2.2) If all Media Nodes are deployed correctly, you should see something like this:
2.3) Check OpenVidu has registered all media nodes:
Execute a GET request to /openvidu/api/media-nodes (check the REST API reference for more information about this request). You can do this with
curl -u OPENVIDUAPP:<OPENVIDU_SECRET> https://<DOMAIN_NAME>/openvidu/api/media-nodes
This request will return a JSON with all registered media nodes and related information.
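If jq is available, you can quickly count the registered Media Nodes and compare the result with your Desired Media Nodes parameter. The content field below is an assumption about the response shape, so verify it against the REST API reference:

```shell
# Count registered Media Nodes (the 'content' array is assumed
# from the usual paginated shape of OpenVidu responses).
curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> \
  "https://<DOMAIN_NAME>/openvidu/api/media-nodes" | jq '.content | length'
```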
Autoscaling Configuration 🔗
OpenVidu Enterprise autoscaling is managed by AWS Autoscaling Groups. Consequently, all autoscaling parameters can be changed through CloudFormation parameters.
To change those parameters you just need to go to AWS CloudFormation Panel 🠚 Select Your Stack 🠚 Update 🠚 Use current template.
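The same update can be scripted with the AWS CLI. A sketch, assuming the stack is named ov-example and a parameter key such as DesiredMediaNodes; check the real parameter keys of your stack before running anything like this:

```shell
# Update one autoscaling parameter while keeping the current template.
# Any other stack parameter must be listed with UsePreviousValue=true,
# omitted here for brevity.
aws cloudformation update-stack \
  --stack-name ov-example \
  --use-previous-template \
  --parameters ParameterKey=DesiredMediaNodes,ParameterValue=3 \
  --capabilities CAPABILITY_IAM
```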
OpenVidu Enterprise Configuration 🔗
Technically, you can connect to any instance through SSH, but this could lead to inconsistencies in the configuration, because Master Nodes and Media Nodes are now volatile parts of the infrastructure. They exist temporarily: they can be destroyed and new nodes created, so the configuration cannot live on any single EC2 instance. For this reason, administrative tasks are done via the REST API or by changing a persisted configuration file in an S3 bucket.
1) Change the configuration via REST API (Recommended) 🔗
While OpenVidu Enterprise is running, you can change some OpenVidu parameters by calling /openvidu/api/restart. All OpenVidu Master Nodes will restart automatically and the configuration will be persisted in an S3 bucket. All modifiable parameters are documented.
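A property change through the restart endpoint could look like the sketch below. The property shown is only illustrative; check the list of modifiable parameters first, and replace the placeholders with your domain and secret:

```shell
# POST the properties to change; all Master Nodes will restart
# and the new values will be persisted in the S3 bucket.
curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> \
  -X POST -H "Content-Type: application/json" \
  -d '{"OPENVIDU_STREAMS_VIDEO_MAX_RECV_BANDWIDTH": 1000}' \
  "https://<DOMAIN_NAME>/openvidu/api/restart"
```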
2) Change configuration by modifying S3 configuration file (Not recommended) 🔗
2.1) Go to the S3 configuration bucket of your CloudFormation. You can find it as a resource in the CloudFormation panel:
2.2) Modify the .env configuration in the S3 bucket: in this S3 bucket you will see a file named .env. Any change that cannot be made through the REST API request to /openvidu/api/restart is done by modifying the content of this .env file in the S3 bucket.
2.3) Restart Master Nodes via AWS: after changing the .env file in the S3 bucket, you need to restart all Master Nodes via the AWS EC2 Panel, or terminate all Master Nodes and wait for the Autoscaling Group to recreate them. New EC2 instances created by the Autoscaling Group will download the updated .env configuration file from the S3 bucket.
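The S3 round trip can also be done with the AWS CLI. A sketch, where the bucket name is a placeholder you must take from the resources of your CloudFormation stack:

```shell
# Download, edit and upload the persisted configuration file
# (bucket name is a placeholder; use the one from your stack).
aws s3 cp s3://<CONFIG_BUCKET_NAME>/.env .env
# ... edit .env locally ...
aws s3 cp .env s3://<CONFIG_BUCKET_NAME>/.env
```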
Deploying an OpenVidu application 🔗
To deploy an OpenVidu application that uses our recently deployed stack, you can use any application developed for OpenVidu. You just need to point your application to the configured Domain Name and use the OpenVidu Secret configured in the CloudFormation deployment. Additionally, remember that your app needs to be deployed with a valid certificate for WebRTC to work.
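A quick way to verify that an application will be able to reach the deployment is to create a session through the REST API, using the same URL and secret the app will use (placeholders below must be replaced with your own values):

```shell
# If the domain and secret are correct, this returns a JSON
# describing the newly created session.
curl -s -u OPENVIDUAPP:<OPENVIDU_SECRET> \
  -X POST -H "Content-Type: application/json" \
  -d '{}' \
  "https://<DOMAIN_NAME>/openvidu/api/sessions"
```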
If you want to see an example of an application that automatically reconnects users after a node crashes, take a look at the openvidu-high-availability demo.