The Runtime is the part of ODC responsible for hosting and running the applications developed in the Platform. It has its own architecture, independent of the Platform's.
Runtime is a general name for a set of different stages (Development, Test and Production), which run independently from one another.
The staging approach allows multiple teams to deliver independently and in parallel, following Continuous Integration (CI) software development best practices.
The high-level architecture of a Runtime stage is best described by the following diagram:
As the diagram above shows, all HTTPS requests made by authenticated and authorized users are handled by the stage's Load Balancer and forwarded to the respective apps running in a Kubernetes cluster.
Like the development Platform, the Runtime relies on AWS services: the Kubernetes cluster is an AWS Elastic Kubernetes Service (EKS) cluster.
The diagram also shows that the OutSystems analytics pipeline extracts data from the Runtime to make analytics data available in ODC.
It also shows how Key Management is associated with both data encryption and secrets management.
Security
All apps are containerized and run in the Runtime cluster with secure REST API endpoints. HTTPS is used to secure the communication between the client (browser) and the app, meaning that all requests made to an app use this secure protocol to reach the app's secure endpoint.
As a rule, apps are available at https://<customername>.outsystems.app/appname.
As an example, here is a minimal sketch of a client calling an app's REST endpoint over HTTPS, assuming a hypothetical customer name (acme) and app name (orders):
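```python
import requests

# Hypothetical URL following the https://<customername>.outsystems.app/appname pattern.
APP_URL = "https://acme.outsystems.app/orders"

# All traffic uses HTTPS; requests verifies the server's TLS certificate by default.
response = requests.get(
    f"{APP_URL}/rest/v1/orders",  # illustrative REST API path exposed by the app
    headers={"Authorization": "Bearer <access-token>"},  # authenticated, authorized user
    timeout=10,
)
response.raise_for_status()
print(response.json())
```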
Runtime cluster
As mentioned above, the Runtime cluster runs on AWS EKS, and each compiled app is generated as a container image.
Each running instance of that container image is a container.
The Platform's Build Service packages each app as a container image, stores it, and later passes that image to a Runtime stage for deployment, where it runs as a separate container. This approach follows the "Build once, deploy anywhere" continuous delivery principle and also makes the infrastructure more resilient to potentially resource-intensive apps, which could otherwise degrade the performance of other apps.
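As a rough illustration of the "Build once, deploy anywhere" idea, the sketch below uses the Docker SDK for Python: the image is built a single time and the same artifact is then pushed for each stage to pull and run. This is not the actual Build Service; the build path, image name, and registry are hypothetical.

```python
import docker

client = docker.from_env()

# Build the app's container image once (hypothetical build context and tag).
image, _build_logs = client.images.build(path="./my-app", tag="my-app:1.0.0")

REGISTRY = "registry.example.com/odc"  # hypothetical container registry

# The very same image (same image ID) is pushed once and later pulled by each
# Runtime stage, where it runs as a separate container.
image.tag(repository=f"{REGISTRY}/my-app", tag="1.0.0")
for line in client.images.push(f"{REGISTRY}/my-app", tag="1.0.0", stream=True, decode=True):
    print(line)
```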
Auto-scaling
The compute capacity for each app container is scalable in each non-Development Runtime stage, such as Production.
The auto-scale controller continuously monitors the CPU and RAM usage of each app container, compares it with the overall available cluster compute capacity, and allocates additional compute resources if the usage exceeds a defined threshold.
This scaling happens in real time and requires no user interaction. It is possible because each Runtime stage is isolated from the others and the overall resources come from a multi-tenant pool.
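A minimal sketch of this kind of threshold-based scaling, expressed as a Kubernetes HorizontalPodAutoscaler created with the Kubernetes Python client. The namespace, Deployment name, replica bounds, and the 80% CPU and memory utilization targets are assumptions for illustration, not ODC's actual configuration.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster


def utilization_metric(resource_name: str, target_pct: int) -> client.V2MetricSpec:
    """Scale out when average utilization of the resource exceeds target_pct."""
    return client.V2MetricSpec(
        type="Resource",
        resource=client.V2ResourceMetricSource(
            name=resource_name,
            target=client.V2MetricTarget(type="Utilization", average_utilization=target_pct),
        ),
    )


hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="my-app-hpa", namespace="apps"),  # hypothetical names
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app",
        ),
        min_replicas=2,
        max_replicas=10,
        # Watch both CPU and RAM, scaling when either crosses the threshold.
        metrics=[utilization_metric("cpu", 80), utilization_metric("memory", 80)],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(namespace="apps", body=hpa)
```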
The auto-scale controller also replicates the app containers running in each Runtime cluster across multiple availability zones to ensure High Availability (HA).
These availability zones are physically distinct locations in the cloud, which isolates each one from failures in the others.
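In Kubernetes, spreading an app's replicas across zones is commonly expressed with a topology spread constraint on the pod spec. The sketch below only illustrates the concept for a hypothetical my-app workload; it is not ODC's actual configuration.

```python
from kubernetes import client

# Keep the app's replicas evenly spread across availability zones (illustrative).
zone_spread = client.V1TopologySpreadConstraint(
    max_skew=1,                                  # allow at most one replica of imbalance
    topology_key="topology.kubernetes.io/zone",  # standard node label identifying the zone
    when_unsatisfiable="DoNotSchedule",
    label_selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
)

pod_spec = client.V1PodSpec(
    topology_spread_constraints=[zone_spread],
    containers=[
        client.V1Container(name="my-app", image="registry.example.com/odc/my-app:1.0.0"),
    ],
)
```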
Databases and data stores
Each Runtime stage stores its data in an AWS Aurora Serverless database.
To keep the information resilient and fault-tolerant, data is written to two availability zones simultaneously, as shown in the image below.
The AWS Aurora database architecture model decouples compute and storage. The storage volume scales automatically as the amount of data stored increases.
As mentioned previously, secret data, such as app API keys, is stored in a secrets manager (AWS Secrets Manager).
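A minimal sketch of reading such a secret with the AWS SDK for Python (boto3); the secret name, region, and JSON key are hypothetical, since ODC manages these values internally.

```python
import json

import boto3

# Hypothetical secret name and region.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
response = secrets.get_secret_value(SecretId="odc/my-app/api-key")

# Secrets are commonly stored as a JSON string.
api_key = json.loads(response["SecretString"])["api_key"]
```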