Hardware of Aternity Trial POC Deployment (up to 1000 devices)

The trial POC demo deployment of Aternity on-premise is the smallest configuration, supporting up to 1000 monitored devices or virtual desktops: up to 900 physical devices and up to 100 virtual application sessions. It also assumes up to 10 logged-in Aternity users updating dashboards and performing a combined total of up to 10 clicks per minute.

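If you want to check a planned device mix against these limits before installing, the arithmetic is straightforward. The following sketch uses only the limits stated above; the planned counts are placeholders to replace with your own numbers.

    # Trial POC limits from the paragraph above.
    MAX_TOTAL_DEVICES = 1000
    MAX_PHYSICAL_DEVICES = 900
    MAX_VIRTUAL_APP_SESSIONS = 100

    # Placeholder values -- replace with your planned rollout.
    planned_physical = 850
    planned_virtual = 100

    assert planned_physical <= MAX_PHYSICAL_DEVICES, "Too many physical devices for the trial POC"
    assert planned_virtual <= MAX_VIRTUAL_APP_SESSIONS, "Too many virtual application sessions for the trial POC"
    assert planned_physical + planned_virtual <= MAX_TOTAL_DEVICES, "Total exceeds the 1000-device trial limit"
    print("The planned mix fits within the trial POC deployment limits")
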
Trial POC deployment for Aternity on-premise

The Management Server's computer can host ALL of the following on the same computer: the Oracle Database Server, the Data Warehouse Server, and the Aggregation Server. The Aternity Docker Components Server hosts the Aternity Vertica Database Server. The Dashboard Server runs on its own standalone computer.

Important

Synchronize all Aternity components to have the same date, time AND time zone.

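One way to verify this before installing is to compare the clock and time zone reported by each server. The following sketch is only an illustration: it assumes password-less SSH access from the machine where you run it, and the host names are placeholders for your own Aternity servers.

    """Minimal sketch: verify that all Aternity servers report the same UTC time
    (within a small tolerance) and the same time zone. Assumes password-less SSH
    access; the host names are placeholders for your own servers."""
    import subprocess

    HOSTS = ["aternity-mgmt", "aternity-dashboard", "aternity-docker"]  # placeholders
    TOLERANCE_SECONDS = 5

    def remote(host, command):
        """Run a command on a remote host over SSH and return its output."""
        result = subprocess.run(["ssh", host, command],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    epochs = {h: int(remote(h, "date -u +%s")) for h in HOSTS}  # UTC epoch seconds
    zones = {h: remote(h, "date +%Z") for h in HOSTS}           # local time zone name

    drift = max(epochs.values()) - min(epochs.values())
    print(f"Maximum clock drift across servers: {drift} seconds")
    if drift > TOLERANCE_SECONDS:
        print("WARNING: server clocks are not synchronized")
    if len(set(zones.values())) > 1:
        print(f"WARNING: servers report different time zones: {zones}")
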
You can also set up a trial deployment on VMware virtual machines by downloading and deploying the three OVA image files.

Important

Use OVA image files ONLY in the trial POC demo deployment of Aternity on-premise, and not in production. Using OVA image files in production will prevent you from updating your deployment to a new version.

Set up trial POC for Aternity on-premise with OVA image files

Learn more about an OVA trial deployment.

Hardware Specifications for a Trial or POC Deployment

The trial deployment requires the following hardware specifications:

For each component below, the specifications list the minimum memory (RAM), minimum CPU, free disk space (after allocating system requirements), and any additional configuration.

Aternity Management Server (also hosting the Oracle Database Server, Aggregation Server, and Data Warehouse Server)
  • Minimum memory (RAM): 12GB
  • Minimum CPU: Xeon E5 family, 4 cores, 2.4 GHz
  • Free disk space: 180GB
  • Additional configuration:
      JVM heap size (learn more): -Xmx4096m, -XX:MaxNewSize=2048m
      ActiveMQ JVM heap size: 1GB
      Oracle System Global Area (SGA): 4GB
      Oracle Program Global Area (PGA): 2GB
    These allocations fit within the 12GB minimum; see the memory-budget sketch after the specifications.

Aternity Dashboard Server
  • Minimum memory (RAM): 16GB (you can ignore the RAM size warning during setup). Add 16GB RAM for each additional 10 logged-in users.
  • Minimum CPU: Xeon E5 family, 8 cores, 2.4-3.6 GHz, 15MB cache. Add 8 cores for each additional 10 logged-in users.
  • Free disk space: 50GB (SSD), or 900 I/O operations per second (IOPS).
  • Additional configuration: For non-standard hardware configurations, see the setup instructions and hardware best practices from Tableau. To deploy on a virtual machine, you must reserve (guarantee) the required CPU and memory. The per-10-users scaling rule is illustrated in the sizing sketch after the specifications.

Aternity Vertica Database Server

In the trial deployment, the Aternity Docker Components Server hosts the Aternity Vertica Database Server; see its specifications below.

If you deploy on virtual machines, you MUST define all resources as dedicated or reserved (Resource Allocation > Reservation). In addition, you can use a Logical Volume Manager (LVM) in Aternity 11 on any hard drive with no restrictions. See Vertica limitations for LVM.

Aternity Docker Components Server (with the Aternity Vertica Database Server)
  • Minimum memory (RAM): 24GB
  • Minimum CPU: Xeon E5 family, 8 cores, 2.4-3.6 GHz
  • Free disk space: 100GB, divided as follows (see the partition check sketch after the specifications):
      • 40GB for the partition with the Vertica data folder.
      • 20GB for the partition with the Docker engine's local storage, by default located at /var/lib/docker/.
      • 40GB for the partition with the Docker components data folder.
        Divide this free disk space into several partitions, where each partition is dedicated to the data directory of a different component: the Messaging Broker (Kafka) and the Raw Data Component (Cassandra). Allocate disk space to each partition according to the hardware requirements of the relevant sizing model; the remaining free disk space is dedicated to log files and the REST APIs.
        <cassandra> at least 5GB
        <kafka> at least 5GB

If you do not need the optional components, you can reduce the memory by 2.5GB and the CPU by 0.8 cores, which is negligible at this deployment size.

In any case, here is the detailed calculation:
  • If you do not deploy Aternity Data Source for Portal, you can reduce the memory by 500MB and the CPU by 0.2 cores.
  • If you do not deploy Aternity REST API Server, you can reduce the memory by 500MB and the CPU by 0.1 cores.
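
As a quick sanity check of the Aternity Management Server figures, the tuned memory consumers listed in its specifications (4GB Aternity JVM heap, 1GB ActiveMQ heap, 4GB Oracle SGA, and 2GB PGA) add up to 11GB, which is why the server needs at least 12GB of RAM. A minimal sketch of that arithmetic (the roughly 1GB left for the operating system is an estimate, not a stated requirement):

    # Memory consumers on the Aternity Management Server, taken from the
    # specifications above, compared against the 12GB minimum RAM.
    allocations_gb = {
        "Aternity JVM heap (-Xmx4096m)": 4,
        "ActiveMQ JVM heap": 1,
        "Oracle System Global Area (SGA)": 4,
        "Oracle Program Global Area (PGA)": 2,
    }
    minimum_ram_gb = 12

    total = sum(allocations_gb.values())
    print(f"Configured allocations: {total}GB of the {minimum_ram_gb}GB minimum")
    # Roughly 1GB remains for the operating system and other processes (an estimate).
    assert total <= minimum_ram_gb, "Configured allocations exceed the minimum RAM"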
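
The Aternity Dashboard Server scales in steps of 10 logged-in users: 16GB RAM and 8 cores for the first 10 users, plus another 16GB and 8 cores for each additional 10. A minimal sketch of that rule (the function name is illustrative only):

    import math

    def dashboard_server_sizing(logged_in_users):
        """Return (ram_gb, cpu_cores) for the Aternity Dashboard Server:
        16GB RAM and 8 cores per block of 10 logged-in users."""
        blocks = max(1, math.ceil(logged_in_users / 10))
        return 16 * blocks, 8 * blocks

    print(dashboard_server_sizing(10))  # (16, 8)  -- the trial POC baseline
    print(dashboard_server_sizing(25))  # (48, 24) -- baseline plus two extra blocks of 10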
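
To confirm that the Aternity Docker Components Server partitions match the breakdown above, you can compare each mount point's size with its requirement. A minimal sketch, assuming hypothetical mount points for the Vertica and Docker components data folders (only /var/lib/docker/ is a documented default):

    import shutil

    # Required partition sizes in GB, from the breakdown above. The Vertica and
    # Docker components data paths are assumptions -- adjust them to your layout.
    REQUIRED_GB = {
        "/data/vertica": 40,             # partition with the Vertica data folder (assumed path)
        "/var/lib/docker": 20,           # Docker engine's local storage (documented default)
        "/data/docker-components": 40,   # partition with the Docker components data folder (assumed path)
    }

    for mount, required in REQUIRED_GB.items():
        try:
            total_gb = shutil.disk_usage(mount).total / 1024**3
        except FileNotFoundError:
            print(f"{mount}: mount point not found")
            continue
        status = "OK" if total_gb >= required else "TOO SMALL"
        print(f"{mount}: partition size {total_gb:.1f}GB, requires {required}GB -> {status}")
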
Important

You must also install each server with the required operating system and other software. For details, see the system and software requirements for each server.