Migrate Your Stored Data from Version 9 to the New Database Format of Version 11

To view version 11 dashboards with the historical data from version 9, perform the data migration process detailed in this article. To fully collect version 9 data, start data migration a month before the upgrade; the migration runs with zero downtime to your deployment. Run data migration for as many days as the retention period you want to see in version 11, but no longer than one month.

In the current release, Aternity introduces a new location for the data. In version 9.x, all data resided in the Oracle Database Server. In version 11, to provide better quality and performance, most of the business analytics data has been moved to the Vertica database.

New Database Architecture in Version 11

The data migration process serves customers who want to keep historical data after the upgrade. To ensure a successful data migration to the latest version, deploy the data migration tool (ETL) and run the migration process.

The data migration process runs in parallel with the existing, fully functional Aternity version 9. The Aternity Management Server remains up and running, enabling normal operation of Aternity version 9. The ETL writes to both databases in parallel, transforming Oracle data (collected in version 9) into the Vertica format: while writing to the existing version 9 database, it simultaneously and seamlessly transforms data into the new database format.

As long as the tool runs, data is transformed and loaded into the new data target of Aternity 11. The historical data retention after upgrade depends on the duration of the migration process: for each day allocated to the migration, a day of historical data becomes available, up to a maximum of 31 days. For example, running the migration for ten days preserves ten days of history. The longer the migration tool runs, the more data is transformed and stored in the Vertica Database Server. To fully collect version 9 data, let both database servers work in parallel for one month, with zero downtime to your deployment.

Without data transformation, dashboards that use the new data storage model open empty after the upgrade.

Dashboards from version 9.x that use the new data storage model in version 11, their retention period in version 11, and their behavior with and without data migration:

  • SLA (retention period in version 11: 400 days)

    After data migration: SLA data transforms to Vertica in chunks; it takes a few days (3-4) to transform and load the last 400 days of SLA data. When you open Aternity version 11 for the first time, your historical data shows on the dashboard.

    Without data migration: the dashboard opens empty after upgrade. Only new data collected by the upgraded deployment appears on this dashboard, so it will be fully populated with data a month after upgrade as new data flows in from devices.

  • Analyze Applications, Analyze Business Activities, Troubleshoot Application, Commonalities Analysis for a single user's activities, Monitor Enterprise Summary, Monitor Application, Boots, and Boot Analysis (retention period in version 11: one month)

    After data migration: as long as the tool runs, each row added to the Aternity version 9 database is also added to the Vertica database; a complete process requires one day of migration to transform one day of historical data. If migration runs longer than one month, only the most recent 31 days of data will be available; if it runs for less than one month, the available data will be limited accordingly. When you open Aternity version 11 for the first time, your historical data shows on these dashboards.

    Without data migration: these dashboards open empty after upgrade. Only new data collected by the upgraded deployment appears on these dashboards and APIs, so they will be fully populated with data a month after upgrade as new data flows in from devices.

  • Dashboards new in version 11, which did not exist in version 9 at all (for example, Installed Applications, Wi-Fi, Host Resources, Skype Commonalities, and more); retention period in version 11: one month. Data migration does not apply (N/A): these dashboards initially open empty and will be fully populated with data in a month as new data flows in from devices.

As shown in the diagram below, the workflow is as follows:
  1. Install the Vertica server.
  2. Create a new schema on Vertica.
  3. Deploy the migration tool (ETL).
  4. Start the migration process by activating migration cron jobs.
  5. To migrate all historical data, run the migration process for about one month.
  6. When the data migration finishes, stop the process and delete all ETL-related cron jobs.
  7. Now you can start upgrading Aternity.
Important

Stop the data migration tool (ETL) as close as possible to the actual upgrade to avoid gaps in the retained data. The longer the gap between stopping data migration and starting the upgrade, the bigger the gaps in the retained data.

Proceed with data migration in this order
Tip

New attributes that did not exist in older versions appear in the new database tables as not available (N/A) (for example, MS Office License Type = N/A, Machine Power Plan = N/A).

Before you begin

Before migrating stored data to Aternity on-premise

  • Familiarize yourself with the new architecture of version 11, as well as with the new hardware and software requirements.

  • Always check the sizing before updating. This version uses different database sizes, dashboard requirements, and new servers. Learn more. You need a dedicated server to run the new Vertica Database Server and another for the Docker Components Server.

  • You must start with a fully functioning deployment of Aternity on-premise 9.x. If you have an older version, we recommend performing a fresh deployment.

  • Prepare a dedicated server for the data migration tool. Since the data migration tool runs for a limited time before the Aternity version update, we recommend using the same server that will host the Aternity Docker Components Server once Aternity 11 is installed. The migration process therefore requires no additional hardware purchases beyond those required for any Aternity 11 deployment.

  • To run the Python script that migrates data, you need pip 9.0.3. Older or newer versions of the pip package are automatically replaced by pip 9.0.3 as part of the migration ETL component installation.

  • For security reasons, it is always better to set up servers as a non-root user with restricted privileges to run only certain predefined commands.

    Sudoers is the configuration file that provides the list of commands and access permissions: it defines who can do what. The system invokes a requested command only if the file permits the user access. The access permissions include enabling only the listed commands and only from the specified servers; requiring a password per user or group; or never requiring a password at all for a particular command line. ALL is a special value in the sudoers file meaning “no restrictions.”

    Even to set up servers as the root user, that user must be predefined in the sudoers file.
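    For illustration, a minimal sketch of the kinds of entries described above; the usernames, group name, and command path here are hypothetical, and sudoers should always be edited with visudo:

    # Allow one user to run a single predefined script without a password
    etlinstall  ALL=(ALL)  NOPASSWD: /bin/bash /home/etluser/migration/sudoer_cfg.sh
    # Allow members of an admin group to run any command (ALL = no restrictions)
    %etladmins  ALL=(ALL)  ALL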

  • Ensure you have administrator permissions on Windows machines and sudo permissions on Linux.

  • Check that the dedicated server for the data migration tool conforms to the minimum system requirements:

    Attribute Requirement

    Operating system for ETL

    • Linux CentOS 7.4 - 7.7. To verify your version of CentOS, enter cat /etc/centos-release. For the kernel version, enter uname -r and see the first two numbers.

    • Red Hat Enterprise Linux (RHEL) 7.4 - 7.7. To verify the RHEL version, enter cat /etc/redhat-release

    Virtual memory settings for ETL

    Verify that your system has an active swap file of at least 6 GB by entering free -m.

    To create a 6 GB swap file and enable it, enter the following commands, one line at a time:

    dd if=/dev/zero of=/swapfile bs=10240 count=630194
    mkswap /swapfile
    swapon /swapfile
    

    To set the swap file as permanent, edit the system partition settings in /etc/fstab in a plain text editor such as vim, and add the following line at the end of the file:

    /swapfile   swap    swap    defaults    0 0

    Then save and exit the text editor. The system enables the new swap file when you restart the server.
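    As a quick sanity check (a sketch; the exact output format varies by distribution), confirm that the swap file is active and large enough:

    free -m       # the Swap line should show a total of roughly 6000 MB or more
    swapon -s     # summarizes active swap areas and should list /swapfile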

    Resource limits for ETL

    The user who runs the ETL must have an open file limit of 65536 or more.

    Set the limit by editing the system's resource limits in /etc/security/limits.conf in a plain text editor such as vim, adding the following line at the end of the file:

    etl_username -     nofile  65536

    If the username is root, enter:

    root -     nofile  65536

    Then save and exit the editor, and enter ulimit -n in a new session to verify the limit.
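    For example (a sketch; the username etluser is hypothetical), verify the limit as it applies to the ETL user by opening a fresh session for that account:

    su - etluser -c 'ulimit -n'    # should print 65536 or more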

    Permissions

    Only users who are defined in the sudoers file can run the setup. Start as a non-root user and follow the setup procedure. At some point, you will be prompted to ask your IT representative to define permissions in the sudoers file. You can also run the setup as the root user if that user is defined in the sudoers file as well.

    The user who runs the server setup needs to have sudo privileges.

    The user with permissions to run the ETL must also be allowed to run scheduled (crontab) tasks. The user who runs the ETL process does not need root or sudo root privileges.

Procedure

  1. Install Vertica Database Server version 11 on a dedicated machine.
    The Aternity Vertica Database Server stores the performance data in the Vertica format, which is most efficient for displaying in Aternity dashboards.

    The number of Vertica Database Servers you require depends on the size of your deployment, and the level of high availability that you need.

    The migration process should be executed prior to the Aternity update, and requires a fully functioning Vertica database.

    For the detailed instructions, see Set Up a New Aternity Vertica Database Server.

    Even if you are not interested in migrating your historical data, the Vertica Database Server must still be deployed as part of the general system update because it is part of Aternity 11 architecture.

    a. Start the Vertica Database Server.
  2. Create a new database schema in the Vertica Database Server.
    a. Extract data_migration.zip, which you downloaded as part of the Aternity on-premise server setup package.
    b. In the extracted folder, locate the file ansible_artifact_db_update_<version>.zip.
    c. Unpack the ansible_artifact_db_update_<version>.zip file on any Windows machine that can access the Vertica Database Server via port 5433.
      Several folders with the unpacked content are created.
    d. Open a command line in the extracted directory, and navigate to the folder named deployment.
    e. Execute the following command:
      create_vertica.bat <vertica_ip> <vertica_db_name> <vertica_db_admin_user> <vertica_db_admin_pass> <aternity_user_name> <aternity_user_password>
      

      This is a single command line containing all of the parameters needed to create a new schema in the Vertica Database Server. We recommend that you carefully edit this command in a text editor to ensure that you have correctly entered the parameters, and then paste the text into the command prompt. Note that the parameters are separated with spaces.

      Important

      The parameters you enter for <aternity_user_name> and <aternity_user_password> MUST be identical to the Aternity schema name and password in your deployed Aternity Oracle Database Server. If the Vertica schema name or password are different from the Oracle schema name and password, the upgrade will fail.

      Field Description
      VERTICA_IP

      Enter the hostname or IP v4 address of the Vertica Database Server.

      DB_NAME

      Enter the name of the Vertica database (not the schema).

      DBADMIN_USER

      Enter the username of the Vertica database administrator (dbadmin by default), with privileges to create schemas and prepare the database.

      DBADMIN_PASS

      Enter the password of the Vertica database administrator, with privileges to create schemas and prepare the database.

      ATERNITY_USER_NAME

      Enter the same Aternity username as used in the Oracle database schema.

      ATERNITY_USER_PASSWORD

      Enter the same Aternity password as used in the Oracle database schema.

      An example of the output generated when the Vertica schema creation script starts running

      The script creates the Aternity schema in the Vertica Database Server, and then runs a few database updates on this schema. Once all database updates have completed successfully, the script finishes and returns you to the command line.
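      For illustration only, such a command might look like this, where the address, database name, and credentials are hypothetical placeholders for your own values:

      create_vertica.bat 10.0.0.15 aternity dbadmin MyDbaPass1 aternity MyAternityPass1

      Here the Aternity schema name (aternity) and its password must match the schema name and password of the deployed Oracle database, as noted above.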

  3. Install a temporary ETL component for the data migration process.
    a. In the folder extracted above, locate the file data_migration_v9_v11.tar and copy it to a temporary folder on the data migration host.
    b. On the Linux machine dedicated to the Aternity Docker Components Server, which temporarily serves as the ETL component for data migration, unpack the tar file in the temporary folder by entering tar -xvf data_migration_v9_v11.tar.
      Unpack the ETL setup files
      Field Description
      -x

      Use -x to extract the contents of the archive.

      -v

      Use -v to output all messages (verbose).

      -f

      Use -f to specify the filename.

    c. Configure the user who will set up the server to have the required sudo privileges.
      1. Edit the file sudoer_cfg.sh (located under the unpacked tar file). At the beginning of the file, replace the values of the parameters specified in the table below with the values relevant for your deployment:
        Field Description
        INSTALLING_USER='etlinstall'

        Replace with a relevant installing user name. This user must already exist in the system.

        ETL_INSTALLATION_DIRECTORY='/home/etluser/migration'

        Replace with location of the installation files (not the ETL target location).

        ETL_USER='etluser'

        Replace with name of the user with permissions to run the migration ETL. This user must have permissions to run scheduled (crontab) tasks. This user must already exist in the system.

        Typically, you would not run the ETL as a root user, which is the default if you do not specify another user.

        ETL_USER_GROUP='etluser'

        Replace with group of the etl_user.

        ETL_HOME_FOLDER='/etl_data'

        Replace with target location for the ETL (where the ETL will run from).

        VERTICA_DB_NAME='aternity'

        Replace with the Vertica database name.
        VERTICA_SCHEMA_NAME='aternity'

        Replace with the Aternity schema name (the same as the Aternity schema name in Oracle). In migrations from 10.0.x, this schema already exists; in migrations from 9.0.x, it was created in step 2.e.

      2. Do not edit any text below these parameters. Save your changes and exit. (A hypothetical example of an edited parameter block appears after these steps.)

      3. Run the following command: # /bin/bash sudoer_cfg.sh

      4. Copy the output and provide it to your IT representative, who should grant access permissions in the sudoers file allowing the privileged user(s) to set up the ETL. Learn more.

      5. Follow the steps below, running the commands as the user defined under INSTALLING_USER.
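      As an illustration, an edited parameter block might look like the following; all values are hypothetical, so substitute the users, group, and paths from your own deployment:

      INSTALLING_USER='jsmith'
      ETL_INSTALLATION_DIRECTORY='/home/jsmith/migration'
      ETL_USER='etluser'
      ETL_USER_GROUP='etlgroup'
      ETL_HOME_FOLDER='/etl_data'
      VERTICA_DB_NAME='aternity'
      VERTICA_SCHEMA_NAME='aternity'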

    d. Navigate to the temporary folder containing the unpacked setup files and run this command, replacing the parameters with those from your configuration.
      python ETL_Install.py -etl_home_folder <target_folder> -oracle_user <aternity_user_name> -oracle_db_password <aternity_user_password> -oracle_ip <oracle_server_address> -oracle_port <oracle_port> -oracle_service <oracle_aternity_service_name> -vertica_db_name <vertica_aternity_db_name> -vertica_node <vertica_node_address> -vertica_db_admin_user <dbadmin_user_name> -vertica_db_admin_password <dbadmin_password> -vertica_cluster_ips <list_of_all_vertica_node_addresses> -etl_user <dedicated_linux_account> -etl_user_group_name <group_containing_dedicated_etl_user_account>

      The setup runs as a Python script with a long list of required parameters. This is a single command line containing all of the parameters needed to run the migration ETL component setup. We recommend that you carefully edit this command in a text editor to ensure that you have entered the parameters correctly, and then paste the text into the command prompt.

      Run the setup script using Python
      Field Description

      ETL_HOME_FOLDER

      Provide a path to a destination folder where setup writes the program files. This folder also stores ETL data files.

      Note

      Although you choose the destination folder for the setup, note that it also places files in /opt/vertica and /opt/sqlcl.

      ORACLE_USER

      ORACLE_DB_PASSWORD

      Enter the name of the ATERNITY schema (username) and password of the Aternity Oracle Database Server.

      ORACLE_IP

      Enter the hostname or IP v4 address of the Aternity Oracle Database Server.

      ORACLE_PORT

      Enter the port required to access the Oracle Database Server (default is 1521).

      ORACLE_SERVICE

      Enter the Oracle database service name (usually Aternity). This is the alias to the instance of the Aternity database.

      VERTICA_DB_NAME

      Enter the name of the Vertica database (not the schema).

      VERTICA_NODE

      Enter the hostname or IP v4 address of any one of the Vertica Database Servers in the Vertica cluster.

      The ETL attempts to connect to this server first, and if it is unable, it tries to connect with the next server in the cluster.

      If you only have one Vertica Database Server, enter its hostname or IP v4 address.

      VERTICA_DB_ADMIN_USER

      VERTICA_DB_ADMIN_PASSWORD

      Enter the username and password of the Vertica database administrator, with privileges to create schemas and prepare the database.

      VERTICA_CLUSTER_IPS

      Enter a comma-separated list of all of the hostnames or FQDNs (recommended) or IP v4 addresses of all of the Vertica Database Server nodes in the cluster, for example, vertica_db1.mydomain.com,vertica_db2.mydomain.com,vertica_db3.mydomain.com.

      If you only have one Vertica Database Server, enter its name or address.

      ETL_USER

      Enter the name of the user with permissions to run the ETL. This user must have permissions to run scheduled (crontab) tasks. Typically, you would not run the ETL as a root user, which is the default if you do not specify another user.

      ETL_USER_GROUP_NAME

      Enter the group of the etl_user.

      The setup notifies you when it completes successfully.

      Your setup of the ETL Server completed successfully

      Let both database servers work in parallel for one month. If the migration process runs longer than that, only the most recent 31 days of data will be available. If it runs for less than that, your retained historical data will be limited accordingly.
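      For illustration only, a hypothetical invocation might look like this (a single command line; every hostname, credential, and path is a placeholder for your own values):

      python ETL_Install.py -etl_home_folder /etl_data -oracle_user aternity -oracle_db_password OraPass1 -oracle_ip oracle-db.mydomain.com -oracle_port 1521 -oracle_service aternity -vertica_db_name aternity -vertica_node vertica_db1.mydomain.com -vertica_db_admin_user dbadmin -vertica_db_admin_password VerticaPass1 -vertica_cluster_ips vertica_db1.mydomain.com,vertica_db2.mydomain.com,vertica_db3.mydomain.com -etl_user etluser -etl_user_group_name etlgroup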

  4. Validate that data migration is successful by checking that cron jobs have been created for the user who runs data migration.
    Execute the following command, entering the name of the Linux account dedicated to running data migration, as set when the migration tool was deployed:
    crontab -l -u <dedicated_linux_user>
    Verify the status of the data migration process
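    For example, assuming the dedicated account is the hypothetical etluser:

    crontab -l -u etluser           # lists the migration cron jobs
    crontab -l -u etluser | wc -l   # a non-zero count confirms that jobs were created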
  5. Track the state of the data migration process by checking the ETL log files periodically.

    Find the log files under <data_migration_home_folder>/<oracle_service_name>/<aternity_schema_name>/log/etl.log, for example, /data_migration/aternity/customerSchema/log/etl.log.

    The folder names created during the data migration depend on the Oracle service name and Aternity schema name.

    Verify that the log file contains entries and check it for ERROR. If the log contains entries without errors, the data migration is running properly.
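    A quick way to watch the migration is sketched below, using the example path above; your folder names depend on your Oracle service and schema names:

    tail -f /data_migration/aternity/customerSchema/log/etl.log        # follow new entries as they are written
    grep -c ERROR /data_migration/aternity/customerSchema/log/etl.log  # count ERROR entries; 0 means the migration is running cleanly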

  6. Stop the migration by removing the migration cron jobs from the Linux server running the ETL component.

    After the data migration tool has run for about one month, start the upgrade. We recommend stopping the data migration tool (ETL) as close as possible to the actual upgrade to avoid gaps in the retained data. The longer the gap between stopping data migration and starting the upgrade, the bigger the gaps in the retained data.

    Note

    Not stopping the migration tool may prevent the upgrade from completing successfully.

    On the data migration Linux host, execute the following command to delete the migration cron jobs:
    crontab -r -u <dedicated_linux_user>
    Run the command as a user with root or sudo root privileges. To verify, enter sudo id.
    Note

    This command deletes all cron jobs of the user it runs against, so it is important to specify the dedicated_linux_user in the command. The dedicated_linux_user is the etl_user as defined during the Docker Components installation.

    Alternatively, run the command as the dedicated_linux_user; in this case, the command is simply crontab -r.
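    To confirm that the jobs were removed (again assuming the hypothetical etluser), list that user's crontab; it should report that no crontab exists:

    crontab -l -u etluser    # expected output: no crontab for etluser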

  7. Continue with the Aternity upgrade. Learn more.