Migrate Your Stored Data from Version 10 to Version 11

Aternity provides a direct upgrade path from on-premise version 10 to version 11. For better quality and performance, version 11 moves most business analytical data to the Vertica database. To view version 11 dashboards with the historical data from version 10, perform the data migration process detailed in this article. To allow a full migration of data, start the process several days before the Aternity upgrade. Once migration is complete, the upgrade can begin.

The data migration process serves customers interested in keeping historical data after the upgrade. To ensure a successful data migration to the latest version, deploy the data migration tool (ETL) and run the migration process. This process transfers existing data from the Oracle database to Vertica.

Without data migration, some historical data will not be available after upgrade, and several dashboards and APIs will open empty.

The following dashboards from version 10.x use the new database storage in version 11:

SLA

Retention period in version 11: 400 days. SLA data transforms to Vertica in chunks; it takes a few days (3-4) to transform and load the last 400 days of SLA data.

After data migration: When you open Aternity version 11 for the first time, your historical data shows on the dashboard.

Without data migration: The dashboard opens empty after the upgrade. Only new data collected by the upgraded deployment appears on it; the dashboard is fully populated a month after the upgrade, as new data flows in from devices.

Boots, Boot Analysis

Retention period in version 11: One month.

After data migration: When you open Aternity version 11 for the first time, your historical data shows on these dashboards.

Without data migration: These dashboards open empty after the upgrade. They are fully populated a month after the upgrade, as new data flows in from devices.

Dashboards new in version 11, which did not exist in version 10 at all (for example, Installed Applications, Wi-Fi, Host Resources, Skype Commonalities and more)

Retention period in version 11: One month.

After data migration: N/A.

Without data migration: N/A.

New dashboards that did not exist in previous versions initially appear empty and are fully populated with data within a month, as new data flows in from devices.

The workflow is as follows:
  1. Update the Vertica server to version 9.1.1-11.
  2. Deploy the migration tool (ETL).
  3. Start the migration process by activating migration cron jobs.
  4. Wait for 3-4 days until all historical data is copied.
  5. When the data migration finishes, stop the process and delete all ETL-related cron jobs.
  6. Now start upgrading Aternity.
Important

Stop the data migration tool (ETL) as close as possible to the actual upgrade to avoid gaps in the retained data. The longer the gap between stopping the data migration and starting the upgrade, the bigger the gaps in the retained data.

Tip

New attributes that did not exist in older versions appear in the new database tables as not available (N/A) (for example, MS Office License Type = N/A, Machine Power Plan = N/A).

Before you begin

Before migrating stored data to Aternity on-premise

  • Familiarize yourself with the new architecture of version 11, as well as with the new hardware and software requirements.

  • Always check the sizing before updating. This version uses different database sizes, has new dashboard requirements, and requires new servers. Learn more.

  • You will need a new server dedicated to the Docker Components. Purchase a new server that meets the hardware requirements for the Aternity Docker Components Server.

  • Prepare a dedicated server for the data migration tool. As the data migration tool runs prior to the Aternity version update, we recommend using the same server that will serve as the Aternity Docker Components Server once Aternity 11 is installed. Therefore, the migration process does not require additional hardware purchases beyond those required for any Aternity 11 deployment.

    Note

    Although the version 10 architecture already includes one ETL Server that sends data from the Oracle Database Server to the Vertica Database Server, you need another, temporary ETL that is dedicated to the data migration process. The version 10 ETL and the migration ETL tool must run in parallel on two different machines. Do not install the data migration tool on the existing version 10 ETL Server. Once version 11 is up and running, you will no longer need the old ETL Server.

  • You must start with a fully functioning deployment of Aternity on-premise 10.x.

  • To run the Python script that migrates data, pip 9.0.3 is required. Older or newer versions of the pip package are automatically replaced by pip 9.0.3 as part of the migration ETL component installation.

  • Ensure you have administrator permissions on Windows machines and sudo permissions on Linux.

  • Check that the dedicated server for the data migration tool conforms to the minimum system requirements:

    Attribute Requirement

    Operating system for ETL

    • Linux CentOS 7.4 - 7.6. To verify your version of CentOS, enter cat /etc/centos-release. For the kernel version, enter uname -r and see the first two numbers.

    • Red Hat Enterprise Linux (RHEL) 7.4 - 7.6. To verify the RHEL version, enter cat /etc/redhat-release

    Virtual memory settings for ETL

    Verify that your system has an active swap file of at least 6GB by entering free -m.

    To create a 6GB swap file and enable it, enter the following commands, one line at a time:

    dd if=/dev/zero of=/swapfile bs=10240 count=630194
    mkswap /swapfile
    swapon /swapfile
    

    To set the swap file as permanent, edit the system partition settings in /etc/fstab in a plain text editor such as vim, and add the following line at the end of the file:

    /swapfile   swap    swap    defaults    0 0

    Then save and exit the text editor. The system enables the new swap file when you restart the server.
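
    If you want to confirm the swap configuration before continuing, a quick check (a minimal sketch; the exact figures reported vary by system) is:

    free -m      # the Swap: row should show roughly 6GB of total swap
    swapon -s    # lists /swapfile among the active swap areas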

    Resource limits for ETL

    The user who runs the ETL must have an open file limit of 65536 or more.

    Set the number by editing the system's resource limits in /etc/security/limits.conf in a plain text editor like vim, and add the following line at the end of the file:

    etl_username -     nofile  65536

    If the username is root, enter:

    root -     nofile  65536

    Then save and exit the editor.

    Open a new login session and enter ulimit -n to verify the limit.

    Permissions

    The user who runs the server setup needs sudo privileges.

    The user with permissions to run the ETL must also be allowed to run scheduled (crontab) tasks. The user who runs the ETL process does not need root or sudo root privileges.
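
    To confirm in advance that the ETL user may schedule cron jobs, a quick check (a minimal sketch, assuming the example user name etluser; on CentOS/RHEL, cron access is governed by /etc/cron.allow and /etc/cron.deny) is:

    su - etluser -c 'crontab -l'    # "no crontab for etluser" is fine; a "not allowed" error indicates missing cron permissions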

Procedure

  1. Update the Vertica Database Server to version 9.1.1-11. Learn more.
    This process requires shutting down Aternity servers, so expect some downtime.

    The Aternity Vertica Database Server stores the performance data in the Vertica format, which is most efficient for displaying in Aternity dashboards.

    The number of Vertica Database Servers you require depends on the size of your deployment, and the level of high availability that you need.

    Even if you are not interested in migrating your historical data, the Vertica Database Server must still be updated as part of the general system update because it is part of the Aternity 11 architecture.

  2. Start all Aternity 10 components.
  3. Install a temporary ETL component for the data migration process.
    1. Extract the data_migration.zip file, which you downloaded as part of the Aternity on-premise server setup package.
    2. In the extracted folder, locate the file data_migration_v10_v11.tar and copy it to a temporary folder on the data migration host.
    3. Unpack this tar file by entering tar -xvf data_migration_v10_v11.tar.
      Unpack it on the Linux machine dedicated to the Aternity Docker Components Server that temporarily serves as the ETL component for data migration.
      Field Description
      -x

      Use -x to extract the contents of the package.

      -v

      Use -v to output all messages (verbose).

      -f

      Use -f to specify the filename.

    4. Configure the user who will set up the server to have the required sudo privileges.
      1. Edit the file sudoer_cfg.sh (located in the unpacked tar folder). At the beginning of the file, replace the values of the parameters listed in the table below with the values relevant for your deployment (a filled-in sketch follows the table):
        Field Description
        INSTALLING_USER='etlinstall'

        Replace with the relevant installing user name. This user must already exist in the system.

        ETL_INSTALLATION_DIRECTORY='/home/etluser/migration'

        Replace with the location of the installation files (not the ETL target location).

        ETL_USER='etluser'

        Replace with the name of the user with permissions to run the migration ETL. This user must have permissions to run scheduled (crontab) tasks and must already exist in the system.

        Typically, you would not run the ETL as a root user, which is the default if you do not specify another user.

        ETL_USER_GROUP='etluser'

        Replace with the group of the etl_user.

        ETL_HOME_FOLDER='/etl_data'

        Replace with the target location for the ETL (where the ETL will run from).

        VERTICA_DB_NAME='aternity'

        Replace with the Vertica database name.

        VERTICA_SCHEMA_NAME='aternity'

        Replace with the Aternity schema name (same as the Aternity schema name in Oracle). In migrations from 10.0.x, this already exists; in migrations from 9.0.x, it was created in step 2e.
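
        For illustration, a filled-in parameter block might look like the following (a sketch reusing the default example values shown above; substitute the users, group, and paths of your own deployment):

        INSTALLING_USER='etlinstall'
        ETL_INSTALLATION_DIRECTORY='/home/etluser/migration'
        ETL_USER='etluser'
        ETL_USER_GROUP='etluser'
        ETL_HOME_FOLDER='/etl_data'
        VERTICA_DB_NAME='aternity'
        VERTICA_SCHEMA_NAME='aternity'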

      2. Do not edit additional text below these parameters. Save changes and exit.

      3. Run the following command: # /bin/bash sudoer_cfg.sh

      4. Copy the output and provide it to your IT representative. (Your IT team should grant access permissions in the sudoers file, allowing the privileged user(s) to set up the ETL. Learn more.)

      5. Follow the steps below, running the commands as the user defined under INSTALLING_USER.

    5. Navigate to the temporary folder with the unpacked setup files and run this command, replacing the parameters with those from your configuration:
      python ETL_Install.py -etl_home_folder <target_folder> -oracle_user <aternity_user_name> -oracle_db_password <aternity_user_password> -oracle_ip <oracle_server_address> -oracle_port <oracle_port> -oracle_service <oracle_aternity_service_name> -vertica_db_name <vertica_aternity_db_name> -vertica_node <vertica_node_address> -vertica_db_admin_user <dbadmin_user_name> -vertica_db_admin_password <dbadmin_password> -vertica_cluster_ips <list_of_all_vertica_node_addresses> -etl_user <dedicated_linux_account> -etl_user_group_name <group_containing_dedicated_etl_user_account>

      You run the setup as a Python script with a long list of required parameters, all on a single command line. We recommend carefully editing this command in a text editor to ensure that you have entered the parameters correctly, and then pasting the text into the command prompt.
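
      For example, a filled-in command might look like the following (a sketch only; the addresses, credentials, and names are illustrative placeholders for your own values, and the backslashes merely wrap the single command line for readability):

      python ETL_Install.py -etl_home_folder /etl_data \
        -oracle_user aternity -oracle_db_password <aternity_user_password> \
        -oracle_ip oracle-db.mydomain.com -oracle_port 1521 \
        -oracle_service aternity -vertica_db_name aternity \
        -vertica_node vertica_db1.mydomain.com \
        -vertica_db_admin_user dbadmin -vertica_db_admin_password <dbadmin_password> \
        -vertica_cluster_ips vertica_db1.mydomain.com,vertica_db2.mydomain.com,vertica_db3.mydomain.com \
        -etl_user etluser -etl_user_group_name etluser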

      Field Description

      etl_home_folder

      Provide a path to a destination folder where setup writes the program files. This folder also stores ETL data files.

      Note

      Whichever destination folder you choose, note that the setup also installs files in /opt/vertica and /opt/sqlcl.

      oracle_user

      oracle_db_password

      Enter the name of the ATERNITY schema (username) and password of the Aternity Oracle Database Server.

      oracle_ip

      Enter the hostname or IP v4 address of the Aternity Oracle Database Server.

      oracle_port

      Enter the port required to access the Oracle Database Server (default is 1521).

      oracle_service

      Enter the Oracle database service name (usually Aternity). This is the alias to the instance of the Aternity database.

      vertica_db_name

      Enter the name of the Vertica database (not the schema).

      vertica_node

      Enter the hostname or IP v4 address of any one of the Vertica Database Servers in the Vertica cluster.

      The ETL attempts to connect to this server first, and if it is unable, it tries to connect with the next server in the cluster.

      If you only have one Vertica Database Server, enter its hostname or IP v4 address.

      vertica_db_admin_user

      vertica_db_admin_password

      Enter the username and password of the Vertica database administrator, with privileges to create schemas and prepare the database.

      vertica_cluster_ips

      Enter a comma-separated list of all of the hostnames or FQDNs (recommended) or IP v4 addresses of all of the Vertica Database Server nodes in the cluster, for example, vertica_db1.mydomain.com,vertica_db2.mydomain.com,vertica_db3.mydomain.com.

      If you only have one Vertica Database Server, enter its name or address.

      etl_user

      Enter the name of the user with permissions to run the ETL. This user must have permissions to run scheduled (crontab) tasks. Typically, you would not run the ETL as a root user, which is the default if you do not specify another user.

      etl_user_group_name

      Enter the group of the etl_user.

      The setup notifies you when it completes successfully.

  4. Validate that the data migration tool was deployed successfully by checking that cron jobs have been created for the user who runs the data migration.
    The deployment of the data migration tool creates several scheduled cron jobs. Execute the following command, entering the name of the Linux account dedicated to running the data migration, as set when the migration tool was deployed:
    crontab -l -u <dedicated_linux_user>


    The output lists the scheduled data migration cron jobs.
  5. Keep track of the data migration process's state by checking the ETL log files periodically.
    Two log files can be found at:
    • <data_migration_home_folder>/<oracle_service_name>/<aternity_schema_name>/log/etl.log. For example, /data_migration/aternity/customerSchema/log/etl.log.

      Note

      The folder names created during data migration depend on the Oracle service name and the Aternity schema name of the current Aternity installation.


      Verify that the log file contains entries and check it for ERROR. If the log contains entries without errors, the data migration is running properly (see the log-scanning sketch after this list).

    • <data_migration_home_folder>/<oracle_service_name>/<aternity_schema_name>/migration_status/current_migration_status.log

      This log file should contain three entries, each showing the migration progress for a different database table. Once all three reach 100%, the migration is complete.
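
    To scan both logs from the command line, a minimal sketch (assuming the example folder names above; substitute your own Oracle service and schema names) is:

    grep ERROR /data_migration/aternity/customerSchema/log/etl.log
    # no output means no errors have been logged so far
    cat /data_migration/aternity/customerSchema/migration_status/current_migration_status.log
    # shows the migration progress for each of the three tables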

  6. Let the data migration tool run until the Aternity upgrade begins.

    The tool can complete its job in a few days and then continue running in parallel to the existing version until the upgrade process begins. We recommend stopping the data migration tool (ETL) as close as possible to the actual upgrade to avoid gaps in the retained data. The longer the gap between stopping the data migration and starting the upgrade, the bigger the data gaps in the retained data.

    Note

    Not stopping the migration tool may prevent the upgrade from completing successfully.

    Before the upgrade, stop the migration tool (ETL). On the data migration Linux host, execute the following command to delete the migration cron jobs: crontab -r -u <dedicated_linux_user>. Run the command as a user with root or sudo root privileges. To verify your privileges, enter sudo id.
    Note

    The crontab -r command deletes all cron jobs of the targeted user, so it is important to specify the dedicated_linux_user in the command; without -u, it deletes the cron jobs of the user who runs it. The dedicated_linux_user is the etl_user as defined when the migration ETL was installed.

    Alternatively, run the command as the dedicated_linux_user itself. In this case, the command is simply crontab -r.
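
    Putting it together, a minimal sketch (assuming the example user name etluser from the deployment steps; substitute your own dedicated user):

    sudo crontab -r -u etluser    # remove the migration cron jobs as a sudo-capable user
    sudo crontab -l -u etluser    # verify: should report that etluser has no crontab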

  7. Continue with the Aternity upgrade. Learn more.
    If the upgrade instructions include steps that you have already performed during data migration, skip them and continue.