System requirements

Introduction

Product system requirements are reported on a component-by-component basis.

When the deployment pattern you adopted requires more than one component running on the same Data One domain node, the resulting system requirements will be the union of the system requirements of each component deployed on that node.

Global system requirements

  • Operating Systems

    • Red Hat Enterprise Linux 8.6 with fontconfig package installed
    • z/OS 2.4 or z/OS 2.5
      • Required only for STENG, which is available on both Linux and z/OS platforms
  • Relational databases

    • Oracle Enterprise Edition 19c
    • PostgreSQL 13
  • Clocks synchronization

    • All managed nodes in a Data One Domain must have their clocks synchronized via NTP (Network Time Protocol)
  • Linux sysstat package (optional)

    • This is an optional, but recommended, package that can help the Primeur Helpdesk speed up troubleshooting of non-functional runtime issues.
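As a quick sanity check, the clock synchronization and sysstat requirements above can be verified on a managed node with commands along these lines (a sketch; command availability and package manager vary by distribution):

```shell
#!/bin/sh
# Pre-flight checks on a managed node (sketch; exact commands vary by distribution).

# Clock synchronization: on systemd hosts, NTPSynchronized should report "yes".
timedatectl show --property=NTPSynchronized --value 2>/dev/null \
  || echo "timedatectl not available - check chronyd/ntpd manually"

# Optional (but recommended) sysstat package, queried via rpm on Red Hat systems.
rpm -q sysstat 2>/dev/null || echo "sysstat not installed"
```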

DOIM system requirements

  • Any one of these Ansible-based products:
    • Ansible Core 2.13 or a later 2.x release, plus the unixy stdout callback plug-in and the archive module, which are not part of Ansible Core.
    • Any Ansible-based product containing Ansible Core 2.13 or a later 2.x release, plus the unixy stdout callback plug-in and the archive module, in case they are not already included in the adopted Ansible-based product.
    • Red Hat Ansible Engine 2.9.25 or a later 2.9.x release.
    • Community-released Ansible 2.9.25 or a later 2.9.x release.
  • Ansible requires sshpass if the installation will be performed with username and password credentials
  • EN language set
  • Operating system zip and unzip commands
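The Ansible prerequisites above can be checked with the standard Ansible CLI tools. The snippet below is a sketch; it assumes the unixy callback and archive module are resolved from the community.general collection, where they are commonly distributed:

```shell
#!/bin/sh
# Sketch: verify the DOIM Ansible prerequisites (assumes a collection-based install).

# Ansible Core version (2.13 or a later 2.x when using standalone ansible-core).
command -v ansible >/dev/null 2>&1 && ansible --version | head -n 1 \
  || echo "ansible not found"

# unixy stdout callback and archive module, typically shipped in community.general.
ansible-doc -t callback community.general.unixy >/dev/null 2>&1 \
  && echo "unixy callback: OK" || echo "unixy callback: missing"
ansible-doc community.general.archive >/dev/null 2>&1 \
  && echo "archive module: OK" || echo "archive module: missing"

# sshpass, required only for username/password credentials.
command -v sshpass >/dev/null 2>&1 && echo "sshpass: OK" || echo "sshpass: missing"
```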

CEMAN system requirements

  • In case of a clustered deployment, an NFS 4.1 shared file system must be reachable from each CEMAN node. The expected NFS mount settings are:

    nfsvers=4.1,sync,intr,noac,soft,lookupcache=none,timeo=50,retrans=1

  • In case of a clustered deployment, a load balancer in front of the CEMAN nodes is required to balance HTTPS access to the Data One WUI
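As an illustration, the shared file system could be declared in /etc/fstab with the mount options listed above; the server name, export path, and mount point below are placeholders:

```
# Hypothetical /etc/fstab entry for the CEMAN shared file system
nfs-server.example.com:/export/ceman  /mnt/ceman-shared  nfs  nfsvers=4.1,sync,intr,noac,soft,lookupcache=none,timeo=50,retrans=1  0 0
```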

Data Watcher system requirements

  • Python 3

STENG system requirements

  • For Linux installations, in case of a clustered deployment, an NFS 4.1 shared file system must be reachable from each STENG node. The expected NFS mount settings are:

    nfsvers=4.1,sec=sys,sync,intr,noac,hard,lookupcache=none,timeo=50,retrans=2

  • For z/OS installations, please refer to the platform-specific documentation shipped with the product
  • When monitoring file systems using File Event Listener (FEL), additional file system requirements apply.
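For reference, a one-off mount with the STENG settings above might look like this (server, export, and mount point are placeholders; add a matching /etc/fstab entry to make it permanent):

```
# Hypothetical mount of the STENG shared file system (requires root)
mount -t nfs \
  -o nfsvers=4.1,sec=sys,sync,intr,noac,hard,lookupcache=none,timeo=50,retrans=2 \
  nfs-server.example.com:/export/steng /mnt/steng-shared
```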

DMZ Gateway system requirements

There are no component-specific requirements for DMZ Gateway.

Data Shaper system requirements

There are no component-specific requirements for Data Shaper.

Storage requirements

Data One uses different types of storage for different purposes; here is a recap:

  • CEMAN shared file system

    • This is a dedicated shared file system internally used by all CEMAN cluster nodes to store product state information, including the AMQ Broker message store
    • This file system must be NFS 4.1, mounted as indicated earlier on this page
  • STENG shared file system

    • This is a dedicated shared file system internally used by all STENG cluster nodes to store product state information, including the Data Mover key store and trust store
    • This file system must be NFS 4.1, mounted as indicated earlier on this page
  • Storage classes

    • A Storage Class is a Data One abstraction over the notion of "disk", used to stage files waiting to be consumed and/or transferred (see also Storage Classes); virtual paths in a virtual file system are underpinned by the physical storage described in a storage class (see also Virtual File Systems (VFS))
    • Several storage class types are supported, each one with its own specific requirements
    • A commonly used storage class is the Shared File System Storage Class that can be underpinned by any shared file system supporting concurrent access, real-time synchronization and immediate visibility across cluster nodes
      • The product has been certified using a Shared File System based on NFS 4.1 mounted exactly like the previously discussed STENG shared file system
  • File Event Listener monitored locations

    • File Event Listener is a Data One feature that monitors changes to files residing on monitored locations and triggers contracts to process them (see also File Event Listener)
    • Monitored locations are file systems, SMB3 shares, or remote SFTP servers not directly managed by Data One, but simply used as a source for customer-provided input files to be processed
    • Irrespective of the monitored location type, File Event Listener persists internal state information on a dedicated shared file system, which must be an NFS 4.1 shared file system mounted as indicated in File Event Listener:

      nfsvers=4.1,sec=sys,sync,intr,noac,hard,lookupcache=none,timeo=50,retrans=2

Network topology recommendations

As discussed in Data One common domain topology patterns, Data One offers multiple deployment options. Data One services can run on a variable number of nodes, depending on available resources and environment-specific performance constraints.

It is recommended that all such nodes, plus the database nodes and any storage server nodes hosting shared file systems used by Data One, be located in one or more separate subnets, accessible only to administrators, with just the required firewall traversal routes allowed.

For more information on Data One's own firewall traversal route requirements, please refer to Ceman firewall.

For more information about DMZ traversal please refer to Planning initial installation and master configuration and other reference sections on the DMZ Gateway component throughout the provided documentation.

The above recommendations are NOT enforced by the product at runtime. They are provided here as a reminder of basic security best practices, generally aimed at reducing the attack surface; all network topology hardening activities applicable to Data One are the responsibility of the customer's own system administration team.