Pivot

Short Description

The component reads input records and treats them as groups. A group is defined either by a Group key or by the number of records forming the group. Pivot then produces a single record from each group. In other words, the component creates a pivot table.

Pivot has two principal attributes which instruct it to treat some input values as output field names and other inputs as output values.

The component is a simple form of Denormalizer.

| COMPONENT | SAME INPUT METADATA | SORTED INPUTS | INPUTS | OUTPUTS | JAVA | CTL | AUTO-PROPAGATED METADATA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Pivot | - | x | 1 | 1 | ✓ | ✓ | x |

Note: When using the Group key attribute, input records should be sorted. See Details below.

Ports

| PORT TYPE | NUMBER | REQUIRED | DESCRIPTION | METADATA |
| --- | --- | --- | --- | --- |
| Input | 0 | ✓ | For input data records | Any1 |
| Output | 0 | ✓ | For summarization data records | Any2 |

Metadata

Pivot does not propagate metadata. Pivot has no metadata template.

Pivot Attributes

| ATTRIBUTE | REQ | DESCRIPTION | POSSIBLE VALUES |
| --- | --- | --- | --- |
| BASIC | | | |
| Group key | [1] | A set of fields used to identify groups of input records (more than one field can form a Group key). A group is formed by a sequence of records with identical Group key values. Group key fields are passed to the output (if a field with the same name exists). | any input field |
| Group size | [1] | The number of input records forming one group. When Group size is used, the input data does not have to be sorted; Pivot simply reads the given number of records and transforms them into one group. | <1; n> |
| Field defining output field name | [2] | The input field whose value "maps" to a field name on the output. | |
| Field defining output field value | [2] | The input field whose value "maps" to a field value on the output. | |
| Sort order | | The order in which groups of input records are expected to be sorted. The meaning is the same as in Denormalizer (see Sort order there). Note that in Pivot, setting this to ignore can produce unexpected results if the input is not sorted. | |
| Equal NULL | | Determines whether two fields containing null values are considered equal. | true (default) \| false |
| ADVANCED | | | |
| Pivot transformation | [3] | Your own transformation of records, written in CTL or Java directly in the component. | |
| Pivot transformation URL | [3] | The path to an external file which defines how to transform records. The transformation can be written in CTL or Java. | |
| Pivot transformation class | [3] | The name of a Java class used for the data transformation. | |
| Pivot transformation source charset | | The encoding of the external file defining the data transformation. The default encoding depends on DEFAULT_SOURCE_CODE_CHARSET in defaultProperties. | UTF-8 \| any |
| DEPRECATED | | | |
| Error actions | | Defines actions to be performed when the specified transformation returns an Error code. See Return Values of Transformations. | |
| Error log | | The URL of the file error messages are written to. These messages are generated during Error actions (see above). If the attribute is not set, messages are written to Console. | |

[1] Either the Group key or the Group size attribute must always be set.
[2] These two values can be given either as attributes or in your own transformation.
[3] One of these attributes has to be set if you do not control the transformation by means of Field defining output field name and Field defining output field value.

Details

You can define the data transformation in two ways:

  1. Set the Group key or Group size attributes. See Group Data by Setting Attributes below.

  2. Write the transformation yourself in CTL/Java or provide it in an external file/Java class. See Define Your Own Transformation - Java/CTL below.

Group Data by Setting Attributes

Group Data Using Group Key

If you group data using the Group key attribute, your input should be sorted according to the Group key values. To tell the component how your input is sorted, specify Sort order. If the Group key fields appear in the output metadata as well, the Group key values are copied there automatically.

Group Data Using Group Size

When you group data using the Group size attribute, the component ignores the data itself, takes e.g. 3 records (for Group size = 3) and treats them as one group. Naturally, you have to have an adequate number of input records, otherwise reading errors occur. The number of input records has to be a multiple of Group size, e.g. 3, 6, 9 etc. for Group size = 3.

Mapping

There are two major attributes which describe the "mapping". They say:

  • which input field’s value will designate the output field name - Field defining output field name

  • which input field’s value will be used as the value for that field - Field defining output field value

The output metadata is arbitrary, but the mapping is fixed to its field names. If your input data has extra fields, they are simply ignored (only the fields defined as value/name matter). Likewise, output fields without any corresponding input records remain null.

If a value of Field defining output field name does not correspond to any of the output metadata field names, the component fails.

Define Your Own Transformation - Java/CTL

In Pivot, you can write the transformation function yourself, either in CTL or in Java; see the Advanced attributes in Pivot Attributes above. Before writing the transformation, you might want to refer to the sections touching the subject: Defining Transformations, and writing transformations in Denormalizer, the component Pivot is derived from (CTL Interface and Java Interface).

CTL Interface

You can implement both methods, getOutputFieldIndex and getOutputFieldValue, or you can set one of the attributes and implement the other one with a method. So you can, for example, set valueField and implement getOutputFieldIndex, or set nameField and implement getOutputFieldValue. For a better understanding, examine the methods' documentation directly in the Transform editor.
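
As an illustration, here is a minimal sketch of the second combination, assuming Field defining output field name is set as a component attribute and the input carries its values in a string field named fieldValue (as in the examples below); the upperCase() call is only there to show that the output value can be derived rather than just copied:

//#CTL2

// nameField (Field defining output field name) is set as a component attribute,
// so only the value function is implemented here. The field name fieldValue is an
// assumption for illustration; use a field from your own input metadata.
function string getOutputFieldValue(integer idx) {
    return upperCase($in.0.fieldValue);
}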

Java Interface

Compared to Denormalizer, the Pivot component has two new significant attributes: nameField and valueField. These can be defined either as attributes (see above) or by methods. If the transformation is not defined, the component uses com.opensys.cloveretl.component.pivot.DataRecordPivotTransform, which copies values from valueField to nameField.

In Java, you can implement your own PivotTransform that overrides DataRecordPivotTransform. You do not have to override everything: it is enough to override a single method, e.g. getOutputFieldValue, getOutputFieldIndex or others from PivotTransform (which extends RecordDenormalize).

Examples

Data Transformation with Pivot - Using Key

Let us have the following input values:

Because we are going to group the data according to the groupID field, the input has to be sorted (mind the ascending order of groupIDs). In the Pivot component, we will make the following settings:

  • Group key = groupID (to group all input records with the same groupID)

  • Field defining output field name = fieldName (to say we want to take output fields' names from this input field)

  • Field defining output field value = fieldValue (to say we want to take output fields' values from this input field)

Processing that data with Pivot produces the following output:

Notice the input recordNo field has been ignored. Similarly, the output comment field had no corresponding fields on the input, which is why it remains null. groupID is part of the output metadata and was therefore copied automatically.

Note: If the input is not sorted (unlike in the example), grouping records according to their count is especially handy. Omit Group key and set Group size instead to read sequences of records that have exactly the number of records you need.

Converting fixed number of records to single record

Input metadata has the fields fieldName and fieldValue. The records contain a timestamp, an IP address and a username.

timestamp|2014-10-30 13:51:12
address  |192.168.10.15
username |Alice
timestamp|2014-10-30 13:52:14
address  |192.168.3.151
username |Bob
timestamp|2014-10-30 13:52:40
address  |192.168.102.105
username |Eve

Convert the data to a one-line structure containing the timestamp, IP address and username.

Solution

Use the attributes Group size, Field defining output field name and Field defining output field value.

| ATTRIBUTE | VALUE |
| --- | --- |
| Group size | 3 |
| Field defining output field name | fieldName |
| Field defining output field value | fieldValue |

The output metadata has to have the fields timestamp, address and username.

Converting fixed number of records to single record using CTL

This example is similar to the previous one: the input records contain a timestamp, an IP address and a username, but there is no field indicating which value is which. The order of the input records within each group is always the same: the timestamp comes before the IP address, and the IP address before the username.

2014-10-30 13:51:12
192.168.10.15
Alice
2014-10-30 13:52:14
192.168.3.151
Bob
2014-10-30 13:52:40
192.168.102.105
Eve

Solution

One output record corresponds to three input records, so we use the Group size attribute. The mapping to the output record is defined in the Pivot transformation attribute.

| ATTRIBUTE | VALUE |
| --- | --- |
| Group size | 3 |
| Pivot transformation | See the code below |

//#CTL2

function integer getOutputFieldIndex(integer idx) {
    return idx % 3;
}

function string getOutputFieldValue(integer idx) {
    return $in.0.value;
}

The order of input records corresponds to the order of output metadata fields. If you need a different order, rearrange the output metadata or change the content of the getOutputFieldIndex() function.
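
For instance, assuming the output metadata listed the same three fields in reverse order (an assumption for illustration), a sketch of an adjusted getOutputFieldIndex() could look like this, with the value function unchanged:

//#CTL2

// Reverse the position within each group of three records: the first record of a group
// fills the last output field, the second fills the middle one, and the third the first.
function integer getOutputFieldIndex(integer idx) {
    return 2 - (idx % 3);
}

function string getOutputFieldValue(integer idx) {
    return $in.0.value;
}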

Passing through Fields to Output

The input records have customerId, batchId, fieldName and value metadata fields:

C0001|B001|firstName|John
C0001|B001|lastName |Doe
C0001|B001|accountNo|A0001

Convert the data to the following format:

C0001|B001|John|Doe|A0001

Solution

| ATTRIBUTE | VALUE |
| --- | --- |
| Group key | customerId;batchId |
| Field defining output field name | fieldName |
| Field defining output field value | value |

Note that Group key fields have been passed to the corresponding output fields.

Best Practices

If the transformation is specified in an external file (Pivot transformation URL), we recommend explicitly specifying Pivot transformation source charset.

See also

Denormalizer
MetaPivot
Normalizer
Common Properties of Components
Specific attribute types
Common Properties of Transformers