JSONWriter


Short Description

JSONWriter writes data in the JSON format.

COMPONENT                        JSONWriter
DATA OUTPUT                      JSON file
INPUT PORTS                      1-n
OUTPUT PORTS                     0-1
EACH TO ALL OUTPUTS              ✓
DIFFERENT TO DIFFERENT OUTPUTS   x
TRANSFORMATION                   x
TRANSF. REQUIRED                 x
JAVA                             x
CTL                              x
AUTO-PROPAGATED METADATA         x

Ports

PORT TYPE  NUMBER  REQUIRED      DESCRIPTION                                                     METADATA
Input      0-N     At least one  Input records to be joined and mapped into the JSON structure.  Any (each port can have different metadata)
Output     0       x             Optional. For port writing.                                     One field (byte, cbyte or string); the field name is used in File URL to govern how the output records are processed. See Writing to Output Port.

Metadata

JSONWriter does not propagate metadata.

JSONWriter has no metadata template.

JSONWriter Attributes

Basic

File URL
  Required: yes
  The target file for the output JSON. See Supported File URL Formats for Writers.

Charset
  The encoding of the output file generated by JSONWriter. The default encoding depends on DEFAULT_SOURCE_CODE_CHARSET in defaultProperties.
  Values: UTF-8 | <other encodings>

Mapping
  Required: [1]
  Defines how input data is mapped onto the output JSON. See Details below.

Mapping URL
  Required: [1]
  External text file containing the mapping definition.

Advanced

Create directories
  By default, non-existing directories are not created. If set to true, they are created.
  Values: false (default) | true

Omit new lines wherever possible
  By default, each element is written to a separate line. If set to true, new lines are omitted when writing data to the output JSON structure, so all JSON elements are on a single line.
  Values: false (default) | true

Cache size
  The size of the database used when caching data from ports to elements (the data is first processed, then written). The larger your data, the larger the cache needed to maintain fast processing.
  Values: auto (default) | e.g. 300MB, 1GB, etc.

Cache in Memory
  Caches data records in memory instead of JDBM's disk cache (the default). Note that while it is possible to set the maximum size of the disk cache, that setting is ignored when the in-memory cache is used. As a result, an OutOfMemoryError may occur when caching too many data records.
  Values: false (default) | true

Sorted input
  Tells JSONWriter whether the input data is sorted. Setting the attribute to true declares that you want to use the sort order defined in Sort keys, see below.
  Values: false (default) | true

Sort keys
  Tells JSONWriter how the input data is sorted, thus enabling streaming. The sort order of fields can be given for each port in a separate tab. See Sort Key.

Max number of records
  The maximum number of records written to all output files. See Selecting Output Records.
  Values: 0-N

Partitioning

Records per file
  The maximum number of records written to a single file. See Partitioning Output into Different Output Files.
  Values: 1-N

Partition key
  The key whose values control the distribution of records among multiple output files. See Partitioning Output into Different Output Files.

Partition lookup table
  The ID of a lookup table; the table serves for selecting records which should be written to the output file(s). See Partitioning Output into Different Output Files.

Partition file tag
  By default, output files are numbered. If this attribute is set to Key file tag, output files are named according to the values of Partition key or Partition output fields. See Partitioning Output into Different Output Files.
  Values: Number file tag (default) | Key file tag

Partition output fields
  The fields of Partition lookup table whose values serve for naming the output file(s). See Partitioning Output into Different Output Files.

Partition unassigned file name
  The name of the file that unassigned records are written into (if there are any). If not given, data records whose key values are not contained in Partition lookup table are discarded. See Partitioning Output into Different Output Files.

Partition key sorted
  When partitioning into multiple output files is turned on, all output files are open at once, which can lead to an undesirable memory footprint for many output files (thousands). Moreover, unix-based OSes usually place a strict limit on the number of simultaneously open files per process (e.g. 1,024). If you run into one of these limitations, consider sorting the data according to a partition key using one of the standard sorting components and setting this attribute to true: the partitioning algorithm then keeps only the last output file open at a time. See Partitioning Output into Different Output Files.
  Values: false (default) | true

Create empty files
  If set to false, prevents the component from creating an empty output file when there are no input records.
  Values: true (default) | false

[1] One of Mapping and Mapping URL has to be specified. If both are specified, Mapping URL has a higher priority.

Details

JSONWriter receives data from all connected input ports and converts the records to JSON objects based on the mapping you define. The component then writes the resulting tree of elements to the output: a JSON file, an output port or a dictionary. JSONWriter can write lists and variants.

Every JSON object can contain other nested JSON objects; in this respect, the JSON format resembles XML and similar tree formats. Consequently, you map the input records to the output in a manner similar to XMLWriter, and the mapping editors of both components share the same logic. The very basics of mapping are:

  • Connect input edges to JSONWriter and edit the component's Mapping attribute. This opens the visual mapping editor. Metadata on the input edge(s) are displayed on the left-hand side. The right-hand pane is where you design the desired JSON tree. Mapping is then performed by dragging metadata from left to right (and performing the additional tasks described below).

  • In the right-hand pane, design your JSON tree consisting of:

    • Elements - elements mapped explicitly.

    • Wildcard elements - another option to mapping elements explicitly. You use the Include and Exclude patterns to generate element names from the respective metadata.

    • Arrays - ordered sets of values in JSON enclosed between the [ and ] brackets. To learn how to map them in JSONWriter, see Writing arrays II below.

  • Connect input records to output (wildcard) elements to create Binding.

WARNING: Unlike in XMLWriter, you do not map metadata to any attributes.

At any time, you can switch to the Source tab and write or check the mapping yourself in code. If the basic instructions found here are not sufficient, consult XMLWriter's Details, where the whole mapping process is described in depth.

Example 26. Creating Binding. Example mapping in JSONWriter: employees are joined with the projects they work on. The content of the fields shown in bold is printed to the output file - see below.

Excerpt from the output file (one employee written as JSON):

"employee" : {
    "firstName" : "Jane",
    "lastName" : "Simson",
    "projects" : {
      "project" : {
        "name" : "JSP",
        "manager" : "John Smith",
        "start" : "06062006",
        "end" : "in progress",
        "customers" : {
          "customer" : {
            "name" : "Sunny"
          },
          "customer" : {
            "name" : "Weblea"
          }
        }
      },
      "project" : {
        "name" : "OLAP",
        "manager" : "Raymond Brown",
        "start" : "11052004",
        "end" : "31012006",
        "customers" : {
          "customer" : {
            "name" : "Sunny"
          }
        }
      }
    }
  },
  • When writing variant fields, JSONWriter translates the tree structure of the variant directly to JSON. A variant map is formatted as a JSON object and a variant list as a JSON array. Dates in a variant are formatted as datetime strings in UTC; byte arrays are formatted as Base64 strings.
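As an illustration of these rules, the Python sketch below renders a hypothetical variant value the way the rules describe. This is not actual component output (in particular, the exact datetime string format used by JSONWriter may differ); it only shows the map-to-object, list-to-array, date-to-UTC-string and bytes-to-Base64 correspondences.

```python
import base64
import datetime
import json

# Hypothetical variant value: a map holding a list, a date and a byte array.
variant = {
    "tags": ["gold", "silver"],                       # variant list -> JSON array
    "updated": datetime.datetime(2024, 5, 1, 12, 0,   # date -> UTC datetime string
                                 tzinfo=datetime.timezone.utc),
    "payload": b"\x01\x02",                           # byte array -> Base64 string
}

# Apply the documented conversion rules by hand.
rendered = {
    "tags": variant["tags"],
    "updated": variant["updated"].isoformat(),
    "payload": base64.b64encode(variant["payload"]).decode("ascii"),
}
print(json.dumps(rendered))
```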

Examples

Writing flat records as JSON

This example shows a way to write flat records (no arrays, no subtrees) to a JSON file.

The input edge connected to JSONWriter has metadata fields CommodityName, Unit, Price and Currency and receives the data:

Brent Crude Oil | Barrel | 75.36   | USD
Gold            | Ounce  | 1298.54 | USD
Silver          | Ounce  | 16.83   | USD

Write the data to a JSON file.

Solution

Set up the File URL and Mapping attributes.

  • File URL

${DATAOUT_DIR}/comodities.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity  clover:inPort="0">
    <CommodityName>$0.CommodityName</CommodityName>
    <Unit>$0.Unit</Unit>
    <Price>$0.Price</Price>
    <Currency>$0.Currency</Currency>
  </Commodity>
</root>

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Brent Crude Oil",
    "Unit" : "Barrel",
    "Price" : 75.36,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Unit" : "Ounce",
    "Price" : 1298.54,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Silver",
    "Unit" : "Ounce",
    "Price" : 16.83,
    "Currency" : "USD"
  }
}
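A side note on the produced file: the top-level "Commodity" name is repeated, one occurrence per input record. Repeated names are not forbidden by the JSON grammar, but parsers that map objects to dictionaries typically keep only the last occurrence, which is worth knowing before feeding such a file to another tool. A quick Python check (with a shortened, hypothetical document):

```python
import json

# Two members with the same name, as in the produced file above.
doc = """
{
  "Commodity" : { "CommodityName" : "Gold" },
  "Commodity" : { "CommodityName" : "Silver" }
}
"""

# json.loads accepts the duplicate name; the last value wins.
parsed = json.loads(doc)
print(parsed)
```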

Writing arrays I

This example shows a way to write arrays.

The input edge connected to the JSONWriter has metadata fields CommodityName, Unit, Price and Currency. It is similar to the previous example, but the price is not a single value but a list of values.

Solution

Set up the File URL and Mapping attributes.

  • File URL

${DATAOUT_DIR}/comodities2.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity clover:inPort="0">
    <CommodityName>$0.CommodityName</CommodityName>
    <Unit>$0.Unit</Unit>
    <clover:collection clover:name="Price">
      <item>$0.Price</item>
    </clover:collection>
    <Currency>$0.Currency</Currency>
  </Commodity>
</root>

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Brent Crude Oil",
    "Unit" : "Barrel",
    "Price" : [ 75.36, 75.87 ],
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Unit" : "Ounce",
    "Price" : [ 1298.54, 1298.18 ],
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Silver",
    "Unit" : "Ounce",
    "Price" : [ 16.83, 16.80 ],
    "Currency" : "USD"
  }
}

Writing arrays II

This example shows a way to write a summary array using the values of all input records.

Set up the File URL and Mapping attributes.

Solution

  • File URL

${DATAOUT_DIR}/comodities3.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity clover:inPort="0">
    <CommodityName>$0.CommodityName</CommodityName>
    <Unit>$0.Unit</Unit>
    <Price>$0.Price</Price>
    <Currency>$0.Currency</Currency>
  </Commodity>
  <clover:collection clover:name="CommodityNames" clover:inPort="0">
    <CommodityName>$0.CommodityName</CommodityName>
  </clover:collection>
</root>

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Brent Crude Oil",
    "Unit" : "Barrel",
    "Price" : 75.36,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Unit" : "Ounce",
    "Price" : 1298.54,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Silver",
    "Unit" : "Ounce",
    "Price" : 16.83,
    "Currency" : "USD"
  },
  "CommodityNames" : [ "Brent Crude Oil", "Gold", "Silver" ]
}

Using wildcards

This example shows a way to use wildcards to map input metadata fields.

Write the data from the first example to a JSON file. The solution must be flexible - it must propagate the changes in input metadata to the output without changing the configuration of JSONWriter.

Solution

  • File URL

${DATAOUT_DIR}/comodities4.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity clover:inPort="0">
    <clover:elements clover:include="$0.*"/>
  </Commodity>
</root>

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Brent Crude Oil",
    "Unit" : "Barrel",
    "Price" : 75.36,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Unit" : "Ounce",
    "Price" : 1298.54,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Silver",
    "Unit" : "Ounce",
    "Price" : 16.83,
    "Currency" : "USD"
  }
}

Using templates

This example shows a way to derive the names of output elements from input data.

Write the data from the first example to a JSON file. The name of the element containing the price of a commodity should be the unit of measurement.

Solution

  • File URL

${DATAOUT_DIR}/comodities5.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity clover:inPort="0">
    <CommodityName>$0.CommodityName</CommodityName>
    <clover:element clover:name="$0.Unit">$0.Price</clover:element>
    <Currency>$0.Currency</Currency>
  </Commodity>
</root>

Notice the dummy element CommodityName which you bind the input field to.

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Brent Crude Oil",
    "Barrel" : 75.36,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Ounce" : 1298.54,
    "Currency" : "USD"
  },
  "Commodity" : {
    "CommodityName" : "Silver",
    "Ounce" : 16.83,
    "Currency" : "USD"
  }
}

More input streams

This example shows a way to merge data from multiple input edges to a JSON file.

There are two input edges. The records on the first one contain a commodity name and a unit of measurement. The records on the second one contain a commodity name, a price per unit and a currency. Multiple records from the second input port can correspond to a single record from the first input port. Create a JSON file which contains each record from the first input port together with the corresponding records from the second input port as a subtree.

Solution

  • File URL

${DATAOUT_DIR}/comodities6.json

  • Mapping

<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:clover="http://www.cloveretl.com/ns/xmlmapping">
  <Commodity clover:inPort="0">
    <clover:elements clover:include="$0.*"/>
      <clover:collection clover:name="Price">
        <Price clover:inPort="1"
               clover:key="CommodityName"
               clover:parentKey="CommodityName">
          <clover:elements clover:include="$1.*"
                           clover:exclude="$1.CommodityName"/>
        </Price>
    </clover:collection>
  </Commodity>
</root>

Produced JSON File

{
  "Commodity" : {
    "CommodityName" : "Silver",
    "Unit" : "Ounce",
    "Price" : [ {
      "PricePerUnit" : 17.81,
      "Currency" : "USD"
    } ]
  },
  "Commodity" : {
    "CommodityName" : "Gold",
    "Unit" : "Ounce",
    "Price" : [ {
      "PricePerUnit" : 1302.50,
      "Currency" : "USD"
    }, {
      "PricePerUnit" : 1300.00,
      "Currency" : "USD"
    } ]
  }
}
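The clover:key/clover:parentKey pair effectively joins the child records from port 1 under each parent record from port 0, while clover:exclude drops the join field from the children. A rough Python sketch of this grouping, using hypothetical records shaped like the ones in this example (a sketch of the semantics, not of the component's implementation):

```python
# Hypothetical records mirroring the two input ports of this example.
port0 = [
    {"CommodityName": "Silver", "Unit": "Ounce"},
    {"CommodityName": "Gold", "Unit": "Ounce"},
]
port1 = [
    {"CommodityName": "Silver", "PricePerUnit": 17.81, "Currency": "USD"},
    {"CommodityName": "Gold", "PricePerUnit": 1302.50, "Currency": "USD"},
    {"CommodityName": "Gold", "PricePerUnit": 1300.00, "Currency": "USD"},
]

# For each parent record, collect the children whose key field matches the
# parent's (clover:key / clover:parentKey), excluding the join field itself
# (clover:exclude="$1.CommodityName").
commodities = []
for parent in port0:
    prices = [
        {k: v for k, v in child.items() if k != "CommodityName"}
        for child in port1
        if child["CommodityName"] == parent["CommodityName"]
    ]
    commodities.append({**parent, "Price": prices})
```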

Best Practices

We recommend explicitly specifying the Charset attribute.

See also

JSONReader
JSONExtract
XMLWriter
Common properties of components
Specific attribute types
Common Properties of Writers