Primeur Online Docs — Data Shaper
HIDDEN StructuredDataWriter

Short Description

StructuredDataWriter writes data to files (local or remote, delimited, fixed-length, or mixed) with a user-defined structure. It can also compress output files and write to an output port or dictionary.

  • Component: StructuredDataWriter
  • Data output: structured flat file
  • Input ports: 1-3
  • Output ports: 0-1
  • Transformation: no
  • Transf. required: no
  • Java: no
  • CTL: no
  • Auto-propagated metadata: no

Ports

  • Input port 0 (required) — records for body; any metadata.
  • Input port 1 (optional) — records for header; any metadata.
  • Input port 2 (optional) — records for footer; any metadata.
  • Output port 0 (optional) — for port writing (see Writing to Output Port); metadata with one field (byte, cbyte or string).

Metadata

StructuredDataWriter does not propagate metadata.

StructuredDataWriter has no metadata templates.

Metadata on an output port has one field (byte, cbyte or string).

StructuredDataWriter Attributes

Basic

  • File URL (required) — Where the received data will be written (flat file, output port, dictionary). See Supported File URL Formats for Writers.
  • Charset — Encoding of records written to the output. Values: UTF-8 (default) | <other encodings>
  • Append — By default, new records overwrite the older ones. If set to true, new records are appended to the older records stored in the output file(s). Values: false (default) | true
  • Body mask — A mask used to write the body of the output file(s). It can be based on the records received through the first input port. For more information about the definition of Body mask and the resulting output structure, see Masks and Output File Structure below. Values: default body structure, see below (default) | user-defined
  • Header mask [1] — A mask used to write the header of the output file(s). It can be based on the records received through the second input port. For more information about the definition of Header mask and the resulting output structure, see Masks and Output File Structure below. Values: empty (default) | user-defined
  • Footer mask [2] — A mask used to write the footer of the output file(s). It can be based on the records received through the third input port. For more information about the definition of Footer mask and the resulting output structure, see Masks and Output File Structure below. Values: empty (default) | user-defined

Advanced

  • Create directories — By default, non-existing directories are not created. If set to true, they are created. Values: false (default) | true
  • Records per file — The maximum number of records to be written to one output file. Values: 1-N
  • Bytes per file — The maximum size of one output file in bytes. Values: 1-N
  • Number of skipped records — The number of records to be skipped. See Selecting Output Records. Values: 0-N
  • Max number of records — The maximum number of records written to all output files. See Selecting Output Records. Values: 0-N
  • Partition key — The key whose values control the distribution of records among multiple output files. See Partitioning Output into Different Output Files.
  • Partition lookup table — The ID of a lookup table; the table serves for selecting records that should be written to the output file(s). See Partitioning Output into Different Output Files.
  • Partition file tag — By default, output files are numbered. If this attribute is set to Key file tag, output files are named according to the values of Partition key or Partition output fields. See Partitioning Output into Different Output Files. Values: Number file tag (default) | Key file tag
  • Partition output fields — The fields of Partition lookup table whose values serve for naming the output file(s). See Partitioning Output into Different Output Files.
  • Partition unassigned file name — The name of the file that unassigned records (if any) should be written into. If it is not given, the records whose key values are not contained in the Partition lookup table are discarded. See Partitioning Output into Different Output Files.
  • Sorted input — When partitioning into multiple output files is turned on, all output files are open at once. This can lead to an undesirable memory footprint when there are many output files (thousands); moreover, Unix-based operating systems usually impose a strict limit on the number of simultaneously open files per process (often 1,024). If you run into one of these limits, sort the data by the partition key using one of the standard sorting components and set this attribute to true; the partitioning algorithm then keeps only the last output file open at a time, instead of all of them. See Partitioning Output into Different Output Files. Values: false (default) | true
  • Create empty files — If set to false, prevents the component from creating an empty output file when there are no input records. Values: true (default) | false

[1] Must be specified if the second input port is connected. However, it does not need to be based on input data records.

[2] Must be specified if the third input port is connected. However, it does not need to be based on input data records.

Details

StructuredDataWriter can write a header, data and a footer (exactly in this order) without the need to handle graph phases.

Masks and Output File Structure

Output File Structure

  • An output file consists of a header, body, and footer, in this order.

  • Each of them is defined by specifying the corresponding mask.

  • Once a mask is defined, its content is written repeatedly: one copy of the mask is written for each incoming record.

  • If the Records per file attribute is defined, the output structure is distributed among multiple output files, but this attribute applies to the Body mask only. The header and footer are the same for all output files.

Defining a Mask

Body mask, Header mask and Footer mask can be defined in the Mask dialog, which opens after clicking the corresponding attribute row. The dialog contains the Metadata and Mask panes.

You can define the mask either with or without field values.

Field values are referenced using field names preceded by a dollar sign.

You do not have to map all input metadata fields.

The output can also contain additional text that does not come from the input metadata, e.g. the return address in the figure above.

You can use StructuredDataWriter to generate XML files or to fill in a template.
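For illustration, assuming hypothetical input fields named firstName and amount, a Body mask mixing field references with fixed text might look like the following (one copy of the block is written for each incoming record):

    Customer: $firstName
    Amount:   $amount
    Thank you for your order.

Here the first two lines pull values from the input record, while the last line is fixed text repeated for every record.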

Default Masks

  1. Default Header mask is empty. But it must be defined if the second input port is connected.

  2. Default Footer mask is empty. But it must be defined if the third input port is connected.

  3. Default Body mask is empty. However, the resulting default body structure looks like the following:

<recordName>
    <field1name>field1value</field1name>
    <field2name>field2value</field2name>
    ...
    <fieldNname>fieldNvalue</fieldNname>
</recordName>

This structure is written to output file(s) for all records.

If Records per file is set, at most the specified number of records is written to the body of each output file.
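The default body structure above can be sketched as a simple per-record expansion. The following is a hypothetical illustration of that behavior (not the component's actual implementation), where each incoming record is rendered as an XML-like block:

```python
def default_body(record_name, record):
    """Render one record in the default body structure:
    <recordName> wrapping one <fieldName>value</fieldName> line per field."""
    lines = [f"<{record_name}>"]
    for field, value in record.items():
        lines.append(f"    <{field}>{value}</{field}>")
    lines.append(f"</{record_name}>")
    return "\n".join(lines)

print(default_body("customer", {"name": "Kate", "city": "Rome"}))
# prints:
# <customer>
#     <name>Kate</name>
#     <city>Rome</city>
# </customer>
```

This block is produced once per incoming record and concatenated into the output file(s).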

Notes and Limitations

StructuredDataWriter cannot write lists and maps.

Examples

Writing Page with Header and Footer

A legacy application requires data in the following file structure: a header, up to five data records, and a blank line. Convert the data to the format accepted by the legacy application:

F0512#4d6f6465726e20616e642070756e6368206361726420636f6d70617469626c65
Francis   Smith     77
Jonathan  Brown     5
Kate      Wood      75
John      Black     3
Elisabeth Doe       87

Solution

Connect the edge providing the data records to the first input port of StructuredDataWriter (metadata field body) and the edge providing the header to the second input port (metadata field header).

Edit the attributes:

  • File URL: ${DATOUT_DIR}/file_$$.txt
  • Body mask: $body
  • Header mask: $header
  • Footer mask: (empty)
  • Records per file: 5

Adjust the line breaks in the Body mask, Header mask and Footer mask attributes according to the input records. If the input records already end with a line break, do not add one after $body and $header. If they do not, add a line break after $body and $header, and put a line break into the Footer mask attribute as well.
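The splitting behavior in this example can be sketched as follows. This is a hypothetical illustration (including the file naming, which in reality comes from the $$ placeholder in File URL), not the component's actual implementation: the header mask is repeated in every output file, while body records are distributed five per file.

```python
def split_output(header, body_records, records_per_file):
    """Distribute body records among output files,
    repeating the header at the top of each file."""
    files = {}
    for i in range(0, len(body_records), records_per_file):
        # Hypothetical file naming scheme for this sketch.
        name = f"file_{i // records_per_file:02d}.txt"
        files[name] = "".join([header] + body_records[i:i + records_per_file])
    return files

records = [f"rec{n}\n" for n in range(7)]
files = split_output("HDR\n", records, 5)
# files maps "file_00.txt" (header + 5 records) and
# "file_01.txt" (header + the remaining 2 records) to their contents.
```

A defined Footer mask would likewise be appended to each output file, after its share of body records.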

Best Practices

We recommend explicitly specifying Charset.

Troubleshooting

If you partition unsorted data into many output files, you may reach the limit on simultaneously open files. This can be avoided by sorting the input and setting the Sorted input attribute to true.

To write records with fixed-length metadata, you can first use FlatFileWriter writing to its output port to convert several fixed-length metadata fields into one field, and then write that output using StructuredDataWriter.

See also

ComplexDataReader
MultiLevelReader
FlatFileWriter
XMLWriter
Common properties of components
Specific attribute types
Common Properties of Writers