Primeur Online Docs
Data Shaper

FlatFileWriter


Short Description

FlatFileWriter writes data to flat files. The output flat file can be a CSV (character-separated values) file, a fixed-length file, or a mixed file (a combination of delimited and fixed-length formats).

The component supports partitioning, compression, writing to an output port and writing to remote destinations.

UniversalDataWriter is an alias for FlatFileWriter.

| COMPONENT | DATA OUTPUT | INPUT PORTS | OUTPUT PORTS | TRANSFORMATION | TRANSF. REQUIRED | JAVA | CTL | AUTO-PROPAGATED METADATA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FlatFileWriter | flat file | 1 | 0-1 | x | x | x | x | x |

Ports

| PORT TYPE | NUMBER | REQUIRED | DESCRIPTION | METADATA |
| --- | --- | --- | --- | --- |
| Input | 0 | ✓ | For received data records | Any |
| Output | 0 | x | For port writing | A specific byte/cbyte/string field |

Metadata

FlatFileWriter does not propagate metadata.

The component has no metadata template.

FlatFileWriter requires a string, byte or cbyte field in the output metadata.

FlatFileWriter Attributes

| ATTRIBUTE | REQ | DESCRIPTION | POSSIBLE VALUES |
| --- | --- | --- | --- |
| **Basic** |  |  |  |
| File URL | ✓ | The destination the received data are written to (flat file, output port, dictionary); see Supported File URL Formats for Writers. |  |
| Charset |  | Character encoding of records written to the output. The default encoding depends on DEFAULT_CHARSET_DECODER in defaultProperties. | ISO-8859-1 \| UTF-8 \| <other encodings> |
| Append |  | If records are printed into an existing non-empty file, they replace the older ones by default (false). If set to true, new records are appended to the end of the existing output file(s) content. Some remote locations or compressed files do not support appending; see Appending to Files below. | false (default) \| true |
| Quoted strings |  | If set to true, all field values (except byte and cbyte) are quoted. If the attribute is not set, its value is inherited from metadata on the input port (and displayed in faded gray text); see also Record Details. | false \| true |
| Quote character |  | Specifies the kind of quotes enclosing output fields. Applied only if Quoted strings is true. By default, the value of this attribute is inherited from metadata on the input port; see also Record Details. | " \| ' |
| **Advanced** |  |  |  |
| Create directories |  | If set to true, non-existing directories in the File URL attribute path are created. | false (default) \| true |
| Write field names |  | Field labels are not written to the output file(s) by default. If set to true, the labels of individual fields are printed to the output. Note that field labels differ from field names: labels can be duplicated and can contain any character (e.g. accents, diacritics); see Record Pane. | false (default) \| true |
| Records per file |  | The maximum number of records written to each output file. If specified, the dollar sign(s) $ (number-of-digits placeholder) must be part of the file name mask; see Supported File URL Formats for Writers. | 1-N |
| Bytes per file |  | The maximum size of each output file in bytes. If specified, the dollar sign(s) $ (number-of-digits placeholder) must be part of the file name mask; see Supported File URL Formats for Writers. To avoid splitting a record between two files, the maximum size can be slightly exceeded. | 1-N |
| Number of skipped records |  | The number of records/rows skipped before the first record is written to the output file; see Selecting Output Records. | 0 (default)-N |
| Max number of records |  | The maximum number of records/rows written to all output files; see Selecting Output Records. | 0-N |
| Exclude fields |  | A sequence of field names, separated by semicolons, that are not written to the output. Can be used when the same fields serve as part of the Partition key. |  |
| Partition key | [1] | A sequence of field names, separated by semicolons, defining the distribution of records into different output files: records with the same Partition key are written to the same output file. Use the placeholder ($ or #) in the file name mask that matches the selected Partition file tag; see Partitioning Output into Different Output Files. |  |
| Partition lookup table | [2] | The ID of a lookup table used to select the records that are written to the output file(s); see Partitioning Output into Different Output Files. |  |
| Partition file tag | [1] | By default, output files are numbered. If this attribute is set to Key file tag, output files are named according to the values of Partition key or Partition output fields; see Partitioning Output into Different Output Files. | Number file tag (default) \| Key file tag |
| Partition output fields | [2] | Fields of the Partition lookup table whose values are used to name the output file(s); see Partitioning Output into Different Output Files. |  |
| Partition unassigned file name |  | The name of the file into which unassigned records, if any, are written. If not specified, records whose key values are not contained in the Partition lookup table are discarded; see Partitioning Output into Different Output Files. |  |
| Sorted input |  | If partitioning into multiple output files is turned on, all output files are kept open at once. This can lead to an undesirable memory footprint with many output files (thousands); moreover, Unix-based systems usually put a strict limit on the number of simultaneously open files per process (often 1,024). If you run into one of these limitations, sort the data by the partition key using one of the standard sorting components and set this attribute to true; the partitioning algorithm then keeps only the most recent output file open at a time. See Partitioning Output into Different Output Files. | false (default) \| true |
| Create empty files |  | If set to false, prevents the component from creating an empty output file when there are no input records. | true (default) \| false |
| Skip last record delimiter |  | If set to true, the last record delimiter in a file is not written; if set to false, it is written. | false (default) \| true |

[1] Either both or neither of these attributes must be specified.

[2] Either both or neither of these attributes must be specified.

Details

The type of formatting is specified in metadata for the input port data flow.

Appending to Files

Appending to files is supported if you write data to:

  • local files

  • local zipped files

  • remote files via the SMB protocol

Appending to files is not supported if you write data to:

  • local gzipped files

  • remote files via the FTP protocol

  • remote files via the WebDAV protocol

  • remote files via the Amazon S3 protocol

  • remote files via the HDFS protocol

Null Values

Empty strings and null values are written to a file as empty strings.
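As a minimal illustration of this rule (plain Python, not Data Shaper code), both a null field and an empty string produce an empty column, so the distinction between them is lost in the written file:

```python
# Sketch of the rule above: a null (None) field and an empty string
# field both come out as an empty column in the output line.

def format_field(value):
    return "" if value is None else str(value)

record = {"color": "brown", "shape": None, "material": ""}
line = "|".join(format_field(v) for v in record.values())
print(line)  # brown||
```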

Notes and Limitations

Field Size Limitation

FlatFileWriter can write fields of a size up to 4 kB. To enable bigger fields to be written to a file, increase the DataFormatter.FIELD_BUFFER_LENGTH property; see Engine configuration. Increasing the size of this buffer does not cause any significant increase in the graph's memory consumption.

Another way to handle fields that are too big to be written is to use the Normalizer component, which can split large fields into several records.

Maps, Lists and Variants

FlatFileWriter cannot write maps, lists and variants. If you do not need a field with a map, list or variant data type in the output file, you can omit it using the Exclude fields attribute. If you need to write the content of a map, list or variant field, convert it to string using the Map component first.

Examples

Writing Records to File

Write records to a file objects.txt using FlatFileWriter. The input metadata fields are color, shape and material.

Solution

Use the File URL attribute to define a path to the file to be created.

| ATTRIBUTE | VALUE |
| --- | --- |
| File URL | ${DATAOUT_DIR}/objects.txt |

An example of the output file with delimited input metadata:

gray|cylinder|steel
brown|cube|wood
transparent|sphere|glass

The "|" separators are defined by the metadata on the input edge.

An example of the output file with fixed-length input metadata:

gray           cylinder      steel
brown          cube          wood
transparent    sphere        glass
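Both layouts above can be reproduced with a small formatting sketch (plain Python, not Data Shaper code; the fixed field widths of 15, 14 and 5 characters are assumptions chosen to match the sample output, whereas in a real graph they come from the metadata):

```python
# Sketch: the same records rendered as delimited vs. fixed-length lines.
RECORDS = [
    ("gray", "cylinder", "steel"),
    ("brown", "cube", "wood"),
    ("transparent", "sphere", "glass"),
]

def delimited(rec, sep="|"):
    # The separator comes from the edge metadata in a real graph.
    return sep.join(rec)

def fixed(rec, widths=(15, 14, 5)):
    # Each field is left-aligned and padded to its metadata-defined size.
    return "".join(field.ljust(width) for field, width in zip(rec, widths))

print(delimited(RECORDS[0]))  # gray|cylinder|steel
print(fixed(RECORDS[0]))
```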

Producing Quoted Strings

Write data from the previous example to a file. Each field value has to be surrounded by a quote character ' (apostrophe).

Solution

Use the attributes File URL, Quoted strings and Quote character.

| ATTRIBUTE | VALUE |
| --- | --- |
| File URL | ${DATAOUT_DIR}/objects-in-quotes.txt |
| Quoted strings | true |
| Quote character | ' |

If a string to be quoted contains a quote character, the quote character in the string is doubled. E.g. o’clock is quoted as 'o''clock'.
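The doubling rule can be sketched as follows (plain Python, not the actual implementation): any occurrence of the quote character inside the value is doubled, and the whole value is then wrapped in the quote character.

```python
# Sketch of the quoting rule: double any embedded quote character,
# then wrap the whole value in quote characters.

def quote(value, quote_char="'"):
    return quote_char + value.replace(quote_char, quote_char * 2) + quote_char

print(quote("o'clock"))  # 'o''clock'
```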

Writing Records Without Delimiters

This example shows how to write records without record delimiters.

You receive output from XMLWriter in streaming mode. The records have to be written to the file seamlessly, with no delimiter between records.

Solution

The solution to the problem depends on metadata. The input metadata of FlatFileWriter must have no Record delimiter, no Default delimiter and must use EOF as delimiter.

In FlatFileWriter, enter File URL.

The records will be written without delimiters as no delimiters are specified in metadata.

Writing Fixed-Length Records to Output Port

Write several fields of fixed-length metadata into one field of the output port (provided one input record creates one output record).

Solution

Make sure that input metadata has no record delimiter set. Select metadata on the input edge and open the Edit Metadata window. Select the first row in the Record pane of the editor and make sure that the Record delimiter property is empty.

Create metadata on the output edge with a single field.

Use the attributes File URL and Records per file.

| ATTRIBUTE | VALUE |
| --- | --- |
| File URL | port:$0.field1:discrete |
| Records per file | 1 |

Best Practices

We recommend explicitly specifying the encoding of the output file (using the Charset attribute). It ensures better portability of the graph across systems with different default encodings.

The recommended encoding is UTF-8.

See also

  • UniversalDataReader

  • Common properties of components

  • Specific attribute types

  • Common Properties of Writers