# Logging workflow data with workflow log

After your project has gone through initial development and testing, knowing what is going on at runtime becomes important.

The Workflow Log metadata type in Hop allows workflow logging information to be passed down to a pipeline for processing as JSON objects. The receiving pipeline can process this logging information with all the functionality Hop pipelines have to offer, e.g. write it to a relational or NoSQL database, a Kafka topic, etc.

Hop will send the logging information for each workflow you run to the Workflow Log pipeline you specify.

In this post, we’ll look at an example of how to configure and use the Workflow Log metadata to write workflow logging information to a relational database.

### Step 1: Create a Workflow Log metadata object

To create a **Workflow Log**, click on the **New → Workflow Log** option or on the **Metadata → Workflow Log** option.

The system displays the New Workflow Log view with the following fields to be configured.

<figure><img src="/files/OPRSMdSRKDmjpZ8KDYSB" alt=""><figcaption></figcaption></figure>

The Workflow Log can be configured as in the following example:

<figure><img src="/files/PsaUODT6UBfiQidXE3v1" alt=""><figcaption></figcaption></figure>

* Name: the name of the metadata object (workflows-logging).
* Enabled: (checked).
* Pipeline executed to capture logging: select or create the pipeline to process the logging information for this Workflow Log (${PROJECT\_HOME}/hop/logging/workflows-logging.hpl).

{% hint style="info" %}
Tip: You should select or create the pipeline to be used for logging the activity.
{% endhint %}

* Execute at the start of the workflow?: (checked).
* Execute at the end of the workflow?: (checked).
* Execute periodically during execution?: (unchecked).

Finally, save the workflow log configuration.

{% hint style="info" %}
Tip: workflow logging applies to every workflow you run in the current project. That may not be necessary or even desired. If you only want logging information for a selected set of workflows, add them to the table below the configuration options ("Capture output of the following workflows"). The screenshot below shows logging captured for just the "generate-fake-books.hwf" workflow in the default Apache Hop samples project.
{% endhint %}

<figure><img src="/files/7YX7fK0cWHaymSaK3ZZM" alt=""><figcaption></figcaption></figure>

### Step 2: Create a new pipeline with the Workflow Logging transform

To create the pipeline, go to the perspective area or click the New button in the New Workflow Log dialog. Then choose a folder and a name for the pipeline.

A new pipeline is automatically created with a [Workflow Logging](/data-shaper-1.21/knowing-the-data-shaper-designer/pipelines/transforms/workflow-logging.md) transform connected to a [Dummy](/data-shaper-1.21/knowing-the-data-shaper-designer/pipelines/transforms/dummy.md) transform (Save logging here).

<figure><img src="/files/fK5foyFIywOHJyJKkrOw" alt=""><figcaption></figcaption></figure>

Now it’s time to configure the Workflow Logging transform. The configuration is very simple: open the transform and set your values as in the following example:

<figure><img src="/files/NmiFn1vxVL715qWnl9q0" alt=""><figcaption></figcaption></figure>

* Transform name: choose a name for your transform; just remember that transform names must be unique within your pipeline (log).
* Also log action details?: selected by default.

### Step 3: Add and configure a Table output transform

The Table Output transform allows you to load data into a database table. Table Output is equivalent to the DML operator INSERT. The transform provides configuration options for the target table, as well as housekeeping and performance-related options such as Commit size and Use batch update for inserts.
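
Conceptually, each row that reaches the transform becomes an INSERT statement (or part of a batch of them when batch updates are enabled). As a rough sketch, with purely illustrative table and column names:

```sql
-- Purely illustrative: Table Output issues one INSERT per incoming row,
-- batched when "Use batch update for inserts" is enabled.
INSERT INTO some_table (field_1, field_2)
VALUES ('value 1', 'value 2');
```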

{% hint style="info" %}
Tip: in this example we use a relational database connection for logging, but you can also write to output files. If you decide to use a database connection, make sure the database is installed and reachable as a prerequisite.
{% endhint %}

Add a Table Output transform by clicking anywhere in the pipeline canvas, then search for 'table output' and select Table Output.

<figure><img src="/files/vmb1XudYdGiXuWOcKgLe" alt=""><figcaption></figcaption></figure>

Now it’s time to configure the Table Output transform. Open the transform and set your values as in the following example:

<figure><img src="/files/E2SwwnDNuQfFlShUVERu" alt=""><figcaption></figcaption></figure>

* Transform name: choose a name for your transform; just remember that transform names must be unique within your pipeline (workflows logging).
* Connection: the database connection to which data will be written (logging-connection). The connection is configured through the logging-connection.json environment file, which contains the variables shown below (see the sketch after the screenshot):

<figure><img src="/files/w6N3JEcq0es438E937I3" alt=""><figcaption></figcaption></figure>

* Target table: The name of the table to which data will be written (workflows-logging).
* Click the SQL option to automatically generate the SQL that creates the output table.

<figure><img src="/files/tkuIFBTEbdoZ7md1sCC3" alt=""><figcaption></figcaption></figure>

* Execute the SQL statements. In this simple scenario, we’ll execute the SQL directly. In real-life projects, consider managing your DDL in version control and through tools like [Liquibase](https://www.liquibase.org/) or [Flyway](https://flywaydb.org/).

<figure><img src="/files/TDb8Zlf935H8kzKAQmM8" alt="" width="493"><figcaption></figcaption></figure>

* Open the created table to see all the logging fields:

<figure><img src="/files/UUBnV4P6zdtPkTfAEmJD" alt="" width="268"><figcaption></figcaption></figure>

* Close and save the transform.

### Step 4: Run a workflow and check the logs

Finally, run a workflow by clicking on the Run → Launch option. The Workflow Log pipeline will be executed for every workflow you run.

<figure><img src="/files/itaEpWwF3z6iobh356ZV" alt=""><figcaption></figcaption></figure>

The workflow executed in this example runs a pipeline (generate-rows.hpl) that generates a constant and writes 1000 rows to a CSV file:

<figure><img src="/files/CnyxlpBLhAFhltsromsf" alt=""><figcaption></figcaption></figure>

The data of the workflow execution will be recorded in the workflows-logging table.

<figure><img src="/files/6z3Trhr8cUzaz9ugDKfm" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/0Jmz6sGNKkX2SyLwjTzI" alt=""><figcaption></figcaption></figure>

Check the data in the table.

<figure><img src="/files/rVYUd07CwRe5OYlkqm8b" alt=""><figcaption></figcaption></figure>

### Next steps

You now know how to use the Workflow Log metadata type to process your workflow logging information with everything Apache Hop has to offer.

Check the related page on [pipeline log](/data-shaper-1.21/knowing-the-data-shaper-designer/index-2/pipeline-log.md) to learn how to set up a similar process to work with pipeline logging.

