In Gigantics, a Pipeline is a template or blueprint that lets the user run a job periodically and/or trigger it through a public link.

Create a new pipeline


When creating a new pipeline, you need to configure what type of job you want the pipeline to run:

  • Scan: Scans the datasource looking for changes.
  • Discover: Creates a new discovery.
  • Create a dataset using rule: Creates a dataset using an existing rule.
  • Load using rule: Loads the tap into a sink by applying a rule (does not create datasets).
  • Dump dataset: Loads a dataset into a sink.
  • Pump the tap: Loads the tap directly into the sink without creating datasets or applying rules.


Select when the pipeline will be executed. The available options are:

  • Manual Execution: Runs the pipeline when the user decides, either by using the Run button on the Pipelines page or by calling its URL.

  • Periodic Execution: Runs the pipeline automatically at a fixed interval defined by the user.

Pipelines API

Pipelines can be executed by using the Run button or by calling a URL. To enable this type of remote execution, use the Manage APIs option visible in the actions menu of the pipeline.

From the management window, the user can create or revoke API keys. To use one, copy the API key and invoke the pipeline from its URL:
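As a rough sketch of what such a call looks like, the helper below builds the execution URL and requests it. The base URL, path layout, and `apikey` query-parameter name are assumptions for illustration only; copy the exact URL shown in your pipeline's API management window.

```python
# Sketch of invoking a pipeline remotely with an API key.
# NOTE: the URL layout and the "apikey" parameter name are assumptions;
# use the exact URL provided by the Manage APIs window.
from urllib.parse import urlencode
from urllib.request import urlopen


def pipeline_run_url(base_url: str, pipeline_id: str, api_key: str) -> str:
    """Build a hypothetical remote-execution URL for a pipeline."""
    query = urlencode({"apikey": api_key})
    return f"{base_url}/pipelines/{pipeline_id}/run?{query}"


if __name__ == "__main__":
    url = pipeline_run_url("https://gigantics.example.com/api", "42", "MY_API_KEY")
    # Trigger the pipeline (requires network access and a valid key):
    # urlopen(url)
    print(url)
```

The same call can of course be made with `curl` or any HTTP client, since execution is just a GET request to the key-bearing URL.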


The user who runs the rule must have permission to edit models in the project.


1- Use a pipeline to periodically scan the database for changes in its schema.

2- Provision environments with masked data by applying a transformation rule and periodically dumping the results into a sink.

3- Load a dataset into your test database to run unit tests in your CI/CD pipeline.
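For the CI/CD use case above, a setup step typically triggers the pipeline and then waits for the load to finish before the tests run. The polling helper below is a generic sketch; how job status is actually queried (the `fetch_status` callable) is an assumption, since the exact status endpoint is not described here.

```python
# Sketch of a CI setup step: after triggering the "load dataset" pipeline,
# poll until the job reports completion before starting unit tests.
# The status values ("finished") and the status-fetching callable are
# hypothetical; adapt them to however your setup checks job state.
import time
from typing import Callable


def wait_for_job(fetch_status: Callable[[], str],
                 timeout_s: float = 300.0,
                 poll_s: float = 2.0) -> bool:
    """Poll fetch_status() until it returns "finished" or the timeout expires.

    Returns True on completion, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch_status() == "finished":
            return True
        time.sleep(poll_s)
    return False
```

In a CI script this would be called right after the pipeline URL is invoked, failing the build early if the dataset never becomes available.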