Share sensitive data subsets between teams safely with masking and anonymization
Thanks to our AI engine, populate your databases with consistent datasets that look real
Spin up new datasets and databases for teams, tools and external parties to consume
How it works
API-driven data provisioning for testing with sensitive data
1 Ingest databases, files or schemas
Ingest data or a schema from a database dump or a flat file into Gigantics using our smart REST API and plain cURL.
Data is automatically desensitized and stored by Gigantics.
$ pg_dump -U prod prod_db | curl --data-binary @- gigantics.io/tap/sdf-dff-3ie
2 Now safely test your app with realistic data
Gigantics stores multiple datasets for each test scenario. Create randomly sampled subsets just in time for each ephemeral environment. Don't worry about foreign keys; Gigantics takes care of them for you.
$ curl 'gigantics.io/tap/sdf-dff-3ie?max=5000&ex=log_table' \
  | psql -U test test_db
$ npm run tests
3 No database to ingest? No problem!
Gigantics can ingest your codebase or your favorite ORM's schema files and generate brand-new synthetic data, giving your app an instant development database.
$ tar cfz - prisma/
| curl -F file=@- gigantics.io/tap/sdf-dff-3ie/pg > db.sql
Detected Prisma schema file: prisma/schema.prisma
Generating SQL dump file for database: postgres
Data provisioning for your DevOps pipelines
Why test with realistic data?
Realistic UI testing
Test your UI in a production-like environment and validate the quality of your user experience under extremely high data volumes
Improve the performance and result quality of your app's SQL queries
For each new version of the app or schema, test data migrations and make sure they won't break the app or take too long in the production environment.
Realistic integration testing
Test the integrated system and apps as if they were running in the production environment
Provision environments fast
Provision instant, short-lived environments for testing or development
Experiment with mock or realistic data for analytics, reports, transformations and business simulations
Risk-free data sharing
Hand data out to people inside and outside your team or organization without fear of data leaks
Create more robust ML/AI data models by training with augmented datasets that are statistically sound
Increase data governance while promoting transparency
Expose the risks
Run discovery and review sensitive data. Detect PII and GDPR-regulated fields, label the results, and track changes with audit versioning.
Make data sharable
Anonymize your databases. Create secure datasets that are free of sensitive information for testing, development or sharing.
Produce consistent, anonymized data automatically from the sampled data.
Apply your rules
Create reproducible DataOps pipelines for data modeling, analysis, masking and synthesis that fit quickly into your DevOps toolchain.
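As a sketch, a pipeline step in a CI job could refresh an anonymized dataset and run the tests against it. This reuses only the commands and the illustrative tap URL from the examples above; TAP_URL is a placeholder for your own endpoint.

```shell
#!/bin/sh
# Illustrative CI step: pull a masked, subsetted dataset from a
# Gigantics tap into an ephemeral test database, then run the suite.
set -e

# Placeholder: substitute your own tap endpoint here
TAP_URL="gigantics.io/tap/sdf-dff-3ie"

# Load an anonymized 5000-row sample into the test database
curl "${TAP_URL}?max=5000" | psql -U test test_db

# Run the test suite against realistic, anonymized data
npm run tests
```

Because the dataset is regenerated on every run, the pipeline stays reproducible and never touches production data.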
Manage your datasets
Once you have created new datasets through filtering, subsetting and other operations, manage them all in one place and share them with developers, testers and anyone else who needs them for their tests.
One-click access to data
Start a virtual Docker instance with the desired database in just one click, and stop it once your tests have finished.
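Under the hood, a one-click instance amounts to something like the following minimal sketch, using the official postgres image and the illustrative tap URL from the examples above:

```shell
# Start a throwaway Postgres container for testing
docker run -d --name test_db -e POSTGRES_PASSWORD=test -p 5432:5432 postgres

# Load an anonymized dataset from a Gigantics tap (illustrative URL)
curl 'gigantics.io/tap/sdf-dff-3ie?max=5000' \
  | docker exec -i test_db psql -U postgres

# Tear the instance down when the tests have finished
docker stop test_db && docker rm test_db
```

Since the container is disposable, every test run starts from a clean, consistent dataset.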