Takes all tables in the selected bucket(s) and creates views of them, with data types applied, in an external database within the same Snowflake account.

It is advisable to enable the read-only (RO) role for the project so that the view-creating role has only read access. To support shared buckets, the RO role must be enabled in both projects; otherwise the component will fail when shared tables are enabled in the configuration.

No Snowflake account sharing is needed; the views are created in the same Snowflake account where the project's BYODB backend runs.
To allow the component to work correctly, set up the following:

```sql
-- Replace with your own values
CREATE ROLE IF NOT EXISTS <YOUR_ROLE>;

CREATE USER IF NOT EXISTS <YOUR_USERNAME>
  PASSWORD = 'SomeStrongPassword123!'
  DEFAULT_ROLE = <YOUR_ROLE>
  DEFAULT_WAREHOUSE = <YOUR_WAREHOUSE>;

GRANT ROLE <YOUR_ROLE> TO USER <YOUR_USERNAME>;

-- Grant read access to one or more storage schemas (e.g., IN_C_SALES)
GRANT USAGE ON DATABASE <STORAGE_DB> TO ROLE <YOUR_ROLE>;
GRANT USAGE ON SCHEMA <STORAGE_DB>.<STORAGE_BUCKET> TO ROLE <YOUR_ROLE>;
GRANT SELECT ON ALL TABLES IN SCHEMA <STORAGE_DB>.<STORAGE_BUCKET> TO ROLE <YOUR_ROLE>;

-- Optionally allow access to future tables
GRANT SELECT ON FUTURE TABLES IN SCHEMA <STORAGE_DB>.<STORAGE_BUCKET> TO ROLE <YOUR_ROLE>;
```
Repeat for all necessary storage schemas (e.g., `OUT_C_ANALYTICS`, etc.).
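To verify that the grants took effect, you can inspect the role with Snowflake's standard `SHOW GRANTS` command:

```sql
-- List all privileges currently granted to the role
SHOW GRANTS TO ROLE <YOUR_ROLE>;
```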
```sql
-- Create if necessary
CREATE DATABASE IF NOT EXISTS <DESTINATION_DB>;
CREATE SCHEMA IF NOT EXISTS <DESTINATION_DB>.<TARGET_SCHEMA>;

-- Grant permission to create views
GRANT USAGE ON DATABASE <DESTINATION_DB> TO ROLE <YOUR_ROLE>;
GRANT USAGE ON SCHEMA <DESTINATION_DB>.<TARGET_SCHEMA> TO ROLE <YOUR_ROLE>;
GRANT CREATE VIEW ON SCHEMA <DESTINATION_DB>.<TARGET_SCHEMA> TO ROLE <YOUR_ROLE>;
```
⚠️ Do not grant `ALL PRIVILEGES` or `CREATE TABLE` unless strictly required. This component only needs to create views.
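To confirm that the destination schema carries only the minimal privileges, you can audit it as well (another standard Snowflake command; expect only `USAGE` and `CREATE VIEW` for `<YOUR_ROLE>`):

```sql
-- Audit the privileges granted on the destination schema
SHOW GRANTS ON SCHEMA <DESTINATION_DB>.<TARGET_SCHEMA>;
```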
⚠️ Role Compatibility Note

Ensure that the role configured for the component (`<YOUR_ROLE>`) has both:
– read access to the source Storage tables (buckets), and
– write access to create views in the destination schema.

If you are using a restricted read-only (RO) role, make sure it has been explicitly granted the required privileges in both contexts. Otherwise, the component will fail with `not authorized` or `insufficient privileges` errors.
When setting up the component:

- Use the BYODB Snowflake credentials (username, password, host, warehouse)
- Set the destination database and schema for the views (`<DESTINATION_DB>.<TARGET_SCHEMA>`)
- Select the tables from Storage buckets to expose (e.g., `out.c-analytics`)

Each selected table will result in a `VIEW` created in the target schema that directly references the table in Storage.
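For illustration, a generated view might look like the following sketch (the table name `orders` is hypothetical; the exact DDL, data type casts, and naming depend on the case and naming options described below):

```sql
-- Hypothetical example: the view reads the Storage table directly,
-- so no data is copied.
CREATE VIEW <DESTINATION_DB>.<TARGET_SCHEMA>."orders" AS
SELECT * FROM <STORAGE_DB>.<STORAGE_BUCKET>."orders";
```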
For authentication, the component requires the following configuration parameters:

- Authentication Type (required) - Choose between:
  - `password` - Standard password authentication
  - `key_pair` - Key pair authentication (default)
- Storage Token (optional) - KBC Storage API token
- User Name (required) - Snowflake user name
- Password (required for password auth) - Snowflake user password
- Private Key (required for key pair auth) - Private key in PEM format
- Private Key Passphrase (optional) - Passphrase for an encrypted private key
- Account (required) - Snowflake account identifier (e.g., `cID.eu-central-1`)
- Warehouse (required) - Name of the Snowflake warehouse to use
- Role (required) - Snowflake role name to use for the connection
- DB Name Prefix (optional) - Prefix for Keboola-generated DB names
  - Default value: `KEBOOLA_`
  - Format: `{PREFIX}{PROJECT_ID}`
  - Common values: `KEBOOLA_` or `SAPI_`
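Before configuring the component, you can sanity-check the credentials by logging in to Snowflake as the new user and inspecting the session context (these are standard Snowflake context functions):

```sql
-- Confirm that the session uses the expected user, role, and warehouse
SELECT CURRENT_USER(), CURRENT_ROLE(), CURRENT_WAREHOUSE(), CURRENT_ACCOUNT();
```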
Example configuration for password authentication:

```json
{
  "auth_type": "password",
  "account": "xy12345.eu-central-1",
  "username": "<YOUR_USERNAME>",
  "#password": "your-password",
  "warehouse": "<YOUR_WAREHOUSE>",
  "role": "<YOUR_ROLE>",
  "db_name_prefix": "KEBOOLA_"
}
```
For key pair authentication:
```json
{
  "auth_type": "key_pair",
  "account": "xy12345.eu-central-1",
  "username": "<YOUR_USERNAME>",
  "#private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
  "#private_key_pass": "optional-passphrase",
  "warehouse": "<YOUR_WAREHOUSE>",
  "role": "<YOUR_ROLE>",
  "db_name_prefix": "KEBOOLA_"
}
```
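Note that key pair authentication requires the public half of the key pair to be registered on the Snowflake user beforehand (a standard Snowflake step; the key value below is a truncated placeholder):

```sql
-- Attach the public key to the user; the component then authenticates
-- using the private key from its configuration.
ALTER USER <YOUR_USERNAME> SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';
```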
Each configuration row requires the following parameters:

- Target DB name (required) - Name of the destination database in Snowflake
- Storage Buckets (required) - List of storage buckets to process. If empty, all buckets in the project will be used.
- Custom schema mapping (optional) - Enable to map buckets to custom schemas
  - Default: `false`
- Schema Mapping (required if custom mapping is enabled) - Maps source buckets to destination schemas
  - Storage Bucket - Source bucket ID
  - Target Schema - Target schema name in Snowflake
Case Settings:

- Column case - Case formatting for column names
  - Options: `original`, `upper`, `lower`
  - Default: `original`
- View case - Case formatting for view names
  - Options: `original`, `upper`, `lower`
  - Default: `original`
- Schema case - Case formatting for schema names
  - Options: `original`, `upper`, `lower`
  - Default: `original`
Naming Options:

- Use bucket alias - Use the bucket alias instead of the bucket ID in view names
  - Default: `true`
  - Description: Uses the user-defined bucket name instead of the technical ID
- Drop in/out prefix - Remove the in/out prefix from schema names
  - Default: `false`
  - Description: Removes the standard in/out prefix from the resulting schema names
- Use table user defined name - Use friendly table names
  - Default: `false`
  - Description: Uses the table's user-defined name instead of the default ID in view names

Other Options:

- Ignore shared tables - Skip processing of shared tables
  - Default: `true`
  - Description: Disable this option (to process shared tables) only if the RO role is used and enabled in all projects

Example row configuration:
```json
{
  "destination_db": "<DESTINATION_DB>",
  "bucket_ids": ["out.c-analytics"],
  "custom_schema_mapping": true,
  "schema_mapping": [
    {
      "bucket_id": "out.c-analytics",
      "destination_schema": "ANALYTICS"
    }
  ],
  "additional_options": {
    "column_case": "lower",
    "view_case": "upper",
    "schema_case": "original",
    "use_bucket_alias": true,
    "drop_stage_prefix": false,
    "use_table_alias": true,
    "ignore_shared_tables": true
  }
}
```
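With this row configuration, the component would create views along the lines of the sketch below (the table and column names are hypothetical; the bucket maps to the `ANALYTICS` schema, view names are upper-cased, and column names are lower-cased per the case settings above):

```sql
-- Hypothetical result for a Storage table named "orders" in out.c-analytics
CREATE VIEW <DESTINATION_DB>.ANALYTICS."ORDERS" AS
SELECT "ID" AS "id", "CREATED_AT" AS "created_at"
FROM <STORAGE_DB>.<STORAGE_BUCKET>."orders";
```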
If required, change the local data folder path (the `CUSTOM_FOLDER` placeholder) to your custom path in the `docker-compose.yml` file:

```yaml
volumes:
  - ./:/code
  - ./CUSTOM_FOLDER:/data
```
Clone this repository, initialize the workspace, and run the component with the following commands:

```bash
git clone [email protected]:kds_consulting_team/kds-team.app-snowflake-byodb-view-writer.git kds-team.app-snowflake-byodb-view-writer
cd kds-team.app-snowflake-byodb-view-writer
docker-compose build
docker-compose run --rm dev
```
Run the test suite and lint check using this command:

```bash
docker-compose run --rm test
```
For information about deployment and integration with KBC, please refer to the deployment section of the developer documentation.