Many business organizations leverage the data they generate by storing it in robust Data Warehouses and analyzing it with Business Intelligence (BI) tools to gather actionable insights for optimizing their strategies. Snowflake is one such Data Warehouse used by companies, and Looker is a popular Data Visualization and BI platform that can analyze the data stored in Snowflake. That is why, more often than not, a Looker Snowflake connection is essential for most organizations.
In this article, you will be introduced to Looker, Snowflake, and their key features. You will learn about the prerequisites of the Looker Snowflake Integration and the steps that can be followed to implement it.
Introduction to Snowflake
Snowflake is a powerful Cloud Data Warehouse platform that lets you not only store but also share data in real-time, fully leveraging cloud architecture. It’s perfect for businesses that want to avoid the hassle of maintaining in-house servers, as everything runs in the cloud. Snowflake offers top-notch performance, easily scales to meet your needs, and supports both structured and semi-structured data like CSV, XML, JSON, AVRO, and more. You can run fully relational queries with ANSI, ACID-compliant SQL. For more details, check out Snowflake’s documentation.
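As a quick illustration of Snowflake's semi-structured data support, the sketch below stores raw JSON in a VARIANT column and queries a nested field with plain SQL. The table and field names here are hypothetical, chosen only for this example:

```sql
-- hypothetical table holding raw JSON events in a VARIANT column
create table if not exists raw_events (payload variant);

-- extract nested JSON fields and cast them to SQL types
select
    payload:customer.name::string as customer_name,
    payload:amount::number(10, 2) as amount
from raw_events
where payload:event_type::string = 'purchase';
```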
Key Features of Snowflake
Snowflake has the following features:
- Cloud Service Provision: The Snowflake Data Warehouse platform is completely hosted on Cloud infrastructure offered by Amazon Web Services, Microsoft Azure, and Google Cloud, with the option to choose the geographic region in which your data is stored.
- High Scalability: Users can scale compute resources up and down rapidly in accordance with business requirements, making the platform more cost-effective.
- Focus on Security: Along with data storage security features such as encryption, a range of security measures control data access, such as restricting certain IP Addresses and Two-Factor Authentication.
- Low Administrative Requirements: As Snowflake is available as Software-as-a-Service (SaaS), there is little need to set up IT infrastructure or maintain an IT administrator for the deployment.
- Support for Data Sharing: Sharing data with other Snowflake users is very easy. For users not on the platform, special reader accounts can be created to access the shared content.
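The data sharing described above works through share objects. A rough sketch of how a provider might expose a table to another account is shown below; the share, database, schema, and table names are placeholders for this example:

```sql
-- create a share and expose a database, schema, and table through it
create share if not exists sales_share;
grant usage on database sales_db to share sales_share;
grant usage on schema sales_db.public to share sales_share;
grant select on table sales_db.public.orders to share sales_share;

-- add a consumer account to the share
alter share sales_share add accounts = <consumer_account>;
```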
Introduction to Looker
Looker is a web-based Data Visualization and BI platform that easily lets you create business reports and real-time dashboards. It transforms your inputs into SQL queries, running them live against the database. With tons of visualization and customization options, it supports multiple data sources while ensuring your data remains secure, transparent, and reliable—perfect for mission-critical needs. Looker works seamlessly with cloud data warehouses like BigQuery, Redshift, and Snowflake, and its separate data modeling layer allows developers to collaborate efficiently. To learn more about Looker, you can visit their official documentation.
Key Features of Looker
The key features of Looker are as follows:
- Create Custom Applications: Users can create custom applications that provide a tailored visualization experience to different organizations according to their needs.
- Support for Advanced Integrations: Platforms like Google BigQuery, Snowflake, and AWS Redshift, along with 50+ SQL dialects, are supported through various Connectors and Integrations.
- Latest BI Tools: Looker provides access to up-to-date data, supports the creation of Real-Time Dashboards, and works with the latest Business Intelligence tools to improve reporting.
- Support for Advanced Hosting: To ensure data reliability and safety, multiple Cloud-based platforms are supported, including Google Cloud Platform (GCP) and Amazon Web Services (AWS).
- LookML Functionality: This feature, known as LookML, is used to describe the measures and dimensions of all the projects stored and analyzed on Looker.
Pre-Requisites
- Working knowledge of Snowflake
- Working knowledge of Looker
- Active Looker Account
- Active Snowflake account and Cloud Data Warehouse
- Access to provision users, roles, and warehouses on Snowflake
Hevo offers a fully managed solution to set up data integration from 150+ data sources (including 60+ Free Data Sources) and lets you directly load data to Snowflake or a Data Warehouse of your choice. It automates your data flow in minutes without requiring a single line of code. Its fault-tolerant architecture ensures that your data is secure and consistent.
Check out what makes Hevo amazing:
- Schema Management: Hevo eliminates the tedious task of schema management. It automatically detects the schema of incoming data and maps it to the destination schema.
- Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with minimal latency.
- Incremental Data Load: Hevo allows the transfer of modified data in real-time, ensuring efficient bandwidth utilization on both ends.
- Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Get Started with Hevo for Free
Procedure to Implement Looker Snowflake Integration
Step 1: Creation of a Looker User on Snowflake
You need to run each of these commands individually to create a Looker User on Snowflake.
Before that, you are required to check the “All Queries” option on the Snowflake Connection Panel.
-- change role to ACCOUNTADMIN
use role ACCOUNTADMIN;
-- create role for looker
create role if not exists looker_role;
grant role looker_role to role SYSADMIN;
-- Note that we are not making the looker_role a SYSADMIN,
-- but rather allowing users with the SYSADMIN role to manage the looker_role
-- create a user for looker
create user if not exists looker_user
password = '<enter password here>';
grant role looker_role to user looker_user;
alter user looker_user
set default_role = looker_role
default_warehouse = 'looker_wh';
-- this part is to be executed only if the user roles are to be changed
-- change role
use role SYSADMIN;
-- create a warehouse for looker (optional)
create warehouse if not exists looker_wh
-- set the size based on your dataset
warehouse_size = medium
warehouse_type = standard
auto_suspend = 1800
auto_resume = true
initially_suspended = true;
grant all privileges
on warehouse looker_wh
to role looker_role;
-- grant read only database access (repeat for all database/schemas)
grant usage on database <database> to role looker_role;
grant usage on schema <database>.<schema> to role looker_role;
-- rerun the following any time a table is added to the schema
grant select on all tables in schema <database>.<schema> to role looker_role;
-- or
grant select on future tables in schema <database>.<schema> to role looker_role;
-- create schema for looker to write back to
use database <database>;
create schema if not exists looker_scratch;
use role ACCOUNTADMIN;
grant ownership on schema looker_scratch to role SYSADMIN revoke current grants;
grant all on schema looker_scratch to role looker_role;
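To sanity-check the setup above, you can switch to the new role and confirm that it can see the granted objects. This is an optional verification sketch; the database, schema, and table names are placeholders:

```sql
-- verify that looker_role can use the warehouse and see the granted objects
use role looker_role;
use warehouse looker_wh;
show schemas in database <database>;
select count(*) from <database>.<schema>.<table>;
```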
Step 2: Adding the Database Connection in Looker
To add a New Database Connection in Looker, you need to navigate to the Admin panel of the interface. Subsequently, select “Connections” and then “New Connection”.
A Configuration Section will open up where you will be required to enter the following information.
Name: Provide an arbitrary name to refer to this connection later.
Host: Provide the hostname of your Snowflake deployment. It is of the format <account_name>.snowflakecomputing.com.
Port: Depending upon your deployment, you can change the Port number; the default Port is 443.
Database: Provide the name of the default database to use.
Username and Password: These are the credentials of the user that Looker will use to connect (for example, the looker_user created above).
Schema: This is the default Database Schema that is used in your Snowflake deployment.
Temp Database: This needs to be set to the scratch Database Schema (for example, the looker_scratch schema created above) if PDTs (Persistent Derived Tables) are enabled.
You will also need to make the following selections:
Dialect: Snowflake needs to be selected.
Persistent Derived Tables: This option needs to be checked to enable PDTs (Persistent Derived Tables).
Some Optional Changes that you might implement:
Max Connections: This is related to Connection Pool size.
Connection Pool Timeout: Session Timeout for Connection Pool.
Database Time Zone: Default is UTC.
Query Time Zone: Default is UTC.
You can also provide additional JDBC (Java Database Connectivity) parameters for the Snowflake JDBC driver. The driver's default values for some common parameters are as follows:
TIMESTAMP_TYPE_MAPPING=TIMESTAMP_LTZ
JDBC_TREAT_DECIMAL_AS_INT=FALSE
TIMESTAMP_INPUT_FORMAT=AUTO
AUTOCOMMIT=TRUE
You can override each of these by setting an alternative value in the Additional Params field, for example: &AUTOCOMMIT=FALSE
After making the configurations, you can click on the “Test These Settings” button to check if the connection is successful.
Finally, you need to click on the “Add Connection” button to save the connection.
Step 3: Setting Access on Per-Group or Per-User Basis
You can allocate different computing resources to different users using Looker Attributes. This can help assign Snowflake Warehouses to specific Looker users or even groups.
Firstly, add the groups or users in Looker, and then create a User Attribute to store the Snowflake Warehouse names.
Then proceed to assign Warehouse name values to the users or groups.
Subsequently, you will be required to provide Additional Parameters on the “Connection Settings” page using the following format:
warehouse={{ _user_attributes['snowflake_warehouse'] }}
Step 4: Implementing Snowflake’s Autosuspend Feature
Using the Autosuspend feature, the Warehouse gets suspended after a specified period of inactivity. Therefore, queries issued after this period will return an error. However, this error is not visible to Third-Party Dashboards such as those on Looker.
To counter the shortcomings of this feature, there is also an Auto-resume feature which resumes the Warehouse when it is queried. The specified time period for these features can be controlled in the Warehouses Tab.
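These settings can also be adjusted later in SQL with ALTER WAREHOUSE, rather than through the Warehouses Tab. The sketch below uses the looker_wh warehouse from Step 1; the timeout value is purely illustrative:

```sql
-- suspend after 10 minutes of inactivity; resume automatically on the next query
alter warehouse looker_wh set
    auto_suspend = 600
    auto_resume = true;
```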
Step 5: Setting Up OAuth for Connection
To set up an OAuth-based connection, you will require a user account with ACCOUNTADMIN permissions on Snowflake.
Firstly, you are required to run the following command in Snowflake, where <looker_hostname> is the hostname of the Looker Instance:
CREATE SECURITY INTEGRATION LOOKER
TYPE = OAUTH
ENABLED = TRUE
OAUTH_CLIENT = LOOKER
OAUTH_REDIRECT_URI = 'https://<looker_hostname>/external_oauth/redirect';
To obtain the OAuth Client ID and Client Secret, you need to run the following command:
SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('LOOKER');
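If you want to inspect the integration created above, for example to confirm the redirect URI was registered correctly, Snowflake's DESCRIBE command can be used:

```sql
-- list the properties of the LOOKER security integration
desc security integration LOOKER;
```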
You will be required to enter these credentials (the OAuth Client ID and Client Secret) while creating a new connection in Looker.
Step 6: Running Queries and Signing in on Snowflake
After the OAuth Snowflake connection is set up and running, users are required to log in before running queries; otherwise, they will receive an error message.
The login can be completed by entering the “OAuth Connection Credentials” on the “Accounts” page in Looker.
You are required to click on the “Allow” button to give Looker access to your Snowflake account.
Step 7: Implementing User Permissions
You can control User Permissions on Looker as per your requirements for accessing features and data on the platform. For starters, the default roles and permissions are appropriate until the deployment is expanded to multiple users in the organization.
Limitations of Implementing Looker Snowflake Integration using Snowflake Database Connector
- The implementation requires that the user has technical knowledge of both platforms, along with the ability to understand developer documentation and implement additional parameters using a Command-Line Interface.
- The Connection Administrator would require knowledge of permission handling on both platforms.
- The Autosuspend and Auto-resume features can cause delays in running queries while analyzing data using Looker.
Conclusion
In this article, you learned about Snowflake, Looker, and their key features, the steps to implement your Looker Snowflake Integration, its prerequisites, and the limitations of the manual method. Integrating and analyzing data from a huge set of diverse sources can be challenging; this is where Hevo comes into the picture. Hevo Data, a No-code Data Pipeline, helps you transfer data from a source of your choice in a fully automated and secure manner without having to write code repeatedly. Hevo, with its strong integration with 150+ sources & BI tools, allows you to not only export & load data but also transform & enrich your data & make it analysis-ready in a jiffy.
Get started with Hevo today! Sign up here for a 14-day free trial!
FAQ on Looker Snowflake
What is the difference between Looker and Snowflake?
Looker is a data visualization and business intelligence (BI) tool that allows users to analyze, explore, and visualize data. Snowflake, on the other hand, is a cloud-based data warehousing platform used to store, process, and analyze large amounts of structured and semi-structured data. Looker connects to Snowflake to pull data for reporting and visualization.
Which mechanism of Snowflake does Looker take advantage of?
Looker takes advantage of Snowflake’s virtual warehouses and scalable computing power. Looker queries Snowflake directly and uses Snowflake’s ability to scale computational resources dynamically, which ensures efficient query performance even with large datasets.
How do I migrate a database to Snowflake?
To migrate a database to Snowflake:
1. Extract data from your source database (e.g., SQL dump, CSV, or other file formats).
2. Load data into a stage, such as Amazon S3 or Google Cloud Storage.
3. Use Snowflake’s COPY INTO command to import the data into Snowflake tables.
4. Optionally, set up data transformation in Snowflake using SQL or an ETL tool like Hevo for post-migration processing.
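Steps 2 and 3 above can be sketched in SQL as follows. The stage URL, credentials, file format options, and table name are placeholders chosen for this example:

```sql
-- create a file format describing the extracted CSV files
create file format if not exists my_csv_format
    type = 'CSV' skip_header = 1 field_optionally_enclosed_by = '"';

-- create an external stage pointing at the files in cloud storage
create stage if not exists my_stage
    url = 's3://<bucket>/<path>/'
    credentials = (aws_key_id = '<key>' aws_secret_key = '<secret>')
    file_format = my_csv_format;

-- bulk-load the staged files into the target table
copy into my_table
from @my_stage
file_format = (format_name = 'my_csv_format');
```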
Abhishek is a data analysis enthusiast with a strong passion for data, software architecture, and crafting technical content. He has strong data analytical skills and practical experience in leveraging tools like WordPress, RankMath, and SEMRush to optimize content strategies while contributing significantly to Hevo Data.