KEY TAKEAWAYS

There are several ways to connect Salesforce to Snowflake:

  • Hevo Data: If you want a pipeline that runs itself, Hevo’s no-code ELT handles schema mapping, incremental loads, and error recovery automatically.
  • Salesforce Bulk API: If you need full control over large-volume extracts and have technical resources, the native Bulk API works, but you’ll own the scripts and maintenance.
  • Custom API Integration: If your team can write code, you can use Salesforce’s REST API to pull records and load them into Snowflake via custom scripts. It is most flexible but also requires the most effort.
  • Native BYOL Data Sharing: If you’re already deep in the Salesforce Data Cloud ecosystem and need real-time zero-ETL access, this is the enterprise-grade option.

Getting your Salesforce CRM data into Snowflake opens the door to deeper analytics and insights that Salesforce’s native tools simply can’t deliver on their own. 

Migrating your Salesforce data to Snowflake is often the first step, whether you’re trying to build a unified customer view or feed machine learning models with rich CRM data.

This guide walks you through four practical methods to make that connection so you can pick the one that fits your team’s technical comfort and business needs.

Understanding Salesforce to Snowflake Integration

Before diving into the methods below, it helps to know what each platform does and why they complement each other so well.

What is Salesforce?


Salesforce is a cloud-based customer relationship management (CRM) platform used by businesses of all sizes to manage sales pipelines, customer service and more. It stores a rich trove of operational data (like contact records and custom objects tailored to your business processes) and provides tools for teams to act on that data day to day.

However, Salesforce is fundamentally an operational system. It’s designed to help your team work efficiently, not to run complex analytical queries across millions of rows. That’s what Snowflake is for.

What is Snowflake?


Snowflake is a cloud-native data warehousing platform that separates compute and storage and allows you to scale each independently based on your workload. It supports structured and semi-structured data (JSON, Avro, Parquet) and runs on all major cloud providers (AWS, Azure, GCP). Snowflake is purpose-built for analytical queries.

Snowflake is relevant for Salesforce users because of its ability to consolidate data from many sources into one queryable platform. Once your Salesforce data lives in Snowflake alongside data from your marketing tools and other sources, you can join across all of them with standard SQL, without special connectors or custom code.

4 Methods to Migrate Data from Salesforce to Snowflake

There’s no single best way to move your CRM data into a cloud warehouse. 

Some teams want a zero-maintenance workflow they can configure once and forget about. Others prefer full control over every step of the extraction and loading process. 

Here are four methods, ranging from fully managed to fully manual, that cover the spectrum.

Method 1: Using Hevo Data (Automated, No-Code)

Hevo Data is a no-code data pipeline platform that lets you set up a Salesforce-to-Snowflake connection in just a few minutes, without writing any code or managing infrastructure. 

Hevo handles extraction, schema mapping, incremental loading, and error handling automatically, which makes it a strong choice for teams that want reliable data replication without the engineering overhead.

Here’s how to set it up:

Step 1: Sign up and create a pipeline

Head over to Hevo Data and sign up for a free account (you can also start your free trial here). Once you’re in the dashboard, click the Create Pipeline button.

Step 2: Configure Salesforce as your source

  • Search for Salesforce in the source catalog and select it. 
  • You’ll be asked to authorize your Salesforce account via OAuth. Just log in with your Salesforce credentials and grant Hevo the required permissions. 
  • Next, choose the duration for your historical data load. This controls how far back Hevo pulls data on the initial sync. 
  • Click Continue when you’re ready.

Step 3: Select your Salesforce objects

Hevo will show you a list of all available Salesforce objects:

  • Accounts
  • Contacts
  • Opportunities
  • Leads
  • Cases
  • Any custom objects you’ve created

Select the ones you want to replicate. You can always add or remove objects later without disrupting the pipeline. 

Click Continue.

Step 4: Configure Snowflake as your destination

  • Select Snowflake from the destination list. Fill in your Snowflake connection details. 
  • If you’re unsure about any of these values, Snowflake’s admin console has them under your account settings. 
  • Click Save & Continue.

For detailed guidance, Hevo’s Snowflake destination documentation walks through each field.

Step 5: Set a table prefix and finish

  • Give your tables a prefix (something like sf_ or salesforce_) to keep them organized alongside other data in your warehouse. 
  • Click Continue, and Hevo will begin the initial data sync. Going forward, it will automatically detect changes in Salesforce and replicate them to Snowflake on an ongoing basis.
  • That’s it. You now have a live, continuously updating pipeline from Salesforce to Snowflake. Hevo also monitors pipeline health for you and sends alerts if anything breaks, so you don’t have to babysit it.
  • You can explore the full range of Hevo’s integrations here.

Method 2: Using Salesforce Data Cloud (Zero-Copy Data Sharing)

If your organization uses Salesforce Data Cloud (formerly known as Salesforce CDP), you have access to a native data sharing feature that lets you make Salesforce data available in Snowflake without physically copying it. 

This zero-copy approach means the data stays in Salesforce’s infrastructure, and Snowflake reads it directly through a secure sharing layer.

This method is ideal for enterprises that are already invested in the Salesforce Data Cloud ecosystem and want to avoid data duplication. Here’s how it works at a high level:

Step 1: Configure Snowflake as a Data Share Target

  • Go to Data Share Targets in Salesforce Data Cloud.
  • Click on New and select Snowflake. Enter a label for the connection and provide your Snowflake account URL (base URL only, without any extra path).
  • This step establishes the connection endpoint between Data Cloud and Snowflake. Finally, click Save.

Step 2: Authenticate the connection

  • Choose the supported authentication method for your setup (IDP-based or credentials-based).
  • Provide the required details and complete the authentication flow. This step ensures a secure trust relationship between Salesforce Data Cloud and Snowflake.
  • Verify that the connection status shows Active and authentication shows Successful.
  • Then, click Continue.

Step 3: Configure access in Snowflake

  • Log in to your Snowflake account.
  • Create or configure a user or role that will be used to access the shared data and grant the necessary permissions to import and query the shared database.
  • Ensure your Snowflake environment is ready to consume external data shares. Now, click Save.

Step 4: Select and share data

  • Go back to Salesforce Data Cloud and select the Data Model Objects (DMOs) or Data Lake Objects you want to share. 
  • These objects define what data will be exposed to Snowflake.
  • Review the configuration and confirm the share.
  • Then, click Activate.

Step 5: Query the shared data in Snowflake

  • In Snowflake, create or access the database created from the incoming share. The shared data will appear as read-only tables (no data is physically copied).
  • You can now query, join, and combine this data with your existing Snowflake datasets.
  • And that’s it! You are ready to use zero-copy data sharing between Salesforce Data Cloud and Snowflake.

When to use this method

This is best suited for organizations on Salesforce Enterprise or Unlimited editions with Data Cloud enabled, and who specifically want to avoid duplicating data across platforms. 

If you need to write back to Salesforce or require heavy transformations during the loading process, a pipeline-based method (like Hevo or the manual methods below) may be a better fit.

Method 3: Manual Export Using Salesforce Data Loader + Snowflake COPY INTO

This is the simplest manual approach and is best for one-time migrations or occasional data refreshes. The idea is that you export data from Salesforce as a CSV file, upload it to Snowflake, and load it with a SQL command.

Step 1: Export data from Salesforce

Option A — Salesforce Data Loader (recommended for single objects)

Salesforce Data Loader is a free desktop app for Windows and macOS. If you don’t have it, download it from Salesforce Setup under Data Loader.

  • Open Data Loader and log in with your Salesforce credentials
  • Choose the Export operation
  • Select the object you want to export

  • Optionally, write a SOQL query to filter the data. SOQL is Salesforce’s query language: like SQL, but for Salesforce objects.

SELECT Id, Name, Amount, StageName, CloseDate
FROM Opportunity
WHERE CloseDate > 2025-01-01

  • Choose a local folder to save the output CSV file

Option B — Data Export Service (recommended for bulk and full exports)

Go to Setup, and then Data Export to export all objects at once. Salesforce packages everything into a ZIP file of CSVs and emails you a download link when it’s ready.

Step 2: Clean and prepare the CSV files

Open the exported CSV and check for common issues before loading:

  • Date formats: Snowflake expects dates in YYYY-MM-DD format
  • Null values: Empty cells may be exported as blank strings instead of true nulls
  • Special characters: Commas, quotes, or newlines inside text fields can confuse the CSV parser
  • File size: If the file is very large (millions of rows), split it into smaller chunks. Snowflake loads multiple smaller files faster than one huge file.
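These checks can be scripted with pandas rather than done by hand in a spreadsheet. A minimal sketch of the date-normalization step (the column names, sample rows, and source date format are illustrative, not taken from a real export):

```python
import io
import pandas as pd

# Sample rows standing in for a Salesforce export (columns are illustrative)
raw_csv = io.StringIO(
    "Id,Name,Amount,CloseDate\n"
    "0061,Acme Renewal,1200.50,6/15/2025\n"
    "0062,Globex Upsell,,12/1/2025\n"
)

df = pd.read_csv(raw_csv)

# Normalize dates to the YYYY-MM-DD format Snowflake expects
df["CloseDate"] = pd.to_datetime(
    df["CloseDate"], format="%m/%d/%Y"
).dt.strftime("%Y-%m-%d")

# Serialize back to CSV; the blank Amount stays an empty field,
# which the NULL_IF = ('') setting in the file format turns into NULL
cleaned = df.to_csv(index=False)
```

The same pattern extends to splitting a large frame into smaller files before upload.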

Step 3: Create a Snowflake stage and file format

A stage is a temporary holding area inside Snowflake where you upload files before loading them into a table. A file format tells Snowflake how to read your CSV.

Run this in the Snowflake web console or SnowSQL (Snowflake’s command-line tool):

CREATE OR REPLACE FILE FORMAT sf_csv_format
  TYPE = 'CSV'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1
  NULL_IF = ('', 'NULL');

CREATE OR REPLACE STAGE sf_stage
  FILE_FORMAT = sf_csv_format;

Step 4: Upload the CSV to the stage

Use the PUT command in SnowSQL to upload your local file to the stage:

PUT file:///path/to/your/opportunities.csv @sf_stage;

Note: If your files are already in cloud storage (S3, GCS, or Azure Blob), you can create an external stage pointing to that location instead; there’s no need to upload manually.

Step 5: Create the target table

Define a table in Snowflake that matches the columns in your CSV:

CREATE OR REPLACE TABLE salesforce_opportunities (
  id          VARCHAR,
  name        VARCHAR,
  amount      NUMBER(18,2),
  stage_name  VARCHAR,
  close_date  DATE
);

Step 6: Load the data

Run COPY INTO to move the staged file into your table:

COPY INTO salesforce_opportunities
FROM @sf_stage/opportunities.csv
FILE_FORMAT = sf_csv_format;

Step 7: Verify the load

Spot-check the results with a quick query:

SELECT * FROM salesforce_opportunities LIMIT 20;

Repeating the process

Every time you need fresh data, you’ll repeat this cycle: export from Salesforce – upload to stage – run COPY INTO.

To automate this for recurring needs

  • Salesforce Data Loader CLI supports command-line execution that can be scheduled via cron (Linux/macOS) or Task Scheduler (Windows)
  • Snowflake Tasks is a built-in feature that lets you schedule SQL statements (like COPY INTO) on a recurring basis
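The upload-and-load half of that cycle can also be driven from a short script. Here is a hedged sketch that only builds the PUT and COPY INTO statements (the stage, table, and file format names follow the examples above; actually executing the statements through SnowSQL or the Snowflake Python Connector is left to your scheduler):

```python
from pathlib import Path

def build_load_statements(csv_path, stage="sf_stage",
                          table="salesforce_opportunities",
                          file_format="sf_csv_format"):
    """Build the PUT and COPY INTO statements for one exported CSV."""
    filename = Path(csv_path).name
    put_stmt = f"PUT file://{csv_path} @{stage};"
    copy_stmt = (
        f"COPY INTO {table}\n"
        f"FROM @{stage}/{filename}\n"
        f"FILE_FORMAT = {file_format};"
    )
    return [put_stmt, copy_stmt]

statements = build_load_statements("/data/exports/opportunities.csv")
```

A cron job can then feed each statement to a Snowflake session, keeping the export step and the load step in one place.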

Method 4: Using the Salesforce Bulk API with a Python Script

For teams comfortable with code, this approach gives you full control over what data gets pulled, how it’s transformed and when it runs. Instead of manually exporting CSVs, a Python script handles everything automatically.

Step 1: Set up your environment

Install the required Python libraries:

pip install simple-salesforce snowflake-connector-python pandas

You’ll also need:

  • A Salesforce Connected App with API access enabled. This gives you a consumer key, consumer secret and access token. 
  • Snowflake credentials including your account identifier, warehouse, database and schema name.

Step 2: Extract data from Salesforce

Use simple-salesforce to connect and query data using Bulk API 2.0, which is Salesforce’s modern, high-performance API for large datasets:

from simple_salesforce import Salesforce
import pandas as pd

sf = Salesforce(
    username='your_username',
    password='your_password',
    security_token='your_token'
)

# Query using Bulk API 2.0 -- best for large datasets.
# bulk2 returns results as pages of CSV text, so parse them with pandas.
import io

csv_pages = sf.bulk2.Opportunity.query(
    "SELECT Id, Name, Amount, StageName, CloseDate FROM Opportunity"
)

df = pd.concat(
    (pd.read_csv(io.StringIO(page)) for page in csv_pages),
    ignore_index=True,
)

Why Bulk API 2.0?

It processes records asynchronously (in the background) in large batches, which makes it much faster than a standard query for anything beyond a few thousand records. Use sf.bulk2 (not sf.bulk); Bulk API 2.0 is Salesforce’s currently recommended version.

Step 3: Transform the data (Optional)

With the data loaded into a Pandas DataFrame (a table-like structure in Python), you can clean it before loading:

# Convert date strings to proper date objects
df['CloseDate'] = pd.to_datetime(df['CloseDate']).dt.date

# Drop Salesforce's internal metadata column -- you don't need it
df = df.drop(columns=['attributes'], errors='ignore')

# Replace missing values with empty strings
df = df.fillna('')

Step 4: Load data into Snowflake

Use the Snowflake Python Connector to write the DataFrame directly into a Snowflake table:

import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    user='your_sf_user',
    password='your_sf_password',
    account='your_account',
    warehouse='your_warehouse',
    database='your_database',
    schema='your_schema'
)
success, nchunks, nrows, _ = write_pandas(
    conn, df, 'SALESFORCE_OPPORTUNITIES'
)
print(f"Loaded {nrows} rows in {nchunks} chunks")
conn.close()

write_pandas handles the staging and loading automatically behind the scenes — it uses Snowflake’s PUT and COPY INTO commands internally, so you get bulk-loading performance without managing files yourself.

Step 5: Automate (optional)

To run this script on a schedule, wrap it in one of the following:

  • Cron job (Linux/macOS): a simple scheduler built into the operating system
  • Windows Task Scheduler: the Windows equivalent
  • Apache Airflow DAG: if your team already uses Airflow for workflow orchestration

For incremental loads (only pulling new/changed records instead of everything), modify your SOQL query to filter by LastModifiedDate:

SELECT Id, Name, Amount, StageName, CloseDate
FROM Opportunity
WHERE LastModifiedDate > 2025-01-01T00:00:00Z

This way, each run only fetches records that changed since the last run, which is much faster and cheaper for large orgs.
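One common way to track that cutoff is a small watermark file written after each successful run. A minimal sketch, assuming a local JSON state file (the sync_state.json name and the fallback start date are illustrative):

```python
import json
from pathlib import Path

def build_incremental_query(state_file):
    """Build a SOQL query filtered by the last successful sync time."""
    if state_file.exists():
        watermark = json.loads(state_file.read_text())["last_sync"]
    else:
        # First run: fall back to a fixed starting point
        watermark = "2025-01-01T00:00:00Z"
    return (
        "SELECT Id, Name, Amount, StageName, CloseDate "
        "FROM Opportunity "
        f"WHERE LastModifiedDate > {watermark}"
    )

def save_watermark(state_file, ts):
    """Record the newest LastModifiedDate seen, for the next run."""
    state_file.write_text(json.dumps({"last_sync": ts}))
```

Note that SOQL datetime literals are written without quotes, unlike standard SQL.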

When to use this method

This approach is best when:

  • you need custom transformation logic during extraction
  • you want full visibility into what’s happening at every step
  • your organization already has a Python-based data engineering workflow in place.

Limitations of Manual Methods

Manual approaches (Methods 3 and 4) give you control, but they come with tradeoffs that tend to compound over time.

Maintenance

Every time Salesforce adds a custom field or changes a field’s data type, your workflow breaks. You’ll need to manually update your export queries and Snowflake table schemas to match. Automated tools detect and handle schema changes on your behalf.

Error handling

Bulk API jobs can fail partway through due to rate limits, network timeouts, or Salesforce governor limits. Building retry logic and ensuring data consistency after a partial failure is nontrivial work, and it has to be maintained indefinitely.
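To give a sense of what that involves, here is a minimal exponential-backoff wrapper. This is a hypothetical sketch: production versions also need partial-failure cleanup and should catch specific exception types rather than everything.

```python
import time

def with_retries(fn, max_attempts=4, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

And this is only the retry half; deciding whether a half-loaded table must be rolled back or deduplicated is the harder part.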

Incremental syncing 

Salesforce doesn’t make it easy to identify soft-deleted records or track changes uniformly across all object types. You’ll need to build your own change detection, deduplication and merge logic to keep your Snowflake tables accurate over time.

API limits 

Salesforce enforces daily and concurrent API call limits. Large organizations with complex data models can easily hit these caps during peak hours, and manual scripts need to be carefully throttled and scheduled to avoid eating into the quota that other integrations depend on.

For a one-off migration or a small proof of concept, manual methods work fine. For ongoing, production-grade replication, the engineering time spent building and maintaining a custom solution exceeds the cost of just using an automated tool.

Why Connect Salesforce to Snowflake?

Integrating Salesforce with Snowflake solves specific business problems that surface when CRM data is trapped inside a single platform.

  • Unified cross-platform analytics: Salesforce doesn’t exist in a vacuum. Once your CRM data lands in Snowflake alongside your marketing, product, and finance data, you can start answering questions that no single tool could answer on its own.
  • Historical trend analysis: Salesforce is built for managing deals, not analyzing them over time. Snowflake’s columnar architecture handles multi-year queries around deal size, win rates, and sales cycles in a way Salesforce reporting simply cannot.
  • Reduced load on Salesforce: Complex analytical queries run against Salesforce consume API calls and slow the platform down for your sales team. Moving that workload to Snowflake keeps your CRM fast for the people who use it every day.
  • Advanced and predictive analytics: Snowflake integrates natively with Python, Spark, and ML platforms. That means your data science team can build churn models and segmentation analyses on real CRM data, without working around Salesforce’s limitations.
  • Governance and compliance: Snowflake gives you one place to manage access controls, audit data usage, and enforce compliance policies across all your data sources. That is much easier than trying to govern sensitive CRM data spread across multiple tools.

Simplify Your Data Integration with Hevo

If you’ve read through the manual methods above and thought “that’s a lot of steps,” you’re not wrong. Hevo Data is built to eliminate exactly that complexity.

With Hevo, you get a fully managed, no-code pipeline that keeps your Salesforce data in Snowflake accurate and up to date, automatically. There’s no schema mapping to maintain and no error handling to code. 

Hevo supports over 150 sources, so when you’re ready to bring in data from beyond Salesforce, the same platform handles it all.

Start moving your Salesforce data to Snowflake in minutes. 

Sign up for Hevo’s free trial.

FAQs

How do I send data from Salesforce to Snowflake?

You have several options. Automated ELT tools like Hevo let you set up a pipeline in minutes without writing code. Alternatively, you can use Salesforce Data Cloud’s zero-copy sharing (if you’re on Data Cloud). 
You can also export CSVs through Salesforce Data Loader and load them with Snowflake’s COPY INTO command, or write a custom Python script using the Salesforce Bulk API and the Snowflake Connector.

Is Salesforce built on Snowflake?

No. Salesforce and Snowflake are completely independent platforms built by different companies. Salesforce is a CRM platform, and Snowflake is a cloud data warehouse. However, the two integrate well together, and Salesforce’s Data Cloud product includes native connectors for sharing data with Snowflake.

Can I sync Salesforce data to Snowflake in real time?

Automated tools like Hevo support near-real-time replication with change data capture (CDC), which detects new records, updates, and deletes in Salesforce and applies them to Snowflake within minutes. True real-time streaming (sub-second latency) is possible through Salesforce’s Streaming API, but it requires significant custom development. 

What Salesforce objects can I replicate to Snowflake?

With the right tool or API access, you can replicate virtually any Salesforce object, as well as custom objects your organization has created. Tools like Hevo automatically discover all available objects and let you select which ones to sync.

Is Snowflake just a data warehouse?

Snowflake started as a data warehouse, but it has expanded into a broader data platform. It now supports data lake workloads (through native support for semi-structured formats like JSON and Parquet), data sharing across organizations, a marketplace for third-party data, and features like Snowpark for running Python, Java, and Scala code directly on your data.

            Chirag Agarwal
            Principal CX Engineer, Hevo Data

            Chirag Agarwal is a Customer Experience Manager at Hevo Data with over 7 years of experience in support engineering and data infrastructure. Having spent more than 4 years at Hevo, he has deep hands-on expertise across ETL/ELT workflows, data pipeline architecture, Snowflake, AWS DMS, and Apache Airflow. He leads teams, drives process optimization, and writes from real-world experience on topics ranging from data quality and pipeline cost management to tool comparisons across Fivetran, Airbyte, and more.