SFTP/FTP to Snowflake: How to Transfer Data Seamlessly


This article walks through a specific data engineering scenario: moving data from an SFTP or FTP server (Secure File Transfer Protocol / File Transfer Protocol) to Snowflake. Before we get into the exact steps of this process, let’s go through a quick overview of the source and destination in this data flow.

Understanding SFTP/FTP and Snowflake Data Warehouse

SFTP/FTP – Network protocols that facilitate data transfer between client and server systems. The key difference between the two is the added security layer in SFTP: it runs over an encrypted SSH connection and authenticates the client (for example, with a username/password or key pair), whereas FTP transfers data and credentials in plain text.

Snowflake – A popular cloud data warehouse that is primarily relational in nature but also supports semi-structured data formats such as ORC, JSON, and XML. Data can be ingested into Snowflake from a variety of sources, including popular cloud storage services like Amazon S3 and Google Cloud Storage (GCS). In this blog, however, we focus on data transfer from an SFTP/FTP server to Snowflake.

The following section describes the steps to develop a custom ETL solution for this task. However, if you decide to opt for a third-party solution to create and manage this data pipeline, Hevo Data is a cloud ETL platform that offers such a service, along with many more data pipeline management capabilities.

Methods to Move Data from SFTP/FTP To Snowflake:

There are two broad approaches to moving data from SFTP/FTP Server to Snowflake:

Method 1: Hand-coding Custom ETL Scripts

Method 2: Using an Automated Data Pipeline Platform – Hevo Data (Official Snowflake Data Integration Partner)

This blog covers the first approach in detail. It also sheds light on the limitations of this approach so that you can take the path that suits your use case best.

SFTP/FTP to Snowflake via Custom ETL method

The broad steps to this approach include:

  1. Download the data file(s) from the SFTP/FTP server to the local client machine
  2. Upload the data file(s) from the client machine to an internal Snowflake stage
  3. Copy the staged data file(s) into a Snowflake table
  4. Transform the data and automate loading via Snowpipe

Step 1: Download from SFTP/FTP server

As touched on earlier, SFTP is much more secure than FTP for data transfer. Hence, this section lists the steps for establishing a connection to an SFTP server that holds the data files and downloading those files to the local machine.

  • Connecting to the SFTP server
    Open up the command prompt and use the following command syntax at the prompt –

     

    sftp <username>@<server_ip>

    On successful execution of the command, you will be prompted for that user’s password. Once it is provided and authenticated, the prompt changes to sftp>, indicating that a connection to the server has been established.

  • Downloading the data files
    Once you are connected to the SFTP server, navigate to the directory that holds the data files. To see the current remote directory, use the pwd command (print working directory); to move to a different directory, use cd (change directory), as in cd <directory_path>. Similarly, lpwd and lcd show and change the current directory on the local machine, respectively. After changing to the required local directory, the following get command downloads a file onto the local machine –

     

    get <filename>           e.g.  get test.csv

    or in case of multiple files to be downloaded together,

    mget <filename_pattern>  e.g.  mget test0*.csv

The steps to download files from an FTP server are quite similar to the steps listed above.

Step 2: Upload to Snowflake stage

Once the data files are available on the local machine, the PUT command is used to upload them to a Snowflake stage. Here’s the syntax for the PUT command –

PUT file://<file_path>/<file_name> @<internal_stage>

File_path – The local directory path to the file. For example, a Windows path would look something like:

C:\db\snowflake\load

File_name – The file name has to be specified as the last part of the path; if a wildcard pattern is used instead of an exact name, all matching files in that folder are uploaded.

Internal stage – The internal Snowflake location (stage) where the uploaded data files reside after a successful PUT. To create an internal stage, the following syntax can be used –

CREATE STAGE IF NOT EXISTS internal_stage;

The above command creates an internal stage called internal_stage. A stage is associated with a file format that identifies the type of data files it will store. This can be defined explicitly, or, if left undefined (as in the example above), it defaults to CSV.
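If you want the stage to declare the structure of the files it will hold, the file format can be specified explicitly at creation time. Here is a minimal sketch, assuming comma-delimited CSV files with a single header row –

CREATE STAGE IF NOT EXISTS internal_stage
  FILE_FORMAT = (TYPE = 'CSV'            -- comma-delimited CSV files
                 FIELD_DELIMITER = ','
                 SKIP_HEADER = 1);       -- ignore the header row in each file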

An internal stage can then be referenced in the PUT command using one of the following three forms –

  • Named internal stage
  • Internal stage for a specific table
  • Internal stage for the current user  

The first option is commonly used and the syntax for that looks like – 

@[namespace.]<internal_stg_name>[/<path>]

namespace – an optional qualifier indicating the database and/or schema in which the stage resides.
path – also optional; it specifies a folder path within the stage. During an upload, files are placed under this path, and during a copy, only the files under this path are considered.
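For illustration, here is a sketch of how each of the three stage types might be referenced in a PUT command; the database, schema, table, and file names used here are assumptions –

-- Named internal stage, optionally qualified with a namespace and a path
PUT file://C:\db\snowflake\load\test.csv @mydb.myschema.internal_stage/2020/03/;

-- Table stage belonging to a specific table (abc_test)
PUT file://C:\db\snowflake\load\test.csv @%abc_test;

-- User stage belonging to the current user
PUT file://C:\db\snowflake\load\test.csv @~;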

There are some optional parameters that can be used with the PUT command. Here are a couple of examples – 

  • AUTO_COMPRESS = TRUE | FALSE

Set this to TRUE if you want Snowflake to compress the data file with gzip during the upload (if the file is not already compressed).

  • OVERWRITE = TRUE | FALSE

Set this to TRUE to overwrite a staged file that has the same name as the file being uploaded. This option is also quite useful to force a re-upload when the staged copy of the file is corrupted or contains bad data.
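Putting these together, a PUT command that uploads the example file to the named stage with both options enabled might look like the following sketch –

PUT file://C:\db\snowflake\load\test.csv @internal_stage
    AUTO_COMPRESS = TRUE   -- gzip the file during upload
    OVERWRITE = TRUE;      -- replace any staged file with the same name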

Step 3: Copy the staged file to Snowflake data warehouse

The staged files are loaded into a Snowflake table using the COPY INTO command. There are a couple of ways to use it, depending on how many files are to be copied and how they are named:

  • Files
    If there are only a few files to be copied and it is easy to simply name them as a list, the following form of the COPY command can be used –

     

    copy into mynewtable from @internal_stage files=('abc1.csv', 'xyz2.csv');
  • Pattern
    If there are more files but they follow a certain naming pattern, the following regex-based pattern-matching form of the COPY command can be used –

     

    copy into mynewtable from @internal_stage pattern='abc[^0-9].csv';

    Snowflake maintains load metadata that records the status of every data file load that was attempted. This makes it easy for Snowflake to detect a duplicate data load and prevent it from happening (though this behavior can be overridden, as shown below).
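If a file genuinely has to be reloaded (for example, after a corrupted file in the stage has been replaced with a corrected copy), the FORCE copy option can be used to bypass this duplicate check. A sketch, reusing the table and file names from the examples above –

copy into mynewtable from @internal_stage files=('abc1.csv') FORCE = TRUE;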

Step 4: Data transformation and Data Loading via Snowpipe

Snowflake provides options to perform some basic data transformations during the data loading phase, i.e. while data is moved from the internal stage into the Snowflake table. This comes in handy as it eliminates the need to store the data in a temporary table, apply transformations there, and then load the results into the final table. The following are some of the data transformation options provided by Snowflake –

  • Column Reordering
    Here is an example of a COPY command using a select statement to reorder the columns of a data file before going ahead with the actual data load

     

    copy into abc_test(ID, type, area, cost)
    from (select a.$1, a.$4, a.$2, a.$3 from @internal_stage a)
    file_format = (format_name = csvtest);

    In the above example, the order of columns in the staged data differs from that of the abc_test table; the SELECT statement therefore maps the columns using positional references ($1, $2, and so on) so that they match the column order of abc_test.

  • Column Omissions

     

    copy into abc_test(ID, type, area, cost)
    from (select b.$1, b.$4, b.$6, b.$8 from @internal_stage b)
    file_format = (format_name = csv1);

    In the above example, only a subset of the columns available in the data file is copied. This subset is chosen (via the positional $1, $2, … references) based on the fields in the destination table and the order of those fields in the staged data.

  • Auto Increment Column
    Let’s say that the abc_test table has been created with the first column (ID) as an auto-increment column

     

    id number autoincrement start 0 increment 1

    The following copy command will copy over the data from the data file while generating number sequences automatically for the ID column

    copy into abc_test (type, cost) 
    from (select $4, $8 from @internal_stage);

    As you can see, the ID column is not mentioned in the above COPY INTO statement; however, as the records are inserted into the table, ID values are generated automatically from the sequence (see the table definition sketch below).
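For reference, a table definition along these lines might look like the following sketch; the data types of the non-ID columns are assumptions –

create table abc_test (
    id   number autoincrement start 0 increment 1,  -- values are generated automatically
    type varchar,
    area varchar,
    cost number
);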

Snowpipe

Snowpipe is a utility provided by Snowflake to automatically detect and ingest staged files as and when they become available. Using this feature, you don’t have to run multiple copy commands manually to keep your data up-to-date.

Here’s the summary of the steps to set up Snowpipe for continuous data loading

  • Define a new pipe object with the required COPY INTO statement – this COPY statement should specify the appropriate source stage, SELECT statement, and destination table, as in the examples above (see the sketch after this list).
  • From your client application, call the public REST endpoints provided by Snowflake (the Snowpipe REST API), passing a list of file names and a reference to the previously created pipe.
  • If the listed files are found in the stage location, they are added to an ingest queue and loaded into the destination table defined in the pipe.
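For illustration, here is a sketch of such a pipe definition, reusing the COPY statement from the column-reordering example above; the pipe name is an assumption –

CREATE PIPE IF NOT EXISTS sftp_load_pipe AS
  COPY INTO abc_test (ID, type, area, cost)
  FROM (SELECT a.$1, a.$4, a.$2, a.$3 FROM @internal_stage a)
  FILE_FORMAT = (FORMAT_NAME = csvtest);

The client application can then call the Snowpipe REST API’s insertFiles endpoint with this pipe name and the list of staged file names to queue those files for loading.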


SFTP/FTP to Snowflake: Limitations of Building Custom Code

Undertaking and managing this data engineering task in-house, via the steps described in the custom ETL method, in a reliable and robust manner may prove more challenging than it appears on paper. Since you are building the ETL from scratch, it also comes at a significant cost in engineering time.

Since each part of the infrastructure is assembled manually, the setup is brittle. Any minor change at the source or destination can break the flow of data and lead to irretrievable data loss.

In addition, data cannot be loaded in real time using the above method. You would need to set up additional cron jobs to run this process at a fixed frequency.

Since you are moving critical data into the warehouse, you will need to proactively monitor both the infrastructure and the data loaded into Snowflake to ensure there are no inconsistencies. You will also need to set up reliable notification and alerting systems to stay on top of this project.

Implementing a solution that takes care of these hassles for you, so that you can shift your focus to generating insights from the data in Snowflake, might be the decision you’d want to make – and this brings us back to Hevo.

Hassle-Free Approach to Move Data from SFTP/FTP to Snowflake

Hevo, an official Snowflake Data Integration partner, is a no-code platform that enables you to move data from SFTP/FTP to Snowflake in real time. Since Hevo is completely managed, your data projects can take off in just a few minutes. Here’s how simple it is to move data from SFTP/FTP to Snowflake:

  • Step 1: Configure and connect your SFTP or FTP data source
  • Step 2: Point to the Snowflake table where you want to load the data

No ETL scripts or cron jobs required – Hevo takes care of delivering data in a secure and reliable fashion to your Snowflake Data Warehouse. Sign up for a 14-day free trial to explore Hevo and start moving your data from SFTP/FTP servers to Snowflake instantly.

Here are more reasons for you to try Hevo:

  • Easy Setup and Implementation – Hevo is a self-serve, managed data integration platform. You can cut down your project timelines drastically as Hevo can help you move data from SFTP/FTP to Snowflake in minutes.
  • 100+ Pre-built Integrations – In addition to SFTP/FTP, Hevo can bring data from hundreds of other data sources into Snowflake in real time. This ensures that Hevo is the perfect companion for your business’s growing data integration needs.
  • Complete Monitoring and Management – If the FTP server or the Snowflake data warehouse is unreachable, Hevo re-attempts the data load at set intervals, ensuring that you always have accurate, up-to-date data in Snowflake.
  • 24×7 Support – To ensure that you get timely help, Hevo has a dedicated support team that is available 24×7 to ensure that you are successful with your project.

Sign up for a 14-day free trial to experience the power and simplicity of Hevo first hand. 

What are your thoughts about moving data from SFTP/FTP to Snowflake? Let us know in the comments. 
