To manage and process data efficiently, organizations today are shifting to Cloud Data Warehouses. Snowflake is one such Data Warehouse solution that can run on AWS, Azure, or GCP. If you have just completed a migration to Snowflake from your On-premises Data Warehouse, you will likely need to know about Snowflake Tasks.

Once the Schema and Data components of your migration are complete, the processes that ran in your previous environment are the final piece to migrate. These workflows primarily consist of Batch Processing loads from several Data Sources on a set schedule. This article goes in depth on how to bring these workflows into the new Snowflake environment.

In this article, you will get to know everything about Snowflake Tasks. A Task in Snowflake is essentially a scheduler, much like the ones found in other databases and operating systems; database schedulers and CRON jobs are the closest analogies. With a Task, you can schedule a single SQL statement or a Stored Procedure in Snowflake.

What is Snowflake?


Snowflake is a fully managed Cloud Data Warehouse solution, built on the infrastructure of a Cloud provider (AWS, Azure, or GCP) of the customer’s choice. Snowflake SQL is consistent with the ANSI standard and includes typical analytics and windowing capabilities. If you come from another database, you’ll notice some differences in Snowflake’s syntax, but also plenty of commonalities.

Key Features of Snowflake

  • Caching: Snowflake persists query results, so when the same query is issued again and the underlying data has not changed, the results are returned rapidly from the cache instead of being recomputed.
  • Query Optimization: Snowflake optimizes queries on its own through clustering and micro-partitioning, so there is little need to worry about manual query tuning.
  • Secure Data Sharing: Using Snowflake Database Tables, Views, and UDFs, data can be shared from one account to another.
  • Support for File Formats: Snowflake supports loading semi-structured data such as JSON, Avro, ORC, Parquet, and XML. It offers a column type called VARIANT that lets you store semi-structured data.
  • Standard and Extended SQL Support: Snowflake provides good ANSI SQL support, as well as advanced SQL features including Merge, Lateral View, Statistical Functions, and many others.
  • Fault Tolerance: In the event of a failure, Snowflake provides strong fault-tolerance capabilities to recover Snowflake objects (tables, views, databases, schemas, and so on).

For further information, check out the official website.

Looking to load your data into Snowflake?

Check out Hevo’s no-code ETL pipeline tool that supports seamless integration of more than 150 sources with Snowflake.

Simplify your data workflows with efficient transformations and real-time updates, all while Hevo handles the complexity of data processing.

Perform effortless Snowflake Integrations with Hevo

What are Snowflake Tasks?

In simple terms, Snowflake Tasks are schedulers that let you schedule a single SQL query or Stored Procedure. When paired with Streams to create an end-to-end Data Pipeline, a Task can be quite beneficial.

CRON and NON-CRON variant scheduling mechanisms are available in the Snowflake Tasks Engine. If you’re a Linux user, you’ll recognize the CRON variant’s syntax.

At any one time, Snowflake ensures that only one instance of a task with a schedule (i.e. a standalone task or the root task in a tree of tasks) is executed. If a task is still running when its next scheduled execution time arrives, that scheduled run is skipped.

A task is always created in a suspended state. As a result, you need to resume it manually before it starts running. To resume a task, run the following command:

ALTER TASK <TASK_NAME> RESUME;

To avoid Unexpected Task Executions owing to daylight saving time, either: 

  • Do not schedule tasks to run between 1 AM and 3 AM (daily, or on days of the week that include Sundays), or
  • To accommodate the time shift due to daylight saving time, manually adjust the Cron Expression twice a year for tasks scheduled during those hours.

DDL Operations for Snowflake Tasks 

Snowflake Tasks are managed with SQL-based DDL operations.

How to Create a Task?

You can create a new task, or replace an existing one, in the current/specified schema. This command also supports the following variation, which creates a clone of an existing task:

CREATE TASK … CLONE 

Syntax:

CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <name>
  [ { WAREHOUSE = <string> } | { USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = <string> } ]
  [ SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }' ]
  [ ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE ]
  [ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]
  [ USER_TASK_TIMEOUT_MS = <num> ]
  [ COPY GRANTS ]
  [ COMMENT = '<string_literal>' ]
  [ AFTER <string> ]
[ WHEN <boolean_expr> ]
AS
  <sql>
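For instance, a minimal task definition might look like the sketch below (the warehouse, task, and table names are placeholders rather than objects used later in this article):

-- A minimal sketch: run a single INSERT every 60 minutes on an assumed warehouse MY_WH
CREATE OR REPLACE TASK demo_task
  WAREHOUSE = MY_WH
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO demo_table (run_at) VALUES (CURRENT_TIMESTAMP);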

How to Describe a Task?

The DESCRIBE command describes the properties of a task. Its short form and full form are written as:

Syntax: 

DESC[RIBE] TASK <name>

How to Alter a Task?

The ALTER command modifies the properties of an existing task. Its variations are:

Syntax: 

ALTER TASK [ IF EXISTS ] <name> RESUME | SUSPEND

ALTER TASK [ IF EXISTS ] <name> REMOVE AFTER <string> | ADD AFTER <string>

ALTER TASK [ IF EXISTS ] <name> SET
  [ WAREHOUSE = <string> ]
  [ SCHEDULE = '{ <number> MINUTE | USING CRON <expr> <time_zone> }' ]
  [ ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE ]
  [ USER_TASK_TIMEOUT_MS = <num> ]
  [ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]

ALTER TASK [ IF EXISTS ] <name> UNSET [ <session_parameter> [ , <session_parameter> ... ] ] [ , ... ]

ALTER TASK [ IF EXISTS ] <name> SET TAG <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' ... ]

ALTER TASK [ IF EXISTS ] <name> UNSET TAG <tag_name> [ , <tag_name> ... ]

ALTER TASK [ IF EXISTS ] <name> MODIFY AS <sql>

ALTER TASK [ IF EXISTS ] <name> MODIFY WHEN <boolean_expr>
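As an illustration, changing an existing task might look like the sketch below (the task and warehouse names are placeholders; a task generally has to be suspended before its properties can be changed):

-- Suspend, change properties, then resume so the new settings take effect
ALTER TASK demo_task SUSPEND;
ALTER TASK demo_task SET SCHEDULE = 'USING CRON 0 6 * * * UTC';
ALTER TASK demo_task SET WAREHOUSE = MY_OTHER_WH;
ALTER TASK demo_task RESUME;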

How to Drop a Task?

Removes a task from the current/specified schema.

Syntax: 

DROP TASK [ IF EXISTS ] <name>

How to Schedule a Task?

You can schedule a task as a Cron job, as in the sample below, which is scheduled to run at 08:05 PM UTC every day:

Snowflake Tasks: Schedule
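The screenshot is not reproduced here, but a task scheduled for 08:05 PM UTC every day might look roughly like this sketch (the warehouse, task, and table names are illustrative assumptions):

-- CRON fields: minute 5, hour 20, every day, every month, every day of week, in UTC
CREATE OR REPLACE TASK nightly_task
  WAREHOUSE = MY_WH
  SCHEDULE = 'USING CRON 5 20 * * * UTC'
AS
  INSERT INTO demo_table (run_at) VALUES (CURRENT_TIMESTAMP);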

If you are not familiar with Cron jobs, the meaning of each * is shown below.

# __________ minute (0-59)
# | ________ hour (0-23)
# | | ______ day of month (1-31, or L)
# | | | ____ month (1-12, JAN-DEC)
# | | | | _ day of week (0-6, SUN-SAT, or L)
# | | | | |
# | | | | |
 * * * * *

What is a Tree of Snowflake Tasks? 

You can establish a tree-like task structure in Snowflake. A tree has exactly one Root Task, and every child task is tied to its predecessor through a dependency (defined with AFTER). Only the Root Task carries a schedule; all child tasks run in the order that their dependencies define.

Snowflake Tasks: tree

The overall number of tasks in a single task tree is limited to 1,000 (including the Root Task). Each task in the tree can have only one predecessor; however, a task can have up to 100 child tasks (i.e. other tasks that identify it as their predecessor).

In a single tree of tasks, all tasks must have the same task owner and must live in the same Database and Schema.

What is Snowflake Task Overlapping? 

A specific tree of tasks can only run one instance at a time in Snowflake. For example, suppose TASK 1 is set to run every 5 minutes and has two child tasks.

If one of the child tasks does not finish in time and overlaps the following run (after the next 5 minutes), Snowflake ensures that at least that next scheduled run is skipped.

This is the default behavior. However, it can be changed by setting the ALLOW_OVERLAPPING_EXECUTION parameter to TRUE on the Root Task, which lets the next run start even if the previous run of the tree has not finished yet.

This is not advised unless you are certain that a run has no dependency on, or conflict with, the prior run. The parameter can be set when the Root Task is created or altered, as sketched below.
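A sketch of both options, using placeholder names, follows:

-- Allow the next run of the tree to start even if the previous run has not finished
-- (warehouse, task, and procedure names are assumptions for this sketch)
CREATE OR REPLACE TASK root_task
  WAREHOUSE = MY_WH
  SCHEDULE = '5 MINUTE'
  ALLOW_OVERLAPPING_EXECUTION = TRUE
AS
  CALL my_root_procedure();

-- Or flip the flag on an existing (suspended) root task
ALTER TASK root_task SET ALLOW_OVERLAPPING_EXECUTION = TRUE;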

To better match a Task Tree with the Root Task’s timetable, do the following:

  • Increase the Scheduling Time between runs of the root task if possible.
  • Consider increasing the size of the Warehouse that runs the Task Tree’s large or complicated SQL Statements or Stored Procedures.
  • Examine the SQL Statements or Stored Procedures that each task runs, and check whether the code can be modified to take advantage of parallel processing.

Understanding Snowflake Tasks Versioning for Runs

When a task tree runs, Snowflake takes a snapshot (a version) of the properties of every task in the tree. Until that run completes, the Root Task operates under that same version of code and settings.

If you try to change the code of a child task that has not yet run, Snowflake will not let you unless the Root Task is suspended first.

However, if you edit the Root Task itself, the change is allowed but the task is then suspended, canceling all future scheduled runs of the Root Task. The Root Task must be resumed for the changes to take effect.

One exception: if the definition of a Stored Procedure called by a task changes while the tree of tasks is running, the new code may already be executed when the Stored Procedure is called by the task in the current run.

What are the Use Cases/Scenarios for Snowflake Tasks?

How to Run a Task at a Pre-determined Frequency?

The task can run at a fixed interval, for example every 1 minute or every 30 minutes.

Step A: Create a Task and Check

Create a task that writes the current timestamp and a constant value into a table every minute:

Snowflake Tasks: Create
Snowflake Tasks: Show
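The screenshots are not reproduced here; a sketch of what the statements likely look like follows. The warehouse name, the table and column names (USER_TABLE, INSERT_TS, USER_NAME), and the constant value are assumptions made for this sketch:

-- Insert the current timestamp and a constant value every minute
-- (warehouse, table, and column names are assumptions)
CREATE OR REPLACE TASK load_user_table
  WAREHOUSE = MY_WH
  SCHEDULE = '1 MINUTE'
AS
  INSERT INTO USER_TABLE (INSERT_TS, USER_NAME) VALUES (CURRENT_TIMESTAMP, 'from_task');

-- Confirm the task exists and note its state (it starts out suspended)
SHOW TASKS LIKE 'load_user_table';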

The task will initially be in “Suspended” mode, meaning it will not start until you “Resume” it.

Step B: Resume Task and Check

The timer starts as soon as you resume the task, and the task begins running on its schedule. In this scenario, it inserts a record into the table after one minute and repeats the procedure every minute until you “Suspend” it again.

-- resume / suspend
ALTER TASK load_user_table RESUME;
Snowflake Tasks: resume

You will see that one new row is added to the table every minute. Now “Suspend” the task to stop it from adding more rows to the table.

-- resume / suspend
ALTER TASK load_user_table SUSPEND;

The task will now stay on hold until you resume it.

How to Run a Task as a Cron Job [Pre-determined Time]?

Let’s explain this type of task with an example. Create a task that runs at 8:50 AM UTC every day and writes the current timestamp and a constant value into a table once each day.

Step A: Create a Task and Check

Snowflake Tasks: create cron job
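The screenshot is not reproduced here; the statement likely looks something like the sketch below (the warehouse, the table and column names, and the constant value are assumptions):

-- Runs once a day at 08:50 UTC
CREATE OR REPLACE TASK load_user_table_by_cron_job
  WAREHOUSE = MY_WH
  SCHEDULE = 'USING CRON 50 8 * * * UTC'
AS
  INSERT INTO USER_TABLE (INSERT_TS, USER_NAME) VALUES (CURRENT_TIMESTAMP, 'from_cron_job');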

See if the Task was successfully created.

Snowflake Tasks: show

Step B: Resume the Task and Check

Keep in mind that the task is always created in a suspended state.
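Resume it so that it actually starts running; a one-line sketch using the task name from this example:

ALTER TASK load_user_table_by_cron_job RESUME;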

Now, every day at 8:50 UTC, one record will be added.

Snowflake Tasks: cron job output

Once the task is resumed, it adds one entry into the table every day at 8:50 AM UTC. You can also use this method to schedule a Stored Procedure.

How to Execute Stored Procedures in Tasks?

Here, you need to make a Task that invokes a Stored Procedure.

Step A: Creating Stored Procedures

Inside this Stored Procedure, use the same “insert into table” statement. In practice, your procedures will be more advanced. Snowflake Stored Procedures can be written in JavaScript.

Snowflake Tasks: stored procedure
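The screenshot is not reproduced here; a JavaScript procedure that performs the same insert might look like the sketch below (the procedure name and the table/column names are assumptions):

CREATE OR REPLACE PROCEDURE insert_from_procedure()
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
  // Insert the current timestamp and a constant value (table and column names are assumed)
  var stmt = snowflake.createStatement({
    sqlText: "INSERT INTO USER_TABLE (INSERT_TS, USER_NAME) VALUES (CURRENT_TIMESTAMP, 'from_stored_procedure')"
  });
  stmt.execute();
  return 'Inserted one row';
$$;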

Step B: Create a Task and Check

Snowflake Tasks: create task
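Again as a sketch, a task that calls the procedure on a schedule might look like this (the task name, warehouse, and schedule are illustrative assumptions):

CREATE OR REPLACE TASK load_user_table_from_sp
  WAREHOUSE = MY_WH
  SCHEDULE = '1 MINUTE'
AS
  CALL insert_from_procedure();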

See if the Task was successfully created.

Step C: Resume Task and Check

Keep in mind that the task is always created in a suspended state.

Check the USER_NAME field; you should see “from_stored_procedure”.
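Resuming the task and then checking the table might look like this (the table and column names carry over the assumptions from the earlier sketches):

ALTER TASK load_user_table_from_sp RESUME;

-- After the next scheduled run, the new row should carry the constant set in the procedure
SELECT INSERT_TS, USER_NAME
FROM USER_TABLE
WHERE USER_NAME = 'from_stored_procedure';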

Snowflake Tasks: resume and check

How to Execute a Tree of Snowflake Tasks?

Let’s take the same example as above. Create a basic task tree by defining the existing “load_user_table_by_cron_job” task as the predecessor, which, when it runs successfully, triggers the new “after_cron_job” task. The new task updates our “user” table with the current timestamp and a constant AFTER CRON JOB value.

Step A: Create a Task and Check

Snowflake Tasks: create after cron
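The screenshot is not reproduced here; the child task likely looks something like the sketch below. A child task has no SCHEDULE of its own; it is triggered by the AFTER clause (the warehouse and the table/column names are assumptions):

CREATE OR REPLACE TASK after_cron_job
  WAREHOUSE = MY_WH
  AFTER load_user_table_by_cron_job
AS
  INSERT INTO USER_TABLE (INSERT_TS, USER_NAME) VALUES (CURRENT_TIMESTAMP, 'AFTER CRON JOB');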

See if the Task was successfully created.

Step B: Resume Task and Check

Keep in mind that the task is always created in a suspended state.
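Child tasks also start out suspended; resume the child and then (re)resume the root so the tree can run end to end, as in this sketch:

ALTER TASK after_cron_job RESUME;
ALTER TASK load_user_table_by_cron_job RESUME;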

Snowflake Tasks: output

One record will be inserted daily after 8:50 UTC (following the execution of “load_user_table_by_cron_job”).

Row No. 18 was inserted on 23 May after the task “load_user_table_by_cron_job” was completed, and Row No. 20 was inserted on 24 May after the task “load_user_table_by_cron_job” was completed.

Snowflake Tasks Security

1) Access Control Privileges

A) Creating Tasks

To create tasks, you’ll need a role with at least the following privileges:

Snowflake Tasks: creating tasks objects

B) Owning Tasks

The Task Owner (i.e. the role with the OWNERSHIP privilege on the task) must have the following privileges after a Task is created:

Snowflake Tasks: owning tasks objects

In addition, the role must have the necessary permissions to execute the task’s SQL Statement.

C) Suspending or Resuming Tasks

In addition to the Task Owner, a role with the OPERATE privilege on the task can suspend or resume it. This role must also have the USAGE privilege on the Database and Schema containing the task. There are no other requirements.

When a task is resumed, Snowflake verifies that the Task Owner role has the privileges listed under Owning Tasks above.

2) Assigning Task Administrator 

For ease of use, you can define a custom role (for example, taskadmin) and assign the EXECUTE TASK privilege to it. Any role that can grant privileges (for example, SECURITYADMIN or any role with the MANAGE GRANTS privilege) can then grant this custom role to any task owner role, allowing those roles to alter their own tasks.

To remove a task owner role’s ability to execute tasks, you only need to revoke this custom role from it. If you choose not to create this custom role, an account administrator must instead revoke the EXECUTE TASK privilege from the task owner role.

For example, create a custom role called taskadmin and grant it the EXECUTE TASK privilege. Assign the taskadmin role to the myrole Task Owner role:

use role securityadmin;

create role taskadmin;

-- set the active role to ACCOUNTADMIN before granting the account-level privileges to the new role
use role accountadmin;

grant execute task, execute managed task on account to role taskadmin;

-- set the active role to SECURITYADMIN to show that this role can grant a role to another role
use role securityadmin;

grant role taskadmin to role myrole;

3) Dropping a Task Owner Role

When a task’s owner role (the role with the OWNERSHIP privilege on the task) is dropped, the task is “re-possessed” by the role that dropped the owner role. This guarantees that ownership is transferred to a role closer to the top of the role hierarchy.

When a task is re-possessed, it is automatically suspended: any executions already in progress run to completion, but no further executions are scheduled until the task is explicitly resumed by the new owner.

The goal is to prevent a user who has access to a specific role from leaving behind tasks that execute with higher permissions when the role is deleted.

If the role that a running task is executing under is dropped while the task is running, the task completes processing under the dropped role.

Conclusion

This article has walked you through Snowflake Tasks to help you improve your overall decision-making and get the most out of your data. If you want to export data from a source of your choice into a destination like Snowflake, Hevo Data is the right choice for you!

Share your experience of learning about Snowflake Tasks! Let us know in the comments section below!

Harsh Varshney
Research Analyst, Hevo Data

Harsh is a data enthusiast with over 2.5 years of experience in research analysis and software development. He is passionate about translating complex technical concepts into clear and engaging content. His expertise in data integration and infrastructure shines through his 100+ published articles, helping data practitioners solve challenges related to data engineering.
