Companies store and analyze their business data to make smarter decisions. That data is often scattered across a multitude of sources and needs to be unified before it can yield insights. Data Pipelines are required to load data from these sources into a Data Warehouse.

Azure Data Factory is an ETL tool by Microsoft that helps users create and manage Data Pipelines and perform ETL processes. In this article, you will learn about Azure Data Factory Schedules and their components. You will also go through the process of creating Azure Data Factory Schedule Triggers, Functions, and Executions.

Load Data Seamlessly Using Hevo’s No Code Data Pipeline

Hevo Data, an Automated No Code Data Pipeline, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo’s wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 150+ Data Sources straight into your Data Warehouse or any Database.

What Makes Hevo’s ETL Process Unique?

  • Faster Insight Generation: Hevo offers near real-time data replication, so you can generate insights in real time and make decisions faster.
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

Don’t just take our word for it—try Hevo and experience why industry leaders like Whatfix say, “We’re extremely happy to have Hevo on our side.”

Try a 14-day free trial to experience seamless data integration.

Get Started with Hevo for Free

Part 1: Creating Azure Data Factory Schedule Triggers

The Schedule Trigger is the most widely used Azure Data Factory trigger and is used to schedule the execution of a Data Pipeline. You can choose the start and finish time for the Azure Data Factory Schedule Trigger to be active, and it will only run a Pipeline within that time period.

It gives you flexibility by allowing you to schedule runs in minute(s), hour(s), day(s), week(s), or month(s) intervals. Azure Data Factory pipelines are executed on a wall-clock schedule using the Azure Data Factory Schedule trigger.

  1. Navigate to the Data Factory Edit tab or the Azure Synapse Integrate tab.
Switch to Edit tab
  2. From the menu, choose Trigger, then New/Edit.
Add New Trigger
  3. On the Add Triggers page, go to Choose Trigger and then +New.
Add triggers - new trigger
  4. Complete the following steps on the New Trigger page:
  • Ensure that the Azure Data Factory Schedule is selected as the Type.
  • Enter the trigger’s start date and time in the Start Date field. By default, it is set to the current datetime in Coordinated Universal Time (UTC).
  • Select the time zone in which the trigger will be created. Keep in mind that the trigger’s first scheduled execution falls after the Start Date, so make sure the Start Date is at least one minute before the execution time.

If the recurrence is set to Day(s) or higher and the time zone observes daylight saving, the Azure Data Factory Schedule trigger time will automatically adjust for the twice-yearly change.

To avoid the daylight saving time change, choose a time zone that does not observe daylight saving, such as UTC. Next, set the trigger’s Recurrence: choose one of the values from the drop-down menu and enter the multiplier in the text box.

For example, if you want the trigger to run once every 15 minutes, choose Minute(s) and type 15 into the text box. If you select “Day(s), Week(s), or Month(s)” from the Recurrence drop-down, you will see “Advanced recurrence options.”

Advanced recurrence options of Day(s), Week(s) or Month(s)
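For reference, the UI choices above map directly onto the trigger’s recurrence definition (the full JSON schema appears in Part 3). Below is a minimal sketch of the 15-minute example, written as a Python dictionary; the startTime value is only an illustrative placeholder.

# Sketch of the recurrence produced by choosing "Minute(s)" with a multiplier of 15.
recurrence = {
    "frequency": "Minute",                # unit picked from the Recurrence drop-down
    "interval": 15,                       # multiplier typed into the text box
    "startTime": "2024-01-01T00:00:00Z",  # illustrative start date, in UTC
    "timeZone": "UTC",
}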

To specify an end date and time for the Azure Data Factory Schedule trigger, select the Specify an End Date option, set Ends On, and then select OK. Each pipeline run has an associated cost.

If you’re testing, you might want to limit the number of times the pipeline is triggered. Even so, make sure that there is enough time between the publish time and the end time for the pipeline to run.

The Azure Data Factory Schedule trigger is activated only after you publish the solution, not when you save the trigger in the UI.

Trigger settings
Trigger Settings for End Date
  5. Check the box next to Activated in the New Trigger window, then click OK. This checkbox allows you to disable the trigger later.
Trigger settings - Next button
  6. Examine the warning notification in the New Trigger window before clicking OK.
Trigger settings - Finish button
  7. Select Publish all to publish the changes. Until you publish the changes, the trigger does not start triggering pipeline runs.
Publish button
  8. Navigate to the Pipeline runs tab on the left, then click Refresh to update the list. The pipeline runs that were triggered by the scheduled trigger will be displayed. Take a look at the Triggered By column values. When you select Trigger Now, the manual trigger run will appear in the list.
  9. Navigate to the Trigger Runs > Schedule view.
Monitor trigger runs
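If you prefer to check trigger runs outside the portal, the same information is available programmatically. Below is a minimal sketch using the azure-mgmt-datafactory Python SDK; the resource group, factory name, and subscription ID are placeholders.

# Sketch: list trigger runs from the last 24 hours, mirroring the Trigger Runs view.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
filters = RunFilterParameters(last_updated_after=now - timedelta(days=1),
                              last_updated_before=now)

for run in client.trigger_runs.query_by_factory("my-rg", "my-factory", filters).value:
    print(run.trigger_name, run.trigger_run_timestamp, run.status)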

Part 2: Creating Azure Data Factory Schedule Function

When we create a time-triggered function, we must specify a schedule in CRON format that determines when the trigger will execute. In Azure Functions, the CRON expression is divided into six parts: second, minute, hour, day, month, and weekday, each separated by a space; a few sample expressions are sketched below. Let’s make a timer-controlled Azure Data Factory Schedule function.
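To make the six-field format concrete, here are a few sample expressions, written as Python string constants for readability:

# NCRONTAB format used by Azure Functions: {second} {minute} {hour} {day} {month} {day-of-week}
EVERY_MINUTE    = "0 */1 * * * *"   # second 0 of every minute
EVERY_HOUR      = "0 0 */1 * * *"   # the top of every hour
WEEKDAYS_AT_9AM = "0 0 9 * * 1-5"   # 09:00 Monday through Friday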

  1. Open your function app and go to Functions, then + Add.
Add a function in the Azure portal.
  2. Choose the Timer trigger template from the drop-down menu.
Select the timer trigger in the Azure portal
  3. Select Create Function after configuring the new Azure Data Factory Schedule trigger with the settings listed in the table below the image.
Screenshot shows the New Function page with the Timer Trigger template selected.
  • Name (suggested value: Default): Defines the name of your timer-triggered function.
  • Schedule (suggested value: 0 */1 * * * *): A six-field CRON expression that schedules your function to run every minute.
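For comparison, the same timer trigger can be defined entirely in code. The sketch below uses the Azure Functions Python v2 programming model; the function name is illustrative, and the schedule string is the six-field NCRONTAB expression suggested above.

import datetime
import logging

import azure.functions as func

app = func.FunctionApp()

# Fires every minute, per the "0 */1 * * * *" NCRONTAB expression.
@app.timer_trigger(schedule="0 */1 * * * *", arg_name="mytimer")
def timer_example(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.info("The timer is past due!")
    utc_now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    logging.info("Timer trigger function ran at %s", utc_now)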

Testing the Function

1. Expand the logs in your function by choosing Code + Test.

Test the timer trigger in the Azure portal

2. Examine the data written to the logs to ensure proper execution.

View the timer trigger in the Azure portal

Now, instead of running every minute, you can adjust the function’s schedule so that it runs once every hour, for example by changing the CRON expression to 0 0 */1 * * *.

Part 3: Azure Data Factory Pipeline Execution

A schedule trigger causes pipelines to run on a wall-clock schedule. This Azure Data Factory Schedule trigger can be configured to work with both periodic and advanced calendars. For example, the trigger can be set to “weekly” or “Monday at 6:00 PM and Thursday at 6:00 PM.”

The scheduling trigger is adaptable because it is agnostic to dataset patterns and does not distinguish between time-series and non-time-series data. When you create an Azure Data Factory Schedule trigger, you use a JSON definition to specify the scheduling and recurrence.

Include a reference to the specific pipeline in the trigger definition to have your Azure Data Factory Schedule trigger start a pipeline run. Pipelines and triggers have a many-to-many relationship: a single pipeline can be started by multiple triggers, and a single trigger can start multiple pipelines.

{
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": <<Minute, Hour, Day, Week, Month>>,
        "interval": <<int>>, // How often to fire
        "startTime": <<datetime>>,
        "endTime": <<datetime>>,
        "timeZone": "UTC",
        "schedule": { // Optional (advanced scheduling specifics)
          "hours": [<<0-23>>],
          "weekDays": [<<Monday-Sunday>>],
          "minutes": [<<0-59>>],
          "monthDays": [<<1-31>>],
          "monthlyOccurrences": [
            {
              "day": <<Monday-Sunday>>,
              "occurrence": <<1-5>>
            }
          ]
        }
      }
    },
    "pipelines": [
      {
        "pipelineReference": {
          "type": "PipelineReference",
          "referenceName": "<Name of your pipeline>"
        },
        "parameters": {
          "<parameter 1 Name>": {
            "type": "Expression",
            "value": "<parameter 1 Value>"
          },
          "<parameter 2 Name>": "<parameter 2 Value>"
        }
      }
    ]
  }
}
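If you would rather create the trigger programmatically than through the UI, the azure-mgmt-datafactory Python SDK exposes the same definition. A minimal sketch follows, assuming placeholder resource names and the “Monday and Thursday at 6:00 PM” advanced schedule mentioned above; in recent SDK versions, starting the trigger is the API-side step that corresponds to publishing it.

from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference, RecurrenceSchedule, ScheduleTrigger,
    ScheduleTriggerRecurrence, TriggerPipelineReference, TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Weekly recurrence firing Mondays and Thursdays at 18:00 UTC.
recurrence = ScheduleTriggerRecurrence(
    frequency="Week",
    interval=1,
    start_time=datetime(2024, 1, 1, tzinfo=timezone.utc),  # placeholder start date
    time_zone="UTC",
    schedule=RecurrenceSchedule(week_days=["Monday", "Thursday"], hours=[18], minutes=[0]),
)

trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=recurrence,
        pipelines=[TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="MyPipeline"),  # placeholder
        )],
    )
)

client.triggers.create_or_update("my-rg", "my-factory", "MyScheduleTrigger", trigger)
client.triggers.begin_start("my-rg", "my-factory", "MyScheduleTrigger").result()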

Conclusion

In this article, you learned about Azure Data Factory Schedules, specifically schedule triggers, timer-triggered functions, and pipeline executions. There are currently three types of triggers supported by the service: the Schedule trigger, a timer-based trigger that initiates a pipeline; the Tumbling window trigger, which operates on a periodic interval while retaining state; and the Event-based trigger, which fires in response to an event.

Hevo Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. Hevo caters to 150+ data sources (including 60+ free sources) and can seamlessly load data to a Data Warehouse in real time. Try a 14-day free trial to explore all features, and check out our unbeatable pricing for the best plan for your needs.

Frequently Asked Questions

1. What is the schedule for ADF?

The schedule in Azure Data Factory refers to how often and at what time pipelines run. You can create triggers that execute pipelines on various schedules, such as daily or weekly, or even on specific events, letting you run data integration and transformation tasks automatically.

2. Is Azure Data Factory real-time?

Azure Data Factory follows a batch model for data movement and transformation, but it can be designed to fit near-real-time scenarios using event-driven triggers and services like Azure Stream Analytics. It isn’t a continuous real-time processing solution, but it is quite responsive to events and changes in data.

3. What is a schedule trigger?

A schedule trigger is a mechanism in Azure Data Factory that executes pipelines at specific time intervals. It lets you automate data processing on a regular schedule, for example hourly, every minute, or at any other supported frequency.

Syeda Famita Amber
Technical Content Writer, Hevo Data

Syeda is a technical content writer with a profound passion for data. She specializes in crafting insightful content on a broad spectrum of subjects, including data analytics, machine learning, artificial intelligence, big data, and business intelligence. Through her work, Syeda aims to simplify complex concepts and trends for data practitioners, making them accessible and engaging for data professionals.