AWS CloudTrail Data Events: Simplified 101

Published: May 23, 2022


AWS CloudTrail enables auditing, security monitoring, and operational troubleshooting by tracking user activity and API usage. CloudTrail logs, monitors, and retains account activity related to actions across your AWS infrastructure, giving you control over storage, analysis, and remediation.

CloudTrail Data Events (also known as “data plane operations”) record the resource operations performed on or within a resource in your AWS account. These are frequently high-volume operations.

This article discusses AWS CloudTrail Data Events in detail and also gives a brief introduction to AWS CloudTrail.


What is AWS CloudTrail?


AWS CloudTrail is an AWS service that helps you manage governance, compliance, and operational and risk auditing of your AWS account. CloudTrail records events for actions taken by a user, role, or AWS service. Actions taken through the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs are all recorded as events.

When you create an AWS account, CloudTrail is automatically enabled. A CloudTrail Event is created whenever something happens in your Amazon Web Services account. In the CloudTrail console, go to Event history to quickly view recent events. Create a trail in your AWS account to keep track of activities and events.

A key aspect of security and operational best practices is visibility into your AWS account activity. CloudTrail lets you view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. To help you analyze and respond to activity in your AWS account, you can identify who or what took which action, what resources were used, when the event occurred, and other details. You can also enable AWS CloudTrail Insights on a trail to help you detect and respond to unusual activity.

You can use the API to integrate CloudTrail into applications, automate trail creation for your company, check the status of trails you create, and control how users view CloudTrail events.


You can view, search, and download the event history of your AWS account for the previous 90 days. You can also use CloudTrail to track, analyze, and respond to changes in your AWS resources. A trail is a configuration that delivers events as log files to an Amazon S3 bucket; with Amazon CloudWatch Logs and Amazon CloudWatch Events, you can also deliver and analyze the events in a trail. You can create a trail with the CloudTrail console, the AWS CLI, or the CloudTrail API.

Simplify ETL Using Hevo’s No-Code Data Pipeline

Hevo Data, a fully-managed Data Aggregation solution, can help you automate, simplify & enrich your aggregation process in a few clicks. With Hevo’s out-of-the-box connectors and blazing-fast Data Pipelines, you can extract & aggregate data from 100+ Data Sources (including 40+ Free Sources) straight into your Data Warehouse, Database, or any destination.

GET STARTED WITH HEVO FOR FREE

Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full access free trial today to experience an entirely automated hassle-free Data Replication!

What are AWS CloudTrail Data Events?

Activity in an AWS account is recorded as an event in CloudTrail. This activity can be performed by a user, role, or service that CloudTrail tracks. CloudTrail events capture API and non-API account activity made through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. Management events, data events, and Insights events are the three types of events that CloudTrail can log. Trails do not log data or Insights events by default. All event types use the CloudTrail JSON log format.

CloudTrail Data Events are records of resource operations performed on or within a resource, also known as data plane operations. They are frequently high-volume operations. The following types of data events are tracked:

  • Object-level API activity on buckets and objects in buckets on Amazon S3 (for example, GetObject, DeleteObject, and PutObject API operations).
  • Activity involving AWS Lambda functions (the Invoke API).
  • Table activity in Amazon DynamoDB at the object level (for example, PutItem, DeleteItem, and UpdateItem API operations).
  • Object-Level API activity with Amazon S3 on Outposts.
  • eth_getBalance and eth_getBlockByNumber JSON-RPC calls on Ethereum nodes in Amazon Managed Blockchain.
  • API activity in Amazon S3 Object Lambda (for example, CompleteMultipartUpload and GetObject API calls).
  • Direct APIs on Amazon Elastic Block Store (EBS) snapshots (PutSnapshotBlock, GetSnapshotBlock, and ListChangedBlocks).
  • Amazon S3 API activity on access points.
  • Amazon DynamoDB Streams API activity.
  • AWS Glue API activity on tables.
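
Beyond the console, the event selector that enables one of these data event types can be assembled programmatically. The sketch below builds the payload for S3 object-level logging; the bucket ARN and helper name are illustrative assumptions, and in practice you would pass the resulting dict to a CloudTrail client (for example, boto3's put_event_selectors) to apply it to a real trail.

```python
# Build a CloudTrail event-selector payload for S3 object-level data events.
# The bucket ARN is a hypothetical placeholder; apply the payload to a real
# trail with boto3: client("cloudtrail").put_event_selectors(...).

def s3_data_event_selector(bucket_arn, read_write="All", include_management=True):
    """Return an event selector dict that logs object-level activity
    under the given S3 bucket/prefix ARN."""
    return {
        "ReadWriteType": read_write,          # "ReadOnly", "WriteOnly", or "All"
        "IncludeManagementEvents": include_management,
        "DataResources": [
            {"Type": "AWS::S3::Object", "Values": [bucket_arn]}
        ],
    }

selector = s3_data_event_selector("arn:aws:s3:::mybucket/prefix")
print(selector["DataResources"][0]["Type"])  # AWS::S3::Object
```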

Understanding AWS CloudTrail Data Events

How to Log CloudTrail Data Events?

Trails do not log CloudTrail Data Events by default. To record them, you must explicitly add each supported resource or resource type for which you want to collect activity to a trail. Additional charges apply for logging data events.

Events logged by your trails are available in Amazon CloudWatch Events. If you configure a trail to log data events for specific S3 objects but not management events, the trail processes and logs only the data events for those S3 objects, and those data events are available in Amazon CloudWatch Events.

Logging CloudTrail Data Events for Amazon S3 Objects

The following example shows how logging works when you enable logging of all data events for an S3 bucket named bucket-1. In this case, the CloudTrail user specified an empty prefix and the option to log both Read and Write data events.

  • An object is uploaded to bucket-1 by a user.
  • PutObject is an Amazon S3 object-level API operation, so CloudTrail records it as a data event. Because the CloudTrail user specified an S3 bucket with an empty prefix, events on any object in that bucket are logged. The trail processes and records the event.
  • An object is uploaded to bucket-2 by another user.
  • The PutObject API operation occurred on an object in an S3 bucket that is not specified in the trail, so the trail does not log the event.

Logging CloudTrail Data Events for Specific S3 Objects

When you configure a trail to log events for specific S3 objects, logging works as in the following example. In this case, the CloudTrail user specified bucket-3 as the S3 bucket name, my-images as the prefix, and the option to log only Write data events.

  • In the bucket, a user deletes an object with the my-images prefix, such as arn:aws:s3:::bucket-3/my-images/example.jpg.
  • The DeleteObject API operation is an Amazon S3 object-level API, so CloudTrail records it as a Write data event. The event occurred on an object matching the S3 bucket and prefix specified in the trail, so the trail processes and records it.
  • Another user deletes an object in the S3 bucket with a different prefix, like arn:aws:s3:::bucket-3/my-videos/example.avi.
  • The event occurred on an object that does not match the prefix specified in your trail, so the trail does not log the event.
  • For the object arn:aws:s3:::bucket-3/my-images/example.jpg, a user utilizes the GetObject API operation.
  • The event occurred on the bucket and prefix specified in the trail, but GetObject is a read-only Amazon S3 object-level API. CloudTrail treats it as a Read data event, and because the trail is not configured to log Read events, the event is not recorded.
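
The decision logic in the walkthroughs above can be sketched as a small function. This is a deliberate simplification of CloudTrail's actual matching rules, and the read-operation list is an assumption for illustration:

```python
# Simplified sketch of whether a trail logs an S3 data event, based on the
# bucket/prefix and Read/Write settings described above. Illustration only,
# not CloudTrail's actual implementation.

READ_OPS = {"GetObject", "HeadObject", "ListObjects"}

def trail_logs_event(trail_bucket, trail_prefix, trail_read, trail_write,
                     object_arn, operation):
    """Return True if a trail configured for the given bucket/prefix and
    Read/Write options would record the operation on object_arn."""
    prefix_arn = f"arn:aws:s3:::{trail_bucket}/{trail_prefix}"
    if not object_arn.startswith(prefix_arn):
        return False                       # different bucket or prefix
    is_read = operation in READ_OPS
    return trail_read if is_read else trail_write

# bucket-3 trail: prefix "my-images", Write-only
print(trail_logs_event("bucket-3", "my-images", False, True,
                       "arn:aws:s3:::bucket-3/my-images/example.jpg",
                       "DeleteObject"))    # True: matching prefix, write event
print(trail_logs_event("bucket-3", "my-images", False, True,
                       "arn:aws:s3:::bucket-3/my-images/example.jpg",
                       "GetObject"))       # False: Read events not logged
```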

If you configure a trail to log all Amazon S3 data events in your AWS account, consider delivering the log files to an Amazon S3 bucket that belongs to another AWS account rather than logging data events for the bucket in which you receive the log files.

Logging CloudTrail Data Events for S3 Objects in Other AWS Accounts

  • You can specify S3 objects that belong to other AWS accounts when configuring your trail to log CloudTrail Data Events. 
  • When an event occurs on a specific object, CloudTrail checks to see if it matches any of the trails in each account. 
  • The trail processes and logs the event for that account if the event matches the trail’s settings. Both API callers and resource owners can typically receive events.
  • If you own an S3 object and include it in your trail, your trail will record events that occur on that object. Because you own the object, your trail records events that occur when other accounts use it.
  • If you specify an S3 object in your trail but the object is owned by another account, your trail logs only the events that occur in your account. Events from other accounts are not logged in your trail.

Logging CloudTrail Data Events for an Amazon S3 Object for Two AWS Accounts

The following example shows how to use CloudTrail Data Events to log events for the same S3 object using two AWS accounts.

  • You want your trail to log CloudTrail Data Events for all objects in your S3 bucket named owner-bucket in your account. The trail is set up by specifying an empty object prefix for the S3 bucket.
  • James has been given access to the S3 bucket through a separate account. James also wants all CloudTrail Data Events for all objects in the same S3 bucket to be logged. He configures his trail and uses the same S3 bucket as before, but with an empty object prefix.
  • With the PutObject API operation, James uploads an object to the S3 bucket.
  • This occurrence occurred in his account and corresponded to the trail’s settings. The event is processed and recorded by James’s trail.
  • Your trail processes and logs the same event because you own the S3 bucket and the event matches the settings for your trail. Because there are now two copies of the event (one in James’s trail and one in yours), CloudTrail charges for both copies of the data event.
  • You upload an object to the S3 bucket.
  • This event occurs in your account and matches your trail’s settings. Your trail processes and records the event.
  • James’s trail does not record the event because it did not occur in his account and he does not own the S3 bucket. Only one copy of this data event is charged by CloudTrail.
Logging CloudTrail Data Events for all Buckets, Including an S3 Bucket used by Two AWS Accounts

The following example shows the logging behavior for trails that collect CloudTrail Data Events in an AWS account when the Select all S3 buckets in your account option is enabled.
  • You want your account’s trail to record CloudTrail Data Events for all S3 buckets. In CloudTrail Data Events, you can configure the trail by selecting Read events, Write events, or both for all current and future S3 buckets.
  • James has access to an S3 bucket in your account through a different account. He wants to record CloudTrail Data Events for the bucket he has access to. He sets up his trail to receive CloudTrail Data Events from all of his S3 buckets.
  • With the PutObject API operation, James uploads an object to the S3 bucket.
  • This occurrence occurred in his account and corresponded to the trail’s settings. The event is processed and recorded by James’s trail.
  • Your trail processes and logs the event because you own the S3 bucket and the event matches your trail’s settings. Because there are now two copies of the event (one in James’s trail and one in yours), CloudTrail charges each account for its copy of the data event.
  • You upload an object to the S3 bucket.
  • This event occurs in your account and matches your trail’s settings. Your trail processes and records the event.
  • James’s trail does not record the event because it did not occur in his account and he does not own the S3 bucket. In your account, CloudTrail only charges for one copy of this data event.
  • Holly, a third user with access to the S3 bucket, uses it to perform a GetObject operation. She’s set up a trail on all of her S3 buckets to record CloudTrail Data Events. CloudTrail records a data event in her trail because she called the API. Despite having access to the bucket, James is not the resource owner, so no events are recorded in his trail this time. 
  • As the resource owner, you receive an event in your trail about Holly’s GetObject operation. Because there are now two copies of the data event (one in Holly’s trail and one in yours), CloudTrail charges both your account and Holly’s account for a copy.
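
The ownership and caller rules in these examples can be condensed into a tiny function (an illustration of the described behavior, not CloudTrail's implementation; the account IDs are made up):

```python
# Sketch of the cross-account logging rules described above: a trail logs an
# S3 data event when the event either happened in the trail's own account
# (the API caller) or that account owns the bucket.

def accounts_logging_event(caller_account, bucket_owner, trail_accounts):
    """Return the set of accounts (from trail_accounts, each with a trail
    watching the bucket) whose trails log the event."""
    return {acct for acct in trail_accounts
            if acct == caller_account or acct == bucket_owner}

# James ("222") uploads to your ("111") bucket; both of you have trails.
print(sorted(accounts_logging_event("222", "111", ["111", "222"])))
# ['111', '222'] -> two copies of the event, each account charged

# You upload to your own bucket.
print(sorted(accounts_logging_event("111", "111", ["111", "222"])))
# ['111'] -> only your trail logs it
```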

Read-only and Write-only Events

You can choose whether you want read-only events, write-only events, or both when configuring your trail to log CloudTrail data and management events.

Read

Read events are API operations that read but do not change your resources. The Amazon EC2 DescribeSecurityGroups and DescribeSubnets API operations, for example, are read-only events. These operations only return information about your Amazon EC2 resources and do not alter your settings.

Write

API operations that modify (or may modify) your resources are included in write events. The Amazon EC2 RunInstances and TerminateInstances API operations, for example, change your instances.

Logging Read and Write Events for Separate Trails

  • The following example demonstrates how to set up trails to split log activity for an account into two S3 buckets: one for read-only events and the other for write-only events.
  • To receive log files, you create a trail and select an S3 bucket named read-only-bucket. You then update the trail to specify that you want Read management and data events.
  • You create a second trail and send log files to an S3 bucket named write-only-bucket. You then update the trail to specify that you want Write management and data events.
  • In your account, the Amazon EC2 DescribeInstances and TerminateInstances API operations take place.
  • The DescribeInstances API operation is a read-only event that corresponds to the first trail’s settings. The trail records the event and sends it to the read-only bucket.
  • The TerminateInstances API operation is a write-only event that corresponds to the second trail’s settings. The trail records the event and sends it to the write-only bucket.
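
The routing in this two-trail setup can be approximated offline. Real CloudTrail events carry a readOnly field; the name-prefix heuristic below is only a fallback illustration:

```python
# Route CloudTrail events to a read-only or write-only stream, mirroring the
# two-trail example above. Real events include a "readOnly" field; the
# prefix heuristic is an assumption used only when that field is absent.

READ_PREFIXES = ("Describe", "Get", "List", "Head")

def is_read_only(event):
    """Classify an event as read-only, preferring the readOnly field."""
    if "readOnly" in event:
        return event["readOnly"]
    return event["eventName"].startswith(READ_PREFIXES)

events = [
    {"eventName": "DescribeInstances"},
    {"eventName": "TerminateInstances"},
]
read_bucket = [e for e in events if is_read_only(e)]
write_bucket = [e for e in events if not is_read_only(e)]
print([e["eventName"] for e in read_bucket])   # ['DescribeInstances']
print([e["eventName"] for e in write_bucket])  # ['TerminateInstances']
```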

Logging Data Events with the AWS Command Line Interface

  • Using the AWS CLI, you can set up your trails to track management and Data Events. Run the get-event-selectors command to see if your trail is logging data and management events. Keep in mind that logging multiple copies of management events will result in charges. For logging Data Events, there is always a fee.
aws cloudtrail get-event-selectors --trail-name TrailName

This command returns the trail’s default settings.

Log Events by Using Basic Event Selectors

  • Basic event selectors are shown in the following example from the get-event-selectors command. When you create a trail with the AWS CLI, all management events are logged by default. Data Events aren’t logged by default in trails.
{
    "EventSelectors": [
        {
            "IncludeManagementEvents": true,
            "DataResources": [],
            "ReadWriteType": "All"
        }
    ],
    "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/TrailName"
}
  • Run the put-event-selectors command to configure your trail to log management and Data Events.
  • The following example shows how to configure your trail to include all management and data events for two S3 objects using basic event selectors. A trail can have up to five event selectors and up to 250 data resources in total.
aws cloudtrail put-event-selectors --trail-name TrailName --event-selectors '[{ "ReadWriteType": "All", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::mybucket/prefix", "arn:aws:s3:::mybucket2/prefix2"] }] }]'

This command returns the event selectors for the trail.

{
    "EventSelectors": [
        {
            "IncludeManagementEvents": true,
            "DataResources": [
                {
                    "Values": [
                        "arn:aws:s3:::mybucket/prefix",
                        "arn:aws:s3:::mybucket2/prefix2"
                    ],
                    "Type": "AWS::S3::Object"
                }
            ],
            "ReadWriteType": "All"
        }
    ],
    "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/TrailName"
}

Log Events by Using Advanced Event Selectors

  • If you’ve chosen to use advanced event selectors, the get-event-selectors command will return something like this. A trail has no advanced event selectors by default.
{
    "AdvancedEventSelectors": [],
    "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/TrailName"
}
  • The example below shows how to use advanced event selectors to log all management events (both readOnly and writeOnly), as well as PutObject and DeleteObject data events for objects in two S3 bucket prefixes. As shown here, advanced event selectors let you choose not only the S3 prefixes by Amazon Resource Name (ARN) but also the names of the specific events you want to log. Advanced event selectors can have up to 500 conditions per trail, including all selector values, and a trail can have up to 250 data resources.
aws cloudtrail put-event-selectors --trail-name TrailName \
--advanced-event-selectors \
'[
  {
    "Name": "Log readOnly and writeOnly management events",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Management"] }
    ]
  },
  {
    "Name": "Log PutObject and DeleteObject events for two S3 prefixes",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
      { "Field": "eventName", "Equals": ["PutObject","DeleteObject"] },
      { "Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::mybucket/prefix","arn:aws:s3:::mybucket2/prefix2"] }
    ]
  }
]'
  • The following outcome displays the trail’s Advanced Event Selectors.
{
  "AdvancedEventSelectors": [
    {
      "Name": "Log readOnly and writeOnly management events",
      "FieldSelectors": [
        {
          "Field": "eventCategory", 
          "Equals": [ "Management" ],
          "StartsWith": [],
          "EndsWith": [],
          "NotEquals": [],
          "NotStartsWith": [],
          "NotEndsWith": []
        }
      ]
    },
    {
      "Name": "Log PutObject and DeleteObject events for two S3 prefixes",
      "FieldSelectors": [
        {
          "Field": "eventCategory", 
          "Equals": [ "Data" ],
          "StartsWith": [],
          "EndsWith": [],
          "NotEquals": [],
          "NotStartsWith": [],
          "NotEndsWith": []
        },
        {
          "Field": "resources.type", 
          "Equals": [ "AWS::S3::Object" ],
          "StartsWith": [],
          "EndsWith": [],
          "NotEquals": [],
          "NotStartsWith": [],
          "NotEndsWith": []
        },
        {
          "Field": "resources.ARN", 
          "Equals": [],
          "StartsWith": [ "arn:aws:s3:::mybucket/prefix","arn:aws:s3:::mybucket2/prefix2" ],
          "EndsWith": [],
          "NotEquals": [],
          "NotStartsWith": [],
          "NotEndsWith": []
        }
      ]
    }
  ],
  "TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/TrailName"
}
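
The way these field selectors match an event can also be sketched offline. This is a simplification that supports only the Equals and StartsWith operators shown above, and it flattens the resource fields that a real CloudTrail event nests:

```python
# Simplified offline evaluation of advanced event selectors, supporting only
# the Equals and StartsWith operators used in the examples above. Real
# CloudTrail events nest resources; they are flattened here for illustration.

def field_matches(event, selector):
    value = str(event.get(selector["Field"], ""))
    if selector.get("Equals"):
        return value in selector["Equals"]
    if selector.get("StartsWith"):
        return any(value.startswith(p) for p in selector["StartsWith"])
    return False

def selector_matches(event, field_selectors):
    """An event matches when every field selector matches."""
    return all(field_matches(event, s) for s in field_selectors)

selectors = [
    {"Field": "eventCategory", "Equals": ["Data"]},
    {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
    {"Field": "eventName", "Equals": ["PutObject", "DeleteObject"]},
    {"Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::mybucket/prefix"]},
]
event = {"eventCategory": "Data", "resources.type": "AWS::S3::Object",
         "eventName": "PutObject",
         "resources.ARN": "arn:aws:s3:::mybucket/prefix/photo.jpg"}
print(selector_matches(event, selectors))  # True
```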

Log all Amazon S3 events for a bucket by using advanced event selectors

  • The example below shows how to configure your trail to log all data events for all Amazon S3 objects in a specific S3 bucket. For S3 events, the value of the resources.type field is AWS::S3::Object.
  • Because the ARN values for S3 objects and S3 buckets differ slightly, you must use the StartsWith operator for resources.ARN so that events on all objects in the bucket are matched.
aws cloudtrail put-event-selectors --trail-name TrailName --region region \
--advanced-event-selectors \
'[
    {
            "Name": "S3EventSelector",
            "FieldSelectors": [
                { "Field": "eventCategory", "Equals": ["Data"] },
                { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
                { "Field": "resources.ARN", "StartsWith": ["arn:partition:s3:::bucket_name/"] }
            ]
        }
]'
  • The following is an example output from the command.
{
    "AdvancedEventSelectors": [
        {
            "Name": "S3EventSelector",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Data"
                    ]
                },
                {
                    "Field": "resources.type",
                    "Equals": [
                        "AWS::S3::Object"
                    ]
                },
                {
                    "Field": "resources.ARN",
                    "StartsWith": [
                        "arn:partition:s3:::bucket_name/"
                    ]
                }
            ]
        }
    ],
    "TrailARN": "arn:aws:cloudtrail:region:account_ID:trail/TrailName"
}

What Makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, no-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Sign up here for a 14-day free trial!

How to Monitor CloudTrail Data Events?

You can use CloudTrail in conjunction with CloudWatch Logs to keep track of your trail logs and receive notifications when certain events occur. The steps to do that are as follows:

  • Step 1: Set up your trail so that log events are sent to CloudWatch Logs.
  • Step 2: Create CloudWatch Logs metric filters to check log events for terms, phrases, or values that match. You can, for example, keep an eye on ConsoleLogin events.
  • Step 3: Assign metrics from CloudWatch to the metric filters.
  • Step 4: Create CloudWatch alarms that are triggered based on specified thresholds and periods. Alarms can be configured to send notifications when they are triggered, allowing you to respond.
  • Step 5: You can also set CloudWatch to act automatically in response to an alarm.

Engineers who build, scale, and manage cloud-based applications on AWS know that their applications and infrastructure will be attacked at some point. However, as applications grow in size and new features are added, securing the entire AWS environment becomes more difficult.

AWS CloudTrail adds visibility and auditability by tracking who did what, where, and when in your AWS environment and recording it in the form of audit logs. As a result, CloudTrail audit logs contain critical information for monitoring AWS account actions, detecting potentially malicious behavior, and surfacing parts of your infrastructure that may not be configured properly.
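
The metric filter from step 2 of the monitoring pipeline above can be approximated offline. A real CloudWatch Logs filter uses a pattern such as { $.eventName = "ConsoleLogin" }; the sketch below just checks the same field against sample log lines:

```python
import json

# Simplified offline sketch of a CloudWatch Logs metric filter that counts
# ConsoleLogin events. The log lines are made up for illustration; a real
# filter would use the pattern { $.eventName = "ConsoleLogin" }.

def count_matches(log_lines, field, value):
    """Count JSON log events whose `field` equals `value`."""
    count = 0
    for line in log_lines:
        event = json.loads(line)
        if event.get(field) == value:
            count += 1
    return count

logs = [
    '{"eventName": "ConsoleLogin", "userIdentity": {"userName": "Alice"}}',
    '{"eventName": "DescribeInstances"}',
]
print(count_matches(logs, "eventName", "ConsoleLogin"))  # 1
```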

Blazing Trails

  • Each instance of activity (such as API requests and user logins) that AWS CloudTrail detects in your environment is recorded as an event, which is a JSON object that specifies the activity’s details, including the time it occurred, who performed it, the resources affected by it, and more. 
  • All of your events are available for up to 90 days after they occur on the AWS CloudTrail Console’s Event History page, where you can view and filter them.
  • For all CloudTrail events, the majority of AWS customers use a single trail. You can, however, create an event stream that lets you filter events in and out. You might want to create an event stream that only contains activity related to a specific AWS service or resource, for example, to reduce your log load.
  • Create a trail, or an event stream, that sends events as log files to an AWS S3 bucket. Your events will be retained according to the policy you specify, can be quickly filtered to find critical issues, and can trigger alerts through Amazon CloudWatch or Amazon Simple Notification Service (SNS).
  • Trails are by default Region agnostic, which means that they will log relevant events from any Region. You can create single-Region trails to focus on the activities of a single Region, but it is recommended to create an all-Region trail to gain more visibility and automatically track data from new Regions as they become available.
  • You can also create an organization trail to keep track of all the logs generated by the AWS accounts in your organization. 
  • AWS Organizations allows you to centrally manage user access permissions across all of your organization’s accounts, and it’s free to set up. When your team needs to manage multiple AWS accounts, organizations are recommended for governing your ever-changing environment and enforcing configurations on your primary and member accounts.

Key CloudTrail Audit Logs to Monitor

  • IAM Policies in AWS are complicated; they have the potential to grant users access to all resources in an account. This means that there’s a good chance that security misconfigurations will allow someone to manipulate your environment and gain access to your assets without your knowledge. 
  • You can get a better picture of user activity and how users interact with your resources by monitoring your audit logs, including whether or not they’re authorized to do so in the first place.
  • Attackers frequently search AWS resources for overly permissive IAM permissions or misconfigurations, including:
    • IAM users/roles
    • EC2 instances
    • S3 buckets
  • For instance, an S3 bucket could have a policy attached to it that grants Read access to all authenticated users, not just those in your account. If an attacker discovered this flaw, they could read all of the data saved in that bucket, potentially exposing sensitive customer or business data.
  • Your CloudTrail logs keep a reliable record of user activity and can provide you with all the information you need to monitor your environment once you know which logs to look at. The following resource-based logs are especially important because they are where the majority of threats will come from:
    • User Accounts
    • Buckets
    • Networking Components
  • When reading event logs, keep an eye out for JSON attributes that can help you spot an attack or misconfiguration. Examples include the call response (responseElements), the API call that was made (eventName), and identifying information such as the user or role that issued the command (various fields under userIdentity).

User Accounts

Using an exposed AWS Secret Access Key and enumerating the key’s permissions is one of the most common ways for an attacker to infiltrate your environment. If the exposed key has extensive management permissions, the attacker can use it to grant themselves even more permissions or to disable your security infrastructure. Monitoring your CloudTrail logs for the following activity can help you catch attackers as they examine their permissions and try to persist in your environment:

Unauthorized Activity

  • The following errorCode and errorMessage appear in logs of unauthorized user activity (note that responseElements is null because the call was denied):
{
   [...],
    "errorCode": "Client.UnauthorizedOperation",
    "errorMessage": "You are not authorized to perform this operation.",
    "requestParameters": {
        "regionSet": {}
    },
    "responseElements": null,
    "requestID": "0a857cb2-90c4-4f09-9624-1149fb27f8a1",
    "eventID": "26fe99a5-8ed5-4923-9cf7-b6cdf96fa5f3",
    "eventType": "AwsApiCall",
    "recipientAccountId": "111111111111"
}
  • A single unauthorized activity log does not always imply a threat. For example, the unauthorized action could have occurred because a user lacked the necessary permissions to view certain AWS console resources.
  • It could also be caused by a service attempting to access a resource to which it does not have permission.
  • If an IAM user receives an authorization error for the first time, it may be worthwhile to investigate what caused the error. It could be the result of an attacker attempting to gain access to your resources through the account or service. 
  • For example, they might try to create a new user or role as a backdoor into your environment, or they might try to expand the IAM policy associated with a user or role they already have access to.
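
A first-pass scan for the unauthorized activity described above can be written against the errorCode field shown in the sample entry. The records below are made up for illustration, and the AccessDenied code is an additional assumption:

```python
# Sketch: scan CloudTrail log records for denied API calls using the
# errorCode field shown in the sample entry above. The records are
# fabricated for illustration.

def unauthorized_events(records):
    """Yield (eventID, errorMessage) for records denied by authorization."""
    for rec in records:
        code = rec.get("errorCode", "")
        if "UnauthorizedOperation" in code or code == "AccessDenied":
            yield rec.get("eventID"), rec.get("errorMessage")

records = [
    {"eventID": "26fe99a5", "errorCode": "Client.UnauthorizedOperation",
     "errorMessage": "You are not authorized to perform this operation."},
    {"eventID": "600e50af"},  # successful call: no errorCode field
]
print(list(unauthorized_events(records)))
```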
AWS GuardDuty Detector Deleted

An attacker may try to disable the Amazon GuardDuty threat detectors running in your AWS account to go undetected when performing unauthorized or malicious actions. Any instances of GuardDuty detector deletion should always be investigated.

Buckets

  • When attempting to breach your environment, attackers frequently target S3 buckets. Due to a security misconfiguration or human error, an attacker could gain access to a bucket’s contents, just as they could with user accounts. You can detect the following bucket enumeration and modification attack techniques by monitoring your CloudTrail logs.
  • If an attacker gains access to an EC2 instance, the first thing they might do is list all of the S3 buckets to which they have access in the relevant instance profile, or try to change a bucket’s access policy entirely. A ListBuckets or PutBucketPolicy call is usually worth investigating because most automated resources already have direct access to all of the buckets they require.
AWS S3 Public Access Block Removed

  • An attempt to remove a public access block from an S3 bucket, for example, is an event that should be investigated. This may be a legitimate user attempting to complete a task by disabling a security control as a debugging mechanism. 
  • An attacker could also be attempting to expose the bucket to the public internet. The DeleteAccountPublicAccessBlock event logs should be investigated as soon as possible.
Networking Components

A misconfigured network resource, such as a VPC, Route Table, Network Gateway, Network Access Control List, or Security Group, may also be used by attackers to gain access to your environment. CloudTrail logs can assist you in detecting the following types of potential network attacks and taking the necessary steps to resolve the problem.

How to Analyze CloudTrail Data Events?

  • Because AWS CloudTrail logs contain valuable information that allows you to monitor activity across your entire AWS environment, it’s critical to know how to interpret them when conducting investigations.
  • Each event is represented as a single JSON object in CloudTrail log files. The access key ID of the AWS identity that acted (userIdentity fields) and the details of the action performed are included in all event types’ entries (eventName and requestParameters). ResponseElements fields are available in both management and data event entries to help you determine whether the action was completed successfully.
  • In the snippet below, you can see that a user named Alice (userName) requested the creation of a new user (eventName: CreateUser) named James (requestParameters).
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AIDAAAAAAAAAAAAAAAAAA",
        "arn": "arn:aws:iam::111111111111:user/Alice",
        "accountId": "111111111111",
        "accessKeyId": "AKIAAAAAAAAAAAAAAAAAA",
        "userName": "Alice"
    },
    "eventTime": "2020-09-21T10:31:20Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.2.3.4",
    "userAgent": "console.amazonaws.com",
    "requestParameters": {
        "userName": "James",
        "tags": []
    },
    "responseElements": {
        "user": {
            "path": "/",
            "userName": "James",
            "userId": "AIDABBBBBBBBBBBBBBBBB",
            "arn": "arn:aws:iam::111111111111:user/James",
            "createDate": "Sep 21, 2020 10:31:20 AM"
        }
    },
    "requestID": "604e7549-4ea4-4185-83b0-acff4e462d27",
    "eventID": "600e50af-0a2c-4352-95a8-7b813c744072",
    "eventType": "AwsApiCall",
    "recipientAccountId": "111111111111"
}
  • You know that the command was executed successfully because the entry returns identification details for the newly created user (responseElements). Otherwise, as described in the AWS documentation, the JSON response would have included an errorCode and errorMessage element.
  • Before you look at the most important CloudTrail logs to keep an eye on, it’s important to understand the different user identity types defined by CloudTrail, as well as how CloudTrail determines who performed which actions.
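To make the success/error distinction concrete, here is a minimal Python helper (the function name and sample records are hypothetical) that classifies a parsed CloudTrail record based on whether an errorCode field is present, as the AWS documentation describes:

```python
def event_outcome(record):
    """Classify a parsed CloudTrail record as a success or an error.

    Per the AWS documentation, a failed call includes errorCode and
    errorMessage fields; a successful one does not.
    """
    if "errorCode" in record:
        return "error: " + record["errorCode"]
    return "success"

# A successful CreateUser returns details of the new user ...
create_user_ok = {
    "eventName": "CreateUser",
    "responseElements": {"user": {"userName": "James"}},
}
# ... while a failed one carries errorCode/errorMessage instead.
create_user_denied = {
    "eventName": "CreateUser",
    "errorCode": "AccessDenied",
    "errorMessage": "User is not authorized to perform iam:CreateUser",
}

print(event_outcome(create_user_ok))      # success
print(event_outcome(create_user_denied))  # error: AccessDenied
```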

CloudTrail Identity Types

A userIdentity element is present in every CloudTrail event log, and it describes the user or service that took the action. The type field in this element specifies the type of user or service that made the request, as well as the level of credentials that the user or service used. UserIdentity types in CloudTrail include the following:

  • Root: Your primary AWS account credentials were used to make the request. If you have an alias for your AWS account, it will appear here instead of your username.
  • IAMUser: An IAM user’s credentials were used to make the request.
  • FederatedUser: A user with temporary security credentials provided through a federation token made the request.
  • AWSAccount: A third-party AWS account submitted the request.
  • AWSService: An AWS service account submitted the request. Service accounts are used by many AWS services to perform automated tasks on your behalf.
  • AssumedRole: The request used temporary credentials obtained via the AssumeRole operation of the AWS Security Token Service (STS). While the majority of these identity types are straightforward, AssumedRole identities hide the user who acted. The following section looks at how AssumeRole calls work in practice, how to figure out who is behind an AssumedRole identity, and how a cunning adversary could use an AssumedRole session to hide their true identity.
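As a quick sketch of working with these identity types, the following Python snippet tallies parsed CloudTrail records by their userIdentity.type field, which is a useful first pass when triaging a large batch of logs. The function name and sample records are illustrative:

```python
from collections import Counter

def count_by_identity_type(records):
    """Tally CloudTrail events by userIdentity.type — a quick way to
    see which identity classes are active in an account."""
    return Counter(
        r.get("userIdentity", {}).get("type", "Unknown") for r in records
    )

events = [
    {"userIdentity": {"type": "IAMUser"}},
    {"userIdentity": {"type": "AssumedRole"}},
    {"userIdentity": {"type": "IAMUser"}},
]
print(count_by_identity_type(events))
# Counter({'IAMUser': 2, 'AssumedRole': 1})
```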

Interpreting the Initial Identity of an ‘AssumedRole’ CloudTrail Log

  • In AWS, it’s common to manage all users in a single account, which you’ll call account A. A related security best practice is to ensure that IAM users are not directly attached to any IAM policies, and instead provide them with temporary credentials to perform actions.
  • This can be accomplished by creating a separate account (for example, account B) that contains IAM roles, each of which has a set of allowed actions defined in an IAM policy. When account A users need to act, they can assume those roles, as in the following AssumeRole event log:
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AIDAAAAAAAAAAAAAAAAAA",
        "arn": "arn:aws:iam::222222222222:user/Alice",
        "accountId": "222222222222",
        "accessKeyId": "AKIAAAAAAAAAAAAAAAAAA",
        "userName": "Alice"
    },
    "eventTime": "2020-09-22T16:23:50Z",
    "eventSource": "sts.amazonaws.com",
    "eventName": "AssumeRole",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.2.3.4",
    "userAgent": "aws-sdk-go/1.16.8 (go1.12.7; linux; amd64)",
    "requestParameters": {
        "roleArn": "arn:aws:iam::111111111111:role/ExampleRole",
        "roleSessionName": "ExampleRoleSession",
        "externalId": "ffffffffffffffffffffffffffffffff",
        "durationSeconds": 3600
    },
    "responseElements": {
        "credentials": {
            "accessKeyId": "ASIADDDDDDDDDDDDDDDD",
            "expiration": "Sep 22, 2020 5:23:50 PM",
            "sessionToken": "d2UncmUgaGlyaW5nIDopIGh0dHBzOi8vd3d3LmRhdGFkb2docS5jb20vY2FyZWVycy8K"
        },
        "assumedRoleUser": {
            "assumedRoleId": "AROAEEEEEEEEEEEEEEEEE:ExampleRoleSession",
            "arn": "arn:aws:sts::111111111111:assumed-role/ExampleRole/ExampleRoleSession"
        }
    },
    "requestID": "4da64d92-6130-4355-86f2-1609a6eb53e1",
    "eventID": "ffef7974-b1a0-4e88-b27f-0b143965f30c",
    "resources": [
        {
            "accountId": "111111111111",
            "type": "AWS::IAM::Role",
            "ARN": "arn:aws:iam::111111111111:role/ExampleRole"
        }
    ],
    "eventType": "AwsApiCall",
    "recipientAccountId": "111111111111",
    "sharedEventID": "4f61c867-6a49-4c41-a267-388c38e99866"
}
  • The AssumeRole command returns an accessKeyId (“ASIADDDDDDDDDDDDDDDD”) that user Alice can then use to perform the role’s delegated actions. In the following event log, you can see that an AssumedRole user uses the access key “ASIADDDDDDDDDDDDDDDD” to perform the DescribeRegions operation; you can thus infer that the action was performed by Alice.
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAEEEEEEEEEEEEEEEEE:ExampleRoleSession",
        "arn": "arn:aws:sts::111111111111:assumed-role/ExampleRole/ExampleRoleSession",
        "accountId": "111111111111",
        "accessKeyId": "ASIADDDDDDDDDDDDDDDD",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAEEEEEEEEEEEEEEEEE",
                "arn": "arn:aws:iam::111111111111:role/ExampleRole",
                "accountId": "111111111111",
                "userName": "ExampleRole"
            },
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2020-09-22T15:58:31Z"
            }
        }
    },
    "eventTime": "2020-09-22T16:26:02Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "DescribeRegions",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.2.3.4",
    "userAgent": "aws-sdk-go/1.16.8 (go1.12.7; linux; amd64)",
    "requestParameters": {
        "regionSet": {}
    },
    "responseElements": null,
    "requestID": "0a857cb2-90c4-4f09-9624-1149fb27f8a1",
    "eventID": "26fe99a5-8ed5-4923-9cf7-b6cdf96fa5f3",
    "eventType": "AwsApiCall",
    "recipientAccountId": "111111111111"
}
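The correlation described above, matching the temporary access key issued in an AssumeRole response to later AssumedRole events, can be sketched in a few lines of Python. This is an illustrative helper over parsed records, not an AWS API:

```python
def attribute_assumed_role_events(records):
    """Map temporary access keys issued by AssumeRole back to the IAM
    user who requested them, then attribute later AssumedRole events
    that used those keys to that user."""
    key_owner = {}
    for r in records:
        if r.get("eventName") == "AssumeRole":
            creds = (r.get("responseElements") or {}).get("credentials", {})
            key = creds.get("accessKeyId")
            user = r.get("userIdentity", {}).get("userName")
            if key:
                key_owner[key] = user

    attributed = []
    for r in records:
        ident = r.get("userIdentity", {})
        if ident.get("type") == "AssumedRole":
            key = ident.get("accessKeyId")
            attributed.append((r.get("eventName"), key_owner.get(key)))
    return attributed

# Simplified versions of the two log entries shown above.
records = [
    {"eventName": "AssumeRole",
     "userIdentity": {"type": "IAMUser", "userName": "Alice"},
     "responseElements": {
         "credentials": {"accessKeyId": "ASIADDDDDDDDDDDDDDDD"}}},
    {"eventName": "DescribeRegions",
     "userIdentity": {"type": "AssumedRole",
                      "accessKeyId": "ASIADDDDDDDDDDDDDDDD"}},
]
print(attribute_assumed_role_events(records))
# [('DescribeRegions', 'Alice')]
```

Note that this only works when both the AssumeRole event and the subsequent action land in logs you can query together; cross-account setups may require aggregating trails from both accounts.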

Controlling AssumedRole Session Names

  • Requiring a specific session name is a good way to keep track of assumed roles and control who uses them. The permissible session names are specified in the trust policy of the assumed role.
  • For example, the following trust policy specifies that users must name their session after their username in order to assume the role. If they do not, the AssumeRole call will fail.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "sts:RoleSessionName": "${aws:username}"
                }
            }
        }
    ]
}
  • You can easily track and filter the actions performed in each assumed role session with this configuration—or catch anyone who fails to provide a valid session name.
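Assuming a trust policy like the one above is in force, a simple log check can flag AssumeRole calls whose roleSessionName does not match the caller’s userName. The helper below is a hypothetical sketch over parsed CloudTrail records; adapt the match rule to whatever your own policy requires:

```python
def session_name_violations(records):
    """Return (userName, roleSessionName) pairs for AssumeRole events
    where the session name does not match the caller's username —
    useful when the trust policy requires them to match."""
    violations = []
    for r in records:
        if r.get("eventName") != "AssumeRole":
            continue
        caller = r.get("userIdentity", {}).get("userName")
        session = (r.get("requestParameters") or {}).get("roleSessionName")
        if caller and session and caller != session:
            violations.append((caller, session))
    return violations

records = [
    # Compliant: session named after the caller.
    {"eventName": "AssumeRole",
     "userIdentity": {"userName": "Alice"},
     "requestParameters": {"roleSessionName": "Alice"}},
    # Suspicious: Mallory hiding behind Alice's session name.
    {"eventName": "AssumeRole",
     "userIdentity": {"userName": "Mallory"},
     "requestParameters": {"roleSessionName": "Alice"}},
]
print(session_name_violations(records))  # [('Mallory', 'Alice')]
```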

Collect and Analyze CloudTrail Logs with Datadog

The following are some of the advantages of using Datadog as your AWS log monitoring platform:

  • Direct integrations with AWS CloudTrail, Amazon S3, Amazon Kinesis Data Firehose, and AWS Lambda that automate field parsing of all AWS CloudTrail logs streaming from your AWS environment using log processing pipelines.
  • Datadog’s Logging without Limits allows for the cost-effective collection and archiving of all CloudTrail logs.
  • Expanded log context to support security and compliance analysis.
  • You can create custom dashboards to get a high-level view of the health and security of your AWS environment once you’ve set up the AWS integration for your services and have CloudTrail logs streaming into Datadog. You can also detect critical security and operational issues using Datadog’s built-in Threat Detection Rules.
Export your CloudTrail Logs to Datadog
  • When you export CloudTrail logs from AWS to Datadog, you can examine and contextualize the events with other observability data from your environment. 
  • Amazon Kinesis Data Firehose, a fully managed AWS service that automates the delivery of your real-time, distributed streaming data to external data storage and analysis repositories, is a simple way to accomplish this.
  • Kinesis Data Firehose offers several benefits for AWS Data Delivery, including near-Real-time Uploading, Serverless Data Transformation Options, and Integrations with the entire AWS service suite.
Explore CloudTrail Logs in Datadog
  • Once your audit logs have arrived in Datadog’s Log Explorer, you can easily filter and search through them to find the most critical logs for your use case. 
  • For example, you might want to look for events in which a user attempted to create or change the permissions of a security group in the key AWS audit logs to watch. To do so, look for CreateSecurityGroup, AuthorizeSecurityGroupIngress, and AuthorizeSecurityGroupEgress events in your logs.
[Image: Amazon CloudTrail logs in Datadog]
  • You can use your audit logs to create high-level Datadog dashboards with custom data visualizations in addition to filtering them for potential problems. This way, you can get a quick top-down view of your incoming logs without having to sift through them endlessly.
[Image: custom visualizations of Amazon CloudTrail data in Datadog]
Detect Security Threats in Real-Time
  • Datadog Cloud SIEM allows you to apply Detection Rules to your entire event stream as it is ingested, helping you catch security threats as they happen. Many of the critical event types are covered by built-in Detection Rules that match the attack techniques enumerated in the MITRE ATT&CK® framework.
  • You can also create your own rules to evaluate events based on your environment’s specific requirements.
  • Datadog creates a Security Signal when an incoming event matches one of your Detection Rules, which can be inspected in the Security Signals explorer. Security Signals give context to each trigger event, such as the username and IP address that started the offending action, the event’s timeline, and standardized guidelines for dealing with the threat.

Conclusion

This article described AWS CloudTrail Data Events in detail, including how to log, monitor, and analyze them. It also gave an overview of AWS CloudTrail.


Hevo Data, a No-code Data Pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of desired destinations with a few clicks. With its strong integration with 100+ sources (including 40+ free sources), Hevo allows you to not only export data from your desired data sources and load it to the destination of your choice, but also transform and enrich your data to make it analysis-ready, so that you can focus on your key business needs and perform insightful analysis using BI tools.

Want to take Hevo for a spin? 

Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Harshitha Balasankula
Former Marketing Content Analyst, Hevo Data

Harshita is a data analysis enthusiast with a keen interest in data, software architecture, and writing technical content. Her passion for contributing to the field drives her to create in-depth articles on diverse topics related to the data industry.

No-code Data Pipeline For Your Data Warehouse