Hi, architectural question here. The goal is to move logs from CloudWatch Logs to an on-premises Splunk instance, reliably.
I see different options, each with trade-offs:
- Add a Lambda subscription filter to the log group and use the Splunk Lambda blueprint function, which can push logs. Simple, but there is a risk of throttling if a huge amount of logs is sent.
- A combination of EventBridge Scheduler + Lambda to move data to S3, then a fan-out pattern (SNS, SQS) to the Splunk Lambda logging function. More reliable, but it is getting complex, and I'm not a big fan of schedules.
- Same as above, but using Kinesis to move data to S3 and then fanning out to the Splunk Lambda as before. A constraint here is that Kinesis may not be usable, for certain reasons.
- A CloudWatch subscription filter with a Lambda that pushes data to SNS. The topic is consumed either directly by the Splunk Lambda, or via an SQS queue that the Splunk Lambda listens to. There is a throttling risk and a slightly complex architecture.
- Need to check feasibility, but I was looking at an EventBridge rule that might listen to log groups and move logs to SNS, and then to Splunk; I haven't confirmed this yet.
Any other alternatives? Thanks
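For context on the first option, here is a minimal sketch of the Lambda side of a subscription filter. The forwarding step in the loop is a placeholder for whatever sink is used (Splunk HEC, SNS, etc.); the decoding itself follows the documented CloudWatch Logs subscription payload format, which arrives base64-encoded and gzip-compressed:

```python
import base64
import gzip
import json

def handler(event, context):
    # CloudWatch Logs delivers subscription-filter data as a
    # base64-encoded, gzip-compressed JSON payload under awslogs.data.
    payload = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(payload))

    for log_event in data["logEvents"]:
        # Placeholder: forward each event to Splunk HEC / SNS / SQS here.
        print(log_event["message"])

    return {"records": len(data["logEvents"])}
```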
Topics
Application Integration, Compute, Serverless
Tags
Amazon Simple Notification Service (SNS), AWS Lambda, Amazon CloudWatch Logs, Amazon EventBridge
Language
English
EXPERT
Antonio_Lagrotteria
asked a year ago · 4516 views
4 Answers
1
Is this a batch move or real-time streaming? You mention "move", but I believe subscription filters only monitor for new events from the point of configuration, not historical events (need to confirm).
You could export logs directly to S3 from the console and then import from there. When I have done this in the past, I believe I could only export log groups one or two at a time.
EXPERT
Gary Mclean
answered a year ago
Antonio_Lagrotteria EXPERT
a year ago
The main objective here is reliability with no lost logs, so real time vs. batch is not a constraint. As mentioned, I'm not a big fan of the export solution, and I know there are limitations with it.
Gary Mclean EXPERT
a year ago
Makes perfect sense! Ta
1
Send CloudWatch logs to Kinesis Firehose https://repost.aws/knowledge-center/cloudwatch-logs-stream-to-kinesis
Kinesis Firehose streams can be sent directly to Splunk for ingestion https://aws.amazon.com/kinesis/data-firehose/splunk/ https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureFirehose
EXPERT
Steve_M
answered a year ago
EXPERT
Antonio_Lagrotteria
reviewed 5 months ago
Antonio_Lagrotteria EXPERT
a year ago
As mentioned, Kinesis may not be usable, for certain reasons.
Steve_M EXPERT
a year ago
OK, fair enough. If Kinesis Firehose is considered part of the Kinesis product, then it's out.
I was reading it too literally and considering them separate products; data can be sent from CloudWatch to Firehose without ever touching a Kinesis stream.
1
Why not try Splunk Addon for AWS as outlined here - https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs
Syd
answered a year ago
Antonio_Lagrotteria EXPERT
a year ago
This is a "pull" approach, meaning logs are fetched by Splunk connecting to AWS. As the amount of logs will be huge, the push mechanism is preferred: https://docs.splunk.com/Documentation/AddOns/released/AWS/UseCases
I would recommend considering using Amazon Kinesis Data Firehose to reliably deliver logs from CloudWatch Logs to Splunk.
Some key advantages of this approach:
- Kinesis Data Firehose can automatically deliver log data from CloudWatch Logs to Splunk with minimal code required. It handles log aggregation, compression and transport securely at a large scale.
- Firehose delivers log data reliably to Splunk with options for data transformation along the way if needed. It can also handle high volumes of log data from CloudWatch Logs.
- This avoids the need to build out and manage your own log delivery infrastructure using Lambda, SNS/SQS etc. which comes with additional operational overhead.
- Splunk has documentation on how to configure Firehose for log delivery directly to Splunk for ingestion.
To get started, you can create a Firehose delivery stream that sources data from a CloudWatch Logs group and delivers to your Splunk endpoint. The AWS documentation provides steps to set this up. Let me know if you have any other questions!
EXPERT
Giovanni Lauria
answered 5 months ago