Wednesday, August 28, 2024

Unlocking Event-Driven Architectures with AWS EventBridge: Use Cases and Benefits

 

Introduction

In today’s fast-paced digital landscape, designing systems that are responsive, scalable, and flexible is more critical than ever. AWS EventBridge, a serverless event bus service provided by Amazon Web Services (AWS), empowers developers to build robust event-driven architectures. This article delves into what AWS EventBridge is, its key features, and practical use cases to help you leverage it effectively in your applications.

What is AWS EventBridge?

AWS EventBridge is a serverless event bus service that facilitates the creation, management, and routing of events between applications and services. It allows you to build event-driven architectures that can react to changes in real-time and automate workflows efficiently.

Key Features:

  • Serverless Architecture: No infrastructure management required. AWS handles scalability, availability, and maintenance.
  • Event Routing: Directs events to various AWS services and custom endpoints based on predefined rules.
  • Event Filtering: Allows filtering of incoming events to route only relevant data to consumers.
  • Schema Registry: Automatically captures and maintains schemas for events, enabling easy integration and validation.

Core Components

  • Event Buses: Central channels where events are published. You can use the default event bus or create custom event buses.
  • Event Sources: The origin of events. Sources can include AWS services, custom applications, or external systems.
  • Event Rules: Define patterns to match incoming events and specify targets for routing.
  • Targets: AWS services or custom endpoints where events are sent. Targets can include Lambda functions, Step Functions, SNS topics, and more.
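The rule-matching behavior described above can be sketched in plain Python. The pattern below mimics EventBridge's event-pattern syntax (allowed values listed as arrays, nested matching inside `detail`); the field names and values are illustrative, not tied to any real service:

```python
# Minimal sketch of EventBridge-style event pattern matching.
# A pattern matches when every field it names is present in the
# event and the event's value is among the pattern's allowed values.

def matches(pattern: dict, event: dict) -> bool:
    for key, allowed in pattern.items():
        if key not in event:
            return False
        value = event[key]
        if isinstance(allowed, dict):
            # Nested pattern, e.g. matching inside "detail"
            if not isinstance(value, dict) or not matches(allowed, value):
                return False
        else:
            # EventBridge patterns list allowed values as arrays
            if value not in allowed:
                return False
    return True

# Hypothetical rule: route only order events from the orders service
pattern = {
    "source": ["com.example.orders"],
    "detail-type": ["OrderPlaced"],
    "detail": {"currency": ["EUR", "USD"]},
}

event = {
    "source": "com.example.orders",
    "detail-type": "OrderPlaced",
    "detail": {"orderId": "1234", "currency": "EUR"},
}

print(matches(pattern, event))            # True
print(matches(pattern, {"source": "x"}))  # False
```

A real EventBridge rule expresses the same pattern as JSON and supports additional operators (prefix, numeric, anything-but), but the exact-value semantics are as above.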

Use Cases

1. Microservices Communication

EventBridge simplifies communication between microservices by using events to decouple services. For instance, in an e-commerce platform, an order service can emit an event when an order is placed. This event can trigger downstream services like inventory management, payment processing, and shipping updates without tightly coupling them.

Example:

  • Order Service: Emits an OrderPlaced event.
  • Inventory Service: Listens to the OrderPlaced event and updates stock levels.
  • Shipping Service: Receives the event and starts the shipping process.
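The decoupling in this example can be illustrated with a tiny in-process event bus in Python. This is a local stand-in for EventBridge, not its API: the OrderPlaced event and service roles come from the example above, everything else is invented:

```python
from collections import defaultdict

# Toy event bus: subscribers register per detail-type, and publishers
# never reference subscribers directly -- mirroring how EventBridge
# decouples the order service from its consumers.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, detail_type, handler):
        self._subscribers[detail_type].append(handler)

    def publish(self, detail_type, detail):
        for handler in self._subscribers[detail_type]:
            handler(detail)

bus = EventBus()
stock = {"sku-1": 10}
shipments = []

def update_inventory(detail):
    # Inventory service: decrement stock on OrderPlaced
    stock[detail["sku"]] -= detail["qty"]

def start_shipping(detail):
    # Shipping service: start a shipment on OrderPlaced
    shipments.append(detail["orderId"])

bus.subscribe("OrderPlaced", update_inventory)
bus.subscribe("OrderPlaced", start_shipping)

# The order service emits the event; it knows nothing about the consumers
bus.publish("OrderPlaced", {"orderId": "o-1", "sku": "sku-1", "qty": 2})

print(stock["sku-1"])  # 8
print(shipments)       # ['o-1']
```

Adding a new consumer (e.g. payment processing) is just another `subscribe` call; the order service is unchanged, which is the point of the pattern.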

2. Real-Time Data Processing

EventBridge can be used to process and analyze data in real time. For example, a website might capture user interactions and publish events to EventBridge. These events can then be processed by a Lambda function or a data analytics service to generate real-time insights.

Example:

  • Website Interaction: Emits user activity events.
  • Analytics Service: Processes these events to generate real-time user behavior reports.

3. Automated Workflow Orchestration

By using EventBridge, you can automate complex workflows that require multiple steps and interactions between various AWS services. For example, you might automate a workflow for processing a file upload, including validation, transformation, and storage.

Example:

  • S3 Bucket: Emits an event when a file is uploaded.
  • Lambda Function: Validates and transforms the file.
  • S3 Bucket (New Location): Stores the processed file.
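A rough sketch of the validate → transform → store steps (pure Python, no AWS; the function names, validation rule, and transformation are invented for illustration):

```python
# Each step is an isolated handler, as it would be behind EventBridge:
# an S3 "object created" event triggers validation, then transformation,
# then storage of the result in a new location.

def validate(payload: bytes) -> bytes:
    if not payload:
        raise ValueError("empty upload")
    return payload

def transform(payload: bytes) -> bytes:
    # Hypothetical transformation: normalize text to upper-case
    return payload.upper()

processed_bucket = {}  # stand-in for the destination S3 bucket

def handle_upload(key: str, payload: bytes) -> None:
    processed_bucket[key] = transform(validate(payload))

handle_upload("reports/a.txt", b"hello eventbridge")
print(processed_bucket["reports/a.txt"])  # b'HELLO EVENTBRIDGE'
```

In the real workflow each function would be a separate Lambda, and the hand-off between steps would be an EventBridge rule (or Step Functions) rather than direct calls.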

4. Third-Party Integrations

EventBridge’s ability to integrate with SaaS applications allows you to create seamless connections with third-party services. For instance, you can trigger workflows based on events from external services like Zendesk, PagerDuty, or Salesforce.

Example:

  • Zendesk: Emits an event when a new support ticket is created.
  • Slack: Receives the event and posts a notification in a Slack channel.

5. System Monitoring and Alerts

EventBridge helps in monitoring applications and generating alerts based on predefined conditions. For instance, you can set up an event rule to monitor AWS CloudWatch metrics and send notifications if certain thresholds are exceeded.

Example:

  • CloudWatch Alarm: Fires when CPU usage exceeds 80%.
  • SNS Topic: Sends a notification to the operations team.

Getting Started with EventBridge

To start using AWS EventBridge, follow these steps:

  1. Create an Event Bus: Set up a default or custom event bus.
  2. Define Rules: Create rules to filter and match events.
  3. Add Targets: Specify where events should be sent (e.g., Lambda functions, Step Functions).
  4. Publish Events: Send events to your event bus from various sources.

Conclusion

AWS EventBridge is a powerful tool for building event-driven architectures that are scalable, flexible, and responsive. Its serverless nature, combined with its robust event routing and filtering capabilities, makes it an essential component for modern cloud applications. Whether you’re looking to integrate microservices, process real-time data, automate workflows, or connect with third-party services, EventBridge provides the features and scalability to meet your needs.

By embracing EventBridge, you can design systems that respond dynamically to events and changes, leading to more efficient and resilient applications. Start exploring EventBridge today and unlock the full potential of event-driven architectures in your AWS environment.

Tuesday, May 14, 2024

LocalStack: Test your AWS services locally

LocalStack installation:

Download LocalStack: https://app.localstack.cloud/download?


# Configure an AWS CLI profile (optional)

aws configure set aws_access_key_id "dummy" --profile test-profile

aws configure set aws_secret_access_key "dummy" --profile test-profile

aws configure set region "eu-central-1" --profile test-profile

aws configure set output "table" --profile test-profile


# AWS SNS/SQS

Create Topic

aws --endpoint-url=http://localhost:4566 sns create-topic --name topic_name --region us-west-1 --profile default --output table | cat

aws --endpoint-url=http://localhost:4566 sns list-topics

Create Queue

aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name queue_name --profile default --region us-west-1 --output table | cat

aws --endpoint-url http://localhost:4566 sqs list-queues

Bind Topic with queue

Get Queue ARN 

aws --endpoint-url=http://localhost:4566 sqs get-queue-attributes --queue-url http://localhost:4566/000000000000/queue_name --attribute-names QueueArn


aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:us-west-1:000000000000:topic_name --protocol sqs --notification-endpoint arn:aws:sqs:us-west-1:000000000000:queue_name

aws --endpoint-url=http://localhost:4566 sns list-subscriptions

Publish message:

aws --endpoint-url=http://localhost:4566 sns publish --topic-arn arn:aws:sns:us-west-1:000000000000:topic_name --message 'Hello testing!'

Receive message:

aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://sqs.us-west-1.localhost.localstack.cloud:4566/000000000000/queue_name

# S3

Create S3 Bucket

awslocal s3api create-bucket --bucket <bucket-name> --region us-east-1

List S3 All Buckets

awslocal s3api list-buckets


# Upload Item to S3 Bucket

awslocal s3api put-object \
  --bucket <bucket-name> \
  --key <item> \
  --body image.jpg

 

# List items in a S3 Bucket

awslocal s3api list-objects \
  --bucket <bucket-name>

# CloudFormation

List all CloudFormation stacks in all regions

#!/usr/bin/env bash

awslocal ec2 describe-regions | jq -r '.Regions[].RegionName' | while IFS= read -r region
do
  awslocal cloudformation describe-stacks --region "$region"
done


# List all the versions of a Lambda

awslocal lambda list-versions-by-function --function-name <lambda-name>  --query 'Versions[*][Version, FunctionArn]' --output json

Invoke a Lambda without API Gateway

# base64-encode the payload first (macOS syntax; on Linux use: base64 input.json > input.txt)
base64 -i input.json -o input.txt

awslocal lambda invoke --function-name my-function --payload file://input.txt outputfile.txt

Sunday, February 11, 2024

Comprehensive Steps to Fixing 'Too Many File Descriptors Open' Errors in Linux

 

Introduction

The "Too many file descriptors open" error in Linux often signifies that your system has exceeded its limit on open file descriptors. This can lead to a range of issues, from application crashes to performance degradation. Understanding file descriptors, how they work, and how to manage them effectively can help you resolve this error and prevent it in the future. This guide covers the common causes of this issue, methods for diagnosing and monitoring file descriptor usage, and strategies for increasing file descriptor limits.

Understanding File Descriptors

In Linux, everything is treated as a file, including network sockets, devices, and regular files. A file descriptor is a non-negative integer that uniquely identifies an open file within a process.

Key Concepts:

  • File Descriptor (FD): A non-negative integer used by a process to identify an open file. Each process maintains a table of file descriptors.
  • Default File Descriptors: Every process starts with three default file descriptors:
    • 0: Standard Input (stdin)
    • 1: Standard Output (stdout)
    • 2: Standard Error (stderr)

Common Causes of 'Too Many File Descriptors Open' Errors

1. High Volume of Network Calls

If your application makes a large number of network calls during API hits and the downstream systems respond slowly, it can exhaust available file descriptors.

2. Resource Leaks

Resource leaks, where file descriptors are not properly closed after use, can gradually consume available file descriptors. To avoid this, use language-specific features or libraries designed to manage resources efficiently. For example, in Java, use the try-with-resources statement.
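The Python counterpart of Java's try-with-resources is a context manager: the descriptor is released on every exit path, including exceptions. A minimal sketch (the file path is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")

# Without a context manager, an exception raised between open() and
# close() would leak the descriptor. The 'with' block guarantees the
# file is closed however the block exits.
with open(path, "w") as f:
    f.write("hello")

print(f.closed)  # True: the descriptor was returned to the OS
```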

Diagnosing File Descriptor Usage

Global Usage

To check the total number of file descriptors currently open on the system, use:

awk '{print $1}' /proc/sys/fs/file-nr

High File Descriptor Usage

To identify high file descriptor usage or potential issues with sockets:

  • Identify CLOSE_WAIT State:
    netstat -tonp | grep CLOSE_WAIT  # CLOSE_WAIT means your program is still running and has not closed the socket
    netstat -p  # lists connections with their process IDs; kill -9 <PID> terminates the process holding a CLOSE_WAIT connection

Per-Process Usage
We can use the lsof command to check the file descriptor usage of a process. Let’s take the caddy web server as an example:

sudo lsof -p <caddy_pid>
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
caddy   135 caddy cwd DIR 254,1 4096 1023029 /etc/sv/caddy

Sockets lingering in CLOSE_WAIT occupy entries in the file table; closing ("purging") them frees up space for new file descriptors. To see the current socket usage:

netstat -tonp | grep CLOSE_WAIT

Per-Session Limit
ulimit -Sn  # soft limit
ulimit -Hn  # hard limit
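The same soft and hard limits that ulimit -Sn / ulimit -Hn report can also be read and adjusted programmatically; for example, in Python via the standard resource module:

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# The soft limit is what the kernel enforces; the hard limit is the
# ceiling an unprivileged process may raise its soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# Raising the soft limit up to the hard limit needs no privileges --
# the programmatic equivalent of "ulimit -n <hard>":
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

Long-running servers sometimes do exactly this at startup so they are not bound by the (often low) default soft limit.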

Per-Process Limit
grep "Max open files" /proc/$pid/limits

System-Wide Limit
cat /proc/sys/fs/file-max

Increasing File Descriptor Limits

Temporarily (Per-Session)
ulimit -n 4096

Per-User
Append lines like the following to the /etc/security/limits.conf file and log in again (values are examples):
username soft nofile 4096
username hard nofile 10240

Per-Service:

sudo mkdir -p /etc/systemd/system/apache.service.d/

sudo tee /etc/systemd/system/apache.service.d/filelimit.conf <<EOF
[Service]
LimitNOFILE=500000
EOF

sudo systemctl daemon-reload

sudo systemctl restart apache.service

Globally
Add the following line to /etc/sysctl.conf, then run sysctl -p to reload the configuration file:
fs.file-max = 500000

Saturday, February 10, 2024

Feeding information via MDC

A correlation ID can be very useful for logging in microservices. Mapped Diagnostic Context (MDC) manages contextual information on a per-thread basis, which is especially valuable when logging in a multi-threaded application.

For example, we can store the IP address of each request in a web application and insert this data to each log statement.

  • In our case, we’ll introduce a unique correlation ID for each request.
  • We must maintain the MDC to prevent data corruption. More specifically, after adding an entry to the MDC, we must remove it when the request ends.

Here are ways to achieve this in various frameworks:

  • MdcFilter extends HttpFilter // Servlet-based web application
  • MdcInterceptor implements HandlerInterceptor // Spring MVC
  • public class MDCMessagePropertiesConverter extends DefaultMessagePropertiesConverter // RabbitMQ listener
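The MDC pattern is not Java-specific. A Python analogue using contextvars plus a logging.Filter, following the same add-then-remove discipline described above (the names correlation_id, CorrelationFilter, and "req-42" are invented for the sketch):

```python
import contextvars
import io
import logging

# The context variable plays the role of the MDC: per-context storage
# that log statements read implicitly.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record):
        # Stamp every record with the current correlation ID
        record.correlation_id = correlation_id.get()
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(correlation_id)s] %(message)s"))
handler.addFilter(CorrelationFilter())

log = logging.getLogger("app")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False

# On request start: set the ID; on request end: reset it
token = correlation_id.set("req-42")
log.info("order received")
correlation_id.reset(token)
log.info("idle")

print(stream.getvalue())
# [req-42] order received
# [-] idle
```

In a web framework the set/reset pair would live in middleware, exactly where the MdcFilter/MdcInterceptor hooks above sit in Java.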

Tuesday, February 6, 2024

Redis Cache cookbook

-AWS ElastiCache offers Redis as one of its engines.

-Each URL is treated as a key, and Redis stores the corresponding result.

-Keys have a fixed length, and an LRU policy evicts old results when the cache is full.
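The URL-as-key caching with LRU eviction described above can be sketched in Python (capacity and URLs are invented; real Redis does the eviction server-side via its maxmemory-policy setting):

```python
from collections import OrderedDict

# Tiny LRU cache keyed by URL, mimicking a Redis instance configured
# with an LRU eviction policy: once capacity is reached, the least
# recently used entry is dropped.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, url):
        if url not in self._data:
            return None
        self._data.move_to_end(url)  # mark as recently used
        return self._data[url]

    def set(self, url, result):
        if url in self._data:
            self._data.move_to_end(url)
        self._data[url] = result
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.set("/a", "result-a")
cache.set("/b", "result-b")
cache.get("/a")              # touch /a, so /b becomes the LRU entry
cache.set("/c", "result-c")  # evicts /b

print(cache.get("/b"))  # None
print(cache.get("/a"))  # result-a
```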

Start server: 

redis-server 

EXISTS mykey 

SET mykey "Hello"

GET mykey

MGET mykey nonexistentkey

PERSIST mykey

INFO server

INFO keyspace

DEL mykey

TTL mykey

EXPIRE mykey 120

KEYS my*