Friday, August 16, 2019

Flow Control in RabbitMQ

Flow control applies rate limiting when a publisher is sending messages at a higher rate than RabbitMQ can handle.
- It protects the server: RabbitMQ slows down connections that are publishing messages too quickly.
- Three connection states are shown on the admin UI:

  •   Idle
  •   Flow
  •   Running



RabbitMQ has memory and disk limits configured. If consumers pull data at a slower rate than it is published, messages pile up and start occupying memory and disk, which in turn triggers flow control. Eventually the publisher is affected and its connection can move into the blocked state.
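To see this from the client side, here is a minimal sketch using the plain RabbitMQ Java client; the host and queue name are placeholders, not values from this setup. The broker calls the BlockedListener when it blocks the connection (for example after a memory or disk alarm fires) and again when it unblocks it.

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class FlowAwarePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                      // placeholder broker host

        try (Connection connection = factory.newConnection()) {
            // The broker notifies us when it blocks/unblocks this connection.
            connection.addBlockedListener(new BlockedListener() {
                @Override
                public void handleBlocked(String reason) {
                    System.out.println("Connection blocked by broker: " + reason);
                }

                @Override
                public void handleUnblocked() {
                    System.out.println("Connection unblocked, publishing can resume");
                }
            });

            try (Channel channel = connection.createChannel()) {
                channel.queueDeclare("flow-demo-q", true, false, false, null);  // placeholder queue
                for (int i = 0; i < 100_000; i++) {
                    channel.basicPublish("", "flow-demo-q", null,
                            ("message-" + i).getBytes(StandardCharsets.UTF_8));
                }
            }
        }
    }
}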

Thursday, August 1, 2019

Auto Scaling in AWS



Why do we need Auto Scaling?
When there is huge traffic, a Network Load Balancer suits best for high performance.
The running servers can be upgraded and scaled up or down based on demand.

The Classic Load Balancer offers both HTTP/HTTPS (application-level) and TCP (network-level) balancing.

What are the types of Auto Scaling?
Vertical Scaling:
Capacity is resized by adding more CPU, memory, and storage when needed. The catch is that this is done from the image or requires the instance to be stopped. To make the scaling outage-free, the instances should sit behind an ELB so that other instances keep serving the traffic; with that in place there is no outage/downtime.
- Vertical scaling is achieved by changing the instance type while the server is stopped (see the sketch after this list).
- A server cannot be resized while it is up and running.
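As a rough sketch (not the exact procedure used here), vertical scaling can be scripted with the AWS SDK for Java: stop the instance, change the instance type, start it again. The instance id and target type below are hypothetical.

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest;
import com.amazonaws.services.ec2.model.StartInstancesRequest;
import com.amazonaws.services.ec2.model.StopInstancesRequest;

public class VerticalScaling {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        String instanceId = "i-0123456789abcdef0";       // hypothetical instance id

        // 1. Stop the instance (it cannot be resized while running).
        ec2.stopInstances(new StopInstancesRequest().withInstanceIds(instanceId));

        // 2. Change the instance type once it is stopped
        //    (in practice, wait for the 'stopped' state before this call).
        ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest()
                .withInstanceId(instanceId)
                .withInstanceType("t2.large"));          // hypothetical target type

        // 3. Start it again with the new capacity.
        ec2.startInstances(new StartInstancesRequest().withInstanceIds(instanceId));
    }
}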

Horizontal Scaling:
Web1, Web2, Web3, and so on: more and more servers can be added based on need.

Policies
- Fixed: a fixed number of servers
- Manual: change the number of instances manually
- Automatic/Dynamic: scaling driven by a specified condition
- Scheduled: scaling at predefined times

Do we use a Network Load Balancer (ELB) when there is very high traffic?
Yes; it helps cover outages and downtime while scaling up/down and upgrading.
Users connecting through the load balancer cannot tell how many servers they are connected to.


How is Auto Scaling achieved?
It is a two-step process:
- Configure the Auto Scaling (launch) configuration
- Create the Auto Scaling group
Manual:
We can change the number of instances required to be running (see the sketch below).
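A minimal sketch of the two steps, plus a manual capacity change, using the AWS SDK for Java; every name, AMI id, and availability zone below is a hypothetical placeholder.

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.CreateAutoScalingGroupRequest;
import com.amazonaws.services.autoscaling.model.CreateLaunchConfigurationRequest;
import com.amazonaws.services.autoscaling.model.SetDesiredCapacityRequest;

public class AutoScalingSetup {
    public static void main(String[] args) {
        AmazonAutoScaling autoScaling = AmazonAutoScalingClientBuilder.defaultClient();

        // Step 1: the launch configuration describes what each new instance looks like.
        autoScaling.createLaunchConfiguration(new CreateLaunchConfigurationRequest()
                .withLaunchConfigurationName("web-launch-config")   // hypothetical name
                .withImageId("ami-0123456789abcdef0")               // hypothetical AMI
                .withInstanceType("t2.micro"));

        // Step 2: the Auto Scaling group decides how many of those instances run.
        autoScaling.createAutoScalingGroup(new CreateAutoScalingGroupRequest()
                .withAutoScalingGroupName("web-asg")                // hypothetical name
                .withLaunchConfigurationName("web-launch-config")
                .withMinSize(1)
                .withMaxSize(4)
                .withAvailabilityZones("us-east-1a"));              // hypothetical zone

        // Manual scaling: simply change the desired number of running instances.
        autoScaling.setDesiredCapacity(new SetDesiredCapacityRequest()
                .withAutoScalingGroupName("web-asg")
                .withDesiredCapacity(3));
    }
}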

Dynamic Scaling:
Scaling policy: scale out/scale in by adding a scaling policy.
If CPU utilisation goes beyond 60%, one more server can be added so that utilisation comes back down.
Minimum number of servers
Maximum number of servers
When CPU utilisation exceeds the threshold, a new server is spun up (see the sketch below).
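Here is a hedged sketch of the 60% CPU rule as a simple scaling policy driven by a CloudWatch alarm, again with the AWS SDK for Java; the group and policy names are hypothetical and continue the example above.

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.PutScalingPolicyRequest;
import com.amazonaws.services.autoscaling.model.PutScalingPolicyResult;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
import com.amazonaws.services.cloudwatch.model.Statistic;

public class DynamicScaling {
    public static void main(String[] args) {
        AmazonAutoScaling autoScaling = AmazonAutoScalingClientBuilder.defaultClient();
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

        // Scale-out policy: add one instance each time the alarm fires.
        PutScalingPolicyResult policy = autoScaling.putScalingPolicy(new PutScalingPolicyRequest()
                .withAutoScalingGroupName("web-asg")              // hypothetical group
                .withPolicyName("scale-out-on-high-cpu")
                .withAdjustmentType("ChangeInCapacity")
                .withScalingAdjustment(1)
                .withCooldown(300));

        // Alarm: average CPU of the group above 60% for 5 minutes triggers the policy.
        cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
                .withAlarmName("web-asg-high-cpu")
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withDimensions(new Dimension()
                        .withName("AutoScalingGroupName")
                        .withValue("web-asg"))
                .withStatistic(Statistic.Average)
                .withPeriod(300)
                .withEvaluationPeriods(1)
                .withThreshold(60.0)
                .withComparisonOperator(ComparisonOperator.GreaterThanThreshold)
                .withAlarmActions(policy.getPolicyARN()));
    }
}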

Scheduled:
When we can forecast high load (Black Friday, quarterly results), the extra load can be predicted and auto scaling can be scheduled in advance.

Specify the start time, end time, recurrence, and minimum and maximum number of servers (see the sketch below).
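A sketch of a scheduled action with the same SDK; the group name and the recurrence are hypothetical, and withStartTime/withEndTime can be added in the same fluent style.

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.PutScheduledUpdateGroupActionRequest;

public class ScheduledScaling {
    public static void main(String[] args) {
        AmazonAutoScaling autoScaling = AmazonAutoScalingClientBuilder.defaultClient();

        // Keep extra capacity on hand every Friday from 08:00 UTC (hypothetical schedule).
        autoScaling.putScheduledUpdateGroupAction(new PutScheduledUpdateGroupActionRequest()
                .withAutoScalingGroupName("web-asg")              // hypothetical group
                .withScheduledActionName("friday-peak")
                .withRecurrence("0 8 * * 5")                      // cron-style recurrence
                .withMinSize(4)
                .withMaxSize(10)
                .withDesiredCapacity(6));
    }
}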

Sample application to Simulate Auto Scaling:
Open an SSH terminal and log in as ec2-user:
sudo su
yum install stress -y
stress -c 4    # spin up 4 CPU workers to generate load
top            # shows CPU utilisation per process
See also: AWS Ops Automator for vertical scaling.

RabbitMQ: Best Practices

- Use separate connections for publishing, consuming, and administering messages.
- Messages are published to an exchange (direct/fanout) with a routing key; the publisher does not need to know about the queue bindings.
- Use publisher confirms with callbacks for reliable message delivery.
- Use ChannelAwareMessageListener when message acknowledgement is important (see the consumer sketch after this list).
- For auto acknowledgement, SimpleMessageListenerContainer will suffice.
- SimpleMessageListenerContainer may require content_type: text/plain and content_encoding: UTF-8 to avoid the listener being invoked with a raw byte-based Message.
- You can apply Jackson2JsonMessageConverter to the RabbitTemplate so that it is applied while sending (see the configuration sketch after this list).
- Listeners can be attached to various built-in utilities, e.g. post processors, message properties converters, etc.
- Ensure each related domain has a dedicated vhost. Communication across vhosts can be done via a federation setup.
- Use the admin connection to declare exchanges and queues (see the configuration sketch after this list).
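To make a few of the points above concrete, here is a hedged Spring AMQP configuration sketch, assuming hypothetical exchange/queue/routing-key names: declarations go through an admin connection via RabbitAdmin, and the RabbitTemplate gets a Jackson2JsonMessageConverter so JSON conversion is applied on send.

import java.util.Collections;
import java.util.Map;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;

public class RabbitConfigSketch {
    public static void main(String[] args) {
        // Separate connection factories for admin and publishing
        // (another separate one would be used for consuming).
        CachingConnectionFactory adminFactory = new CachingConnectionFactory("localhost");
        CachingConnectionFactory publisherFactory = new CachingConnectionFactory("localhost");

        // Declarations go through the admin connection.
        RabbitAdmin admin = new RabbitAdmin(adminFactory);
        DirectExchange exchange = new DirectExchange("orders.exchange");   // hypothetical names
        Queue queue = new Queue("orders.queue", true);
        Binding binding = BindingBuilder.bind(queue).to(exchange).with("orders.created");
        admin.declareExchange(exchange);
        admin.declareQueue(queue);
        admin.declareBinding(binding);

        // Publishing template: JSON converter applied so it is used on send.
        RabbitTemplate template = new RabbitTemplate(publisherFactory);
        template.setMessageConverter(new Jackson2JsonMessageConverter());
        Map<String, Object> payload = Collections.singletonMap("orderId", 42);
        template.convertAndSend("orders.exchange", "orders.created", payload);
    }
}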
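And a hedged consumer sketch: a SimpleMessageListenerContainer wired to a ChannelAwareMessageListener with manual acknowledgement; the queue name and host are again placeholders.

import java.nio.charset.StandardCharsets;

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

import com.rabbitmq.client.Channel;

public class ManualAckConsumerSketch {
    public static void main(String[] args) {
        CachingConnectionFactory consumerFactory = new CachingConnectionFactory("localhost");

        SimpleMessageListenerContainer container =
                new SimpleMessageListenerContainer(consumerFactory);
        container.setQueueNames("orders.queue");                  // hypothetical queue
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);     // we ack explicitly

        container.setMessageListener(new ChannelAwareMessageListener() {
            @Override
            public void onMessage(Message message, Channel channel) throws Exception {
                try {
                    System.out.println("Received: "
                            + new String(message.getBody(), StandardCharsets.UTF_8));
                    // Acknowledge only after successful processing.
                    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
                } catch (Exception e) {
                    // Reject and requeue on failure.
                    channel.basicNack(message.getMessageProperties().getDeliveryTag(), false, true);
                }
            }
        });

        container.start();
    }
}

With AcknowledgeMode.AUTO instead, the plain SimpleMessageListenerContainer setup mentioned in the list above would suffice.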