By Prithvi Atal, Engineer, Driving High-Performance Solutions
Variable Ordering in a Class
- Constants first, then private variables
- public > private
- static > non-static
- final > non-final
- injected > normal variable
- constructor last

In this article we will talk about exposing Java objects to JSON serialization and how to instrument them with Jackson annotations.
@JsonProperty("fieldName") // Changes the field name in the JSON output
@JsonUnwrapped // Exposes child attributes directly on the parent/holder instead of as a nested object
@JsonRootName(value = "user") // Gives a root name to the JSON
JSON to Java schema conversion: https://www.jsonschema2pojo.org/
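A minimal sketch of these annotations on a POJO (the class and field names here are made up for illustration, and the code assumes the jackson-databind dependency is on the classpath):

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonRootName;
import com.fasterxml.jackson.annotation.JsonUnwrapped;

@JsonRootName(value = "user")   // wraps the serialized object in a "user" root node
public class User {
    @JsonProperty("fieldName")  // serialized as "fieldName" instead of "name"
    public String name;

    @JsonUnwrapped              // city and zip appear directly on the user object
    public Address address;
}

class Address {
    public String city;
    public String zip;
}
```

Note that @JsonRootName only takes effect when root-value wrapping (SerializationFeature.WRAP_ROOT_VALUE) is enabled on the ObjectMapper.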
Optimize and Improve PostgreSQL Performance with the VACUUM, ANALYZE, and REINDEX utilities.
VACUUM:
- Reclaims storage occupied by dead tuples.
AWS RDS shows the database load in active sessions; it can reveal timeouts caused by vacuum delay.
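A few common VACUUM invocations, sketched with a placeholder table name:

```sql
-- Reclaim space from dead tuples in one table
VACUUM mytable;

-- Reclaim space and update planner statistics in the same pass
VACUUM ANALYZE mytable;

-- Rewrite the table to return space to the OS (takes an exclusive lock)
VACUUM FULL mytable;
```

VACUUM FULL should be used sparingly, since the exclusive lock blocks reads and writes for the duration of the rewrite.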
REINDEX
It rebuilds one or more indices, replacing the previous version of the index. If an index has become corrupted, and no longer contains valid data, reindex can be executed.
REINDEX INDEX myindex; REINDEX TABLE mytable; etc.
ANALYZE
- It collects statistics about specific table columns, an entire table, or the entire database. The PostgreSQL query planner then uses that data to generate efficient execution plans for queries. Samples:
ANALYZE users; collects statistics for the users table.
ANALYZE VERBOSE users; does exactly the same, plus prints progress messages.
ANALYZE users (id, display_name); collects statistics for the id and display_name columns of the users table.
ANALYZE; collects statistics for all tables in the current database.
To see the results of actually executing the query, you can use the EXPLAIN ANALYZE command:
EXPLAIN
- To see how a query is executing and adjust the query to be more efficient
EXPLAIN ANALYZE SELECT seqid FROM traffic WHERE serial_id<21;
- Instead of returning the data, it provides a query plan detailing what approach the planner took to executing the statement provided.
Note: all of the above (VACUUM, ANALYZE, and REINDEX) need to be executed through an admin user.
In this article we will fetch similar records along with lookup details.
Consider a retailer that needs to know similar orders placed, along with the customer information.
There are two tables, Order and Customer; the SQL query goes like this:
select c.customer_name, c.customer_location, co.brand_name, co.item_type, co.item_domain, co.item_manufacturer
from customer c
inner join (
    select o.brand_id, o.brand_name, o.item_type, o.item_domain, o.item_manufacturer, o.customer_id
    from Order o
    where o.domain = 'mobile'
    group by o.brand_id, o.brand_name, o.item_type, o.item_domain, o.item_manufacturer, o.customer_id
    having count(o.brand_id) > 1
) co on co.customer_id = c.customer_id;
1. Install the Nginx server and the required packages.
apt-get update
apt-get install nginx openssl
2. Create a private key and the website certificate using the OpenSSL command.
mkdir /etc/nginx/certificate
cd /etc/nginx/certificate
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out nginx-certificate.crt -keyout nginx.key
3. At the prompt named COMMON_NAME, enter the server's IP address or hostname.
4. nginx config before the changes
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}
nginx config after the changes
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /etc/nginx/certificate/nginx-certificate.crt;
    ssl_certificate_key /etc/nginx/certificate/nginx.key;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}
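To keep plain-HTTP links working after the switch, a separate server block can redirect port 80 to HTTPS. A minimal sketch, complementing the config above:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    # Redirect all HTTP traffic to the HTTPS server
    return 301 https://$host$request_uri;
}
```

The 301 preserves the original host and request path, so bookmarks and crawlers are carried over to the secure endpoint.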
The rewrite Directive
rewrite regex replacement [flag];
eg:
rewrite ^ $request_uri;
server {
# ...
rewrite ^(/download/.*)/media/(\w+)\.?.*$ $1/mp3/$2.mp3 last;
rewrite ^(/download/.*)/audio/(\w+)\.?.*$ $1/mp3/$2.ra last;
return 403;
# ...
}
Explanation:
-> It matches URLs that begin with the string /download
-> and then include the /media/ or /audio/ directory somewhere later in the path
-> It replaces those elements with /mp3/ and adds the appropriate file extension, .mp3 or .ra
Example,
/download/cdn-west/media/file1 becomes /download/cdn-west/mp3/file1.mp3.
If there is an extension on the filename (such as .flv), the expression strips it off and replaces it with .mp3
What is the flow of database connections in an application?
Registering a module
angular.module('myApp', [])
.controller('MyController', ['myService', function (myService) {
// Do something with myService
}]);
Registering a service
angular.module('myApp', [])
.service('myService', function () { /* ... */ })
.controller('MyController', ['myService', function (myService) {
// Do something with myService
}]);
Registering another service
angular.module('myModule', [])
.service('myCoolService', function () { /* ... */ });
Registering directive
angular.module('myModule')
.directive('myDirective', ['myCoolService', function (myCoolService) {
// This directive definition does not throw unknown provider.
}]);
Instance Type/Series : Use cases
T3/T4 [burstable, free-tier eligible sizes]: microservices, dev environments, low CPU utilization with occasional periods of high CPU activity (bursts)
M [general purpose] M4/M5/M6: application servers, microservices, gaming servers, mid-size data stores, and caching fleets
C [compute optimized]: compute-intensive workloads that need high-performance CPUs
R [memory optimized] R6: memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics
P/G/Trn1/Inf1 [accelerated computing]: primarily for machine learning
For versioning and organizing your database changes, there are various DevOps tools available in the market.
The best-known ones include Liquibase and Flyway.
However, Liquibase currently wins over the other available options, primarily because of its broader support.
OAuth 2.0 uses Access Tokens and Refresh Tokens to secure access to applications and resources.
Here is the flow:
Obtaining OAuth 2.0 access tokens from refresh_token for server-side web applications.
When we initially received the access token, it may have included a refresh token as well as an expiration time like in the example below.
{
  "access_token": "AYjcyMzY3ZDhiNmJkNTY",
  "refresh_token": "RjY2NjM5NzA2OWJjuE7c",
  "token_type": "bearer",
  "expires": 3600
}
To use the refresh token, make a POST request to the service’s token endpoint with grant_type=refresh_token, and include the refresh token as well as the client credentials if required.
OAuth API:
POST /oauth/token HTTP/1.1
Host: authorization-server.com
grant_type=refresh_token
&refresh_token=xxxxxxxxxxx
&client_id=xxxxxxxxxx
&client_secret=xxxxxxxxxx
The response will be a new access token, and optionally a new refresh token, just like you received when exchanging the authorization code for an access token.
{
"access_token": "BWjcyMzY3ZDhiNmJkNTY",
"refresh_token": "Srq2NjM5NzA2OWJjuE7c",
"token_type": "Bearer",
"expires": 3600
}
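The refresh-token POST above can be sketched with Java 11's built-in HttpRequest builder (the endpoint and credential values are the placeholders from the example; sending the request via HttpClient and parsing the JSON response are left out):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RefreshTokenRequest {

    // Builds the refresh-token POST request; all values are placeholders.
    public static HttpRequest build(String refreshToken, String clientId, String clientSecret) {
        String form = "grant_type=refresh_token"
                + "&refresh_token=" + refreshToken
                + "&client_id=" + clientId
                + "&client_secret=" + clientSecret;
        return HttpRequest.newBuilder()
                .uri(URI.create("https://authorization-server.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("xxxxxxxxxxx", "xxxxxxxxxx", "xxxxxxxxxx");
        // Prints: POST https://authorization-server.com/oauth/token
        System.out.println(req.method() + " " + req.uri());
    }
}
```

In a real client the parameter values would be URL-encoded before being concatenated into the form body.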
I. Refreshing an access token
II. Making an authorized API request [Authorization: Bearer ACCESS_TOKEN ]
After obtaining an access token for a user, your application can use that token to submit authorized API requests on that user's behalf. Specify the access token as the value of the Authorization: Bearer HTTP request header
GET /youtube/v3/channels?part=id&mine=true HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer ACCESS_TOKEN
Using cURL:
curl -H "Authorization: Bearer ACCESS_TOKEN" "https://www.googleapis.com/youtube/v3/channels?part=id&mine=true"
Note: Basic Authentication is a separate mechanism and does not work with token-based authorization; it sends a Base64-encoded "user:password" pair instead. Sample below:
String encoding = Base64.getEncoder().encodeToString(("user:pwd").getBytes("UTF-8"));
connection.setRequestProperty("Authorization", "Basic " + encoding);
Writing code that speaks for itself is something every developer would love to have. In this blog, we will talk about some of the mechanisms that make code vocal.
Optional classes:
A very handy feature supported since Java 8. However, we should use it only to the extent that it makes things better.
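A minimal sketch of Optional replacing a null check (the lookup method and its values are made up for illustration):

```java
import java.util.Optional;

public class OptionalDemo {

    // Hypothetical lookup that may not find a value.
    static Optional<String> findNickname(String user) {
        return "alice".equals(user) ? Optional.of("Ally") : Optional.empty();
    }

    public static void main(String[] args) {
        // orElse supplies a fallback instead of an explicit null check
        System.out.println(findNickname("alice").orElse("unknown")); // Ally
        System.out.println(findNickname("bob").orElse("unknown"));   // unknown
    }
}
```

The return type itself now tells the caller that the value may be absent, which is exactly the "vocal code" effect described above.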
In RabbitMQ, data safety is handled by the two mechanisms below:
I. Consumer acknowledgements
II. Publisher confirms
rabbitTemplate.setConfirmCallback(rabbitEventConfirmCallback);

@Override
public void confirm(CorrelationData correlationData, boolean ack, String cause) {
    if (ack) {
        // Broker confirmed the message was received.
    } else {
        // Do something: requeue, log, or ignore.
    }
}
docker --version
docker pull <image_name>
docker images
docker run -it -d <image_name>
docker ps
docker ps -a
docker exec -it <container_name> /bin/bash
docker cp <container_name>:path/to/file path/to/ur/my_file
docker logs <container_name> > your_location/some_file.log
docker build -t <image_name> .
docker run --name=<container_name> -it -p 8080:8080 -h SOME_info -e SOME_PARAMETER=ABC <image_name> /bin/sh
docker login
docker login -u <user> -p <pwd> artifactory_server_host
docker logs
docker logs -f [container_name]
-v : creates storage space separate from the container
docker run -v /var/lib/mysql <image_name>
printenv
Docker Compose
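Docker Compose describes multi-container setups declaratively instead of chaining docker run flags. A minimal docker-compose.yml sketch tying together the options used above (the image name, ports, and volume are placeholders):

```yaml
version: "3.8"
services:
  app:
    image: my_image:latest        # placeholder image
    ports:
      - "8080:8080"               # like docker run -p 8080:8080
    environment:
      - SOME_PARAMETER=ABC        # like docker run -e
    volumes:
      - app-data:/var/lib/mysql   # named volume, like docker run -v
volumes:
  app-data:
```

Running docker compose up -d then starts the whole stack with one command.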
Runnable (JDK 1.0)
new Thread(() -> System.out.println("Runnable")).start();
Callable (Java 5)
Callable<Object> callable = Executors.callable(task); // adapts a Runnable into a Callable
- Callable returns a Future object
- The call() method can throw a checked exception
Future<String> result = exec.submit(aCallable);
String response = result.get();
Callable<String> callable = () -> {
// Perform some computation
Thread.sleep(2000);
return "Return some result";
};
Remember, Future.get() is a blocking method and blocks until execution is finished,
so you should always call this method with a timeout to avoid deadlock or livelock in your application
long startTime = System.nanoTime();
while (!future.isDone()) {
    System.out.println("Task is still not done...");
    Thread.sleep(200);
    double elapsedTimeInSec = (System.nanoTime() - startTime) / 1000000000.0;
    if (elapsedTimeInSec > 1) {
        future.cancel(true);
    }
}
future.get(100, TimeUnit.MILLISECONDS):
The method waits at most 100 milliseconds for the result of the task; if the wait times out, get throws a TimeoutException.
for (Future<String> future : futureResults) {
    response = future.get();
}
Keep in mind future.get() will block if the task is not yet complete; when get() is called in a loop, it will not proceed to the next call until the current one returns. If the application serves multiple HTTP request threads this way, all application threads are prone to getting blocked, leaving the application unresponsive.
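The pieces above fit together as follows; a minimal runnable sketch (task names and the timeout are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CallableDemo {

    // Submits n Callables and collects results with a bounded get().
    static List<String> runTasks(int n) throws Exception {
        ExecutorService exec = Executors.newFixedThreadPool(2);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                final int id = i;
                futures.add(exec.submit(() -> {
                    Thread.sleep(100); // simulate some computation
                    return "result-" + id;
                }));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                // Bounded wait: throws TimeoutException instead of blocking forever
                results.add(f.get(1, TimeUnit.SECONDS));
            }
            return results;
        } finally {
            exec.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(2)); // [result-0, result-1]
    }
}
```

The bounded f.get(1, TimeUnit.SECONDS) is the safeguard recommended above: a stuck task surfaces as a TimeoutException rather than a silently blocked application thread.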
Read Lock:
A read lock allows multiple threads to acquire the lock in a read method, as long as all synchronizing threads use only the read lock of the ReentrantReadWriteLock pair.
If any thread holds the write lock of the ReentrantReadWriteLock pair, no read lock on the resource can be acquired.
Write Lock: A write lock allows only one thread to acquire the lock in a write method. All other synchronizing threads must wait for the lock to be released before they can acquire a read or write lock on the resource.
ReadWriteLock rwLock = new ReentrantReadWriteLock();
Lock readLock = rwLock.readLock();
Lock writeLock = rwLock.writeLock();
readLock.lock();
try {
// reading data
} finally {
readLock.unlock();
}
writeLock.lock();
try {
// update data
} finally {
writeLock.unlock();
}
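The lock pair above can be wrapped into a small thread-safe cache; a minimal sketch (the class and key names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Exclusive: only one writer at a time, no concurrent readers.
    public void put(String key, String value) {
        Lock w = rwLock.writeLock();
        w.lock();
        try {
            data.put(key, value);
        } finally {
            w.unlock();
        }
    }

    // Shared: many readers may hold the read lock simultaneously.
    public String get(String key) {
        Lock r = rwLock.readLock();
        r.lock();
        try {
            return data.get(key);
        } finally {
            r.unlock();
        }
    }

    public static void main(String[] args) {
        RwLockCache cache = new RwLockCache();
        cache.put("key", "value");
        System.out.println(cache.get("key")); // value
    }
}
```

This pattern pays off for read-heavy workloads, where many readers proceed in parallel and only occasional writes serialize access.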
AmazonEBS : High-availability block-level storage volumes for Amazon Elastic Compute Cloud (EC2) instances.
- It is paired with an EC2 instance. So when we need a high-performance storage service for a single instance, use EBS
- Data stored on the volume is retained after the EC2 instance is shut down.
Amazon EFS : Offers scalable file storage, also optimized for EC2. Using an EFS file system, you can configure instances to mount the file system.
Amazon S3 : An object store good at storing vast numbers of backups or user files.
Unlike EBS or EFS, S3 is not limited to EC2.
Files stored within an S3 bucket can be accessed programmatically or directly from services such as AWS CloudFront.
This is why many websites use it to hold their content and media files, which may be served efficiently from AWS CloudFront.
By default, you cannot make XHR calls across domains; allowing arbitrary third-party scripts into your domain's code would be very risky. Here we will go through some of the ways cross-domain calls can be made.
I. Proxy route
II. Adding Access-Control-Allow-Origin in the header
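Both approaches can be sketched with Nginx as the fronting server (the upstream address and trusted origin are placeholders):

```nginx
location /api/ {
    # Approach I: proxy the call so the browser sees a same-origin request
    proxy_pass http://backend-service:8080/;

    # Approach II: explicitly allow a trusted origin in the response header
    add_header Access-Control-Allow-Origin "https://trusted.example.com";
}
```

The proxy route avoids CORS entirely; the header route keeps the domains separate but requires naming (or echoing) each origin you trust.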
Designing a resilient architecture can comprise the following paradigms:
I. Does your application have a retry mechanism?
II. Is there any mechanism to stop recurring attempts in case of a remote external service failure? E.g. circuit breaker, Resilience4j, etc.
III. How fast can your infrastructure be muted?
IV. What maximum downtime does your application have?
V. Is your application hosted with cold deployment?
VI. Is there any leader election taking place when there are multiple deployments?
The JMX specification supports the following types of MBean: Standard, Dynamic, Open, and Model MBeans (plus MXBeans since Java 6).
While developing Spring-based applications there are instances where we would like to take control of the Spring beans. It comes in very handy when we can get hold of the Spring factory and customise it as per our needs.
1. BeanPostProcessor: lets us plug additional behaviour on top of an existing bean definition.
2. BeanFactoryPostProcessor: gets called when all bean definitions have been loaded, but before any beans have been instantiated. This gives you access to all the beans that you have defined in XML or that are annotated (scanned via component-scan).
3. PropertyPlaceholderConfigurer: resolves property file locations and allows overriding or adding properties, even for eager-initializing beans.
public class CustomBeanFactory implements BeanFactoryPostProcessor {
    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        for (String beanName : beanFactory.getBeanDefinitionNames()) {
            BeanDefinition beanDefinition = beanFactory.getBeanDefinition(beanName);
            // Manipulate the beanDefinition or whatever you need to do
        }
        // Registering a new bean
        GenericBeanDefinition myBeanDefinition = new GenericBeanDefinition();
        ((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("beanName", myBeanDefinition);
    }
}
In today's world, when boilerplate code is readily available, writing minimal code and connecting pluggable functionalities is the way software is developed. To make developers' lives easier, plugins play a very handy role in speeding up development.
Here we list a few of the useful plugins:
Intellij:
1. Dependency Analyzer
Maven:
1. Code coverage: jacoco-maven-plugin -> mvn jacoco:report
2. Shading JAR: maven-shade-plugin -> mvn shade:shade
3. Creating a source JAR archive: maven-source-plugin -> mvn source:jar