Wednesday, May 2, 2018

Networking basic terms

1. NAT (Network Address Translation) is used to translate a "private IP", which is valid only within a LAN, into a "public IP", which is valid on the WAN.
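
A minimal Java sketch of the idea, just to illustrate the kind of mapping a NAT device maintains; all of the IPs and ports below are hypothetical, not taken from any real setup.

import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of a NAT translation table: the router rewrites the
// private source address/port of an outgoing packet to its own public
// address and a chosen port, and remembers the mapping for the replies.
public class NatTableSketch {
    public static void main(String[] args) {
        Map<String, String> natTable = new HashMap<>();

        // Hypothetical example: host 192.168.1.10 (private, LAN-only) opens a
        // connection; the router maps it to its public IP 203.0.113.5.
        natTable.put("192.168.1.10:51000", "203.0.113.5:40001");

        String privateEndpoint = "192.168.1.10:51000";
        String publicEndpoint = natTable.get(privateEndpoint);
        System.out.println(privateEndpoint + " is seen on the WAN as " + publicEndpoint);
    }
}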

Tuesday, May 1, 2018

Front End optimization (FEO)

1. Embed a thumbnail of the video instead of the video itself.
An embedded video not only increases the byte size of your web pages, but also forces the browser to make multiple HTTP requests to render the video player. This increases the overall loading time of your page and thus affects its page speed score. Another drawback of the default YouTube embed code is that it isn't responsive: if people view your website on a mobile phone, the video player may not resize properly for the small screen.
Refer -  https://www.labnol.org/internet/light-youtube-embeds/27941/


Tuesday, August 22, 2017

What is SYN_SENT socket status?

When dealing with network issues, one of the Linux commands that comes in handy is 'netstat':
 netstat -an or netstat -an | grep "remote ip"

This command shows all the sockets on the system, along with the status of each. For example, a socket can be in the 'ESTABLISHED' state or the 'LISTENING' state.

One important status we may come across is 'SYN_SENT'. When we see a socket in this status, it most probably indicates a firewall issue, i.e. the remote host you are trying to reach is NOT reachable due to a firewall block.

Note that the SYN_SENT status will not remain for a long time; it only lasts for a couple of seconds. So you have to be quick in running the netstat command (perhaps in another terminal window).

When the client initiates a connection to a server, it first sends a SYN packet. At this point the socket status changes to 'SYN_SENT'. If the remote server is reachable and working, the client will receive a 'SYN + ACK', to which the client responds with an 'ACK', and thus a TCP connection is established.
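
A small Java sketch of the client side of this handshake: while connect() below is waiting for the SYN + ACK, the local socket sits in SYN_SENT, and a firewall silently dropping the SYN shows up as a connect timeout. The host and port are placeholders.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SynSentProbe {
    public static void main(String[] args) {
        String host = "remote.example.com"; // placeholder remote host
        int port = 8080;                    // placeholder port

        try (Socket socket = new Socket()) {
            // While this call is waiting, 'netstat -an | grep <remote ip>'
            // in another terminal should show the socket in SYN_SENT.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Connected: the SYN was answered with SYN + ACK");
        } catch (SocketTimeoutException e) {
            System.out.println("Timed out in SYN_SENT: possibly a firewall dropping the SYN");
        } catch (IOException e) {
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}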

Monday, February 6, 2017

How to save cost in AWS? (WIP)

1. Stop all the instances that are not required (see the sketch after this list).
2. Select the appropriate EC2 instance type.
3. Use Reserved Instances (may not be required for non-production servers if they can be stopped when not in use).
4. Select the appropriate storage type for the different types of data.
5. Use the Trusted Advisor available in the AWS portal.
6. Business support can be taken only for production; non-production can run without any paid support.
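
As a rough sketch of point 1, non-production instances can be stopped on a schedule with the AWS SDK for Java v2. This assumes credentials are already configured, and the region and instance ID below are hypothetical placeholders.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.StopInstancesRequest;

public class StopIdleInstances {
    public static void main(String[] args) {
        // Region and instance ID are placeholders for illustration only.
        try (Ec2Client ec2 = Ec2Client.builder().region(Region.US_EAST_1).build()) {
            StopInstancesRequest request = StopInstancesRequest.builder()
                    .instanceIds("i-0123456789abcdef0")
                    .build();

            // Stopping the instance halts compute charges while it is idle
            // (EBS storage continues to bill until the volumes are deleted).
            ec2.stopInstances(request);
            System.out.println("Stop request sent");
        }
    }
}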
 

Monday, January 9, 2017

Types of Monitoring

1. Domain monitoring (to keep an eye on domain expiry for renewal)
2. Device monitoring (IPS, firewall, switch, F5 load balancer, etc.)
3. Operating System / VM monitoring (CPU, memory, I/O, disk utilization, etc.)
4. Server monitoring (JVM, GC cycles, MBeans, etc.)
5. Architecture monitoring (real-time deployment architecture monitoring)
6. Service monitoring (URL monitoring; see the sketch after this list)
7. Application monitoring (slow transactions)
8. Log monitoring (end-user activity)
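
As a small illustration of point 6, a basic URL check in Java might look like the sketch below; the endpoint is a placeholder, and a real monitoring tool would add scheduling, retries and alerting on top of this.

import java.net.HttpURLConnection;
import java.net.URL;

public class UrlMonitorSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/health"); // placeholder endpoint

        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.setRequestMethod("GET");

        int status = conn.getResponseCode();
        long elapsed = System.currentTimeMillis() - start;
        conn.disconnect();

        // A real monitor would raise an alert when the status is not 200
        // or when the response time crosses a threshold.
        System.out.println("Status " + status + " in " + elapsed + " ms");
    }
}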

Tuesday, August 9, 2016

Cross Joins

Many SQL books and tutorials recommend that you “avoid cross joins” or “beware of Cartesian products” when writing your SELECT statements, which occur when you don't express joins between your tables.  It’s true that you need to ensure that your join conditions are adequately stated so that you don’t accidentally produce this effect, but it is not true that you should avoid these types of joins in every situation.

Cross Joins produce results that consist of every combination of rows from two or more tables.  That means if table A has 3 rows and table B has 2 rows, a CROSS JOIN will result in 6 rows.  There is no relationship established between the two tables – you literally just produce every possible combination.

The danger here, of course, is that if you have table A with 10,000 rows and Table B with 30,000 rows, and you accidentally create the product of these two tables, you will end up with a 300,000,000 row result -- probably not a good idea.  (Though it is great for creating test data and the like.)
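
To make the row math concrete, here is a small Java sketch of the same Cartesian product idea (the equivalent of SELECT * FROM table_a CROSS JOIN table_b), using made-up values.

import java.util.List;

public class CrossJoinSketch {
    public static void main(String[] args) {
        // Hypothetical tables: A has 3 rows, B has 2 rows.
        List<String> tableA = List.of("a1", "a2", "a3");
        List<String> tableB = List.of("b1", "b2");

        int rows = 0;
        // A cross join pairs every row of A with every row of B: 3 x 2 = 6 rows.
        for (String a : tableA) {
            for (String b : tableB) {
                System.out.println(a + " | " + b);
                rows++;
            }
        }
        System.out.println("Result rows: " + rows); // prints 6
    }
}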

Tuesday, July 26, 2016

Application Performance Monitoring


Monitoring of a software application usually includes the following and more -
1. Infrastructure instance monitoring, e.g. CPU, memory, disk I/O, network, file descriptors, etc.
2. Container (server) monitoring, e.g. JVM utilization, server/access/error logs, etc.
3. Application monitoring, e.g. number of transactions, type of transactions, response times, etc.

There are various tools available for application monitoring. In this article I am sharing some practical benefits of using an APM tool (AppDynamics) which I have been experiencing -


1. Ease of installation, administration and usage

2. Provides information on time taken by the application server and time taken by the database server

3. Provides the ability to drill down further to the exact method, line number and query that causes slowness

4. Ability to quickly find issues related to lock wait timeouts, Java exceptions, database exceptions, etc.

5. Ease of doing before and after optimization analysis to understand impact on response times of business transactions

6. Provides call stack traces, exceptions, slow queries, query counts/repetitive queries, API call counts, slow async calls and external calls in one place, in an easy-to-understand manner, which helps production management personnel connect with developers

7. Very low overhead on the application servers on which the Java and machine agents are deployed

8. Ability to correlate heap usage with the transactions that impact it

9. Ease in configuration of thresholds, alerts, package includes and excludes, etc.

10. Helps to find the root cause of transaction slowness or runtime exceptions, whether it lies in a developer's own code or in code referenced/developed by other teams. AppDynamics helps the team focus on the issue at hand rather than finger-pointing.

11. Ease of finding an issue saves time and effort that is otherwise spent doing focused performance testing. (Last but not least, it is important to mention that excessive dependency on AppDynamics is not good; a developer or performance tester sometimes forgets his very job and depends solely on an APM like AppDynamics.)

Monday, July 4, 2016

Java Issues : java.lang.OutOfMemoryError: GC overhead limit exceeded

In many cases, we see out-of-memory issues when working with Java applications on Java servers.

"GC overhead limit exceeded" is thrown by the garbage collectors that we use for the JVM, generally the serial or parallel collectors.

The issue arises when too much time is spent doing garbage collection (roughly 98% of total time) while less than 2% of the heap is recovered.

This may be due to applications that run for a very long time, or due to threads that are stuck. Because of such threads, the objects that are loaded are not reclaimed and are held by these stuck threads for a long time.

The serial and parallel collectors throw this exception, and the feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If there is unintended object retention, we need to check the code for changes. If the retention looks normal and it is a load issue, the heap size needs to be increased.
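
A deliberately pathological sketch that tends to reproduce the error when run with a small heap, for example with -Xmx32m and -XX:+UseParallelGC; depending on the JVM and heap size it may instead fail with a plain "Java heap space" error.

import java.util.HashMap;
import java.util.Map;

// Run with a small heap, e.g.: java -Xmx32m -XX:+UseParallelGC GcOverheadDemo
// Every entry stays reachable from the map, so the collector spends almost
// all of its time in GC while recovering very little of the heap, which is
// the condition behind "GC overhead limit exceeded".
public class GcOverheadDemo {
    public static void main(String[] args) {
        Map<Integer, String> retained = new HashMap<>();
        int i = 0;
        while (true) {
            retained.put(i, "value-" + i); // small objects, never released
            i++;
        }
    }
}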

Saturday, March 26, 2016

Is it important to add an index on columns of a table with a small number of records?

I was asked a question regarding the need to add an index on a database table with a small number of records, say small master tables. I believe it is essential to put an index on appropriate columns for such small tables as well. Here is the rationale behind it -

1. In my experience, one of the top root causes of slow transactions is repetitive queries. It can so happen that smaller tables are queried many times, which multiplies the record scans that many times over.

2. The DB optimizer chooses an execution plan based on the selectivity of columns, indexes, primary keys, data types and other constraints of the tables involved in the query. I have come across a few instances where large queries were tuned just by adding indexes to some of the master tables the query joins with.
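
As a hypothetical illustration (the table, column, JDBC URL and credentials below are made up), adding an index on the join/lookup column of a small master table is a one-line DDL statement.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexSmallMasterTable {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and credentials; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "user", "password");
             Statement stmt = conn.createStatement()) {

            // Hypothetical small master table joined by many transactional queries;
            // an index on the join/lookup column helps the optimizer even here.
            stmt.execute("CREATE INDEX idx_status_code ON status_master (status_code)");
        }
    }
}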

Wednesday, January 13, 2016

The redo and undo logs

Redo Logs -

Instead of writing to one file (the permanent table records file) you are writing to two different files (the redo log file and the permanent table records file). The difference is that the writes to the database table file(s) are random in nature while the writes to the redo log file will be sequential, which is usually going to be much faster. You perform the much-faster writes to the redo log as needed and then perform the slower writes to the table files periodically when there is time. Thus, the system actually operates faster writing to both files rather than only one file. This applying of the redo logs occurs automatically on startup of the MySQL server after a crash. Very similar processes are used by other database platforms.

Undo Logs -

In addition to the redo log there must also be undo logs. When a database user starts a transaction and executes some commands, the database does not know if the user will end the transaction with a COMMIT or a ROLLBACK command. Ending with a COMMIT means all the changes made in the course of the transaction have to be preserved (fulfilling the Durable aspect of ACID). If the transaction gets interrupted for some reason, such as the MySQL daemon crashing, the client disconnecting before sending a COMMIT, or the user issuing a ROLLBACK command, then all changes made by the transaction need to be undone.
If the server crashed, the redo log files are applied first on startup. This puts the database in a consistent state. Then the database server needs to roll back the transactions which were not committed but had already made changes to the database. Undo logs are used for this. As an example, if you are running a transaction that adds a million rows and the server crashes after eight hundred thousand inserts have been performed, the server will first use the redo log to get the database into a consistent state and then will roll back the eight hundred thousand inserts using the undo logs. For InnoDB, this undo information is stored in the ibdata file(s).
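
A small JDBC sketch of the behaviour the undo log supports: changes made inside a transaction can be rolled back rather than made permanent, which is also what the server does for uncommitted transactions after a crash. The connection details and table below are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class UndoLogSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL, credentials and table name.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "user", "password")) {

            conn.setAutoCommit(false); // start an explicit transaction
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("INSERT INTO audit_log (message) VALUES ('step 1')");
                stmt.executeUpdate("INSERT INTO audit_log (message) VALUES ('step 2')");

                // ROLLBACK: InnoDB uses the undo log to reverse both inserts,
                // just as it would for an uncommitted transaction after a crash.
                conn.rollback();
            }
        }
    }
}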

Refer -