5 ESSENTIAL KEY CONCEPTS TO DATABASE OPTIMIZATIONS



 

In this article I try to list the most important and essential keys for a database to perform efficiently. The article doesn't target any specific database engine; however, I sometimes link the concepts to specific techniques that I learned from different engines.

This is still a draft; however, it is 90% complete.

 

 

Indexing (Double-Edged Sword)

Indexing is a concept similar to sorting, categorizing, or organizing our data, and we use it everywhere to reach our data quickly: in our books, in phone books, in listings of names.

 

If we don't have indexes on our tables, the query execution plans will be very bad, as the engine will scan or read all of the table's rows and filter the results according to the criteria. When we use an index, the search works through the key columns we define.

 

SQL Server has the concept of the Clustered Index, while MySQL uses the InnoDB index; both use a B-Tree data structure to find rows/keys quickly. These types of indexes are very useful, especially for large tables.

 

Here are some rules, with some caveats that I will highlight (a minimal sketch follows the two lists below):

  • We should index keys that are used in JOIN operations (inner join - left join - right join - cross join).
  • We should index columns that are used in query filters (where - having).
  • We should index keys used in aggregation or ordering (order by - group by).
  • Don't index wide columns like char(50), as doing so results in a large number of index pages.
  • Key generation should be evenly distributed when partitioning is used; we will touch on this in the Sharding section.
  • Remove unused indexes.
  • Always monitor and follow up after making index changes.
  • Sometimes we should separate the indexes and project only the fields of interest to a different external system or solution that supports searching. As an example, we can use Solr's inverted index as the search engine for the system.

Caveats:

  • Balance indexing (it is a double-edged sword): over-indexing the keys results in poor performance, especially for writes/updates/deletes.
  • Don't over-index highly transactional tables (OLTP databases) with heavy I/O.
  • Indexing can make a poorly performing database run much better; however, improper indexing can make a high-performing database run poorly.
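
A minimal sketch of the first few rules, using SQLite for portability (the orders table, its columns, and the data volume are made up for illustration); any engine's EXPLAIN shows the same switch from a full table scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(10_000)])

# Without an index, the plan is a full scan of the table.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())

# Index the column used in the WHERE filter (the same applies to JOIN,
# ORDER BY, and GROUP BY keys).
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the plan switches to a search on idx_orders_customer.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())
```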

 

 

Denormalization (Space vs Speed):

 

All the time we were studying different normalization forms to keep our data clean, consistent, and non-repetitive. At first look, normalization seems to be doing its job well, as it saves disk space by eliminating duplication, but it also affects query times: if you need to get data from 2 tables, you have to JOIN those tables together, which results in seeking multiple locations on disk to join the data according to some keys.

 

In denormalization we intentionally duplicate data to eliminate such JOINs. For example, suppose we have Student and Phones tables, and each student might have one or more phone numbers. In the normalized form we have 2 tables, and if we need to list all phone numbers along with the StudentName, we must JOIN both tables to bring out the results we need. In the denormalized design we instead have only one table, Student(Id PRIMARY KEY, Name varchar, Phones varchar), with a single Phones column where we save all the phone numbers attached to each student. Look at the differences in the sketch below.
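
A rough sketch of the two designs (SQLite syntax, simplified types; the names follow the Student/Phones example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized form: two tables, one row per phone number, JOIN to read.
conn.executescript("""
CREATE TABLE Student (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Phone   (Id INTEGER PRIMARY KEY, StudentId INTEGER REFERENCES Student(Id), Number TEXT);
INSERT INTO Student VALUES (1, 'Alice');
INSERT INTO Phone   VALUES (1, 1, '555-0100'), (2, 1, '555-0101');
""")
print(conn.execute("""
    SELECT s.Name, p.Number
    FROM Student s JOIN Phone p ON p.StudentId = s.Id
""").fetchall())

# Denormalized form: one table, all phones kept in a single column, no JOIN.
conn.executescript("""
CREATE TABLE StudentDenorm (Id INTEGER PRIMARY KEY, Name TEXT, Phones TEXT);
INSERT INTO StudentDenorm VALUES (1, 'Alice', '555-0100, 555-0101');
""")
print(conn.execute("SELECT Name, Phones FROM StudentDenorm").fetchall())
```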


 

Denormalization makes life easy, especially when querying the data; however, it brings a challenge when we want to save or update the data, because at that point we must update multiple locations. From a performance point of view it is bad to update multiple locations, it is harder to maintain, and it can be error prone, as someone might forget to update all required fields.

 

Another example, perhaps more obvious and practical: suppose we have Post and Comments tables and we just need to list all posts along with the number of comments each post has. Instead of joining the 2 tables, we keep a column in the Post table that only stores the number of comments for each post. Every time we insert or delete a comment, its post row should be updated to keep the count correct.
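
A sketch of keeping that denormalized counter in sync (SQLite; the schema is illustrative): the comment insert and the counter update happen in the same transaction, so the count cannot drift from the real number of comments.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Post    (Id INTEGER PRIMARY KEY, Title TEXT, CommentCount INTEGER DEFAULT 0);
CREATE TABLE Comment (Id INTEGER PRIMARY KEY, PostId INTEGER REFERENCES Post(Id), Body TEXT);
INSERT INTO Post (Id, Title) VALUES (1, 'Hello world');
""")

def add_comment(conn, post_id, body):
    # One transaction: insert the comment and bump the counter together.
    with conn:
        conn.execute("INSERT INTO Comment (PostId, Body) VALUES (?, ?)", (post_id, body))
        conn.execute("UPDATE Post SET CommentCount = CommentCount + 1 WHERE Id = ?", (post_id,))

add_comment(conn, 1, "First!")
add_comment(conn, 1, "Nice post")

# Listing posts with their counts no longer needs a JOIN or a GROUP BY.
print(conn.execute("SELECT Title, CommentCount FROM Post").fetchall())
```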

 

So what is the actual solution? It is a trade-off between normalization and denormalization: normalization saves space but affects performance, while denormalization wastes (or over-uses) space but performs better. Normalization shines for systems that have many more writes than reads, while denormalization is much better for systems that have more reads than writes.

 

Apart from write and read throughput, it is worth highlighting 2 solutions that take interesting positions on data persistence. Cassandra is a highly available database system that supports high read throughput but is not strongly consistent, which means some stale data might be returned to users, while HBase is strongly consistent but not highly available, as data will not be returned until it is fully written to disk.

 

Aggregation and Joining Elimination

 

This point actually depends on the previous one. As long as we split or spread data across multiple tables, we end up with a lot of joins among tables to fetch our data. Aggregation functions like counting, summing, minimum, or maximum also affect performance because of the scan, fetch, and grouping operations: scanning all records and merging or grouping them based on a specific aggregate function and key.

 

Heavy usage of joins should be avoided through denormalization, and if we do need a JOIN, the keys (primary/foreign keys) should be indexed, as sketched below.
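
A small sketch of that advice (SQLite; the Post/Comment schema is the illustrative one from the previous section): index the foreign key used in the join so the aggregated count is served by an index search rather than a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Post    (Id INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE Comment (Id INTEGER PRIMARY KEY, PostId INTEGER REFERENCES Post(Id), Body TEXT);
CREATE INDEX idx_comment_post ON Comment (PostId);  -- index the key used in the JOIN
""")

query = """
    SELECT p.Id, p.Title, COUNT(c.Id) AS CommentCount
    FROM Post p
    LEFT JOIN Comment c ON c.PostId = p.Id
    GROUP BY p.Id, p.Title
"""
# The plan shows the Comment side of the join being served by idx_comment_post.
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
```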

 

Replication and Caching

 

This is the most interesting part for reacting to a busy system that has a lot more reads than writes. The idea is to have one Master server and replicate (copy) your data behind the scenes to multiple instances, and to keep that data replicated after any updates or changes. For reads, we distribute the traffic across those read replicas, while only writes go to the master server. This approach, however, brings some problems, especially regarding data consistency, as users might read stale data during the time it takes to replicate or copy the data from the master to its replicas.
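
A hedged sketch of read/write splitting at the application layer (the host names and the naive round-robin/SELECT heuristic are illustrative; production setups usually rely on a proxy or driver support instead):

```python
import itertools

class ReplicatedPool:
    def __init__(self, master, replicas):
        self.master = master                       # single writer
        self.replicas = itertools.cycle(replicas)  # round-robin over read replicas

    def endpoint_for(self, sql: str) -> str:
        # Naive routing: anything that is not a SELECT goes to the master.
        # Reads that must see their own just-written data should also go to
        # the master, because replication lag can return stale rows.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self.replicas)
        return self.master

pool = ReplicatedPool(master="db-master:3306",
                      replicas=["db-replica-1:3306", "db-replica-2:3306"])
print(pool.endpoint_for("SELECT * FROM Post"))            # -> a replica
print(pool.endpoint_for("UPDATE Post SET Title = 'x'"))   # -> the master
```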

 

 

The above technique is named Master/Slave replication, and it is used to increase the capacity of the application for reads more than for writes. There is another technique named Master/Master replication, which we use to increase the capacity for writes and reads together: both servers can receive writes/updates and replicate the changes to each other. If you need to know more about Master/Master replication, check this article, which guides you through setting it up.

 

Failover is also one of the major benefits of replication, as it allows us to switch to a different replica in case of failure of the Master instance, for example. You can read more about read replicas and how important they are in enhancing performance and increasing the high availability of the system in Amazon RDS Read Replicas.

 

Caching is also a key concept for query optimization, and it is usually built into every DBMS. Caching usually depends on how much memory the machine has and whether it is enough to cache enough data to serve the clients.

Caching can also be extended to the application side, where we might have a cluster of Memcached or Redis servers to store the most expensive or time-consuming query results. The application side should check the cache first and fall back to the database only on a miss, refreshing or invalidating the cached entries when the underlying data changes.
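
A sketch of that application-side ("cache-aside") pattern using the redis-py client; it assumes a Redis server on localhost, and run_expensive_query is a hypothetical stand-in for the heavy database query:

```python
import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

def get_top_posts(run_expensive_query, ttl_seconds=60):
    cached = r.get("top_posts")
    if cached is not None:
        return json.loads(cached)           # cache hit: no database work
    result = run_expensive_query()          # cache miss: run the heavy query once
    r.setex("top_posts", ttl_seconds, json.dumps(result))
    return result

# Stand-in for the real, expensive database query:
print(get_top_posts(lambda: [{"id": 1, "title": "Hello", "comments": 42}]))
```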

 

Partitioning (Sharding)

 

Partitioning means distributing your data across different machines and querying that data according to consistent hashing of its primary keys. This seems easy at the start, but it needs a lot of work and also brings problems in key generation. The diagram below shows how the data is distributed across shards and how the query router (MySQL Fabric in the case of MySQL, or mongos in the case of MongoDB) does its job of routing queries to the correct partition or shard based on the primary partition key. I took the diagram from this article, which discusses partitioning based on MongoDB, but it is the same concept. Primary nodes are considered the master nodes and secondary nodes are considered the read replicas.
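
A simplified sketch of the routing idea (shard names are invented; real routers such as MySQL Fabric or mongos also handle virtual nodes, rebalancing, and metadata): the primary key is hashed onto a ring and the query is sent to the owning shard.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, shards):
        # Place each shard at a point on the ring, keyed by the hash of its name.
        self._points = sorted((_hash(name), name) for name in shards)
        self._keys = [point for point, _ in self._points]

    def shard_for(self, key: str) -> str:
        # Walk clockwise from the key's hash to the first shard point.
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._points)
        return self._points[idx][1]

ring = HashRing(["shard-a", "shard-b", "shard-c"])
for student_id in ["1001", "1002", "1003", "1004"]:
    print(student_id, "->", ring.shard_for(student_id))
```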

 

 

One of the challenges of sharding data is assigning rows a unique primary key Id. This is easy to implement in one DB instance, since it is just a counter; with distributed data, however, we have to generate keys using a different methodology, because if we applied a counter on each machine there would be duplications across servers. Snowflake was implemented a couple of years ago by Twitter to address exactly this issue of minting unique keys across distributed machines according to different criteria (machine Id, core number, time, and a counter within each second to avoid duplications). They retired the project and later released Twitter Timer Server, which can be used easily outside the Twitter environment.
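
A rough sketch of the Snowflake idea, not Twitter's actual implementation: pack a millisecond timestamp, a machine id, and a per-millisecond sequence counter into one 64-bit integer, so each machine mints unique, roughly time-ordered keys without coordination (the 41/10/12 bit split and the epoch below are illustrative choices).

```python
import time

EPOCH_MS = 1400000000000        # a fixed custom epoch in milliseconds; any past instant works
MACHINE_BITS, SEQUENCE_BITS = 10, 12

class SnowflakeLike:
    def __init__(self, machine_id: int):
        self.machine_id = machine_id & ((1 << MACHINE_BITS) - 1)
        self.last_ms = -1
        self.sequence = 0

    def next_id(self) -> int:
        now_ms = int(time.time() * 1000)
        if now_ms == self.last_ms:
            # Same millisecond: bump the sequence counter (a real generator
            # would wait for the next millisecond if this counter overflows).
            self.sequence = (self.sequence + 1) & ((1 << SEQUENCE_BITS) - 1)
        else:
            self.last_ms, self.sequence = now_ms, 0
        # timestamp | machine id | sequence, packed into one 64-bit integer
        return ((now_ms - EPOCH_MS) << (MACHINE_BITS + SEQUENCE_BITS)) \
               | (self.machine_id << SEQUENCE_BITS) \
               | self.sequence

gen = SnowflakeLike(machine_id=7)
print(gen.next_id(), gen.next_id())
```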

All big data solutions were built on this concept, e.g. HBase, MongoDB, DynamoDB, etc. In Amazon's DynamoDB, the data is partitioned according to a partition key that specifies on which physical machine the data should reside, while a sort key is used to sort the data inside each machine. You are not allowed to query data using a column/key that is not part of the partition or sort key.
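
For example, a query against such a table using boto3 (the table name, key names, and the AWS setup are assumptions for illustration) must target the partition key and may narrow the results by the sort key:

```python
import boto3  # assumes AWS credentials and region are configured
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Posts")  # hypothetical table

# Efficient: the partition key pins the query to one partition; the sort key
# narrows the results inside that partition.
response = table.query(
    KeyConditionExpression=Key("AuthorId").eq("user-42")
                           & Key("CreatedAt").begins_with("2017-07")
)
print(response["Items"])

# Filtering on an attribute that is not part of the key would require a Scan
# of the whole table, which is exactly what good key design tries to avoid.
```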

 

Moreover, there is another concept of separating the index itself: an index that is not part of the main table. This is named a Global Index, and it might even be saved on a different machine than the one where the actual table/data resides. Global Indexes simply don't share resources such as CPU or memory with the table, although building or updating the global index (on insert/update/delete) still slightly and indirectly affects those resources; querying the index, however, does not affect the main table's transactions at all.

 

Key generation must be evenly distributed, which means keys must be generated in a way that sends them to different servers or machines, distributing the traffic across the servers instead of directing all or most of the traffic to one set of servers while keeping the others idle.

 

An example of bad key generation is to use a timestamp format (YYYYMMDDHHMMSS): 20170707122530 – 20170707122531 – 20170707122532 – 20170707122533. The leading part of the key is always static, which results in all of the day's data going to one set of servers while keeping the others idle, as sketched below.
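
A small simulation of that effect (the shard count, the "salting" prefix, and the crude prefix-based routing are all illustrative): raw timestamp keys pile onto one shard, while adding a short hash prefix spreads the same rows evenly.

```python
import hashlib
from collections import Counter
from datetime import datetime, timedelta

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    # Crude stand-in for range/prefix partitioning: the leading characters
    # of the key decide which shard the row lands on.
    return int(hashlib.md5(key[:4].encode()).hexdigest(), 16) % NUM_SHARDS

start = datetime(2017, 7, 7, 12, 25, 30)
timestamps = [(start + timedelta(seconds=i)).strftime("%Y%m%d%H%M%S") for i in range(1000)]

# Bad: the raw timestamp is the key. Its leading characters stay the same all
# day, so every write of the day lands on the same shard.
bad = Counter(shard_for(k) for k in timestamps)

# Better: prefix the key with a short hash ("salting") so consecutive writes
# fan out across the shards.
salted = [hashlib.md5(k.encode()).hexdigest()[:4] + "-" + k for k in timestamps]
good = Counter(shard_for(k) for k in salted)

print("timestamp keys ->", dict(bad))   # everything piles onto one shard
print("salted keys    ->", dict(good))  # roughly even spread
```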

 

In MySQL, there is a utility named MySQL Fabric that is easy to use for managing a MySQL deployment for sharding and high-availability.

 

 

 

