Compiling Python Redshift DB adapter for AWS Lambda env.

AWS Lambda has gained huge momentum in the last couple of years and has enabled software architects and developers to build FaaS (Function as a Service) solutions.  As much as Lambda helps in scaling applications, it has some limitations, like execution duration and available memory.  For long-running jobs, typically backend or batch processing, the 5-minute duration limit can be a deal breaker.  But with appropriate data partitioning and architecture it is still an excellent option for enterprises to scale their applications cost-effectively.

In a recent project, I architected data to be loaded from a data lake into Redshift.  The data is produced by an engine in batches and pushed to S3.  The data is partitioned on a time scale, and a consumer Python application loads it at regular intervals into a Redshift staging environment.  For a scalable solution, the data lake can be populated by multiple producers, and similarly one or more consumers can drain the data lake queue to load into Redshift.  The data from multiple staging tables is then loaded into the final table after deduping and data augmentation.
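As a rough sketch of that last staging-to-final step (the table and column names here are hypothetical, not the project's actual schema), a Redshift-style delete-then-insert with dedup could look like:

-- Replace rows that arrive again via staging, keeping one row per key
BEGIN;

DELETE FROM final_events
USING staging_events s
WHERE final_events.event_id = s.event_id;

INSERT INTO final_events (event_id, event_ts, payload)
SELECT event_id, event_ts, payload
FROM (
    SELECT event_id, event_ts, payload,
           ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY event_ts DESC) AS rn
    FROM staging_events
) dedup
WHERE rn = 1;     -- keep only the latest staging row per event_id

COMMIT;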


Check and set analyze statistics on tables-columns in Vertica

It is important to update statistics in any database for better query runs and query optimization.  Vertica provides ANALYZE_STATISTICS and ANALYZE_HISTOGRAM to collect stats, and in a big data warehouse environment it is quite possible that some tables do not have stats collected.  In that case you can run the following query to find the tables/columns that need stats collected.



SELECT T1.table_schema, T1.table_name, T2.row_ct, T2.wos_ct, T2.ros_ct
FROM
(
-- Get list of tables that don't have 'FULL' stats
SELECT distinct table_schema, table_name
FROM projection_columns
WHERE (statistics_type = 'ROWCOUNT' or statistics_type = 'NONE')
-- and table_schema not like 'v_%'
AND (table_schema = 'schema_1' OR table_schema = 'schema_2' )
ORDER BY 1,2
) T1
JOIN
(
-- Get list of tables with storage details
SELECT anchor_table_schema, anchor_table_name,
sum(row_count) row_ct, sum(wos_row_count) wos_ct, sum(ros_row_count) ros_ct
FROM projection_storage
WHERE (anchor_table_schema = 'schema_1' OR anchor_table_schema = 'schema_2' )
GROUP BY 1,2
) T2
ON T1.table_schema = T2.anchor_table_schema
AND T1.table_name = T2.anchor_table_name
ORDER BY 3 desc;

From the list generated above, add those tables/columns to your regular stats-collection script, which you can run during off-peak hours.
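For example, a rough sketch (using the same schema filter as above) that turns that list into the ANALYZE_STATISTICS calls themselves:

-- Generate ANALYZE_STATISTICS statements for tables missing full stats
SELECT DISTINCT 'SELECT ANALYZE_STATISTICS(''' || table_schema || '.' || table_name || ''');'
FROM projection_columns
WHERE (statistics_type = 'ROWCOUNT' OR statistics_type = 'NONE')
AND (table_schema = 'schema_1' OR table_schema = 'schema_2')
ORDER BY 1;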

Notes:
Vertica’s ANALYZE_HISTOGRAM returns similar data to ANALYZE_STATISTICS, except that with ANALYZE_HISTOGRAM you can set the percentage of data that is sampled to generate the statistics; ANALYZE_STATISTICS uses a fixed 10% sample.
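For example, based on the description above (schema_1.my_table is just a placeholder table name):

SELECT ANALYZE_STATISTICS('schema_1.my_table');     -- fixed 10% sampling
SELECT ANALYZE_HISTOGRAM('schema_1.my_table', 5);   -- sample only 5% of the data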

WOS: Write Optimized Store is a memory resident data structure used to support fast data loads without data compression and indexing.
ROS: Read Optimized Store is disk-based storage with data compressed and sorted.  Tuple Mover moves data out of WOS and into ROS in batches.

Purging Vertica tables/ partitions at regular intervals

First things first, especially when there is potential to misunderstand due to the definitions of specific words – purging vs. truncating vs. deleting data. In this article, my interest is in purging tables of deleted rows.

Truncate => Removes all storage associated with a table, while preserving the table definitions.
Purge => Permanently removes deleted data from physical storage so that the disk space can be reused.  You can purge historical data up to and including the epoch in which the Ancient History Mark is contained.
Delete => It marks tuples as no longer valid in the current epoch. It does not delete data from disk storage for base tables.

Vertica’s purge_table and purge_partition functions purge all projections of the specified table and can temporarily take significant disk space while performing the purge.

The following query returns a list of statements that, when executed, purge the deleted data for each partitioned table.

   -- Look into only large tables (TABLE_SIZE_OF_INTEREST), say 1 billion rows,
   --  that have a good chunk of deletes (>20%)
   SELECT 'SELECT purge_partition(''' || P.projection_schema || '.' || P.anchor_table_name || ''',''' || PRT.partition_key || ''');'
    FROM projections P inner join
    (   SELECT projection_id, partition_key
        ,SUM(deleted_row_count)
        ,SUM(ros_row_count) as total_rows
        FROM partitions
        GROUP BY 1,2
        HAVING SUM(deleted_row_count)/SUM(ros_row_count) > 0.2
    ) PRT
    ON PRT.projection_id = P.projection_id
    GROUP BY PRT.partition_key, P.projection_schema, P.anchor_table_name
    HAVING max(total_rows) <= {TABLE_SIZE_OF_INTEREST}
    ORDER BY max(total_rows) DESC
    LIMIT 25 ;

Similarly, for non-partitioned tables, use the following query.

    SELECT 'select purge_table(''' || projection_schema || '.' || anchor_table_name || ''');'
    FROM projections P inner join
    (   SELECT schema_name, projection_name ,
            sum(deleted_row_count) delete_rows ,
            sum(delete_vector_count) delete_vector_count ,
            sum(total_row_count) total_rows,
            sum(deleted_row_count)/sum(total_row_count) as delete_percent
        FROM storage_containers
        GROUP BY 1,2
        HAVING sum(deleted_row_count)/sum(total_row_count) > 0.2
    ) A
    ON A.schema_name = P.projection_schema
      AND A.projection_name = P.projection_name
    INNER JOIN tables T on P.anchor_table_id = T.table_id
    WHERE length(T.partition_expression) = 0
    GROUP BY 1
    HAVING max(total_rows) <= {TABLE_SIZE_OF_INTEREST}
    ORDER BY max(total_rows) DESC
    LIMIT 25 ;

With a wrapper around these queries in Python/Perl/Bash, etc., one can easily walk through the generated list, executing each statement and cleaning up old deleted data for improved performance.

DW solution – IP mapping to Geo Locations

In my previous few posts on the blog, I covered how to convert IP addresses to corresponding integers and back. When performing a Data Warehouse (DW) table join between IP addresses for location analysis, joining on an integer IP column is much faster than joining on a string column.  As noted in the other posts, this article covers IPv4, but the technique can be extended to IPv6 addresses by using two BIGINTs.

For geo location info, you can use MaxMind’s Geo-City data, which is provided in two entities – one mapping an IP block to a location ID and another mapping a location ID to city/state/country.  It is a one-to-many (1:M) relationship, and to find an IP address’s location one needs to find the matching IP block and then map it to a city or country.  The posts below describe in detail how to perform this efficiently for a given IP (a sketch of that single-IP lookup follows the list).  These techniques work well in an OLTP environment where you pass a known IP address from the UI or application layer to the database and get back its city/country.

  1. Fast MySQL Range Queries on MaxMind GeoIP Tables by Andy Skelton
  2. MaxMind GeoIP Install Setup Tutorial by Bartomedia
  3. Finding a user’s country/region based on IP by Shlomi Noach
  4. On efficiently geo-referencing IPs with MaxMind GeoIP and MySQL GIS by Jeremy Cole
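The single-IP lookup those posts describe boils down to roughly the following sketch (dim_ip_block is the IP-block table defined later in this post); the returned block’s end_ip_num still needs to be checked against the IP to rule out gaps between blocks.

SELECT b.location_id
FROM dim_ip_block b
WHERE b.start_ip_num <= INET_ATON('192.168.100.200')
ORDER BY b.start_ip_num DESC
LIMIT 1;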

Geo-City data has the following fields, separated by commas and with rows terminated by a new line (\n):

  • Location ID
  • Country Code
  • State/Region
  • City Name
  • Postal Code (US only)
  • Latitude
  • Longitude
  • Metro Code
  • Area Code

IP Block has:

  • Start IP Address
  • End IP Address
  • Location ID

While loading this dimensional data during the ETL process, you should also do the IP address transformation to integer as explained in “Converting an IP address to an integer“.  The Geo-City dimensional table will also have a partition range (the reason for this post) and a surrogate key (id).  You should also perform other data validation, like checking for non-overlapping ranges, valid country codes, etc., which are all standard activities of a DW ETL process.  The time, resources and effort put into cleaning and validating the data before pushing it into the final fact tables pay off within a short period of time.  Fixing a data issue and reconciling reports later in a DW/BI environment always leads to lower ROI.

You can also denormalize the above tables and create a single table.  In either case, joining on a range of IP addresses for a given set of IPs results in the same slow performance.

dim_geo_ip_city_location holds around 5.9 million rows with the following attributes:

CREATE TABLE dim_geo_ip_city_location (
    location_id int NOT NULL,
    start_ip varchar(16) NOT NULL,
    end_ip  varchar(16) NOT NULL,
    country char(2),
    region varchar(64),
    city varchar(128),
    postal_code varchar(32),
    latitude decimal(8,4),
    longitude decimal(8,4),
    metro_code varchar(8),
    area_code varchar(8),
...
)ENGINE=...

CREATE TABLE dim_ip_block (
    start_ip_num bigint NOT NULL,
    end_ip_num bigint NOT NULL,
    location_id int NOT NULL,
        ...
)ENGINE=...

The MySQL (5.0) optimizer performs a range operation efficiently with a constant – meaning you have given the query an actual IP value, for example “WHERE ip_addr BETWEEN 1230 AND 1234” – but not with range operations that lack constants.  For more info on range optimization see the MySQL documentation, and for indexing see “Mastering the Art of (MySQL) Indexing“.

I did not find any article providing insight into doing DW/BI analysis on IP addresses.  In a DW environment the business user is interested in slicing and dicing hundreds of thousands to millions of IPs.  Just trying to find a few thousand IP address locations (see Query 1 below) was taking more than an hour when joining a lookup table (similar to dim_geo_ip_city_location but without the partition column (ip_range_partition) and id) to the IP address table, even with appropriate indexes such as an index on (start_ip_num, end_ip_num) or separate indexes on start_ip_num and end_ip_num.

Query 1:

-- Simple query to select IP block
SELECT IA.ip_num, B.location_id, B.start_ip_num, B.end_ip_num
FROM ip_addresses IA use index(...)
JOIN dim_ip_block B use index(...)
WHERE IA.ip_num between B.start_ip_num and B.end_ip_num

This leads to MySQL performing full scans, and in the case of the ip_block table (B) it doesn’t use any index at all. To avoid this, I partitioned the dimension (lookup) table into 1000 parts; since this table has ~5.9 million rows, each partition ends up with ~5,900 rows.  For more on partitioning a data set, see its wiki entry.  The partitions should be numbered in ascending order with respect to the ascending range of IP blocks (start_ip_num).  Then also create another table (dim_ip_range_partition) with the start_ip_num and end_ip_num of each range, basically creating super IP blocks.

CREATE TABLE dim_ip_range_partition (
     partition_id int NOT NULL AUTO_INCREMENT,
     start_ip_num bigint NOT NULL,
     end_ip_num bigint NOT NULL,
     ....
) ENGINE=InnoDB ;

Also, create a primary key on partition_id and an index on (start_ip_num, end_ip_num) on this table (see the DDL sketch below).
The solution relies on two assumptions: there are no overlapping IP blocks, and the IP blocks are ordered (in ascending order).
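A minimal sketch of that index DDL (the AUTO_INCREMENT column already needs to be a key at table-creation time, so only the range index is added here; idx_ip_range is a made-up name):

ALTER TABLE dim_ip_range_partition
    ADD INDEX idx_ip_range (start_ip_num, end_ip_num);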

To populate the above table,

  1. Add an id (row_id) column to dim_ip_block
  2. Add a partition_id integer column to dim_ip_block
  3. Update id from 1 to n in ascending order of start_ip_num (id = n for the highest start_ip_num)
  4. Partition the n ids into P parts of n/P ids each, where n > 1 and 1 < P < n
  5. Assign each part/partition an ascending integer value (see the sketch after the INSERT below)
  6. Now update the table dim_ip_range_partition (P_TABLE) from dim_ip_block (B_TABLE): P_TABLE partition ids run from 1 to P, P_TABLE.start_ip_num is the lowest B_TABLE.start_ip_num of the corresponding partition, and P_TABLE.end_ip_num is the highest of the corresponding partition.  In other words…

         INSERT INTO dim_ip_range_partition
            (partition_id, start_ip_num, end_ip_num, …)
         SELECT partition_id, min(start_ip_num), max(end_ip_num), …
         FROM dim_ip_block
         GROUP BY partition_id
         ORDER BY partition_id asc
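
Steps 3 through 5 can be done in plain MySQL with user variables; a rough sketch, assuming P = 1000 partitions and that the id and partition_id columns were added in steps 1 and 2:

-- n = number of IP blocks; rows per partition = ceil(n / P) with P = 1000
SELECT COUNT(*) INTO @n FROM dim_ip_block;
SET @rows_per_part := CEIL(@n / 1000);

-- Step 3: number the rows 1..n in ascending start_ip_num order
SET @row := 0;
UPDATE dim_ip_block
   SET id = (@row := @row + 1)
 ORDER BY start_ip_num;

-- Steps 4 and 5: bucket the ids into ascending partition numbers
UPDATE dim_ip_block
   SET partition_id = CEIL(id / @rows_per_part);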

By doing so, we have created super IP blocks, each holding n/P blocks from the original IP blocks.   The following query is one way to extract city/country info for a given set of IP addresses.

SELECT t.ip_num, t.ip, B.start_ip_num, B.start_ip, B.end_ip_num, B.end_ip
,L.city, L.country
FROM
(
   -- Get the super ip block (partition)
   SELECT C.ip_num, max(partition_id) partition_id, max(P.start_ip_num) start_ip_num
   FROM ip_addresses C use index(...)
   JOIN dim_ip_range_partition P
   WHERE C.ip_num >= P.start_ip_num
   GROUP BY C.ip_num
) t
LEFT OUTER JOIN dim_ip_block B use index(...)
  ON t.partition_id = B.partition_id     -- And join on this partition
  AND t.ip_num >= B.start_ip_num
  AND t.ip_num <= B.end_ip_num
JOIN dim_geo_ip_city_location L        -- Get city/country
  ON B.location_id = L.location_id

The query that would have taken more than an hour now returns within 30 seconds!  You can further optimize the above design with, say, more denormalized relations or a different number of partitions on both the lookup table and the looked-up table (this is a trade-off: at the extreme of P = n partitions, the partition table is as large as the original table, which makes performance worse!).  Or, by extending the partitioning concept to more levels (super super blocks, …), you can reduce the number of joining rows with a slightly more complex query.

The solution is a typical computer-science way of handling a complex problem: divide it into smaller ones and solve those.  Here it is applied specifically to MySQL for large data sets with range operations.

Cheers,
Shiva

Converting an IP address to an integer and reverse

In IP address analysis one may be required to map IPs to geo locations. There are third-party applications and services that perform IP-to-city/state/country mapping, and businesses can use them, for example, to see where their web traffic is coming from.

This article covers IPv4 addresses, which use 32 bits; a similar technique can be used for IPv6 addresses, which use 128 bits, by storing them in two BIGINTs (64 bits each).

It is easy to handle a reasonable load when querying a few IPs or a few hundred IPs, whether from a simple user interface where one or more IP addresses are entered or through APIs. In these cases IP addresses can simply be matched in string format without much of a performance issue. But it becomes a bottleneck when dealing with millions of rows loaded through an ETL process into a Data Warehouse environment, where reports run against fact tables with hundreds of millions of rows. Under these circumstances, converting the IP to an integer boosts performance substantially.

MySQL provides functions to convert an IP string to an integer and back. inet_aton converts the string to a number (integer) and inet_ntoa converts the number back to the dotted-quad string.

inet_aton and inet_ntoa:

mysql> select inet_aton('192.168.100.200');
+------------------------------+
| inet_aton('192.168.100.200') |
+------------------------------+
| 3232261320 |
+------------------------------+

mysql> select inet_ntoa(3232261320);
+-----------------------+
| inet_ntoa(3232261320) |
+-----------------------+
| 192.168.100.200 |
+-----------------------+
1 row in set (0.00 sec)


Bit operations:

The functions are simple to use; at a lower level they basically perform bit shifting. In the above example, ‘192.168.100.200’ is represented as 11000000 10101000 01100100 11001000 in binary form. For more on the binary number system, or to convert a number from decimal to binary, see its Wikipedia entry.

To convert the IP string to an integer, inet_aton multiplies 192 by 256*256*256, 168 by 256*256, and 100 by 256, and then adds 200 (the last octet in the above IP address), that is,

192*16777216 + 168*65536 + 100*256 + 200 = 3232261320

which is the same as 192<<24 (left shift 192 by 24 bits) plus 168<<16 plus 100<<8 plus 200. Bit shifting is a much faster way to multiply, since the CPU can move the bits within a register in a single operation.
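As a quick check in MySQL, the shifted octets add up to the same number that inet_aton returns:

mysql> SELECT (192<<24) + (168<<16) + (100<<8) + 200 AS shifted, INET_ATON('192.168.100.200') AS via_inet_aton;
+------------+---------------+
| shifted    | via_inet_aton |
+------------+---------------+
| 3232261320 |    3232261320 |
+------------+---------------+
1 row in set (0.00 sec)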

In reverse, one cannot simply divide the IP integer (3232261320) to get back the IP address in octet string format.  One needs to appropriately left- and right-shift the number (or multiply and then divide) to get the octet of interest. To get the highest-valued octet (192), shift the bits to the right. For example,

11000000 10101000 01100100 11001000 >> 24 (right shift 24 bits) gives you

00000000 00000000 00000000 11000000

Therefore, all of the rightmost 24 bits dropped off the register (cliff) and we are left with the 4th octet, 11000000, prefixed with 24 zeros.

mysql> SELECT 3232261320>>24 as 4th_octet;
+-----------+
| 4th_octet |
+-----------+
| 192 |
+-----------+
1 row in set (0.00 sec)

By the same logic, to get the 3rd octet we could move the bits 16 positions to the right, but we would be left with the 4th octet as well. Doing

11000000 10101000 01100100 11001000 >>16 will result in

00000000 00000000 11000000 10101000

So we should first shift the bits to the left so that the highest octet (the 4th octet) drops off the register.  We can do so by shifting left by 8 bits.

mysql> SELECT 3232261320<<8 as 8blsn;  -- 8blsn: 8 bit left shift number
+--------------+
| 8blsn |
+--------------+
| 827458897920 |
+--------------+
1 row in set (0.00 sec)

Instead of getting a smaller number than before, we got a larger number! And obviously, right shifting this number by 16 bits will not give the 3rd octet of the IP.   This happens because MySQL silently converted the INT to a BIGINT.  The 4th octet moved into the 5th octet, appending 8 zero bits to the right as shown below.

11000000 10101000 01100100 11001000 00000000

Since a BIGINT is 8 bytes long, one needs to shift left not by 1 byte (8 bits) but by 5 bytes (40 bits) to push the highest octet (#192) off the register.

With just a 4-byte (32-bit) left shift, the register would look like below.

11000000 10101000 01100100 11001000 00000000 00000000 00000000 00000000

And with a 40-bit left shift the register looks like below, with #192 (11000000) removed,

10101000 01100100 11001000 00000000 00000000 00000000 00000000 00000000

Now right shift 56 bits to get the 3rd octet (#168).

00000000 00000000 00000000 00000000 00000000 00000000 00000000 10101000

This is the same as multiplying and then dividing as below, with anything that overflows the 64-bit register discarded:
(3232261320 *256 *256 *256 *256 *256) / (256 *256 *256 *256 *256 *256 *256)

mysql> SELECT 3232261320<<40>>56;
+--------------------+
| 3232261320<<40>>56 |
+--------------------+
|                168 |
+--------------------+
1 row in set (0.00 sec)

To get the 2nd octet, first left shift the original number by 48 bits and then right shift by 56 bits. For the 1st octet, shift by 56 bits both ways (left, then right).

mysql> SELECT 3232261320<<48>>56;
+--------------------+
| 3232261320<<48>>56 |
+--------------------+
|                100 |
+--------------------+
1 row in set (0.00 sec)

mysql> SELECT 3232261320<<56>>56;
+--------------------+
| 3232261320<<56>>56 |
+--------------------+
|                200 |
+--------------------+
1 row in set (0.00 sec)

With all imperative/procedural languages providing bit operators (AND, OR, bit shifts, etc.), one can and should convert the IP string to an integer before it is loaded into staging. The parsing/conversion can then easily be parallelized for large loads.

An example in C sharp (C#) is shown below.

//
// C# program to convert IP address string to an integer.
//
using System;

namespace IP_addr_to_int
{
    class Program
    {
        static void Main(string[] args)
        {
            // Infinite loop until user enters control-c.
            for (; ; )
            {
                string ip = "";
                Console.Write("Enter ip address: ");
                ip = Console.ReadLine();

                ulong ipNum = 0;
                string[] ipArr = ip.Split('.');

                if (ipArr.Length != 4)
                {
                    Console.WriteLine("Error. Enter four octet IP address only.");
                    // Environment.Exit(-1);
                    continue;
                }
                for (int ii = ipArr.Length - 1; ii >= 0; ii--)
                {
                    ulong ipOctet = Convert.ToUInt64(ipArr[3 - ii]);
                    // ipOctet * 256 * 256 * 256;
                    if (ii == 3) { ipNum += ipOctet << 24; }
                    // ipOctet * 256 * 256;
                    else if (ii == 2) { ipNum += ipOctet << 16; }
                    // ipOctet * 256;
                    else if (ii == 1) { ipNum += ipOctet << 8; }
                    // ipOctet * 1
                    else { ipNum += ipOctet; }
                    // System.Threading.Thread.Sleep(1000);
                }
                Console.WriteLine(
                    Convert.ToString(ip) + " converted to integer " + Convert.ToString(ipNum));
            }
        }
    }
}

For more ways to convert an IP see Part 2  and for tools see Part 3.
Cheers,
Shiva

Data Profiling – An example in Talend Profiler

Data is loaded into a Data Warehouse (DW) from disparate systems, and sometimes from external partners who have their own coding standards.  In any case, the quality of the data loaded into the data warehouse is often variable, and especially while discovering the data one may need to load some sample data and do some analysis, including initial data profiling.  During this process one may find differences which, when resolved, result in a much smoother data flow along the process.   Or it may be at a later stage, say after summarization is completed, that one needs to analyze the type of data.  In all these cases data profiling helps, and Talend provides a tool (Talend Open Profiler – TOP) to perform profiling quickly and efficiently.

Data profiling – the process of examining available data in different data sources, including databases, applications, files, data transfers from external systems, etc., and collecting statistics and information about it – improves data quality and reporting.

In the date dimension, we have nearly 220,000 rows covering Jan. 01, 1900 to Dec. 31, 2500 (about six hundred years of dates), and one of the columns is ‘day_of_week_name’ (cardinality 7 – MONDAY, TUESDAY, …).  This table has 70 columns including date, week, month and name attributes. For testing purposes, I wanted to check the nulls and the pattern frequency (distribution) for the ‘day_of_week_name’ column.

To do so, select the column to profile (day_of_week_name) and drag and drop it into “Analyzed columns” on the “Analysis Settings” tab.  Then pick the indicators, i.e., how you want the column measured (count, range, stats, etc.); I picked row count and NULL count along with “Pattern Frequency Table”.  Pattern frequency counts the distinct patterns. The results in the “Analysis Results” tab show as below.

The row count was 219,146 with no NULLs, and the pattern frequency shows 31,307 rows of pattern AAAAAAAAA (nine uppercase letters, i.e. the 9A pattern), 31,307 of the 7A pattern, 62,613 of the 8A pattern and 93,919 of the 6A pattern.

The 9A pattern count is the ‘WEDNESDAY’ rows, and the 6A pattern covers ‘SUNDAY’, ‘MONDAY’ and ‘FRIDAY’.  Similarly for the other days.

You can also have your own UDIs – User Defined Indicators – that add functionality beyond the existing indicators.  You can build them as a Java jar and import them.  But any time you do processing in the profiler, the data needs to be transferred from the database, possibly slowing down the profiling.  For a smaller data set it may not be noticeable, but profiling large tables can bog down due to memory limitations and/or network delay.

The profiler converts your indicators to an appropriate query and runs it.  For example, for the above pattern match, the MySQL query looks like:

SELECT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(`day_of_week_name`,'a','a'),'b','a'),'c','a'),'d','a'),'e','a'),'f','a'),'g','a'),'h','a'),'i','a'),'j','a'),'k','a'),'l','a'),'m','a'),'n','a'),'o','a'),'p','a'),'q','a'),'r','a'),'s','a'),'t','a'),'u','a'),'v','a'),'w','a'),'x','a'),'y','a'),'z','a'),'ç','a'),'â','a'),'ê','a'),'î','a'),'ô','a'),'û','a'),'é','a'),'è','a'),'ù','a'),'ï','a'),'ö','a'),'ü','a'),'A','A'),'B','A'),'C','A'),'D','A'),'E','A'),'F','A'),'G','A'),'H','A'),'I','A'),'J','A'),'K','A'),'L','A'),'M','A'),'N','A'),'O','A'),'P','A'),'Q','A'),'R','A'),'S','A'),'T','A'),'U','A'),'V','A'),'W','A'),'X','A'),'Y','A'),'Z','A'),'Ç','A'),'Â','A'),'Ê','A'),'Î','A'),'Ô','A'),'Û','A'),'É','A'),'È','A'),'Ù','A'),'Ï','A'),'Ö','A'),'Ü','A'),'0','9'),'1','9'),'2','9'),'3','9'),'4','9'),'5','9'),'6','9'),'7','9'),'8','9'),'9','9'), COUNT(*) c 
FROM `dw_mktg`.`dim_date` t
GROUP BY REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(`day_of_week_name`,'a','a'),'b','a'),'c','a'),'d','a'),'e','a'),'f','a'),'g','a'),'h','a'),'i','a'),'j','a'),'k','a'),'l','a'),'m','a'),'n','a'),'o','a'),'p','a'),'q','a'),'r','a'),'s','a'),'t','a'),'u','a'),'v','a'),'w','a'),'x','a'),'y','a'),'z','a'),'ç','a'),'â','a'),'ê','a'),'î','a'),'ô','a'),'û','a'),'é','a'),'è','a'),'ù','a'),'ï','a'),'ö','a'),'ü','a'),'A','A'),'B','A'),'C','A'),'D','A'),'E','A'),'F','A'),'G','A'),'H','A'),'I','A'),'J','A'),'K','A'),'L','A'),'M','A'),'N','A'),'O','A'),'P','A'),'Q','A'),'R','A'),'S','A'),'T','A'),'U','A'),'V','A'),'W','A'),'X','A'),'Y','A'),'Z','A'),'Ç','A'),'Â','A'),'Ê','A'),'Î','A'),'Ô','A'),'Û','A'),'É','A'),'È','A'),'Ù','A'),'Ï','A'),'Ö','A'),'Ü','A'),'0','9'),'1','9'),'2','9'),'3','9'),'4','9'),'5','9'),'6','9'),'7','9'),'8','9'),'9','9')
ORDER BY c DESC LIMIT 10

In other DBMSs like Oracle, PostgreSQL and DB2 the generated query will be much smaller, since those systems provide a TRANSLATE function (see the sketch below).
The profiler provides many built-in indicators like Soundex Frequency, Mean, Median, Mode, Range, Quartile Range, etc.
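A rough PostgreSQL-style sketch of that pattern query using TRANSLATE (same dim_date table and day_of_week_name column as above; the accented characters handled by the generated MySQL query are omitted here):

SELECT TRANSLATE(day_of_week_name,
                 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789',
                 'aaaaaaaaaaaaaaaaaaaaaaaaaaAAAAAAAAAAAAAAAAAAAAAAAAAA9999999999') AS pattern,
       COUNT(*) AS c
FROM dim_date
GROUP BY 1
ORDER BY c DESC
LIMIT 10;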

HTH,
Shiva

Logging queries for performance monitoring

Before making any performance improvements, one needs to measure them, whether you are trying to improve computer applications or DW queries.  In one Data Warehouse subject area, where I run more than 120K (120,000) queries every day in an ETL environment, I log all queries to a table and track them over time to see which ones are showing performance issues. With the following table, I was at times able to improve performance by more than 50%.  For a query that runs repeatedly with different parameters in the WHERE clause, a small improvement adds up quickly; other times, due to data skew, you can clearly see what changes are needed to improve performance.   And when there are mistakes in indexes (the DBA dropped one 🙂) or an application user modified their query, etc., you will have a reference point to check against when a particular query or queries are slow.

Query Performance Monitoring table definition

Column          Null?   Type                  Comment
ID              No      int(11)               Auto Increment
RUN_NUMBER      No      tinyint               Useful when job is recovered or re-ran
APPLICATION     No      varchar(128)          Application name to track
DOMAIN          Yes     varchar(128)          Application domain or subject area
QUERY           No      varchar(4096)         Actual query
DURATION        No      decimal(10,5)         How long did query run?
ROW_COUNT       Yes     int                   Number of rows affected
RESULT          No      varchar(32)           Query result type – succeeded or failed?
COMMENT         Yes     varchar(128)          User comment that can be logged with each query
CREATED_DT      No      date                  Query run date
CREATED_DT_TM   No      timestamp/datetime    Query run date and time
CREATED_BY      Yes     varchar(64)           Query user name
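A minimal MySQL sketch of this table (query_performance_log is a made-up name; adjust types, sizes and indexes to your environment):

CREATE TABLE query_performance_log (
    ID              INT(11)       NOT NULL AUTO_INCREMENT,
    RUN_NUMBER      TINYINT       NOT NULL,
    APPLICATION     VARCHAR(128)  NOT NULL,
    DOMAIN          VARCHAR(128),
    QUERY           VARCHAR(4096) NOT NULL,
    DURATION        DECIMAL(10,5) NOT NULL,
    ROW_COUNT       INT,
    RESULT          VARCHAR(32)   NOT NULL,
    `COMMENT`       VARCHAR(128),              -- backticked since COMMENT is a MySQL keyword
    CREATED_DT      DATE          NOT NULL,
    CREATED_DT_TM   DATETIME      NOT NULL,
    CREATED_BY      VARCHAR(64),
    PRIMARY KEY (ID),
    KEY idx_created_dt (CREATED_DT)            -- supports the date-level analysis described below
) ENGINE=InnoDB;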

In the above table, one can use CREATED_DT_TM wherever CREATED_DT is needed for query performance analysis, but most of these analyses are done at the date level (as most DW ETL and summarization are daily processes). By having the date column and indexing it, the analysis queries are much faster since there is no need to apply a date function to each row.

More interesting analysis, including job failures, errors and result-set sizes (ROW_COUNT), can be done with this data. You can also analyze query performance for specific databases. For example, when the same query runs against different (MySQL) databases (say, more than 100), some databases show performance getting worse due to table size or load, etc.
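For example, a quick look at the slowest recurring queries over the last week could be something along these lines (again against the hypothetical table above):

-- Top 25 recurring queries by average duration over the last 7 days
SELECT APPLICATION,
       QUERY,
       COUNT(*)      AS runs,
       AVG(DURATION) AS avg_duration_sec,
       MAX(DURATION) AS max_duration_sec
FROM query_performance_log
WHERE CREATED_DT >= CURRENT_DATE - INTERVAL 7 DAY
GROUP BY APPLICATION, QUERY
ORDER BY avg_duration_sec DESC
LIMIT 25;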

Once you know that there is a query performance issue, you can take remedial action.  Let me know how you have implemented query performance monitoring.

Cheers,
Shiva