Monitoring MySQL and MariaDB Instances

Database performance monitoring and tuning is one of the biggest DBA responsibilities. The old maxim that if something can go wrong, it will, seems to be the general rule in the world of database administration. So DBAs need effective tools and procedures to identify performance problems, determine their root cause, and optimize and tune the database environment for efficiency.

But let’s back up a minute. What is database performance? If we are going to monitor it, we should know what it is. A good definition for database performance is the optimization of resource usage to increase throughput and minimize contention, enabling the largest possible workload to be processed. Of course, there are many aspects that must be monitored in order to achieve database performance.

Which brings us to Navicat Monitor, a new database performance monitoring solution on the market from Navicat (whose other offerings I have discussed on this blog before). Navicat Monitor is an agentless remote server monitoring tool that currently can be used to monitor MySQL, MariaDB and Percona Server (with Microsoft SQL Server support on the horizon).

The server-based software runs on Windows, Mac or Linux and it can be accessed from anywhere via a web browser. Navicat Monitor can be installed on any local computer or virtual machine and does not require any software installation on the servers being monitored. Furthermore, access is supported on all of the most popular web browsers including Firefox, Chrome, Internet Explorer 11 or later, Microsoft Edge 39 or later and Safari 9.1.3 or later. Because the tool is accessed on the web it is easy to keep track of your servers around the world, any time and any place you have a web connection.

But what can Navicat Monitor do? Navicat Monitor collects pertinent database performance metrics such as CPU load, RAM usage, and many other metrics for each database instance being monitored. The product enables DBAs to quickly view all registered database instances and availability groups on a single web-based console. This makes it easy to monitor your entire MySQL/MariaDB database infrastructure from one place, regardless of the locations of the servers.

The interactive dashboard assists the DBA to quickly see how all the database instances are running. It shows all of your monitored instances with high-level summary information and the health status of your instances, using the familiar green/yellow/red indicators to depict the performance, availability and health of each instance. See Figure 1.


Figure 1. Navicat Monitor dashboard.

Modern DBAs are required to manage many database instances across many servers, both on premises and in the cloud. Without a tool like Navicat Monitor that provides access to all of these database instances through a consolidated interface, assuring performance across hundreds of servers can be difficult, if not impossible, to achieve.

Drilling down into an instance, Navicat Monitor presents you with a slew of useful details, starting with the top 5 databases and top 5 tables based on size. See Figure 2.


Figure 2. Instance Details

As you would expect from a performance monitor, you get a comprehensive view of each instance, starting with whether the system is up or down, and moving on to more detailed information such as CPU, memory and swap usage. See Figure 3.


Figure 3. More In-depth Instance Details

With Navicat Monitor, your DBAs can perform real-time analysis of your MySQL and MariaDB database servers, but you can also examine historical metrics over time. For example, Figure 4 shows CPU usage over the course of a day for a specific MySQL instance. Navicat Monitor maintains a repository to store alerts and metrics for historical analysis. The repository database can be an existing MySQL, MariaDB, PostgreSQL, or Amazon RDS instance. Using the historical data you can examine metrics and diagnose whether performance trends are deviating from expectations.
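
Under the covers, an agentless monitor of this sort typically just polls the server's status counters over an ordinary client connection and writes each sample to its repository. Here is a minimal, hedged sketch of the kind of SQL such a tool might issue against MySQL 5.7 or later (this is not Navicat's actual implementation, and the metric_history repository table is hypothetical):

   -- Sample a few global status counters (MySQL 5.7 and later expose these
   -- in performance_schema; older releases use SHOW GLOBAL STATUS instead).
   SELECT VARIABLE_NAME, VARIABLE_VALUE
   FROM   performance_schema.global_status
   WHERE  VARIABLE_NAME IN ('Threads_connected', 'Questions',
                            'Innodb_buffer_pool_reads');

   -- A monitoring repository simply stores each sample with a timestamp so it
   -- can be charted later (metric_history and its columns are hypothetical).
   INSERT INTO metric_history (instance_id, metric_name, metric_value, collected_at)
   VALUES (42, 'Threads_connected', 87, NOW());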


Figure 4. CPU Usage History

Alerting is also an integral capability of Navicat Monitor. DBAs can set powerful rules and get notifications via email, SMS, or SNMP. The product comes preconfigured with more than 40 fully-customizable alert policies. These rules can be customized to track the metrics and thresholds most important to your environment. Navicat Monitor alerts are arranged by common themes and managed by advanced filters. DBAs can see the frequency with which each alert is triggered along with what action, if any, is needed to resolve the issue. You can even see who responded to each alert in the past. Figure 5 shows that you can view the Alert history over time for each instance.


Figure 5. Alert History

Of course, identifying performance issues is only part of the battle, but Navicat Monitor provides further assistance with advanced root cause analysis. You can drill down to uncover more in-depth information when an issue is found. Navicat Monitor includes a rich set of real-time and historical graphs that allow you to drill down into server statistic details. It gives you a detailed view of each server's load and performance regarding its availability, disk usage, network I/O, table locks and more, which allows you to easily track deviations and traffic among servers, as well as examine possible solutions and adjust your server settings.

You can even drill down to the query level, monitoring your queries in real time with Navicat Monitor's Query Analyzer feature. Query Analyzer displays summary information for all of your running queries, enabling you to discover problematic queries. For example, it can quickly show you the top resource-consuming queries with cumulative execution time or the slowest queries with unacceptable response times. Query Analyzer can even help you to detect locking issues that are causing contention, such as deadlocks when two or more queries are blocking each other.
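
To get a feel for what a query analyzer surfaces, note that MySQL 5.6 and later aggregate per-statement statistics in performance_schema. The following is a hedged sketch of finding the top resource-consuming statement digests by hand; a tool like Query Analyzer packages this kind of analysis for you:

   -- Top five statement digests by cumulative execution time.
   -- performance_schema timers are in picoseconds, hence the conversion.
   SELECT DIGEST_TEXT,
          COUNT_STAR                AS exec_count,
          SUM_TIMER_WAIT / 1e12     AS total_secs,
          AVG_TIMER_WAIT / 1e12     AS avg_secs
   FROM   performance_schema.events_statements_summary_by_digest
   ORDER  BY SUM_TIMER_WAIT DESC
   LIMIT  5;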

If you are tasked with managing the performance of MySQL and MariaDB take a look at Navicat Monitor. It may be able to alleviate the strain of managing multiple instances.

Note: Currently Navicat Monitor can be used to monitor MySQL 5.1.73 or later and MariaDB 10.0 or later.

Posted in MySQL, performance | Leave a comment

Everybody Lies!

I was walking through the airport in Las Vegas after the IBM Think conference this year when I saw a book with the intriguing title of: Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are by Seth Stephens-Davidowitz.

Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are by Seth Stephens-Davidowitz

This book is chock full of interesting observations and studies on human behavior, most of it culled from the massive search history data sets compiled by Google. As a data guy, I am always interested in learning more about what the data actually tells us, instead of what we think is true. And boy, does this book deliver.

If you are at all curious about what big data and analytics can tell us about human behavior, this book offers up study after study that sheds light on that. But make sure you have a strong stomach because a lot of what it has to say is not flattering. The key idea behind the book is right there in the title – everybody lies. But not, perhaps, when they are searching the internet. Why would you lie to Google when you are looking for something? In the privacy of our homes, as we seek out nuggets of data that interest us, we are more honest than at any other time. There is no one to impress, no one to worry about seeing what we are looking for… and that leads to many enlightening, and frankly, disturbing discoveries.

I won’t spoil any of the studies and conclusions that the book reaches, but suffice it to say, you’ll learn things about humans that might explain some things going on around us these days.

The book is an easy read and I suggest it to anybody interested in big data, analytics, and social media. If you are a fan of Freakonomics or any of Nate Silver's books (such as The Signal and the Noise), then you owe it to yourself to read Everybody Lies… I'm sure it will entertain and enlighten you.

Posted in analytics, Big Data, book review, information | Leave a comment

SQL Injection Still Causing Trouble

An on-going and important aspect of managing database security is designing your applications to avoid SQL injection attacks. SQL injection is a form of web hacking whereby SQL statements are specified in the fields of a web form to cause a poorly designed web application to dump database content to the attacker.

This type of attack has been known for years now, but still there are new stories where SQL injection was used for nefarious purposes. SQL injection played a role in a hacking incident during the 2016 US presidential election, TalkTalk — a UK-based telecoms company — suffered a data breach in 2015 due to SQL injection, and the hardware manufacturer Archos suffered a SQL injection attack late in 2014.

And remember the Heartland Payment Systems breach from 2009? That SQL injection attack cost $300 million and the hackers that pulled it off were recently sent to federal prison (February 2018).

And these are just a few of the very public instances. The State of Software Security 2017 Report indicates that SQL injection attacks have been steady for years. The percentage of SQL injection attacks ranged between 28% and 32% from 2011 through 2017. And the reality of it all is this: SQL injection attacks work only because of poor coding practices!

What is SQL Injection?

In order for SQL injection to succeed, the application code used by the website must be vulnerable to an injection attack. SQL injection relies upon programs that do not adequately filter for string literal escape characters embedded in SQL statements or where user input is not strongly typed. So instead of inputting data into a form, SQL statements are supplied. The SQL is "injected" from the web form into the database, where it is executed and can access (or even modify) unintended data.

Perhaps it is easiest to comprehend SQL injection by example. Consider a web-based application using dynamic SQL. The website requires users to login with their e-mail address and a password. Almost all sites of this type also offer an option to retrieve your password by supplying your e-mail address. Perhaps the SQL looks something like this:

SELECT userid, password
FROM   uid_pwd_table
WHERE  field = '$EMAIL';

The variable $EMAIL represents the input from the form on the website. A savvy hacker can attempt a SQL injection attack by entering:

   anything' OR '1'='1

If the application does not check the input properly the injection causes the SQL to now look like this:

   SELECT userid, password
   FROM   uid_pwd_table
   WHERE  field = 'anything' OR '1'='1';

Executing this statement causes a complete dump of every userid and password in the database because the OR '1'='1' component will always evaluate to TRUE. It does not matter what the first part of the injection was; it could be anything, because the second part of the injection gives the hacker everything in the table.

Another form of SQL injection relies upon improper typing, for example not checking whether data that should be numeric is actually numeric. Consider, for example:

   statement := "SELECT * FROM userinfo WHERE id = " + in_var + ";"

In this case, the SQL is being built into the statement variable; in_var is the variable used to supply the input. Let’s assume that the id column is numeric. However, if the program does not check the data type of the in_var variable to ensure that numeric data is supplied, SQL injection can occur. For example, instead of just supplying a numeric value, the hacker can supply something like this:

    4;DROP TABLE customer

If this SQL statement is executed the customer table (if one exists) will be dropped from the database.

SQL Injection Prevention

Using well-designed query language interpreters and coding applications appropriately can prevent SQL injection attacks. When possible use static SQL instead of dynamic SQL to improve the security of your database applications and data. Static SQL is hard-coded into the application and cannot be changed at runtime. Dynamic SQL is flexible and can change at runtime. When the SQL can change at runtime, a sufficiently motivated and skilled hacker can potentially change the SQL or deploy a SQL injection attack to gain access to unauthorized data.

Static SQL is common in mainframe Db2 applications, but not so much for other platforms and database systems. The Db2 bind command “hardens” the SQL and optimizes access to the data.

Always validate user input by testing type, length, format, and range. The program should make absolutely no assumptions about the data that is received. Test the size and data type of input and enforce appropriate limits. Doing so can help to prevent buffer overruns. Test the content of string variables and allow only expected values to be processed. Any input that contains binary data, escape sequences, or comment characters should be summarily rejected.

Avoid concatenating user input that has not been validated. String concatenation is the primary point of entry for SQL injection attacks. Furthermore, consider using stored procedures to validate user input.
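
The most reliable complement to validation is to never splice user input into the SQL text at all: pass it as a parameter so the database treats it strictly as data. As a minimal sketch, MySQL's server-side prepared statements illustrate the idea using the table from the earlier example (most host-language drivers expose the same placeholder mechanism):

   -- The ? placeholder is bound at execution time, so input such as
   -- anything' OR '1'='1 is matched as a literal string, never parsed as SQL.
   PREPARE get_pwd FROM
     'SELECT userid, password FROM uid_pwd_table WHERE field = ?';

   SET @email = 'user@example.com';   -- whatever the web form supplied
   EXECUTE get_pwd USING @email;
   DEALLOCATE PREPARE get_pwd;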

Analyze input and reject anything that contains special characters such as the semi-colon (;), the string delimiter ('), comment delimiters (--, /*…*/), V$ (the beginning of Oracle DBA views), and xp_ (the beginning of SQL Server catalog stored procedures).

With foreknowledge of SQL injection techniques and proper development procedures, all SQL injection attacks can be prevented.

Summary

Understanding SQL injection techniques and coding your database applications appropriately is an important aspect of modern database security.

Posted in data breach, Database security, DBA, SQL | Leave a comment

Minimizing Database Outages with Replication

A primary goal of most organizations is to minimize downtime and improve the availability of their IT systems and applications. According to a recent study on achieving 2018 database goals by Unisphere Research, “Enterprises seek to base most, if not all, decision making on insights drawn from data, which means there is no margin for error if the flow of that data is ever disrupted.” This means that DBAs are charged with keeping databases up and running to meet the business need for non-stop access to data.

If the DBMS is down, data cannot be accessed. If the data is not available, applications cannot run. And if your applications cannot run, your company is not making crucial decisions, or worse yet, making decisions without the data to back them up. At any rate, the Unisphere study indicates that customer service will suffer the most when a database goes down (cited by 74% of data managers).

All of these things can translate to losing business, which means lower earnings and perhaps even a lower stock valuation for your company. These are all detrimental to the business and therefore, the DBA is called upon to do everything in his or her power to ensure that databases are kept on line and operational. One respondent to the Unisphere study said: "Management already considers databases to be an integral part of the infrastructure that is always up. Any downtime reflects poorly on the DBA team and time (however much) must be put in to restore the systems until they are up again." Does that sound familiar?

Assuring availability becomes that much more difficult as organizations expand the amount of data that is generated, captured, and managed. Although putting an exact figure on how fast data is growing can be difficult, there are surveys that show the growth of data on premises to range between 10% and 50% for most organizations. And that does not even take into account cloud data, which is growing rapidly and still must be managed.

Further complicating the issue is that fewer DBAs are being asked to manage more data. Although data continues to expand that is not translating into additional DBAs being hired. According to research firm Computer Economics, data management staff as a percentage of IT staff has risen only .5% in four years, and IT spending per user continues to decline.

Nevertheless, DBAs are required to provide uninterrupted availability for many, probably most of their systems. There are many tactics at the DBA’s disposal to improve data availability and performance. One of the most powerful tools for supporting availability is data replication.

When data is replicated, it is stored in more than one site or node, thereby improving the availability of the data. With full replication, where a copy of the whole database is stored at every site, rapid failover from a failing system to an operational one can be achieved to avoid outages. You can also implement partial replication, whereby some fragments of the database are replicated and others are not.
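
To make that concrete for the MySQL and MariaDB systems this blog often covers, standing up a basic asynchronous replica mostly amounts to pointing it at the primary's binary log. Here is a hedged sketch using the classic (pre-MySQL 8.0.22) syntax; the host name, credentials, and log coordinates are placeholders, and the primary must already have binary logging enabled and a user granted the REPLICATION SLAVE privilege:

   -- On the replica: point it at the primary's binary log position.
   CHANGE MASTER TO
     MASTER_HOST     = 'primary.example.com',
     MASTER_USER     = 'repl',
     MASTER_PASSWORD = '********',
     MASTER_LOG_FILE = 'binlog.000042',
     MASTER_LOG_POS  = 4;

   START SLAVE;

   -- Confirm both replication threads are running and check replication lag.
   SHOW SLAVE STATUS;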

Replication improves the reliability of your applications because your data will be replicated across multiple machines, instead of on a single machine (aka single point of failure). With replicated data you may be able to achieve improved performance with parallel access. And by locating replicas closer to where transactions are executing, data replication may be able to decrease the need for data movement.

Nevertheless, implementing data replication is not free. Concurrency control and recovery techniques tend to be more complicated and therefore more expensive. And in some cases you may need to buy additional hardware to support it properly. That said, replication can be cost effective because it is much more likely that the cost of an outage on mission critical data and applications will dwarf the cost of data replication.

Of course, data replication is not a panacea. You cannot stop taking backups or cease performance management efforts by replicating data. For example, consider a human error that pushes the wrong data to the database. The replication system will dutifully replicate the wrong data out to all replicated nodes. So you can’t rely on replicated data to recover from such a scenario. I mention this not because it is a fault of data replication, but because some folks wrongly think that replication solves recovery issues.

The Bottom Line

So the bottom line here is that organizations are adding more data and relying on that data to be available around-the-clock. DBAs are charged with assuring the availability of database data and cannot be caught having a hardware error cause a prolonged outage. Replication, therefore, is a useful technology for ensuring that downtime is for vacations, not databases!

The days are gone when organizations enjoyed a long batch window where databases could be offline for extended periods to perform nightly processing and maintenance tasks. Replication and intelligent failover is the modern way to eliminate unplanned downtime.

Posted in availability, data integration, DBA | 1 Comment

Supporting Multiple DBMS Products with Navicat Premium

DBAs are tasked with various and disparate responsibilities that consume their day. I've written about DBA roles and responsibilities (What Does a DBA Do?) here on the blog before, so if you need a refresher, check out the link and meet us back here to hear about automating and simplifying database administration.

With such a varied list of duties, automation of DBA tasks makes sense. Add to the mix that many DBAs are required to support multiple DBMS products and the job becomes more complicated. As such, most DBAs develop scripts and jobs to automate daily tasks. But in-house developed scripts require maintenance and upkeep to stay current. And some tasks are quite complicated, requiring a lot of effort to develop and maintain. For this reason, and many others, many organizations rely on DBA tools that are developed, maintained and supported by vendors dedicated to keeping the software current and efficient.

One such tool worth considering for automating and simplifying a myriad of database administration tasks is Navicat Premium from PremiumSoft CyberTech Ltd. The tool is available for Windows, macOS and Linux.

Navicat Premium provides support for simplifying a plethora of DBA tasks from database design through development and implementation. And it supports a wide range of different database management systems, including MySQL, MariaDB, Oracle Database, PostgreSQL, Microsoft SQL Server, SQLite and multiple cloud offerings (including Amazon, Oracle, Microsoft, Google and Alibaba). You will need to verify specific features, though, as supported features and platforms vary a bit by database system and operating system.

Perhaps one of the best things that Navicat Premium brings to the table is an easy-to-use GUI that enables users to connect simultaneously to MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. If you are a DBA that has to manage multiple different DBMS products and installations then you know just how handy that can be. Unfortunately, Navicat does not offer support for IBM’s Db2, so keep that in mind if your organization uses Db2.

More importantly, however, is what you can do using Navicat. Let's start at the beginning, with database design. Using the Navicat Premium intelligent database designer, DBAs can create, modify and manage all database objects with professional built-in guidance. The tool provides physical database modeling capabilities complete with forward and reverse engineering capabilities. That means you can go from model to DDL or from a live database instance to a model. Navicat Premium's graphical database design and modeling tool lets DBAs model, create, and understand complex databases with ease. Automatic layout capabilities make it easy to create and print readable and useful data models, including to PDF files.

And Navicat Premium makes populating and moving data to and from your database a snap. Data can be imported and exported to and from most of the formats DBAs use regularly, including TXT, CSV, XML, JSON, DBF, and ODBC. A data viewer and editor make it possible to view and edit the data in your databases using a grid view or a form, with the ability to filter the data and to find/replace values as needed. And you can even navigate and select data based on your referential constraints.

It is important to note that Navicat Premium adds intelligence to data movement, as DBAs can compare and synchronize data between databases and schemas with a detailed analytical process. This means that DBAs can quickly and easily compare the structure of two database instances to identify differences and generate the needed DDL to make them identical. But it is not just for database structures: Navicat Premium also lets the DBA compare the data, so that you can reliably ensure that two tables that are supposed to have the same data actually do have the same data!
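
For a sense of what a data comparison boils down to under the covers, here is a hedged, hand-rolled sketch that finds rows present in a source table but missing or stale in its supposed twin (the table and column names are made up for the example); a tool like Navicat Premium automates this analysis and generates the synchronization statements for you:

   -- Rows in the source that are missing from, or out of date in, the target.
   SELECT s.*
   FROM   customer_src AS s
          LEFT JOIN customer_tgt AS t
                 ON t.customer_id = s.customer_id
   WHERE  t.customer_id IS NULL             -- missing on the target
      OR  t.last_updated <> s.last_updated; -- present but stale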

Furthermore, Navicat Premium facilitates database backup and restore to support your availability and recovery requirements. Taken together, that means DBAs can use Navicat Premium to design, populate, synchronize, and protect their databases.

Many organizations are adopting cloud databases these days and Navicat Premium can be used to manage cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, SQL Azure, Oracle Cloud, Google Cloud and Alibaba Cloud. Additionally, Navicat Cloud adds collaboration and synchronization features, including the ability to create projects and add members to collaborate, and to synchronize connections, queries, models and virtual groups.

Using Navicat Premium, DBAs can set up automated tasks as needed, including scheduling backups, query execution, and data movement tasks like import, export, transfer and synchronization. There are numerous reports that can be generated by Navicat Premium and DBAs can schedule report printing as needed. And notification emails can be sent automatically as part of the automation.

Application and SQL development tasks are also available using Navicat Premium. With the SQL editor and visual SQL builder you can create and modify efficient, error-free SQL statements to query and modify data across your databases. The SQL console provides an integrated method for testing and executing SQL statements. And Navicat even provides debugging capabilities for PL/SQL and PL/PGSQL in stored procedures.

If you are looking for a tool that can assist your DBAs in supporting a complex, multi-DBMS environment, Navicat Premium is worth considering. It delivers an intuitive and well-designed interface that can simplify your database administration and development efforts.

Posted in automation, backup & recovery, database design, DBA | 1 Comment

On the High Cost of Enterprise Software

Enterprise Software Should Not Cost an Arm and a Leg

I believe that enterprise software is too costly and, indeed, it would seem that many others agree with me or we wouldn’t see the explosion of open source software being adopted industry-wide these days. Now don’t get me wrong, I am not saying that open source software is free to run – of course there are maintenance and management costs. But if commercial enterprise software provided needed functionality and was reasonably-priced then we wouldn’t really need open source, now would we?

Before the open source zealots start jumping down my throat, yes, I acknowledge that open source is more than just cost. The creativity and group-development aspects also led to its strong adoption rate… but I am getting off topic… so let’s go back to the high cost of enterprise software – that is, the software that runs the computing infrastructure of medium to large businesses.

It is not uncommon for companies to spend multiple millions of dollars on licenses and support contracts for enterprise software packages. This comprises not only operating systems, but database systems, business intelligence and analytics, transaction processing systems, web servers, portals, system management tools, ERP systems, and so on.

Yes, there is intrinsic value in enterprise software. Properly utilized and deployed it can help to better run your business, deliver value, and sometimes even offer competitive advantage. But what is a fair value for enterprise software?

That is a more difficult question to answer.

But let's look at something simple, like a performance monitor. Nice software, helps you find problems, probably costs anywhere from tens of thousands of dollars to over a million depending on the size of the machines you are running it on. Sometimes the software used to monitor is more expensive than what it is being used to monitor! Why does it cost that much? Well, because companies have been willing to pay that much. Not because the software has to cost that much to develop. I mean, how many lines of code are in that monitor? Probably fewer than in Microsoft Excel, and I can get that for a hundred bucks or so. And I can almost guarantee that Excel has a larger development and support team than whatever monitor you choose to mention.

So the pricing is skewed not based on what it costs to develop, but what the market will bear. That is fine, after all we live in a free market economy (depending on where you live, I guess). But I don’t believe that the free market will continue to support such expensive software. And the open source movement is kind of bearing that out. Nevertheless, there are still companies that prefer to purchase commercial software rather than to rely on open source software, at least for some things.

As I think about enterprise software a bit further… In many cases, enterprise software vendors have migrated away from selling new software licenses to selling maintenance and support. For some companies, more than half of their revenue comes from maintenance and support instead of selling new software. This is especially true for some mainframe software vendors.

Viewed another way, you could be excused for thinking that some of these companies are doing little more than asking their customers to pay for the continued right to use the software, because there is little maintenance going on. Sounds like a nice little racket… you know what I'm talking about? So you pay several million for the software and then hundreds of thousands, maybe millions more, for the continued right to use it and get fixes.

Another problem with enterprise software is feature bloat. Enterprise software can be so expensive because vendors want to price it as if all of its features will be used by the enterprise. But usually only a few features are needed and used on a regular basis. Part of the problem, though, is that those few features can be (and usually are) different for each organization. One way vendors deal with this is to offer many separately-priced features enabled by key, but that is complicated for the user (as well as the software vendor).

So what is the answer? Gee, I wish I knew… if you have any ideas, please share them in the comments section… I’ve got some ideas and thoughts and perhaps I’ll share them with you all in a later blog post. But I think I’ve babbled on enough for today…

Posted in enterprise computing | 1 Comment

Implementing a Data Governance Initiative?

I recently received the following question and I think it is interesting enough to blog about.

Q: My company is looking to implement a data governance initiative. We want to be careful to avoid merely employing a series of extemporized data quality projects and are taking great pains to involve executive management, so we have formed a structure that segregates the responsibilities and activities into layers: a strategic level, tactical level, and execution level. What specific responsibilities should be assigned to each level?

Here is my answer:

The strategic level needs to involve upper level management and high-level technicians. Of course, this should include the CIO and CTO, but should include their lieutenants as well. Additionally, a senior data administrator or data architect should be included. A high level consultant could be used to help steer this committee and keep it on track. The strategists will map out the purpose and intent of the data governance initiative. Why is it needed? What problems will it solve? What will its impact be on the business?

The tactical level needs to involve the folks most literate on data and database systems. This should include data architects, database administrators, and technical end users; perhaps senior programmer/analysts, as well. Consultants may be needed to help flesh out the needed tasks and requirements. These folks will outline the necessary components of the initiative to meet the strategy as outlined by the executive committee. Budgeting goals will need to be set as guided by the executive committee, and streamlining or adjusting the tactics may need to occur to stay within the budget guidelines as this group works on its mission.

The execution level needs to be staffed with the appropriate folks who can actually implement the tactics outlined by the tactical committee. This will likely include members of the tactical committee, as well as more junior DBA, DA, and programming staff.

Finally, I would suggest that you should engage the services of a skilled consultant in the area of data governance for advice on setting up your organization. I can recommend Bob Seiner, who is quite knowledgeable on the topic of data governance, as well as his book: Non-Invasive Data Governance: The Path of Least Resistance and Greatest Success by Robert S. Seiner (2014-08-22), which describes a way to introduce data governance without adopting onerous, intrusive processes.

Posted in data governance | Leave a comment

A High-Level Guide to SQL Tuning

SQL tuning is a complicated task and to cover it adequately requires a full-length book of its own – actually, perhaps several if you use multiple DBMS products. That said, there are some good high-level SQL tuning suggestions that should apply regardless of the DBMS you are using. Well, as long as it supports SQL!

Here are some helpful rules of thumb:

  • Create indexes to support troublesome queries.
  • Whenever possible, do not perform arithmetic in SQL predicates. Use the host programming language (Java, COBOL, C, etc.) to perform arithmetic (see the sketch after this list).
  • Use SQL functions to reduce programming effort.
  • Look for ways to perform as much work as possible using only SQL; optimized SQL typically outperforms host language application code.
  • Build proper constraints into the database to minimize coding edit checks.
  • Do not forget about the “hidden” impact of triggers. A DELETE from one table may trigger many more operations. Although you may think the problem is a poorly performing DELETE, the trigger is really the culprit.
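
As a small illustration of the first two bullets, consider a predicate that wraps a column in arithmetic. Rewriting it so the column stands alone (ideally with the host program supplying the precomputed value) lets the optimizer use an index; the table, column, and index names below are made up for the example:

   -- Original: arithmetic on the column defeats a straightforward index lookup.
   SELECT emp_id, last_name
   FROM   employee
   WHERE  salary * 1.1 > 50000;

   -- Rewritten: keep the column bare; the host program can precompute the
   -- comparison value (50000 / 1.1) and pass it in as a parameter.
   SELECT emp_id, last_name
   FROM   employee
   WHERE  salary > 45454.55;

   -- And back the troublesome query with a supporting index.
   CREATE INDEX ix_employee_salary ON employee (salary);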

Furthermore, a large part of the task of tuning SQL is identifying the offending code. A SQL performance monitor is the best approach to identifying poorly performing statements. Such a tool constantly monitors the DBMS environment and reports on the resources consumed by SQL statements.

Some DBMSs provide rudimentary bundled support for SQL monitoring, but many third-party tools are available. These tools provide in-depth features such as the ability to identify the worst-performing SQL without the overhead of system traces, integration to SQL coding and tuning tools, and graphical performance charts and triggers. If you find yourself constantly being bombarded with poor performance problems, a SQL monitor can pay for itself rather quickly.
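
Taking MySQL as one example of that rudimentary bundled support, the slow query log can be switched on dynamically. A hedged sketch follows; the threshold and file path are illustrative, not recommendations:

   -- Log any statement running longer than one second to the slow query log.
   SET GLOBAL slow_query_log      = 'ON';
   SET GLOBAL long_query_time     = 1;
   SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';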

At a high-level then, the guidance I want to provide is as follows:

  1. Ingrain the basics into your development environment. Make sure that not just the DBAs, but also the application developers understand the high-level advice in the bulleted list above.
  2. Build applications with database performance in mind from the beginning.
  3. Make sure that you have an easy, automated way of finding and fixing poorly performing SQL code.

Sound simple? Maybe it is, but many organizations fail at these three simple things…

Posted in performance, SQL | 1 Comment

Data Technology Today in 2017

Here we are in the first week of 2018 so it is time, once again, to look at what happened this past year in the blog. First of all, there were 16 new blog posts this year so I averaged a little more than one a month. That is less than what I would like to accomplish, but also a rate I can live with. I just want to make sure that I have enough new content here to keep you guys interested!

And it seems like there is continued interest. The graphic below shows 2017’s blog activity. There were over 46 thousand page views by over 36 thousand visitors.


The most popular post this year on the blog was one I posted a few years ago titled: An Introduction to Database Design: From Logical to Physical. There were 8,233 views of this particular post. The second most popular was a post on backup and recovery from 2014 that received about half as many views.

Who are you, out there, reading this blog? Well, I know where most of you live! Almost 20 thousand of you from my home country, the United States. But I also have a lot of great readers in India, as well as many others across the world, as can be seen here…


And how did most people find the blog? Unsurprisingly, it was by using a search engine, with terms like ‘types of database design’… ‘logical design to physical implementation’… ‘production data’… and many other data-related search terms. Twitter was the second most popular way for readers to find the blog, followed by my web site, LinkedIn, and Planet Db2.

So to end this brief synopsis of 2017, thank you to all of my regular readers – please keep visiting and suggesting more topics for 2018 and beyond. And if this is your first visit to the blog, welcome. Take some time to view the historical content – there are several informative posts that are popular every year… and keep checking back for new content on data, database, and related topics!

Posted in DBA | Leave a comment