Managing MongoDB Databases with Navicat

I’ve written about Navicat tools for managing data in this blog before (performance monitoring, heterogeneous database administration), so I thought I’d take a look at their most recent offering, which provides many useful DBA features for MongoDB.

MongoDB is a NoSQL, open-source, cross-platform, document-oriented database management system. MongoDB stores JSON-like documents with flexible schemas. Use cases for which MongoDB excels include web commerce applications, content management, blogs, real-time analytics, and social networking. It is not particularly well-suited for systems with high transaction rates.
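Because documents are JSON-like, two documents in the same collection need not share the same fields. A hypothetical customer document (all field names invented for illustration) might look something like this:

```json
{
  "_id": "5b8d0d55f",
  "customer": "Acme Corp",
  "orders": [
    { "sku": "A-100", "qty": 3 },
    { "sku": "B-220", "qty": 1 }
  ],
  "loyalty_tier": "gold"
}
```

Note the nested array of orders; in a relational design this would typically be a separate child table.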

But I don’t really want to discuss MongoDB in-depth here. As a proponent of performing database administration as a management discipline though, the world of NoSQL database systems lacks the in-depth management and administration tooling enjoyed by relational database systems. That has to change, and Navicat has obviously recognized this fact, with its new Navicat for MongoDB offering.

Navicat for MongoDB delivers a GUI for MongoDB database management, administration, and development (see Figure 1). You can use it to connect to local and remote MongoDB servers, and it is compatible with MongoDB Atlas.

Navicat for MongoDB

Figure 1. Navicat for MongoDB GUI – main screen

Navicat for MongoDB offers many useful features for DBAs to manage, monitor, query, and visualize their MongoDB data. It supports adding, modifying, and deleting documents using built-in editors including a tree view, JSON view, and the classic spreadsheet-like grid view.

One of the bigger headaches of using a new database technology like MongoDB can be moving data around. Navicat for MongoDB makes this easier with an Import Wizard that can be used to transfer data into and out of your MongoDB databases. It supports multiple, diverse formats like Excel, Access, CSV, and more. You can also ingest data from ODBC after setting up a data source connection. It provides strong data transfer and synchronization capabilities to enable the migration of your data: you can transfer data across databases, compare the data in your databases, and synchronize it.

Querying data in MongoDB is a snap with the Navicat Visual Query Builder. It can be used to create, edit and run queries without having to worry about syntax and proper usage of commands. Additional features, like Code Completion and customizable Code Snippet simplify your coding efforts by providing suggestions for keywords and eliminating repetitious coding.

For DBAs, Navicat for MongoDB provides an Intelligent Object Designer. It enables you to create, modify and manage all database objects using built-in professional advice and guidance. You can preview results on each step and debug the sampled data before running your jobs. And the Smart Schema Analyzer can be used to help you visually discover and explore your schema. With it, you can analyze your documents and display the structures within your collections, thereby making it easy to understand your data’s schema, find schema anomalies and inspect outliers.

Navicat for MongoDB even provides an intuitive Backup Utility that can be used to automate your backup process and reduce the potential for errors. You can set up a repeatable deployment process for job and script execution at a specific time or day.

Security is built into the product, too. It uses SSH tunneling and SSL to ensure every connection is secure, stable, and reliable. It also supports database server authentication methods such as Kerberos and X.509.

Finally, you can use Navicat for MongoDB in the cloud to enable collaboration with your co-workers. Share your connection settings, queries and virtual groups with your coworkers anytime and anywhere.

All-in-all, Navicat for MongoDB goes a long way toward making your MongoDB environment as manageable as your relational environments.

You can download and try Navicat for MongoDB here:

Posted in DBA | 1 Comment

My Computer Mug Collection, Part 4

Recently I tweeted a mug a day from my collection of coffee mugs given to me over the years by computer and software companies. Since I finished tweeting all of my mugs, I have been posting them here on my blog as well. So far I have published three previous posts showcasing my computer mug collection. Today, as promised, here are the remaining mugs I have yet to blog…

First up is a mug I forgot to include in Part 2 (mugs from computer conferences). This one is from SHARE, and I received it as a Best Session Award for my DB2 for z/OS Performance Roadmap presentation. It is clear glass, so it is a bit difficult to see:


Next up is a series of mugs from German software vendor Software Engineering. I think I was lucky enough to collect all of the mugs in the series:


And here is a mug from ComputerWorld with one of their IT cartoons on it. I sure hope the ESC key worked!


And this mug is back from the days when Oracle actually developed and marketed a version of their DBMS for MVS! Sure, you can run Oracle on a mainframe today, but it has to be in a Linux partition.


Here are several mugs from IBM in my collection. The first one says “Tame your Data Monster with IBM” – and that is a good overall summation of what I’ve done my entire career! Then there is the newest of these mugs, the IBM Champion mug. I use this one every day as a pen and pencil holder! And the last one is a joke, of sorts. Documentation and memos that are not meant to be shared are often marked “Internal Use Only,” as is this mug, probably referring to the coffee it will hold.

Next we have a mug from Memorex. Some of you might question whether it is actually a “computer” mug, but it is! This is from back in the day when Memorex was a big manufacturer of floppy disks.


Here is a nice little mug from Peppers & Rogers that I think I got when I took a product management class from them back in the mid-1990s:


And finally, here is a mug from Software Marketing Journal. I only subscribed to this magazine for a short time in the late 1990s when I was VP of marketing for Platinum’s database solutions… so I’m pretty sure that it is from that timeframe:


And that concludes my cavalcade of computer mugs… I think. There may be another mug or two hiding around here somewhere… if I discover any more I’ll be sure to share them with you.

So what next? I have an extensive button/pin collection from various computer companies and conferences. Anybody want me to start sharing those? Or have you had enough?

Posted in DBA | Leave a comment

My Computer Mug Collection, Part 3

So far I have published two posts showing the computer-related mugs in my collection.  Over several weeks I first tweeted a mug a day from my collection, and now I am blogging them for those that missed any of the tweets.

In part 1 of my computer mug collection I highlighted the mugs from companies where I worked; and in part 2 I showed the mugs I received from conferences and user groups. Today’s post I call the dearly-departed — mugs from companies that have been acquired or gone out of business.

First up, we have Goal Systems. This is not the current company named Goal Systems (transport industry software), but the company that sold the performance monitor Insight for Db2 before CA acquired them.


And here are two more companies that were bought by CA: Pansophic and Platinum (I just had to include a Platinum mug again).

Then we have BGS Systems, the maker of mainframe capacity planning and performance software that I believe was acquired first by Boole and Babbage, and then by BMC Software.


And here is a nice mug from Easel Corporation, which was a popular software development firm for GUI design. It was acquired by VMARK in the mid-1990s.


Here we have a mug from R&O, the original makers of the Rochade repository. They have since been acquired by Allen Systems Group.


Then we have this mug, from Cogito, the makers of EZ-Db2… they have since been acquired by Syncsort.


And then there is XA Systems, which was acquired by Compuware back in the early 1990s.


And finally, here is a mug from Sablesoft. They made the Compile/QMF product (which at some point was renamed Compile/QQF). Sablesoft was acquired by Platinum technology, inc., and then CA.


That concludes today’s post… but there are still a few more mugs I have yet to blog. Stay tuned for the final post in this series coming next week!


Posted in data, DB2, mugs | Leave a comment

My Computer Mug Collection, Part 2

Last week I posted part 1 of my computer mug collection and promised to continue posting the rest of the collection in later blog posts.

Well, here is the second in this series of posts. Today we will look at the mugs I received from user groups and industry conferences. And we’ll start with IDUG!

As you can see, I have several IDUG mugs, the newest of which is from this year. That would be the one with the orange lettering celebrating IDUG’s 30th anniversary.

Next we have a mug from the old IBM DB2 Tech Conference (yes, I know Db2 is now spelled with a lower-case “b” but it never was when the Tech Conference was held, so I used an upper case “B” there)! This one is from the 1992 event held in Miami Beach. I remember attending this conference quite well… better than I should given that it was over 25 years ago!


Then we have this nice mug from the Reno CMG conference in 2006. CMG always puts on a great annual conference for those of us who toil away in the realm of performance and capacity management.


And how many of you remember DCI (Digital Consulting, Inc.) and their series of conferences? They hosted quite a few, as this mug attests. I remember speaking at Database World in Chicago (but I can’t remember the year).


Finally, for today anyway, here is a mug I was given for speaking at the Heart of America Db2 User Group many years ago. Interestingly enough, I’ll be speaking there again this September (2018), so if you are in the Kansas City area, stop in and say “Hi!”


More mugs to come in the next installment…

Posted in IDUG, mugs | 1 Comment

My Computer Mug Collection, Part 1

Recently I have been tweeting a mug a day from my collection of coffee mugs given to me over the years by computer and software companies. I just tweeted the last one (I think), and since these posts seemed to be popular, I thought I’d post a blog entry with all of the mugs.

Places I Have Worked

First up are all of the mugs from the companies I have worked for, starting with my first consulting company in Pittsburgh, ASSET, Inc. A couple of friends and I started it and kept it going for a few years (hi Bernard and Cindy):


Next up is Platinum Technology, inc. I worked there for a number of years on two separate runs. It is probably the most fun I ever had working for a corporation. Here are the numerous mugs I have from Platinum:

After Platinum I worked at BMC Software for a while. Here are the mugs I accumulated while at BMC:

I also worked at NEON Enterprise Software for a spell:


And finally, I have my own consulting company, so I made myself a mug!

That is all of the mugs from the software companies I worked for. Over the course of the next few posts I will show the remaining mugs from other software and hardware companies, computer conferences, and other miscellaneous sources.

Hope you enjoy!

Posted in DB2, mugs, tools | 2 Comments

Don’t Forget the Free Space

When you design your databases you must take care to specify appropriate free space to allow for data growth.

Free space, sometimes called fill factor, is a DDL option that can be specified when creating or altering your table spaces and indexes. It is used to leave a portion of the database object empty and available to store newly added data. The specification of free space in a table space or index can reduce the frequency of reorganization, minimize contention, and improve the efficiency of inserting new data.

Each DBMS provides a method of specifying free space for a database object in the CREATE and ALTER statements. A typical parameter is PCTFREE, where the DBA specifies the percentage of each data page that should remain available for future inserts. Another possible parameter is FREEPAGE, where the DBA indicates how many pages of data are loaded before a completely empty page is made available.
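As a sketch, here is what such DDL might look like in a Db2-style dialect (object names are hypothetical, and exact syntax varies by DBMS and version, so check your product documentation):

```sql
-- Reserve 20% of each page for future inserts, and leave an empty
-- page after every 63 pages of loaded data.
CREATE TABLESPACE CUSTTS
  IN CUSTDB
  PCTFREE 20
  FREEPAGE 63;

-- A static code table needs no room to grow:
ALTER TABLESPACE CODETS
  PCTFREE 0
  FREEPAGE 0;
```

Note that an ALTER typically takes effect only when pages are next rebuilt, such as by a REORG or LOAD.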

Ensuring a proper amount of free space for each database object provides the following benefits:

  • Inserts are faster when free space is available.
  • As new rows are inserted, they can be properly clustered.
  • Variable-length rows and altered rows have room to expand, potentially reducing the number of relocated rows.
  • Having fewer rows on a page results in better concurrency because less data is unavailable to other users when a page is locked.

However, free space also has several disadvantages:

  • Disk storage requirements are greater.
  • Scans take longer.
  • Fewer rows on a page can necessitate more I/O operations to access the requested information.
  • Because the number of rows per page decreases, the efficiency of data caching can decrease because fewer rows are retrieved per I/O.
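To see why scans and caching suffer, consider a rough back-of-the-envelope calculation. The page and row sizes below are hypothetical, and real DBMSs add page-header and per-row overhead, so treat this as an approximation of the trend, not an exact formula:

```python
# Approximate how PCTFREE affects the number of pages (and hence the
# I/O operations a full scan needs) for a table of fixed-length rows.
PAGE_SIZE = 4096   # bytes per page (hypothetical)
ROW_SIZE = 200     # bytes per row (hypothetical)

def pages_needed(num_rows, pctfree):
    """Pages required to hold num_rows, reserving pctfree percent per page."""
    usable = PAGE_SIZE * (100 - pctfree) // 100
    rows_per_page = usable // ROW_SIZE
    return -(-num_rows // rows_per_page)  # ceiling division

print(pages_needed(100_000, 0))   # 5000 pages with no free space
print(pages_needed(100_000, 20))  # 6250 pages with 20% free space
```

In this sketch, 20% free space means a full scan touches 25% more pages, and each page read into the buffer pool carries fewer rows.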

The DBA should monitor free space and ensure that the appropriate amount is defined for each database object. The correct amount of free space must be based on:

  • The expected frequency of inserts and modifications
  • Amount of sequential versus random access
  • Impact of accessing unclustered data
  • Type of processing
  • Likelihood of row chaining, row migration, and page splits

Be careful, though, not to fall prey to a common database design mistake, namely specifying the same default amount of free space for every object. I’ve seen many database implementations where 20% (or 10%) free space has been specified for every object. This is surely not a good idea. Why?

Well, not every table grows at the same rate. A transaction history table will likely grow rapidly, but perhaps all new transactions are added to the end. Free space is not needed for such a table because you rarely, if ever, go back and edit old transactions. A customer table, hopefully, grows over time. Perhaps the data is clustered by customer name, in which case some free space will be helpful.

Finally, consider static tables: for example, a code table, or a table that contains the 50 state codes with the state names. This type of table rarely changes, so do not define any free space for it because it does not need room in which to expand.


Armed with appropriate knowledge of your applications and data you can define reasonable free space into your database objects to allow for data growth with minimal impact on performance. Be sure to understand how your DBMS enables free space and define it accordingly for your production databases.

Posted in database design, DBA, free space, performance | Leave a comment

Monitoring MySQL and MariaDB Instances

Database performance monitoring and tuning is one of the biggest DBA responsibilities. The old maxim that if something can go wrong, it will, seems to be the general rule in the world of database administration. So, DBAs need effective tools and procedures to identify performance problems, determine the root cause of the problems, and provide assistance to optimize and tune the database environment for efficiency.

But let’s back up a minute. What is database performance? If we are going to monitor it, we should know what it is. A good definition for database performance is the optimization of resource usage to increase throughput and minimize contention, enabling the largest possible workload to be processed. Of course, there are many aspects that must be monitored in order to achieve database performance.

Which brings us to Navicat Monitor, a new database performance monitoring solution on the market from Navicat (whose other offerings I have discussed on this blog before). Navicat Monitor is an agentless remote server monitoring tool that currently can be used to monitor MySQL, MariaDB and Percona Server (with Microsoft SQL Server support upcoming on the horizon).

The server-based software runs on Windows, Mac or Linux and it can be accessed from anywhere via a web browser. Navicat Monitor can be installed on any local computer or virtual machine and does not require any software installation on the servers being monitored. Furthermore, access is supported on all of the most popular web browsers including Firefox, Chrome, Internet Explorer 11 or later, Microsoft Edge 39 or later and Safari 9.1.3 or later. Because the tool is accessed on the web it is easy to keep track of your servers around the world, any time and any place you have a web connection.

But what can Navicat Monitor do? Navicat Monitor collects pertinent database performance metrics such as CPU load, RAM usage, and many other metrics for each database instance being monitored. The product enables DBAs to quickly view all registered database instances and availability groups on a single web-based console. This makes it easy to monitor your entire MySQL/MariaDB database infrastructure from one place, regardless of the locations of the servers.

The interactive dashboard assists the DBA to quickly see how all the database instances are running. It shows all of your monitored instances with high-level summary information and the healthy status of your instances, using the familiar green/yellow/red indicators to depict the performance, availability and health of each instance. See Figure 1.


Figure 1. Navicat Monitor dashboard.

Modern DBAs are required to manage many database instances across many servers, both on premises and in the cloud. Without a tool like Navicat Monitor to provide access to all of these database instances through a consolidated interface, assuring performance across hundreds of servers can be difficult, if not impossible.

Drilling down into an instance, Navicat Monitor presents you with a slew of useful details, starting with the top 5 databases and top 5 tables by size. See Figure 2.


Figure 2. Instance Details

As you would expect from a performance monitor, you get a comprehensive view of each instance, starting with whether the system is up or down, and moving on to more detailed information such as CPU, memory and swap usage. See Figure 3.


Figure 3. More In-depth Instance Details

With Navicat Monitor, your DBAs can perform real-time analysis of your MySQL and MariaDB database servers, but you can also examine historical metrics over time. For example, Figure 4 shows CPU usage over the course of a day for a specific MySQL instance. Navicat Monitor maintains a repository to store alerts and metrics for historical analysis. The repository database can be an existing MySQL, MariaDB, PostgreSQL, or Amazon RDS instance. Using the historical data you can examine metrics and diagnose whether performance trends are deviating from expectations.


Figure 4. CPU Usage History

Alerting is also an integral capability of Navicat Monitor. DBAs can set powerful rules and get notifications via email, SMS, or SNMP. The product comes preconfigured with more than 40 fully-customizable alert policies. These rules can be customized to track the metrics and thresholds most important to your environment. Navicat Monitor alerts are arranged by common themes and managed by advanced filters. DBAs can see the frequency with which each alert is triggered along with what action, if any, is needed to resolve the issue. You can even see who responded to each alert in the past. Figure 5 shows that you can view the alert history over time for each instance.


Figure 5. Alert History

Of course, identifying performance issues is only part of the battle, but Navicat Monitor provides further assistance with advanced root cause analysis. You can drill down to uncover more in-depth information when an issue is found. Navicat Monitor includes a rich set of real-time and historical graphs that allow you to drill down into server statistic details. It gives you a detailed view of each server load and performance regarding its availability, disk usage, network I/O, table locks and more, which allows you to easily track the deviations and traffic among servers, as well as examine possible solutions and adjust your server settings.

You can even drill down to the query level, monitoring your queries in real time with Navicat Monitor’s Query Analyzer feature. Query Analyzer displays summary information for all of your running queries, enabling you to discover problematic queries. For example, it can quickly show you the top resource-consuming queries by cumulative execution time or the slowest queries with unacceptable response times. Query Analyzer can even help you to detect locking issues that are causing contention, such as deadlocks when two or more queries are blocking each other.

If you are tasked with managing the performance of MySQL and MariaDB take a look at Navicat Monitor. It may be able to alleviate the strain of managing multiple instances.



Note: Currently Navicat Monitor can be used to monitor MySQL 5.1.73 or later and MariaDB 10.0 or later.

Posted in MySQL, performance | 3 Comments

Everybody Lies!

I was walking through the airport in Las Vegas after the IBM Think conference this year when I saw a book with the intriguing title of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are by Seth Stephens-Davidowitz.


This book is chock full of interesting observations and studies on human behavior, most of it culled from the massive search history data sets compiled by Google. As a data guy, I am always interested in learning more about what the data actually tells us, instead of what we think is true. And boy, does this book deliver.

If you are at all curious about what big data and analytics can tell us about human behavior, this book offers up study after study that sheds light on that. But make sure you have a strong stomach, because a lot of what it has to say is not flattering. The key idea behind the book is right there in the title – everybody lies. But not, perhaps, when they are searching the internet. Why would you lie to Google when you are looking for something? In the privacy of our homes, as we seek out nuggets of data that interest us, we are more honest than at any other time. There is no one to impress, no one to worry about seeing what we are looking for… and that leads to many enlightening and, frankly, disturbing discoveries.

I won’t spoil any of the studies and conclusions that the book reaches, but suffice it to say, you’ll learn things about humans that might explain some things going on around us these days.

The book is an easy read and I recommend it to anybody interested in big data, analytics, and social media. If you are a fan of Freakonomics or any of Nate Silver’s books (such as The Signal and the Noise), then you owe it to yourself to read Everybody Lies… I’m sure it will entertain and enlighten you.

Posted in analytics, Big Data, book review, information | Leave a comment

SQL Injection Still Causing Trouble

An on-going and important aspect of managing database security is designing your applications to avoid SQL injection attacks. SQL injection is a form of web hacking whereby SQL statements are specified in the fields of a web form to cause a poorly designed web application to dump database content to the attacker.

This type of attack has been known for years now, but new stories still emerge where SQL injection was used for nefarious purposes. SQL injection played a role in a hacking incident during the 2016 US presidential election; TalkTalk, a UK-based telecoms company, suffered a data breach in 2015 due to SQL injection; and the hardware manufacturer Archos suffered a SQL injection attack late in 2014.

And remember the Heartland Payment Systems breach from 2009? That SQL injection attack cost $300 million and the hackers that pulled it off were recently sent to federal prison (February 2018).

And these are just a few of the very public instances. The State of Software Security 2017 Report indicates that SQL injection attacks have been steady for years. The percentage of SQL injection attacks ranged between 28% and 32% from 2011 through 2017. And the reality of it all is this: SQL injection attacks work only because of poor coding practices!

What is SQL Injection?

In order for SQL injection to succeed, the application code used by the website must be vulnerable to an injection attack. SQL injection relies upon programs that do not adequately filter for string literal escape characters embedded in SQL statements or where user input is not strongly typed. So instead of inputting data into a form, SQL statements are supplied. The SQL is “injected” from the web form into the database causing it to be executed and access (or even modify) unintended data.

Perhaps it is easiest to comprehend SQL injection by example. Consider a web-based application using dynamic SQL. The website requires users to login with their e-mail address and a password. Almost all sites of this type also offer an option to retrieve your password by supplying your e-mail address. Perhaps the SQL looks something like this:

SELECT userid, password
FROM   uid_pwd_table
WHERE  field = '$EMAIL';

The variable $EMAIL represents the input from the form on the website. A savvy hacker can attempt a SQL injection attack by entering:

   anything' OR '1'='1

If the application does not check the input properly the injection causes the SQL to now look like this:

   SELECT userid, password
   FROM   uid_pwd_table
   WHERE  field = 'anything' OR '1'='1';

Executing this statement causes a complete dump of every userid and password in the database because the OR '1'='1' component will always evaluate to TRUE. It does not matter what the first part of the injection was, it could be anything, because the second part of the injection gives the hacker everything in the table.

Another form of SQL injection relies upon improper typing, for example not checking whether data that should be numeric is actually numeric. Consider, for example:

   statement := "SELECT * FROM userinfo WHERE id = " + in_var + ";"

In this case, the SQL is being built into the statement variable; in_var is the variable used to supply the input. Let’s assume that the id column is numeric. However, if the program does not check the data type of the in_var variable to ensure that numeric data is supplied, SQL injection can occur. For example, instead of just supplying a numeric value, the hacker can supply something like this:

    4;DROP TABLE customer

If this SQL statement is executed the customer table (if one exists) will be dropped from the database.

SQL Injection Prevention

Using well-designed query language interpreters and coding applications appropriately can prevent SQL injection attacks. When possible, use static SQL instead of dynamic SQL to improve the security of your database applications and data. Static SQL is hard-coded into the application and cannot be changed at runtime. Dynamic SQL is flexible and can change at runtime. When the SQL can change at runtime, a sufficiently motivated and skilled hacker can change the SQL or deploy a SQL injection attack to gain access to unauthorized data.

Static SQL is common in mainframe Db2 applications, but not so much for other platforms and database systems. The Db2 bind command “hardens” the SQL and optimizes access to the data.

Always validate user input by testing type, length, format, and range. The program should make absolutely no assumptions about the data that is received. Test the size and data type of input and enforce appropriate limits. Doing so can help to prevent buffer overruns. Test the content of string variables and allow only expected values to be processed. Any input that contains binary data, escape sequences, or comment characters should be summarily rejected.

Avoid concatenating user input that has not been validated. String concatenation is the primary point of entry for SQL injection attacks. Furthermore, consider using stored procedures to validate user input.
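The surest alternative to concatenation is bind parameters, where the driver treats the entire input as a single literal value rather than as SQL text. A minimal sketch using Python's sqlite3 driver (the table and data are hypothetical, and the same principle applies to any driver with parameter support) demonstrates both the attack from the earlier example and its prevention:

```python
# Contrast string concatenation (vulnerable) with bind parameters (safe).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uid_pwd_table (userid TEXT, password TEXT, field TEXT)")
conn.execute("INSERT INTO uid_pwd_table VALUES ('alice', 's3cret', 'alice@example.com')")

email = "anything' OR '1'='1"  # the injection payload from the example above

# Vulnerable: the injected OR clause becomes part of the SQL and matches every row.
leaked = conn.execute(
    "SELECT userid, password FROM uid_pwd_table WHERE field = '" + email + "'"
).fetchall()
print(len(leaked))  # 1 -- the whole table is dumped

# Safe: the driver binds the payload as one literal string, not as SQL.
safe = conn.execute(
    "SELECT userid, password FROM uid_pwd_table WHERE field = ?", (email,)
).fetchall()
print(len(safe))    # 0 -- no row has that literal e-mail address
```

With the parameterized version, the payload can never change the shape of the statement; it is simply a string that matches no e-mail address.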

Analyze input and reject anything that contains special characters such as the semi-colon (;), the string delimiter (‘), comment delimiters (–, /*…*/), V$ (the beginning of Oracle DBA views), and xp_ (the beginning of SQL Server catalog stored procedures).

With foreknowledge of SQL injection techniques and proper development procedures, all SQL injection attacks can be prevented.


Understanding SQL injection techniques and coding your database applications appropriately is an important aspect of modern database security.

Posted in data breach, Database security, DBA, SQL | Leave a comment

Minimizing Database Outages with Replication

A primary goal of most organizations is to minimize downtime and improve the availability of their IT systems and applications. According to a recent study on achieving 2018 database goals by Unisphere Research, “Enterprises seek to base most, if not all, decision making on insights drawn from data, which means there is no margin for error if the flow of that data is ever disrupted.” This means that DBAs are charged with keeping databases up and running to meet the business need for non-stop access to data.

If the DBMS is down, data cannot be accessed. If the data is not available, applications cannot run. And if your applications cannot run, your company is not making crucial decisions, or worse yet, making decisions without the data to back them up. At any rate, the Unisphere study indicates that customer service will suffer the most when a database goes down (cited by 74% of data managers).

All of these things can translate to losing business, which means lower earnings and perhaps even a lower stock valuation for your company. These are all detrimental to the business and therefore, the DBA is called upon to do everything in his or her power to ensure that databases are kept on line and operational. One respondent to the Unisphere study said “Management already considers databases to be an integral part of the infrastructure that is always up. Any downtime reflects poorly on the DBA team and time (however much) must be put in to restore the systems until they are up again.” Does that sound familiar?

Assuring availability becomes that much more difficult as organizations expand the amount of data that is generated, captured, and managed. Although putting an exact figure on how fast data is growing can be difficult, there are surveys that show the growth of data on premises to range between 10% and 50% for most organizations. And that does not even take into account cloud data, which is growing rapidly and still must be managed.

Further complicating the issue is that fewer DBAs are being asked to manage more data. Although data continues to expand, that is not translating into additional DBAs being hired. According to research firm Computer Economics, data management staff as a percentage of IT staff has risen only 0.5% in four years, and IT spending per user continues to decline.

Nevertheless, DBAs are required to provide uninterrupted availability for many, probably most of their systems. There are many tactics at the DBA’s disposal to improve data availability and performance. One of the most powerful tools for supporting availability is data replication.

When data is replicated, it is stored in more than one site or node, thereby improving the availability of the data. With full replication, where a copy of the whole database is stored at every site, rapid failover from a failing system to an operational one can be achieved to avoid outages. You can also implement partial replication, whereby some fragments of the database are replicated and others are not.

Replication improves the reliability of your applications because your data will be replicated across multiple machines, instead of on a single machine (aka single point of failure). With replicated data you may be able to achieve improved performance with parallel access. And by locating replicas closer to where transactions are executing, data replication may be able to decrease the need for data movement.
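As a sketch of what this looks like in practice, a MySQL-style read replica can be pointed at a source server with a few statements. The host name and credentials below are hypothetical, and releases before MySQL 8.0.22 use the older CHANGE MASTER TO and START SLAVE forms instead:

```sql
-- On the replica: identify the source server and begin applying
-- its changes, using GTID-based auto-positioning.
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'primary.example.com',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = '********',
  SOURCE_AUTO_POSITION = 1;

START REPLICA;
```

Other DBMSs use different mechanisms (log shipping, streaming replication, availability groups), but the idea is the same: changes applied at the source flow to one or more replicas that can take over if the source fails.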

Nevertheless, implementing data replication is not free. Concurrency control and recovery techniques tend to be more complicated and therefore more expensive. And in some cases you may need to buy additional hardware to support it properly. That said, replication can be cost effective because it is much more likely that the cost of an outage on mission critical data and applications will dwarf the cost of data replication.

Of course, data replication is not a panacea. You cannot stop taking backups or cease performance management efforts by replicating data. For example, consider a human error that pushes the wrong data to the database. The replication system will dutifully replicate the wrong data out to all replicated nodes. So you can’t rely on replicated data to recover from such a scenario. I mention this not because it is a fault of data replication, but because some folks wrongly think that replication solves recovery issues.

The Bottom Line

So the bottom line here is that organizations are adding more data and relying on that data to be available around-the-clock. DBAs are charged with assuring the availability of database data and cannot be caught having a hardware error cause a prolonged outage. Replication, therefore, is a useful technology for ensuring that downtime is for vacations, not databases!

The days are gone when organizations enjoyed a long batch window where databases could be offline for extended periods to perform nightly processing and maintenance tasks. Replication and intelligent failover is the modern way to eliminate unplanned downtime.

Posted in availability, data integration, DBA | 2 Comments