A (Very) Quick Intro to Cloud Computing

Cloud computing refers to storing and accessing data and programs over the Internet instead of on your own computers. The “cloud” is a metaphor for the Internet. When you hear the term “cloud,” you can translate that to “someone else’s computer.”

However, things are not quite that clear-cut when it comes to grasping what cloud means. Sometimes, when folks refer to the cloud they are talking about a style of development using APIs and microservices. And for private cloud (or local cloud) implementations, there may be no Internet access involved at all. It can get confusing.

The Trends

The clear direction these days is that many components of the IT infrastructure are moving from on premises to the cloud. Enterprises are choosing which applications, and how much of the infrastructure supporting those applications, should be moved into the cloud.

There are several options that can be chosen. The Public Cloud refers to an infrastructure that is provisioned by a cloud provider for open use by the general public. The Private Cloud is where the infrastructure is typically provisioned solely for a single organization, whether managed internally or by a third-party and hosted internally or externally. Finally, a Hybrid Cloud solution is a combination of two or more clouds bound together, delivering the benefits of multiple deployment models.

Another type of hybrid development is where an organization combines components on premises and in the cloud to deliver services and applications. Many organizations are taking this approach as they try cloud computing options.

Facts and Figures

There is a pervasive belief, especially among industry analysts, that the cloud is going to take over everything and on premises computing will wither away. Gartner estimates that the cloud computing market will reach $411 billion by 2020 (and that is just next year)!

Organizations’ confidence in the cloud, including the ability to protect and secure data and applications, is rising. And this increased confidence should correlate with growing cloud adoption. According to a 2017 study, cloud usage increased from 24 percent of workloads in 2015 to 44 percent at the time of the study. Furthermore, the study predicted that 65 percent of workloads would be in the cloud by 2019.

My Take

Clearly, there are benefits to cloud computing, including economies of scale and the ability to scale up and down as needed. But there are detriments, too, including less control over your data and latency issues. What happens to all of those applications built on the cloud if your Internet service is disrupted?

Contrary to the current popular belief, on-premises computing is not going to disappear any time soon… think of all those mainframes still humming away out there. And according to various sources (compiled by Syncsort):

  • Mainframes handle 68 percent of the world’s production IT workloads
  • 71 percent of Fortune 500 companies use mainframes
  • Mainframes handle 87 percent of all credit card transactions

So mainframes alone, which are not going away, will still handle a large amount of enterprise computing workload.

That said, I believe that public cloud adoption will most likely be much lower than most predictions through 2022, and probably even beyond that. Even if demand is high, Cloud Service Providers (CSPs) cannot possibly build out their infrastructure fast enough to support all the existing data center capacity “out there.” Mark Thiele shared a concise, insightful article on LinkedIn that summarizes these thoughts quite well and is worth reading.

The Bottom Line

At any rate, cloud computing is a viable method for building enterprise applications. It can be used to reduce the cost of managing and maintaining your IT systems. At the same time, the cloud can enhance flexibility, deliver quick scalability, and ensure that you are running with current, up-to-date system software.

Posted in cloud

A New and Improved Navicat Monitor

DBAs are always looking for ways to better manage the performance of their database systems and the applications that access those databases. Indeed, monitoring and tuning are perhaps the most frequent tasks that DBAs perform. So, it makes sense that DBAs want to use tools that ease this burden.

One tool that should be on the radar for DBAs looking to improve their performance management capabilities is the latest release of Navicat Monitor, version 2.0, which now includes support for Microsoft SQL Server. That means that Navicat Monitor can now support the monitoring of Microsoft SQL Server, MySQL, MariaDB, and Percona Server databases.

Navicat Monitor is an agentless remote server monitoring tool that runs on Windows, Mac or Linux and can be accessed from anywhere via a web browser. Navicat Monitor can be installed on any local computer or virtual machine. It also supports cloud services including AWS, Oracle Cloud, Alibaba Cloud, Google Cloud, Microsoft Azure and others. It does not require any software installation on the servers being monitored.

It is common for DBAs these days to manage multiple different types of DBMSes, and Navicat Monitor can help these DBAs with its intuitive dashboard interface. Navicat can display summary information for all your database instances on its main dashboard screen. And with the compact view you can see over a hundred instances on a single screen!

In Figure 1 we can see a screen shot of the dashboard showing how multiple servers can be monitored from a single pane. You can filter by DBMS type and search for specific instances, simplifying the way in which DBAs manage the performance of all the different databases under their purview.

dashboard

Figure 1. Navicat Monitor Dashboard

Using the dashboard DBAs can view a one-page summary of the real-time health and performance of all the database instances they are responsible for. And the dashboard can be customized to enable DBAs to view the information they want in the manner they want to see it.

Microsoft SQL Server Support

But the big news for Navicat Monitor Version 2.0 is the addition of support for Microsoft SQL Server. All the performance management capabilities you have been using for MySQL and MariaDB are now available for SQL Server. This means you can measure database performance metrics such as CPU load, RAM usage, I/O bottlenecks, locking issues and more for each database type and instance you support.

A major component of performance monitoring and tuning for Microsoft SQL Server involves determining the cause of observed or reported SQL Server issues and implementing the requisite changes to improve performance. Changes may be required to any component of the technology stack supporting the applications, including the SQL Server database, application code, operating system configuration, and hardware components. From a database perspective, tuning can require modifications to many different facets of your SQL Server environment: Transact-SQL (whether in queries, stored procedures, or programs), execution plans, indexing, database structures, SQL Server and operating system configuration, and the physical hardware you use, including memory, disk, and other data storage mechanisms.
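
To make one facet of that list concrete, here is a minimal sketch of a typical database-level tuning change: adding a covering index to support a query that has been identified as a heavy resource consumer. The table, columns, and index name are hypothetical, used only for illustration.

    -- A hypothetical query observed to be a top resource consumer:
    SELECT order_id, order_date, total_amount
    FROM   dbo.orders
    WHERE  customer_id = 12345
      AND  order_date >= '2019-01-01';

    -- A covering nonclustered index turns the table scan into an index seek
    -- and avoids lookups for the selected column:
    CREATE NONCLUSTERED INDEX ix_orders_customer_date
        ON dbo.orders (customer_id, order_date)
        INCLUDE (total_amount);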

The challenge is to identify what is causing the performance issue and that is where Navicat Monitor shines. DBAs can use Navicat Monitor to gain visibility into instance resource utilization, performance, and operational health. Using Navicat Monitor you can get a complete overview of all your instances and how they are performing. You can interact with Navicat Monitor using its graphical visualization of performance metrics for a high-level view, and then drill down into a more detailed analysis.

Navicat Monitor uses a performance repository to capture historical performance metrics which you can use to evaluate performance trends and diagnose problems. DBAs can set up customized rules to alert when specific performance thresholds are reached, delivering notifications via email, SMS, SNMP, or Slack. And if you do not have time to customize your own rules Navicat Monitor comes preconfigured with more than 40 alert policies right out-of-the-box. Of course, these rules can be customized later to conform to the metrics and thresholds most important to your environment.
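
To give a feel for the kind of condition such a rule might watch, here is a simple T-SQL check (against a standard SQL Server DMV, not anything specific to Navicat Monitor) that counts currently blocked sessions; a locking alert could fire when a count like this stays above a threshold:

    -- Requests currently blocked by another session
    SELECT COUNT(*) AS blocked_requests
    FROM   sys.dm_exec_requests
    WHERE  blocking_session_id <> 0;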

But probably the single most vexing issue for DBAs and developers is SQL performance. It is estimated that as much as 80% of relational database performance problems can be attributed to poorly performing SQL. And without the capabilities of Navicat Monitor it can be extremely challenging to identify, analyze and improve the performance of poorly performing SQL queries.

Navicat Monitor’s Query Analyzer feature delivers the ability to identify, analyze and optimize the performance of SQL queries. Using Query Analyzer to regularly track the performance of your top resource-consuming SQL statements can help you to constantly improve the overall performance of your applications by finding, and then tuning, the worst performers first.

query_analyzer

Figure 2. Navicat Monitor Query Analyzer

Refer to Figure 2. You can use Query Analyzer to implement a common best practice, identifying and tuning a top five list of problem queries. Query Analyzer gathers information about SQL queries and identifies the Top 5 Queries, which are the 5 most time-consuming query statements, along with additional details.

Take a look at the Query Table section of the Query Analyzer, shown toward the bottom of the screen shot in Figure 2. Here we see that Navicat has identified the Top 5 queries based on total execution time, along with additional details including the SQL text, a count of how many times the SQL was run, and cumulative execution time.
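
If you are curious what this kind of top-N list looks like at the DMV level, the following T-SQL sketch pulls a similar view directly from the SQL Server plan cache. It is an illustration only, not how Navicat Monitor is implemented, and it reports only statements whose plans are still cached:

    -- Top 5 cached statements by cumulative elapsed time
    SELECT TOP (5)
           SUBSTRING(st.text, 1, 200)                         AS sql_text_excerpt,
           qs.execution_count,
           qs.total_elapsed_time / 1000                       AS total_elapsed_ms,
           qs.total_elapsed_time / qs.execution_count / 1000  AS avg_elapsed_ms
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;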

You can use Query Analyzer to drill down and acquire more information on all of the longest-running queries in your environment. Figure 3 shows the Long Running Queries screen. Here we can perform an in-depth analysis of the long-running queries, examining when they are run along with additional details including execution plan details, lock waits, I/O, and all the relevant database and system details to help you optimize your SQL.

long_running_queries

Figure 3. Long Running Queries


Summary

If you already use Navicat Monitor, you’ll love the new version. If you have yet to use it, now is the time to take a look. And with the new Microsoft SQL Server support, Navicat Monitor 2.0 will be beneficial to even more organizations and DBA teams than ever before.

Posted in DBA, performance, SQL

Digital Transformation and Database Scalability

Today’s business systems are undergoing a revolutionary transformation to work within the digital economy. This phenomenon is known as digital transformation. There are four over-arching trends driving digital transformation today, summarized by the acronym SMAC: social, mobile, analytics, and cloud.

Mobile computing is transforming the way most people interact with applications. Just about everybody has a smartphone, a tablet, or both. And these devices are being used to keep people engaged throughout the day, no matter where they are located. This change means that customers are engaging and interacting with organizations more frequently, from more diverse locations than ever before, and at any time around the clock. End users are constantly checking their balances, searching for deals, monitoring their health, and more from mobile devices. And their expectation is that they can access their information at their convenience.

Cloud computing, which is the practice of using a network of remote servers hosted on the internet to store, manage, and process data and applications, rather than a local host, enables more types and sizes of organizations than ever before to be able to deploy and make use of computing resources—without having to own those resources. Applications and databases hosted in the cloud need to be resilient in order to adapt to changing workloads.

And the Big Data phenomenon has boosted the amount of data being created and stored. The amount and types of data that can be accessed and analyzed continue to grow by leaps and bounds. And when analytics is performed on data from mobile, social, and cloud computing, it becomes more accessible and useful to anyone, anywhere, at any time.

All of these trends have caused organizations to scale their database implementations and environments to accommodate data and workload growth. But how can databases be scaled?

Well, at a high level, there are two types of database scalability: vertical scaling and horizontal scaling.

Vertical scaling, also known as scaling up, is the process of adding resources, such as memory or more powerful CPUs, to an existing server. Removing memory or changing to a less powerful CPU is known as scaling down.

Adding or replacing resources in a system typically results in performance gains, but realizing such gains can require reconfiguration and downtime. Furthermore, there are limitations to the amount of additional resources that can be applied to a single system, as well as to the software that uses the system.

Vertical scaling has been the standard method of scaling for traditional RDBMSs, which are architected on a single-server model. Nevertheless, every piece of hardware has limitations that, when met, make further vertical scaling impossible. For example, if your system supports only 256 GB of memory, then when you need more memory you must migrate to a bigger box, which is a costly and risky procedure requiring database and application downtime.

Horizontal scaling, sometimes referred to as scaling out, is the process of adding more hardware to a system. This typically means adding nodes (new servers) to an existing system. Doing the opposite, that is, removing hardware, is known as scaling in.

With the cost of hardware declining, it can make sense to adopt horizontal scaling using low-cost commodity systems for tasks that previously required larger computers, such as mainframes. Of course, horizontal scaling can be limited by the capability of software to exploit networked computer resources and other technology constraints. And keep in mind that traditional database servers cannot run on more than a few machines. Scaling is limited, in that you are scaling to several machines, not to 100x or more.

Horizontal and vertical scaling can be combined, with resources added to existing servers to scale vertically and additional servers added to scale horizontally when required. It is wise to consider the tradeoffs between horizontal and vertical scaling as you consider each approach.

Horizontal scaling results in more computers networked together, which increases management complexity. It can also introduce latency between nodes and complicate programming efforts if not properly managed by either the database system or the application. Vertical scaling can be less costly; it typically costs less to reconfigure existing hardware than to procure and configure new hardware. Of course, vertical scaling can lead to overprovisioning, which can be quite costly. At any rate, virtualization can perhaps help to alleviate the costs of scaling.


Posted in Big Data, DBMS, digital transformation, scalability

Craig Mullins Presenting at IDUG Db2 Tech Conference in Charlotte, NC

Those of you who are Db2 users know that the 2019 IDUG North American Db2 Tech Conference is being held this year in Charlotte, NC the week of June 2-6, 2019. IDUG stands for the International Db2 User Group and it has been meeting every year since 1988.

If you will be there, be sure to attend my speaking presentations on Tuesday, June 4th. My conference session is bright and early Tuesday at 8:00 AM (Session E05) titled Coding Db2 for Performance: By the Book. This session is based on my latest book and it is aimed at application developers. The general idea is to give an overview of the things that you can do as you design and code your Db2 programs (for z/OS or LUW) with performance in mind. All too often performance is an afterthought – and that can be quite expensive. Nail down the basics by attending this session!

Also on Tuesday, but later in the morning, I will deliver a vendor-sponsored presentation (VSP) for Infotel at 10:40 AM. This presentation, titled Improving Db2 Application Quality for Optimizing Performance and Controlling Costs, will be delivered in two parts. I will present the first half of the VSP, discussing the impact of DevOps on Db2 database management and development. The second half will be delivered by Carlos Almeida of Infotel, who will talk about how their SQL quality assurance solutions can aid the DevOps process for Db2 development.

I hope to see you there!

Posted in DB2, DBA, IDUG, speaking engagements

Craig Mullins Presenting at Data Summit 2019

The world of data management and database systems is very active right now. Data is at the center of everything that modern organizations do and the technology to manage and analyze it is changing rapidly. It can be difficult to keep up with it all.

If you need to get up to speed on everything going on in the world of data, you should plan on attending Data Summit 2019, May 21-22 in Boston, MA.

logo2019

Craig Mullins will be talking about the prevailing database trends during his talk The New World of Database Technologies (Tuesday at Noon). Keep abreast of the latest trends and issues in the world of database systems, including how the role of the DBA is evolving to participate in those trends.

This presentation offers an overview of the rapidly changing world of data management and administration as organizations digitally transform. It examines how database management systems are changing and adapting to modern IT needs.

Issues covered during this presentation include cloud/DBaaS, analytics, NoSQL, IoT, DevOps and the database, and more. We’ll also examine what is happening with DBAs and their role within modern organizations. And all of the trends are backed up with references and links for your further learning and review.

I hope to see you there!

Posted in DBA

Craig Mullins to Deliver Database Auditing Webinar – May 15, 2019

Increasing governmental and industry regulation coupled with the need for improving the security of sensitive corporate data has driven up the need to track who is accessing data in corporate databases. Organizations must be ever-vigilant to monitor data usage and protect it from unauthorized access.

Each regulation places different demands on what types of data access must be monitored and audited. Ensuring compliance can be difficult, especially when you need to comply with multiple regulations. And you need to be able to capture all relevant data access attempts while still maintaining the service levels for the performance and availability of your applications.

Register for the next Idera Geek Sync webinar, Database Auditing Essentials: Tracking Who Did What to Which Data When, on Wednesday, May 15 at 11 am CT, delivered by yours truly.

As my regular readers know, database access auditing is a topic I have written and spoken about extensively over the years, so be sure to tune in to hear my latest thoughts on the topic.

You can learn more about the issues and requirements for auditing data access in relational databases. The goal of this presentation is to review the regulations impacting the need to audit at a high level, and then to discuss in detail the things that need to be audited, along with pros and cons of the various ways of accomplishing this.

Register here →

Posted in auditing, compliance, Database security, DBA, speaking engagements

Inside the Data Reading Room – 1Q2019

It has been a while since I have published a blog post in the Inside the Data Reading Room series, but that isn’t because I am not reading anymore! It is just that I have not been as active reviewing as I’d like to be. So here we go with some short reviews of data and analytics books I’ve been reading.

Let’s start with Paul Armstrong’s Disruptive Technologies: Understand, Evaluate, Respond. Armstrong is a technology strategist who has worked for and with many global companies and brands (including Coca-Cola, Experian, and Sony, among others). In this book he discusses strategies for businesses to work with new and emerging technologies.

Perhaps the strongest acclaim that I can give the book is that, after reading it, you will feel the book does its title justice. Armstrong defines what a disruptive technology is and how to embrace the change required when something is “disruptive.”

The book offers up a roadmap that can be used to assess, handle, and resolve issues as you identify upcoming technology changes and respond to them appropriately. It identifies a decision-making framework based on the dimensions of Technology, Behaviour and Data (TBD).

The book is clear and concise, as well as being easy to read. It is not encumbered with a lot of difficult jargon. Since technology is a major aspect of all businesses today (digital transformation) I think both technical and non-technical folks can benefit from the sound approach as outlined in this book.

Another interesting book you should take a look at if you are working with analytics and AI is Machine Learning: A Constraint-Based Approach by Marco Gori. This is a much weightier tome that requires attention and diligence to digest. But if you are working with analytics, AI, and/or machine learning in any way, the book is worth reading.

The book offers an introductory approach for all readers with an in-depth explanation of the fundamental concepts of machine learning. Concepts such as neural networks and kernel machines are explained in a unified manner.

Information is presented in a unified manner, based on regarding symbolic knowledge bases as a collection of constraints. Special attention is given to deep learning, which nicely fits the constraint-based approach followed in this book.

The book is not for non-mathematicians or those only peripherally interested in the subject; over its more than 500 pages, the author digs deeply into the mathematics behind machine learning.

There is also a companion web site that provides additional material and assistance.

The last book I want to discuss today is Prashanth H. Southekal’s Data for Business Performance. There is more data at our disposal than ever before and we continue to increase the rate at which we manufacture and gather more data. Shouldn’t we be using this data to improve our businesses? Well, this book provides guidance and techniques to derive value from data in today’s business environment.

Southekal looks at deriving value for three key purposes of data: decision making, compliance, and customer service. The book is structured into three main sections:

  • Part 1 (Define) builds fundamental concepts by defining the key aspects of data as it pertains to digital transformation. This section delves into the different processes that transform data into a useful asset.
  • Part 2 (Analyze) covers the challenges that can cause organizations to fail as they attempt to deliver value from their data… and it offers solutions to these challenges that are practical and can be implemented.
  • Part 3 (Realize) provides practical strategies for transforming data into a corporate asset. This section also discusses frameworks, procedures, and guidelines that you can implement to achieve results.

The book is well-organized and suitable for any student, business person, or techie looking to make sense of how to use data to optimize their business.

If you’ve read any of these books, let me know what you think… and if you have other books that you’d like to see me review here, let me know. I’m always looking for more good books!

Posted in AI, book review, books, business planning, data, data governance, Machine Learning

Navicat Enables DBAs to Adopt Modern Platforms and Practices

Database administration is a tried and true IT discipline with well-defined best practices and procedures for ensuring effective, efficient database systems and applications. Of course, as with every discipline, best practices must constantly be honed and improved. This can take on many forms. Sometimes it means automating a heretofore manual process. Sometimes it means adapting to new and changing database system capabilities. And it can also mean changing to support new platforms and methods of implementation.

To be efficient, effective, and up-to-date on industry best practices, your DBA team should be incorporating all of these types of changes. Fortunately, there are tools that can help, such as Navicat Premium which can be used to integrate all of these forms of changes into your database environment.

What is Navicat Premium? Well, it is a data management tool that supports and automates a myriad of DBA tasks from database design through development and implementation. Additionally, it supports a wide range of different database management systems, including MariaDB, Microsoft SQL Server, MongoDB, MySQL, Oracle Database, PostgreSQL, SQLite and multiple cloud offerings (including Amazon, Oracle, Microsoft, Google, Alibaba, Tencent, MongoDB Atlas and Huawei).

The automation of DBA tasks using Navicat reduces the amount of time, effort, and human error involved in implementing and maintaining efficient database systems. And for organizations that rely on multiple database platforms – which is most of them these days – Navicat helps not only with automation, but with a consistent interface and methodology across the different database technologies you use.

Navicat can also assist DBAs as their organizations adapt to new capabilities and new platforms. For example, cloud computing.

Although Navicat Premium is typically installed on your desktop, it connects not only to on-premises databases, but also cloud databases such as Amazon RDS, Amazon Aurora, and Amazon Redshift. Amazon removes the need to set up, operate, and scale a relational database, allowing you to focus on the database design and management. Together with an Amazon instance, Navicat Premium can help your DBAs to deliver a high-quality end-to-end database environment for your business applications.

Let’s face it, you probably have a complex data architecture with multiple databases on premises, as well as multiple different databases in the cloud. And almost certainly you are using more than one flavor of DBMS. Without a means to simplify your administrative tasks, things are going to fall through the cracks, or even worse, be performed improperly. Using Navicat Premium, your DBA team will have an intuitive GUI to manipulate and manage all of your database instances – on premises and in the cloud – with a comprehensive set of features for database development and maintenance.

You can navigate the tree of database structures just as you do for on-premises data, and then connect to the database in the cloud to access and manage it, as we see here for “Amazon Aurora for MySQL connection”:

navicat_cloud_connection

Perhaps one of the more vexing issues with cloud database administration is data movement. Navicat Premium provides a Data Transfer feature that automates the movement of data across database platforms – local to local, local to cloud, or to an SQL file.

Another important consideration is the ability to collaborate with other team members, especially for organizations with remote work teams. The Navicat Cloud option provides a central space for your team to collaborate on connection settings, queries and models. Multiple co-workers can contribute to any project, creating and modifying work as needed. All changes are synced automatically, giving all team members the latest information.

For example, here we see the Navicat Cloud Navigation pane:

navicat_cloud_navigationpane

Another reality of modern computing is that a lot of work is done on mobile devices, such as phones and tablets. DBA work is no longer always conducted on a laptop or directly on the database server. Being able to perform database administration tasks from mobile devices enables DBAs to react quickly, wherever they are, whenever their help is needed. You can run Navicat on iOS to enable your mobile workforce to use the devices they always have with them.

When moving from the large screens common on PCs and laptops to the smaller screens of mobile phones and tablets, you do not want the same layout, because it can be difficult to navigate on the smaller devices. Users want the interface to conform to the device, and that is what you get with Navicat iOS.

Let’s look at some examples. Here we see a data grid view for a MySQL table as it would look on an iPhone and an iPad:

02.product_01_mysql_ios_gridview

But you may want to design databases from your mobile device. That is possible with Navicat iOS… here we see the Object Designer interface on the iPhone and iPad:

02.product_01_mysql_ios_objectdesigner

Another common task is building SQL queries, which is also configured appropriately for the mobile experience, as shown here:

02.product_01_mysql_ios_sqlbuilder

Adapting to mobile technologies is important because mobile workers are here to stay. And we need to be ready to support them with robust software designed to operate properly in a mobile, modern workforce.

The Bottom Line

We must always be adapting to new and changing requirements by adopting tools and methodologies that not only automate tasks, but also incorporate new and modern capabilities. Take a look at what Navicat can do to help you accomplish these goals.

Posted in cloud, database design, DBA, mobile, SQL

Common Database Design Errors

Before we begin today’s blog post, wherein I explain some of the more common mistakes that rookies and non-database folks make (heck, even some database folks make mistakes), I first want to unequivocally state that your organization should have a data architecture team that is responsible for logical and conceptual modeling… and your DBA team should work in tandem with the data architects to ensure well-designed databases.

OK, so what if that isn’t your experience? Frankly, it is common for novices to be designing databases these days, so you aren’t alone. But that doesn’t really make things all that much better, does it?

The best advice I can give you is to be aware of design failures that can result in a hostile database. A hostile database is difficult to understand, hard to query, and takes an enormous amount of effort to change.

So with all of that in mind, let’s just dig in and look at some advice on things not to do when you are designing your databases.

Assigning inappropriate table and column names is a common design error made by novices. The names of database objects used to store data should be as descriptive as possible so that the tables and columns are, at least to some extent, self-documenting. Application programmers are notorious for creating database naming problems, such as using screen variable names for columns or coded jumbles of letters and numbers for table names. Use descriptive names!

When pressed for time, some DBAs resort to designing the database with output in mind. This can lead to flaws such as storing numbers in character columns because leading zeroes need to be displayed on reports. This is usually a bad idea with a relational database. It is better to let the database system perform the edit-checking to ensure that only numbers are stored in the column.

If the column is created as a character column, then the developer will need to program edit-checks to validate that only numeric data is stored in the column. It is better in terms of integrity and efficiency to store the data based on its domain. Users and programmers can format the data for display instead of forcing the data into display mode for storage in the database.
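
As a small, hypothetical illustration of storing data based on its domain, compare the two definitions below; the leading zeroes that reports need can be added at display time. The table and column names are made up for the example.

    -- Weaker design: a numeric value stored as characters so leading zeroes
    -- appear on reports -- but the DBMS can no longer reject non-numeric junk.
    CREATE TABLE dbo.part_bad (
        part_no  VARCHAR(6) NOT NULL    -- '000123', but also 'AB?123'
    );

    -- Better design: store the value in its numeric domain...
    CREATE TABLE dbo.part (
        part_no  INT NOT NULL
    );

    -- ...and format leading zeroes only when displaying it:
    SELECT RIGHT('000000' + CAST(part_no AS VARCHAR(6)), 6) AS part_no_display
    FROM   dbo.part;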

Another common database design problem is overstuffing columns. This actually is a normalization issue. Sometimes a single column is used for convenience to store what should be two or three columns. Such design flaws are introduced when the DBA does not analyze the data for patterns and relationships. An example of overstuffing would be storing a person’s name in a single column instead of capturing first name, middle initial, and last name as individual columns.
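
In DDL terms (again with made-up names), the difference looks like this:

    -- Overstuffed: three facts crammed into a single column.
    CREATE TABLE dbo.employee_bad (
        emp_name  VARCHAR(90) NOT NULL    -- 'Smith, John Q.'
    );

    -- Better: each fact gets its own column, so you can search, sort,
    -- and validate by first or last name independently.
    CREATE TABLE dbo.employee (
        last_name    VARCHAR(40) NOT NULL,
        first_name   VARCHAR(40) NOT NULL,
        middle_init  CHAR(1)     NULL
    );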

Poorly designed keys can wreck the usability of a database. A primary key should be nonvolatile because changing the value of the primary key can be very expensive. When you change a primary key value you have to ripple through foreign keys to cascade the changes into the child table.

A common design flaw is using Social Security number for the primary key of a personnel or customer table. This is a flaw for several reasons, two of which are: 1) a social security number is not necessarily unique and 2) if your business expands outside the USA, no one will have a social security number to use, so then what do you store as the primary key?

Actually, failing to account for international issues can have greater repercussions. For example, when storing addresses, how do you define zip code? The zip code is a USA concept, but many countries have similar postal codes, though they are not necessarily numeric. And state is a USA concept, too.

Of course, some other countries have states or similar concepts (Canadian provinces). So just how do you create all of the address columns to assure that you capture all of the information for every person to be stored in the table regardless of country? The answer, of course, is to conduct proper data modeling and database design.
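
Pulling the last two points together, here is one hedged sketch of a customer table that avoids both traps: a system-assigned surrogate primary key instead of Social Security number, and address columns generic enough for non-US data. The column names and sizes are illustrative assumptions, not a prescription.

    CREATE TABLE dbo.customer (
        customer_id     INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- stable surrogate key
        national_id     VARCHAR(20)  NULL,      -- SSN or other national identifier, if known
        last_name       VARCHAR(40)  NOT NULL,
        first_name      VARCHAR(40)  NOT NULL,
        street_address  VARCHAR(100) NOT NULL,
        city            VARCHAR(60)  NOT NULL,
        region          VARCHAR(60)  NULL,      -- state, province, prefecture, ...
        postal_code     VARCHAR(12)  NULL,      -- ZIP code or other postal code; not numeric
        country_code    CHAR(2)      NOT NULL   -- e.g., 'US', 'CA'
    );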

Denormalization of the physical database is a design option but it can only be done if the design was first normalized. How do you denormalize something that was not first normalized? Actually, a more fundamental problem with database design is improper normalization. By focusing on normalization, data modeling and database design, you can avoid creating a hostile database.

Without proper upfront analysis and design, the database is unlikely to be flexible enough to easily support the changing requirements of the user. With sufficient preparation, flexibility can be designed into the database to support the user’s anticipated changes. Of course, if you don’t take the time during the design phase to ask the users about their anticipated future needs, you cannot create the database with those needs in mind.

Summary

Of course, these are just a few of the more common database design mistakes. Can you name more? If so, please discuss your thoughts and experiences in the comments section.

Posted in data, data modeling, database design, DBA

Happy New Year 2019

Just a quick post today to wish everybody out there a very Happy New Year!

Happy-New-Year-

I hope you have started 2019 off with a bang and that the year is successful and enjoyable for one and all!

Posted in Happy New Year