Reducing MIPS and Mainframe Software Costs with CloudFrame

There is no denying that mainframe computing can be costly. The hardware alone can cost several million US dollars, and over time the software can be just as costly, if not more so. As such, reducing mainframe software costs is an important goal for every organization that uses the platform.

Please note that I am not saying that the mainframe is too costly, or more expensive long-term than other commodity servers. The mainframe is very cost-effective when managed appropriately. And part of that management is understanding mainframe pricing strategies and cost control.

A significant component of overall mainframe cost is the software you are billed for monthly, known as Monthly License Charge (MLC) software. Not all mainframe software falls into the MLC category, but most of the big system software offerings do, such as z/OS, Db2, CICS, IMS, MQ, and COBOL. Furthermore, this software is billed based on usage, so reducing usage can reduce your monthly software bill. Of course, there are many nuances to how this usage-based pricing and billing occurs (we will mention a few in a moment), so it is not as easy as simply reducing workload to reduce costs.

But in a moment, I will tell you about a tool you can use to help estimate cost savings as you modernize your mainframe applications. First, let’s talk more about mainframe software costs.

Using zIIPs to Reduce Cost

One method of reducing cost is to increase the usage of zIIP specialty processors. The zIIP is a dedicated processor available on the IBM Z mainframe. When you activate zIIP processors, some percentage of the relevant workload can be redirected off the general processors onto the zIIP. Why would you do this? Well, workload that runs on the zIIP is not subject to monthly software charges, so you can save money by running work on the zIIP instead of on general-purpose processors.

But not all workload is eligible to run on the zIIP. You need to understand what types of processing can utilize zIIPs to accrue cost savings. Java programs are zIIP eligible and therefore represent a very ripe opportunity for cost reduction in mainframe shops. If you convert traditional workload, such as COBOL programs, to run on a Java Virtual Machine, that workload becomes zIIP eligible and can deliver significant cost savings.

Of course, this means you will need options for converting from COBOL to Java.

CloudFrame Solutions for Application Modernization

So, chances are you are sitting there with a large portfolio of COBOL applications written over the course of multiple decades. They run in batch and online. They access databases. And they run much of your business. Who has the time, let alone the resources, to re-write all of that code into Java?

Nobody does, right? And that is where CloudFrame’s application modernization solutions can help. They provide two different offerings, supporting different use cases, both of which can be used to reduce cost by converting COBOL to Java.

The first option is CloudFrame Renovate, which converts your COBOL code to Java. The resulting source code is not JOBOL (that is, Java that looks like COBOL) but well-written object-oriented Java source code. Using this approach, you can get rid of the COBOL and start working with Java, running your code on the mainframe, another platform, or in the cloud – anywhere that Java runs.

Another approach is offered by CloudFrame Relocate. In this case, you keep the COBOL source code but convert the executable code to run in a Java Virtual Machine. No change is necessary to your data or other processes, but now that the code runs as Java, it is zIIP-eligible and can help reduce costs.

Both are viable methods of generating cost savings and can be deployed separately or together for different workloads and use cases.

How Much Can You Save?

Of course, to this point, we have been discussing cost savings as a broad generality under the assumption that moving workload to zIIPs using Java will result in savings. But how much? To that end, CloudFrame has put together an ROI estimation tool for customers to help them estimate how much money they can save using the CloudFrame solutions. The tool is a spreadsheet with calculations to analyze cost reduction based on workload mix, average MIPS cost, cost of licensing CloudFrame, and other metrics.

You can click here to request access to use the cost estimation tool, and CloudFrame will help walk you through it. After you complete the registration, a CloudFrame representative will schedule time to help you walk through the cost estimation process. You will need to provide information about your environment to make the tool worthwhile. For example, you’ll need to know the average cost per MIP for your organization and the monthly MIPS usage for the workload you plan to convert from COBOL to Java. You will also need to provide additional information about your environments, such as the number of mainframes installed and the total number of disaster recovery environments you use.

You will also be asked to provide some details about the workload mix, such as the mixture of batch vs. online processing. The actual MSU rating for a CPC model will generally be highest for batch-type workloads and lowest for online-type workloads, so this type of information is helpful.

The cost estimation tool will also consider the percentage of Db2/SQL workload involved, as that can impact the zIIP offload and cost savings that may be possible.

After supplying the appropriate input, the tool will show an estimate of your net savings (or ROI) over a three-year period, along with an Executive Summary that shows a year-by-year breakdown of the annual cost to run your applications, the projected annual cost savings, the cost of the CloudFrame subscription, and the overall net savings. Of course, the results will depend significantly on the accuracy of the information you provide.

You would do well to keep the following issues in mind as you work through your ROI evaluation. First of all, the spreadsheet is based on MIPS instead of MSUs. IBM no longer really talks about MIPS, but many mainframe shops are still more comfortable talking about MIPS than MSUs. That said, your monthly MLC software bills are calculated based on MSUs reported by the SCRT (Sub-Capacity Reporting Tool). So, you may need to do some MIPS-to-MSU conversions along the way. Loosely speaking, 1 MSU equals approximately 8.5 MIPS.
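
To make that arithmetic concrete, here is a minimal Java sketch of the kind of conversion and savings math involved. It is not CloudFrame’s actual spreadsheet logic; the 8.5 MIPS-per-MSU ratio comes from the rough rule of thumb above, and the workload size, cost per MIPS, and zIIP offload percentage are hypothetical placeholders you would replace with your own figures.

```java
public class MipsToMsuEstimate {

    // Rough rule of thumb from the article: 1 MSU is approximately 8.5 MIPS.
    private static final double MIPS_PER_MSU = 8.5;

    public static void main(String[] args) {
        double workloadMips   = 500.0;   // hypothetical monthly MIPS for the workload to be converted
        double costPerMips    = 3000.0;  // hypothetical annual cost per MIPS (plug in your own figure)
        double ziipOffloadPct = 0.80;    // hypothetical share of the converted workload that becomes zIIP eligible

        double workloadMsus  = workloadMips / MIPS_PER_MSU;
        double annualCost    = workloadMips * costPerMips;
        double annualSavings = annualCost * ziipOffloadPct;

        System.out.printf("Workload: %.1f MIPS (~%.1f MSUs)%n", workloadMips, workloadMsus);
        System.out.printf("Estimated annual cost:    $%,.0f%n", annualCost);
        System.out.printf("Estimated annual savings: $%,.0f (at %.0f%% offload)%n",
                annualSavings, ziipOffloadPct * 100);
    }
}
```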

Another thing that you will need to know is the average cost of a MIP in your organization. This can be a difficult thing to ascertain. The CloudFrame tool provides some assistance in the form of analyst-sourced cost/MIPS estimates that you can plug in based on the size of your environment. That said, an average is just that: an average. It may or may not be the actual cost at your site, which can change from month to month based on the type of mainframe software pricing your organization uses.

This brings up another nuance, sub-capacity pricing, which most organizations use. Sub-capacity pricing, such as AWLC or VWLC, means your MLC software is billed monthly based on a calculated rolling four-hour average (R4HA) of LPAR MSU usage. The monthly LPAR peak of the R4HA, by product, determines your software bill. This is a good thing because it means you are paying for capacity based on the R4HA instead of the maximum capacity of your system.

So, what does this mean for your cost estimation when converting COBOL to Java? Well, MSUs run on the zIIP are not factored into the R4HA, so if the workload contributed to the monthly peak R4HA period, then you can accrue savings. However, if the converted workload does not run during the peak monthly R4HA, you will not accrue any savings for that workload. 
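
To illustrate why the timing of the converted workload matters, here is a simplified Java sketch of how a peak rolling four-hour average might be computed from hourly MSU samples. Real R4HA reporting is based on SMF data and the SCRT, so treat this purely as a conceptual illustration; the hourly numbers are hypothetical, and zIIP-eligible MSUs are simply left out of the input because they do not count toward the R4HA.

```java
import java.util.List;

public class R4haSketch {

    /**
     * Computes the peak rolling four-hour average from hourly MSU samples.
     * Only general-purpose MSUs are passed in; zIIP-eligible MSUs are excluded
     * because they do not count toward the R4HA used for MLC billing.
     */
    static double peakRollingFourHourAverage(List<Double> hourlyGcpMsus) {
        double peak = 0.0;
        for (int end = 4; end <= hourlyGcpMsus.size(); end++) {
            double sum = 0.0;
            for (int i = end - 4; i < end; i++) {
                sum += hourlyGcpMsus.get(i);
            }
            peak = Math.max(peak, sum / 4.0);
        }
        return peak;
    }

    public static void main(String[] args) {
        // Hypothetical hourly general-purpose MSU consumption for one day.
        List<Double> beforeConversion = List.of(40.0, 45.0, 50.0, 90.0, 95.0, 100.0, 60.0, 50.0);
        // The same day after converting the peak-hour batch workload to zIIP-eligible Java.
        List<Double> afterConversion  = List.of(40.0, 45.0, 50.0, 60.0, 65.0, 70.0, 60.0, 50.0);

        System.out.printf("Peak R4HA before: %.1f MSUs%n", peakRollingFourHourAverage(beforeConversion));
        System.out.printf("Peak R4HA after:  %.1f MSUs%n", peakRollingFourHourAverage(afterConversion));
    }
}
```

In this hypothetical scenario, the peak R4HA drops only because the converted work happened to run during the peak window; convert a workload that runs off-peak and the peak (and therefore the bill) would not change.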

Of course, if you are using a full-capacity pricing metric, or perhaps tailored-fit pricing, then every saved MSU can help to reduce your software bill. The bottom line is that you will need to understand the type of pricing your organization uses and when the workload to be converted runs to understand the type of savings you might be able to achieve.

That said, many organizations’ monthly mainframe software bill is so high that analyzing and estimating methods of reducing that cost makes a lot of sense!

Cost of CloudFrame

The CloudFrame pricing model differs from those used by most mainframe software vendors. The goal is to provide a simple, understandable method for charging. The CloudFrame Relocate software is priced based on a flat fee subscription license per IBM CPC instance. The CloudFrame Renovate offering is based on an annual subscription license based on lines of code migrated. Additionally, the CloudFrame Developer Kit is priced based on the number of developer seats required.

So, it is relatively easy to determine the cost of the CloudFrame solutions.

I discussed the pricing and business case calculator with CloudFrame’s COO, Hans Otharsson, and here’s what he told me: “CloudFrame has created a pricing model based on input from customers and our years of enterprise software experience that can be summed up with this phrase, ‘easy to understand.’ We’re eliminating surprises and hidden fees and presenting pricing that offers great value to customers.” That will be music to the ears of most mainframe software buyers!

The Bottom Line

Using CloudFrame to renovate and/or relocate COBOL programs to Java can reduce costs, perhaps significantly. Why not get in touch with CloudFrame today and use the ROI estimation tool to see what type of savings you can achieve?


Bad Database Standards

Several years ago I wrote a series of blog posts on bad database standards, that is, things you should avoid doing when creating the standards for your organization regarding database systems and development.

These posts, although over a decade old now, are still relevant, so I thought I’d post an index linking to the articles to bring them back up for your review and comments!

Bad Database Standards Part 1 – Limits on Indexing

Bad Database Standards Part 2 – Too Many Columns!

Bad Database Standards Part 3 – Limiting The Number of Tables in “Online” Joins

Bad Database Standards Part 4 – Duplication of Data

Bad Database Standards Part 5 – None Shall Pass! Standards Can be too Rigid

Bad Database Standards Part 6 – What’s In a Name?

Bad Database Standards Part 7 – What Does Support Mean?

What do you think? Do you have any issues with any of these observations?

Are there any “bad database standards” that you have seen that need to be added to this list?

Feel free to drop a comment and start up a discussion here…


A Few Good SQL Book Recommendations

Every professional programmer that accesses data in a relational/SQL database should have a good book (or four) on SQL… Actually, the same goes for DBAs because they also regularly use SQL in their day-to-day work. There are many SQL books to choose from, and many of them are very good. Here are four of my favorites from over the years…

The first SQL book I recommend is SQL Performance Tuning by Peter Gulutzan and Trudy Pelzer. This book offers up a treasure trove of tips for improving SQL performance on all of the major database systems. This book does not teach SQL syntax, but instead helps the reader to understand the differences between the most popular DBMS products, including Oracle, DB2, SQL Server, Sybase ASE, MySQL, Informix, Ingres, and even InterBase.

Throughout this book the authors present and test techniques for improving SQL performance, and grade each technique for its usefulness on each of the major DBMSs. If you deal with heterogeneous database implementations, this book will be of great assistance, whether you are a programmer, consultant, DBA, or technical end user. The contents of this book can help you to decide which tuning techniques will work for which DBMS.

My next SQL book recommendation is altogether different in purpose than the first. It is SQL in a Nutshell, by Kevin Kline, Daniel Kline, and Brand Hunt.

This book offers a great cross-platform syntax reference for SQL. It probably is not the easiest reference to use for finding the exact syntax for one particular DBMS; but it is absolutely the best reference for those who work with multiple DBMSs.

Next up is The Art of SQL by Stephane Faroult, which is a guide to SQL written using the approach of “The Art of War” by Sun Tzu.

The author actually uses the exact same chapter titles for The Art of SQL that Sun Tzu used in The Art of War. Amazingly enough, the tactic works. Consider, for example, the chapter titled “Laying Plans,” in which Faroult examines how to design databases for performance. As anyone who has ever built database applications knows, an improperly designed database can be the biggest impediment to flawless application performance.

The chapter titled “Tactical Dispositions” covers the topic of indexing, and in “The Nine Situations” the author examines several classic SQL patterns and how best to approach them.

This book is not for a novice who wants to learn SQL from scratch. The author assumes the reader is conversant with SQL as he describes how to apply SQL in a practical manner. If you can’t code an outer join or don’t know what a nested table expression or in-line view is, then this is not the book for you.

Neither is the book a list of SQL scripts that you can pluck out and use. Instead, The Art of SQL skillfully manages to explain how to properly attack the job of coding SQL to effectively and efficiently access your data. The book offers best practices that teach experienced SQL users to focus on strategy rather than specifics.

You know, if Sun Tzu coded SQL, he might have written a book like “The Art of SQL”. But since Sun Tzu is dead, I’m glad Stephane Faroult was around to author this tome.

The final SQL book recommendation is the latest edition of SQL for Smarties by the grand master of SQL, Joe Celko. Celko was a member of the ANSI SQL standards committee for ten years, and is highly qualified to write such a text.

The 3rd edition was completely revised and boasts over 800 pages of advanced SQL programming techniques. If you have either of the previous editions of this book, you owe it to yourself to get the newly revised third edition.

This book offers tips, techniques, and guidance on writing effective, sometimes complex, SQL statements using ANSI standard SQL. It touches on topics ranging from database design and normalization to using proper data types to grouping and set operations, optimization, data scaling, and more. Every developer who codes SQL statements for a living will find something useful in SQL for Smarties!

Joe also wrote an introductory SQL book titled SQL Programming Style (Morgan Kaufmann: ISBN 0-12-088797-5) that offers useful guidance on how to write standard SQL. If SQL for Smarties is too much for you, start off with SQL Programming Style.

Finally…

In all cases, if there is a newer edition of the book than the ones in the links then, by all means, buy the latest edition. These authors have proven themselves to write quality content and I trust that any new edition would improve on their existing material.

Happy SQL coding everybody!


I Will be A Regular Contributor on Data Management Topics for Elnion

Just a short blog post today to let my readers know that I will be contributing to elnion as a data management content provider. Elnion is a thought leadership site for technology news and issues, with a focus on cloud, data, the digital enterprise, and more. It was founded by well-known social influencer Dez Blanchfield.

My first article, titled DBA and the Cloud: Not Never, Always, All, or Nothing! was published late in April 2022.

Be sure to check out elnion on a regular basis for more of my data management content, as well as informed and interesting pieces from other thought leaders.


Let’s Create the Future with IBM Using Data, AI, and Cloud

This post is sponsored by IBM

As IT professionals, creation is what we do. Every day developers are writing new code to make it easier to run our business, or writing (creating) code to maintain existing systems. DBAs are creating new databases and structures, as well as creating new ways of making sure vital corporate data is available all the time, every day and in multiple ways. And data scientists and analysts are using the data in those databases to gain a deeper understanding of the business, discover heretofore unknown trends, and create new ways to gain competitive advantage by transforming their business. And when you put them all together with IBM technology and services, there are virtually no limitations to what you can create.

In my experience as an IBM partner, creation using IBM solutions for hybrid cloud, AI, and data fabric delivers unparalleled capabilities for modern, useful systems.

It is important to understand that on-premises and cloud computing must co-exist to deliver the highest value to most organizations. Large enterprises like banks, insurance companies, airlines, and retailers have billions of lines of code invested in applications that run their business. And they are not going to simply jettison that investment to re-write everything in the technology du jour. But it is possible to continue to benefit from existing workloads while modernizing them.

A hybrid cloud approach is the standard method for creating modern enterprise systems and applications. With a hybrid cloud approach, you can choose what makes sense for each component, task, and project that you tackle. Building applications that mix and integrate on-premises systems with cloud capabilities enables new platforms to benefit from the rich heritage of existing systems and integrate them with new capabilities and techniques as appropriate.

This means you can integrate the capabilities of existing on-premises applications, such as those running on IBM Z and IBM Power Systems platforms, with the secure, flexible hybrid capabilities of the IBM Cloud. Creating useful applications and content on all platforms is easier when you embrace your legacy applications, and many organizations are embracing digital transformation to create new ways to benefit from existing systems by extending their IBM Z and Power workloads to the IBM hybrid cloud. Creating in this manner enables organizations to nimbly modernize their systems on the IBM Cloud while continuing to benefit from their long-time core applications running on IBM’s proven hardware.

As all IT professionals know, it is not enough to build applications; they also need to be supported. Ensuring that operations can support what development delivers is the reason the whole DevOps movement is flourishing. IBM provides the tools needed for managing and deploying your hybrid cloud applications. On the dev side, IBM Red Hat OpenShift, which is the leading multicloud container development platform, provides microservices frameworks, serverless support, continuous integration and delivery (CI/CD) integration, and more. And on the ops side, IBM Cloud Schematics brings the power of both Red Hat Ansible and Terraform to IBM Cloud users for automating the end-to-end deployment of cloud infrastructure and applications.

Turning our attention to data, it is undeniable that data is the underlying lynchpin of IT. In all cases, data is required to create business value whether you are performing analytics, coding applications, running machine learning models, or even just generating a report. None of it can be done without data.

But all too frequently data is not treated as the valuable corporate asset – required to drive the business – that it is. Data typically exists in silos, managed independently, and perhaps not documented as well as it could be. What is needed is a data fabric that delivers the data management services required to discover, curate, govern, and orchestrate your data regardless of where it is deployed — on-premises, or in the cloud.

And IBM provides a data fabric solution, IBM Cloud Pak for Data, that can improve creativity by improving your data health and management. Using IBM Cloud Pak for Data you can define and connect your disparate data where it exists across your hybrid cloud, integrating data and creating a catalog of re-usable data assets. And when you know where your data is, it makes it easier to provide business users with the data they need to improve customer experiences, as well as to enforce data usage and access policies. All while leaving the data in place, without having to move it to make it accessible. With a data fabric in place, you can unite, govern, and secure your data for faster, more accurate insights.

Furthermore, the data fabric powers and enables the adoption of AI, which most organizations are utilizing to create modern applications and systems that deliver valuable insights. AI needs data; without it, there isn’t much that can be accomplished, because trustworthy AI relies on accurate data for models and decision-making. With a data fabric in place, your managed, useful corporate data becomes available to data scientists for AI projects.

IBM delivers the solutions to help you infuse AI into your creations. Your data scientists and analysts will use IBM Watson Studio, which works with IBM Cloud Pak for Data for building, running, and managing AI models, and building AI applications for optimizing decisions based on your data anywhere throughout your organization.

Additionally, people can interact with your creations using natural language with IBM Watson Assistant, which uses AI that understands users in context to provide fast, consistent, and accurate answers across any application, device, or channel. You can create a highly-intelligent, AI-powered virtual agent for your application(s) without writing a single line of code.

Putting It All Together

Imagine this: a hybrid cloud application accessing data on-premises from Db2 for z/OS and Db2 on Linux, as well as data in Db2 on the IBM Cloud. IBM Cloud Pak for Data is in place creating a data fabric such that your data is clearly defined and governed making it accessible to all. You’ve used Red Hat OpenShift and Ansible to ensure that the application is delivered and managed in a simple, intuitive manner.

Because you’ve integrated Watson Assistant into this system all of your users can interact with it using natural language, so nobody has to learn a cumbersome new interface.

And best of all, you can use these services and tools from IBM solutions to create any applications you need in the hybrid cloud.

So, what are you waiting for? Let’s create incredible, practical new systems that integrate our existing and valuable resources with all these useful technologies from IBM. And then we can experience the true power of modern computing!


Let’s Talk About Database Performance Analyzer

Recently, I published a tweet thread on database performance and SolarWinds’ Database Performance Analyzer. Today’s blog post captures those tweets for posterity!

Let’s talk about #DatabasePerformance and SolarWinds (Tweet #1)

Applications that access relational databases are only as good as the performance they achieve. And, every user wants their software to run as fast as possible, right? It is for that reason that database and application performance tuning and management is one of the biggest demands on the DBA’s time. When asked what is the single most important or stressful aspect of their job, DBAs typically respond “assuring optimal performance.” Indeed, a recent Forrester Research survey indicates that performance and troubleshooting tops the list of most challenging DBA tasks.

Handling performance problems should be an enterprise-wide endeavor. And, most organizations monitor and tune the performance of their entire IT infrastructure encompassing servers, networks, applications, desktops, and databases. However, the task of enterprise performance management frequently becomes the job of the DBA group. Anyone who has worked as a DBA for any length of time knows that the DBMS is usually “guilty until proven innocent.” Every performance problem gets blamed on the database regardless of its true cause. DBAs need to be able to research and decipher the true cause of all performance degradation, if only to prove that it is not caused by a database problem.

Truly, it is the case that optimizing data access and modification has been important ever since the invention of the first DBMS. And managing database performance has only gotten more complex over time. (Tweet #2)

Most organizations have multiple different databases, but they want them all to perform well! However, Oracle, SQL Server, Db2, PostgreSQL, MySQL, and other popular database systems all work differently. (Tweet #3)  Assuring optimal performance of applications across a heterogeneous database environment is a significant ongoing operational challenge… one that SolarWinds can help you tackle. (Tweet #4)  SolarWinds Database Performance Analyzer (DPA) is designed to help you uncover and resolve your most complex database performance problems (Tweet #5).

Figure 1. SolarWinds Database Performance Analyzer

At a glance, you can see where database performance problems exist and then navigate easily to learn more details. SolarWinds DPA can do this across all major database systems from a single installation (Tweet #6).

Figure 2. View Database Performance Issues at-a-glance

And SolarWinds DPA offers expert tuning advisors that deliver precise database tuning guidance to help you remediate poor performing databases, applications, and SQL statements. (Tweet #7)

Tuning SQL statements becomes easier when you use SolarWinds DPA to focus on the SQL that is consuming the most resources and causing the most problems. Each color on the graph represents a separate SQL statement. (Tweet #8)

Figure 3. Resource-Consuming SQL Statements

And then SolarWinds DPA can even drill down into the SQL statement text for more details. (Tweet #9)

Figure 4. Drill Down to the SQL Statement Text

Gathering all of the details needed to analyze SQL problems can take a lot of time and effort, but SolarWinds DPA minimizes the effort involved. The response time analysis feature of Database Performance Analyzer provides operational intelligence about database performance over time. It tracks every query in every active session and captures the events that impose delays on the queries. (Tweet #10)

Furthermore, with SolarWinds DPA dynamic baselines you can view detailed metrics and years of history to see the trends and patterns that tell the complete performance story over time. (Tweet #11)

SolarWinds Database Performance Analyzer is a smart choice for optimizing database performance. It requires no agents and imposes minimal impact on monitored databases. (Tweet #12)  Read what DBAs and performance analysts are saying about how SolarWinds Database Performance Analyzer improves database performance. (Tweet #13)

The Bottom Line

Any organization looking to improve #DatabasePerformance should consider adopting SolarWinds Database Performance Analyzer to optimize their applications and systems. (Tweet #14)


Best Practices for Upgrading DBMS Versions

Change is a fact of life and each of the major DBMS products introduces changes quite frequently. A typical release cycle for DBMS software is 18 to 36 months for major releases with constant bug fixes and maintenance delivered in between major releases. As such, keeping DBMS software up-to-date can be almost a full-time job.

The DBA must develop an approach to upgrading DBMS software that conforms to the needs of their organizations and minimizes the potential for disrupting business due to outages and database unavailability.

You may have noticed that I use the terms “version” and “release” somewhat interchangeably. That is fine for a broad discussion of DBMS upgrading, but a more precise definition is warranted. Vendors typically make a distinction between versions and releases of software products. A new version of the software is a major concern, with many changes and new features. A release is typically minor, with fewer changes and not as many new features.

Consider Oracle, for example. Oracle Database 18c indicates the version number, in this case, version 18. It encompasses all later releases until there is a new version. Oracle Database 18.3 is Version 18 Release 3. The second numeral designates the maintenance release number. With Oracle (and other DBMSes), numbers after the release can indicate lower-level deliveries. There may be an update revision, such as 18.1.1, 18.1.2, etc. And also an increment of a maintenance release, such as 18.1.0.1, 18.1.0.2, and so on.

Usually, significant functionality is added for version upgrades, less so for point releases. But upgrading from one point release to another can have just as many potential pitfalls as version upgrades. It depends on the nature of the new features provided in each specific release. The issues and concerns discussed in this article pertain to both types of DBMS upgrades: to a new release and to a new version.

In a complex, heterogeneous, distributed database environment a coherent upgrade strategy is essential. Truthfully, even organizations with only a single DBMS should plan accordingly and approach DBMS upgrades cautiously. Failure to plan a DBMS upgrade can result in improper and inefficient adoption of new features, performance degradation of new and existing applications, and downtime.

An effective DBMS upgrade strategy must balance the benefits against the risks of upgrading to arrive at the best timeline for migrating to a new DBMS version or release. There are many potential risks of upgrading to a new DBMS release.

An upgrade to the DBMS almost always involves some level of disruption to business operations. At a minimum, databases will not be available while the DBMS is being upgraded. This can result in downtime and lost business opportunities if the DBMS upgrade has to occur during normal business hours (or if there is no planned downtime). Clustered database implementations may permit some database availability as individual database clusters are migrated to the new DBMS version.

Other disruptions can occur including the possibility of having to convert database file structures, the possibility that previously supported features were removed from the new release (thereby causing application errors), and delays to application implementation timelines.

The cost of an upgrade can be a significant barrier to DBMS release migration. The cost of the new version or release must be planned for (price increases for a new DBMS version can amount to as much as 10% to 25%). The upgrade cost also must factor in the costs of planning, installing, testing, and deploying not just the DBMS but also any applications using databases. Additionally, be sure to include the cost of any new resources (memory, storage, additional CPUs, etc.) required by the DBMS to use the new features delivered by the new DBMS version.

DBMS vendors usually tout the performance gains that can be achieved with a new DBMS release. But when SQL optimization techniques change it is possible that a new DBMS release will generate SQL access paths that perform worse. DBAs must implement a rigorous testing process to ensure that new access paths are helping, not harming, application performance. When performance suffers, application code may need to be changed – a very costly and time-consuming endeavor. A rigorous test process should be able to catch most of the access path changes in the test environment.

To take advantage of improvements implemented in a new DBMS release, the DBA may have to apply some invasive changes. For example, if the new version increases the maximum size for a database object, the DBA may have to drop and re-create that object to take advantage of the new maximum. This will be the case when the DBMS adds internal control structures to facilitate such changes.

Another potential risk is the possibility that supporting software products may lack immediate support for a new DBMS release. Supporting software includes the operating system, transaction processors, message queues, purchased applications, DBA tools, development tools, and query and reporting software.

When the risks of a new release outweigh the benefits, some organizations may decide to skip an interim release. Skipping releases is not always supported by the DBMS vendor but can be possible, at times, even if no direct support is offered. Although a multiple release upgrade takes more time, it enables customers to effectively control when and how they will migrate to new releases of a DBMS, instead of being held hostage by the DBMS vendor. When attempting a multiple release upgrade of this type, be sure to fully understand the features and functionality added by the DBMS vendor for each release level between the previously installed level and the new level being implemented. For example, if moving from Version 8 to Version 10, the DBAs will need to research and prepare for the new features not just of Version 10, but of Version 9 as well.

After weighing the benefits of upgrading against the risks of a new DBMS release, the DBA group must create an upgrade plan that works for the organization. Sometimes the decision will be to upgrade immediately upon availability, but often there is a lag between the general availability of a new release and widespread adoption of that release.

An appropriate DBMS upgrade strategy depends on many factors.  Perhaps the biggest factor in determining when and how to upgrade to a new DBMS release is the functionality supported by the new release. Tightly coupled to functionality is the inherent complexity involved in supporting and administering the new features.

Regardless of the new “bells and whistles” that come along with a release upgrade, there are always administration and implementation details that must be addressed before upgrading. The DBA group must ensure that standards are modified to include the new features, educate developers and users as to how new features work and should be used, and prepare the infrastructure to support the new DBMS functionality.

The type of changes required to support the new functionality must be factored into the upgrade strategy. When the DBMS vendor makes changes to internal structures, data page layouts, or address spaces, the risks of upgrading are greater. Additional testing is warranted in these situations to ensure that database utilities, DBA tools, and data extraction and movement tools still work with the revised internals.

Complexity is another concern. The more complex your database environment is, the more difficult it will be to upgrade to a new DBMS release. The first complexity issue is the size of the environment. The greater the number of database servers, instances, applications, and users, the greater the complexity will be. Additional concerns include the type of applications being supported. A DBMS upgrade is easier to implement if only simple, batch-oriented applications are involved. As the complexity and availability concerns of the applications increase, the difficulty of upgrading also increases.

You should also take into account the support policies of the DBMS vendor. As new releases are introduced, DBMS vendors will retire older releases and no longer support them. The length of time that the DBMS vendor will support an old release must be factored into the DBMS release migration strategy. You should never run a DBMS release in production that has been de-supported by the vendor. If problems occur, the DBMS vendor will not be able to resolve them for you if the software is no longer supported. Sometimes a DBMS vendor will provide support for a retired release on a special basis and at an increased maintenance charge. If you absolutely must continue using a retired DBMS release (for business or application issues), be sure to investigate the DBMS vendor’s policies regarding support for retired releases of its software. This is a particularly important issue to consider whenever you delay a DBMS upgrade for any reason.

Consider also, your organization’s risk tolerance.  Every organization displays characteristics that reveal its style when it comes to adopting new products and technologies. Industry analysts at Gartner Inc. have ranked organizations into three distinct groups labeled Type A, B, and C. A type-A enterprise is technology-driven, and as such, is more likely to risk using new and unproven technologies to try to gain a competitive advantage. A type-B organization is less willing to take risks but will adopt new technologies once the bugs have been shaken out by others. Finally, a type-C enterprise is very cost-conscious and risk-averse and will lag behind the majority when it comes to migrating to new technology.

Only type-A organizations should plan on moving aggressively to new DBMS releases immediately upon availability; and even then, not for every new DBMS release, only when the new features of the release are judged to deliver advantages to the company. Type-C enterprises should adopt a very conservative strategy to ensure that the DBMS release is stable and well-tested by type-A and type-B companies first. And type-B organizations will fall somewhere in between types A and C; almost never upgrading immediately, instead adopting the new release when the early adopters have shaken out the biggest problems, but well before type-C enterprises.

When a DBMS vendor unleashes a new release of its product, not all platforms and operating systems are immediately supported. The DBMS vendor most likely will support the platforms and operating systems for which it has the most licensed customers first. The order in which platforms are supported for a new release is likely to differ for each DBMS vendor. For example, Linux for System z is more strategic to IBM than to Oracle, so a new Db2 release will most likely support Linux for System z very quickly, whereas this may not be so for Oracle. The issue is even more difficult to manage for Unix platforms because of the sheer number of Unix variants in the marketplace. Of course, Linux has supplanted Unix as the most popular DBMS operating system, but many DBMS vendors still support the most popular Unix platforms such as IBM’s AIX and Hewlett-Packard’s HP-UX. When it comes to Linux, you need to be mindful of the support for the distribution you are deploying (e.g., Red Hat, SUSE, etc.) as some are supported more frequently and rapidly than others.

When planning your DBMS upgrade, be sure to consider the DBMS platforms you use and try to gauge the priority of your platform to your DBMS vendor. Be sure to build some lag time in your release migration strategy to accommodate the vendor’s delivery schedule for your specific platforms.

Be sure, also, to consider the impact of a DBMS upgrade on any supporting software. Supporting software includes purchased applications, DBA tools, reporting and analysis tools, and query tools. Each vendor of a supporting software application or tool will have a different timeframe for supporting and exploiting a new DBMS release.  Many software vendors specifically differentiate between supporting and exploiting a new DBMS version or release. Software that supports a new release will continue to function the same as before the DBMS was upgraded, but with no new capabilities.

So, if a DBA tool, for example, supports a new version of Oracle, it can provide all of the services it did for the past release, as long as none of the new features of the new version of Oracle are used. A DBA tool that exploits a new version or release provides the requisite functionality to operate on the new features of the new DBMS release. So, to use a concrete example, IBM added support for Universal tablespaces (UTS) in Version 9 of Db2 for z/OS. A DBA tool can support Db2 Version 9 without operating on UTS, but it must operate on UTS to exploit Db2 Version 9.

Some third-party tool vendors follow guidelines for supporting and exploiting new DBMS releases. Whenever possible ask your vendor to state their policies for DBMS upgrade support. It is likely that your vendors will not commit to any firm date or date range to support new versions and releases. That is to be expected because some DBMS versions are larger and more complicated and therefore will take longer to fully exploit.

Hardware Requirements

Every DBMS has a basic CPU requirement, meaning a CPU version and minimum processor speed required for the DBMS to operate… and this can — and will — change from version to version. Some DBMSs have specific hardware models that are either required or unsupported. Usually, the CPU criterion will suffice for an Intel environment, but in a mainframe or enterprise server environment, the machine model can make a difference with regard to the DBMS features supported. For example, certain machines have built-in firmware that can be exploited by the DBMS if it is available.

Furthermore, each DBMS offers different “flavors” of their software. I use the term flavor to differentiate this concept from the terms “version” and “release” which are used to specify different iterations of the same DBMS. However, DBMS vendors frequently offer different flavors of the DBMS (at the same release level) for specific needs such as parallel processing, pervasive computing environments (such as handheld devices), data warehousing, and/or mobile computing needs. Be sure to choose the correct DBMS flavor for your needs and to match your hardware to the requirements of the DBMS.

Storage requirements

A DBMS requires disk storage to run. And not just for the obvious reason – to create databases that store data. Storage also will be required for the indexes to be defined on the databases. But a DBMS will use disk storage for many other reasons too, such as for:

  • the system catalog or data dictionary used by the DBMS to manage and track databases and related information. The more database objects you plan to create, the larger the amount of storage required by the system catalog.
  • any other system databases required by the DBMS, for example, to support distributed connections or management tools.
  • the log files that record all changes made to every database. This includes active logs, archive logs, rollback segments, and any other type of change log required by the DBMS.
  • any startup or control files that must be accessed by the DBMS when it is started or initialized.
  • any work files used by the DBMS to sort data or for other processing needs.
  • any default databases used by the DBMS for system structures or as a default catchall for new database objects as they are created.
  • temporary database structures used by the DBMS (or by applications accessing databases) for transient data that is not required to be persistent, but needs reserved storage during operations.
  • any system dump and error processing files.

Don’t forget any support or DBA databases used for administration, monitoring, and tuning. For example, databases used for testing new releases, migration scripts, and so on.

Be sure to understand and adequately plan for the storage requirements for every new DBMS version well in advance of upgrading.
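
As a simple planning aid, you can keep a running tally of the categories listed above. The Java sketch below uses entirely hypothetical storage figures and category names taken from that list; the point is only to show the kind of totaling and growth-headroom calculation involved, not to suggest real numbers.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StoragePlanSketch {
    public static void main(String[] args) {
        // Hypothetical storage estimates (in GB) for the categories discussed above.
        Map<String, Double> gbByCategory = new LinkedHashMap<>();
        gbByCategory.put("User databases and indexes", 2000.0);
        gbByCategory.put("System catalog / data dictionary", 20.0);
        gbByCategory.put("Other system databases", 50.0);
        gbByCategory.put("Active and archive logs", 300.0);
        gbByCategory.put("Startup and control files", 1.0);
        gbByCategory.put("Work and sort files", 150.0);
        gbByCategory.put("Default and temporary databases", 100.0);
        gbByCategory.put("Dump and error processing files", 50.0);
        gbByCategory.put("DBA and support databases", 80.0);

        double total = gbByCategory.values().stream().mapToDouble(Double::doubleValue).sum();
        double withHeadroom = total * 1.25;  // add 25% headroom for growth (adjust to your own policy)

        gbByCategory.forEach((category, gb) -> System.out.printf("%-36s %8.0f GB%n", category, gb));
        System.out.printf("%-36s %8.0f GB%n", "Total", total);
        System.out.printf("%-36s %8.0f GB%n", "Total with growth headroom", withHeadroom);
    }
}
```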

Memory requirements

Relational DBMSs, as well as their databases and applications, love memory. A DBMS will use memory for most internal processes such as basic functionality, maintaining the system global area, and performing many DBMS tasks.  Memory requirements for specific features, old and new, can change from version to version.

One of the primary reasons a DBMS requires a significant amount of memory is to cache data in memory structures to avoid I/O. Reading data from a disk storage device is always more expensive and slower than moving the data around in memory. The DBMS will use memory structures called buffer pools or data cache to reduce physical I/O requests. By caching data that is read into a buffer pool, the DBMS can avoid I/O for subsequent requests for the same data, as long as it remains in the buffer pool. In general, the larger the buffer pool, the longer the data can remain in memory and the better overall database processing will perform.
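
Conceptually, a buffer pool behaves like a bounded, least-recently-used cache of data pages. The toy Java class below is not how any real DBMS implements its buffer manager; it is only a minimal sketch of why a larger pool lets more requests be satisfied from memory rather than from disk.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** A toy page cache illustrating the buffer pool concept: bounded size, LRU eviction. */
public class ToyBufferPool {
    private final Map<Long, byte[]> pages;
    private long hits = 0;
    private long misses = 0;

    public ToyBufferPool(int capacityInPages) {
        // accessOrder=true gives least-recently-used iteration order.
        this.pages = new LinkedHashMap<Long, byte[]>(capacityInPages, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                return size() > capacityInPages;  // evict the LRU page when the pool is full
            }
        };
    }

    /** Returns the requested page, reading it from "disk" only on a cache miss. */
    public byte[] getPage(long pageNumber) {
        byte[] page = pages.get(pageNumber);
        if (page != null) {
            hits++;            // served from memory: no physical I/O
            return page;
        }
        misses++;              // a physical read would happen here in a real DBMS
        page = readPageFromDisk(pageNumber);
        pages.put(pageNumber, page);
        return page;
    }

    private byte[] readPageFromDisk(long pageNumber) {
        return new byte[4096];  // stand-in for an expensive physical read
    }

    public String stats() {
        return String.format("hits=%d, misses=%d, hit ratio=%.0f%%",
                hits, misses, 100.0 * hits / Math.max(1, hits + misses));
    }
}
```

The larger the capacity passed to the constructor, the more pages stay resident and the higher the hit ratio climbs, which is exactly the effect you are after when sizing buffer pools for a new version.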

The DBMS will cache other structures in memory as well as data. Most DBMSs set aside memory to store program structures required by the DBMS to process database requests. The program cache will store things like “compiled” SQL statements, database authorizations, and database structure blocks that are used by programs as they are executed. By caching these structures, database processing can be optimized by avoiding additional I/O requests to access them from a physical storage device.

Memory typically is required by the DBMS to support many other features such as handling lock requests, facilitating distributed data requests, sorting data, and for some optimization processes and SQL processing.

When upgrading to a new version be prepared to ensure that the DBMS has a more than adequate supply of memory at its disposal. Doing so will help to optimize the performance of database processing and minimize potential problems.

Configuring the DBMS

The manner in which the DBMS functions and the resources made available to the DBMS are controlled by configuring the system parameters of the DBMS. Each DBMS allows its system parameters to be modified in different ways, but the installation process usually sets the DBMS’s system parameters using radio buttons, menus, or panel selections. During the installation process, the input provided to the installation script will be used to establish the initial settings of the system parameters.

Each DBMS also provides a method to change the system parameters once the DBMS is operational. Sometimes the system parameters can be set using DBMS commands, sometimes you must edit a file that contains the current system parameter settings. If you must edit a file, do so very carefully because an erroneous system parameter setting can be fatal to the operational status of the DBMS.

What sort of things do the system parameters control? Well, for example, system parameters can be used to control DBA authorization to the DBMS, set the number of active database logs, set the amount of memory used for data and program caching, and turn DBMS features on and off. Although every DBMS has system parameters that control its functionality, each DBMS has a different method of setting and changing the values. And, indeed, each DBMS has different “things” that can be set using system parameters.

Be sure to analyze any generated scripts containing configuration parameters. Compare the new parameter values to the existing values and find any changes. Consult the documentation for the new version to understand why a change was made and whether it is within the operating requirements of your environment. Failure to do so can result in an incorrectly configured database environment and that can cause performance problems, data integrity problems, or possibly even DBMS failure.
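
One simple way to perform that comparison is to diff the old and new parameter values programmatically. The Java sketch below assumes the parameters have been exported as simple key=value files, which is not the case for every DBMS, and the file names are hypothetical; it merely reports additions, removals, and changed values for manual review against the new version’s documentation.

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;
import java.util.TreeSet;

public class ParameterDiff {
    public static void main(String[] args) throws IOException {
        // Hypothetical file names; substitute exports from your old and new configurations.
        Properties oldParams = load(Path.of("dbms_params_old.properties"));
        Properties newParams = load(Path.of("dbms_params_new.properties"));

        TreeSet<String> allKeys = new TreeSet<>();
        oldParams.stringPropertyNames().forEach(allKeys::add);
        newParams.stringPropertyNames().forEach(allKeys::add);

        for (String key : allKeys) {
            String oldVal = oldParams.getProperty(key);
            String newVal = newParams.getProperty(key);
            if (oldVal == null) {
                System.out.println("ADDED   " + key + " = " + newVal);
            } else if (newVal == null) {
                System.out.println("REMOVED " + key + " (was " + oldVal + ")");
            } else if (!oldVal.equals(newVal)) {
                System.out.println("CHANGED " + key + ": " + oldVal + " -> " + newVal);
            }
        }
    }

    private static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (Reader reader = Files.newBufferedReader(file)) {
            props.load(reader);
        }
        return props;
    }
}
```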

Connect the DBMS to Supporting Infrastructure Software

Part of the DBMS upgrade process must be the verification that all system software connections to the DBMS are still viable and operational. Typical infrastructure software that may need to be configured to work with the DBMS includes networks, transaction processing monitors, message queues, other types of middleware, programming languages, systems management software, operations and job control software, web servers, and application servers.

Each piece of supporting infrastructure software will have different requirements for interfacing with the DBMS. Typical configuration procedures can include installing DLL files, creating new parameter files to establish connections, and possibly revisiting the installation procedures for the supporting software to install components required to interact with the DBMS.

Fallback Planning

Each new DBMS version or release should come with a manual that outlines the new features of the release and describes the fallback procedures to return to a prior release of the DBMS. Be sure to review the fallback procedures provided by the DBMS vendor in their release guide. You may need to fall back to the previous DBMS release if a bug is found with the upgrade, performance problems ensue, or other problems are encountered during or immediately after migration. Keep in mind that fallback is not always an option for every new DBMS release.

If fallback is possible be sure to follow the guidance of the DBMS vendor to enable falling back. At times, you may need to delay the implementation of certain new features for fallback to remain an option. Be sure to fully understand the limitations imposed by the DBMS vendor on falling back, and exploit new features only when falling back is no longer an option for your organization.

Migration Verification

Similar to new installation verification, be sure to implement procedures to verify that the DBMS release upgrade is satisfactory. Be sure to perform the same steps as with a brand new DBMS install, but also be sure to test a representative sampling of your in-house applications to verify that the DBMS upgrade is working correctly and performing satisfactorily.

Verification should include running a battery of tests to verify that the DBMS has been properly installed and configured. Most DBMS vendors supply sample programs that can be used for this purpose. Additionally, you can ensure proper installation by testing the standard interfaces to the DBMS. One standard interface supported by most DBMSs is an interactive SQL interface where you can submit SQL statements directly to the DBMS.

Create a script of SQL statements comprising SELECT, INSERT, UPDATE, and DELETE operations issued against sample databases. Running such a script after installation helps to verify that the DBMS is installed correctly and operating as expected.
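
As an example of what such a verification script might look like, here is a bare-bones JDBC sketch that exercises each of the four DML statements against a throwaway table. The JDBC URL, credentials, and table name are placeholders, not tied to any particular DBMS; adapt them to your own environment and driver.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InstallVerification {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; substitute your own JDBC URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:yourdbms://dbserver:50000/SAMPLEDB", "dbauser", "password");
             Statement stmt = conn.createStatement()) {

            stmt.executeUpdate("CREATE TABLE VERIFY_TEST (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR(30))");
            stmt.executeUpdate("INSERT INTO VERIFY_TEST (ID, NAME) VALUES (1, 'initial row')");
            stmt.executeUpdate("UPDATE VERIFY_TEST SET NAME = 'updated row' WHERE ID = 1");

            try (ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM VERIFY_TEST")) {
                while (rs.next()) {
                    System.out.println("SELECT returned: " + rs.getInt("ID") + ", " + rs.getString("NAME"));
                }
            }

            stmt.executeUpdate("DELETE FROM VERIFY_TEST WHERE ID = 1");
            stmt.executeUpdate("DROP TABLE VERIFY_TEST");
            System.out.println("Basic SELECT/INSERT/UPDATE/DELETE verification completed.");
        }
    }
}
```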

Further, be sure to verify that all required connections to supporting software are operational and functioning properly. If the DBMS vendor does not supply sample programs you may need to create simple test programs for each environment that can be run to ensure the supporting software connections are functioning correctly with the DBMS.

Synopsis

In general, be sure to design a DBMS release upgrade policy based on the issues discussed above. Each specific DBMS upgrade will be unique, but the guidelines presented above will help you to achieve success more readily. A well-thought-out DBMS upgrade strategy will enable you to be prepared to support new DBMS releases with a minimum impact to your organization and in a style best-suited to your company.



The Never-Ending To-Do List of the DBA

If you are currently a DBA, I bet you can relate to the title of this article. Doesn’t it seem like there are always more things to do at the end of any given day?  The sad truth, though, is that many people do not know what a DBA is, does, or why they are even needed! Sometimes, that includes your manager!

So today we’re going to take a little time to hash through the primary responsibilities and tasks that DBAs are typically charged with performing. Not all of these tasks are needed every day, but some are, and many are of the utmost importance when they are needed.

So let’s start at the beginning… Every organization that manages data using a database management system (DBMS) requires a database administration group to oversee and assure the proper usage and deployment of the company’s data and databases. With the growing mountain of data and the need to organize that data effectively to deliver value to the business, most modern organizations use a DBMS for their most critical data. So, the need for database administrators (DBAs) is greater today than ever before. However, the discipline of database administration is not well understood or universally practiced in a coherent and easily replicated manner.

Implementing a DBA function in your organization requires careful thought and planning. A successful DBA must acquire a large number of skills — both technological and interpersonal. Let’s examine the skills required of an effective DBA.

General database management. The DBA is the central source of database knowledge in the organization. As such, the DBA must understand the basic rules of relational database technology and be able to accurately communicate them to others.

Data modeling and database design. The DBA must be skilled at collecting and analyzing user requirements to derive conceptual and logical data models. This is more difficult than it sounds. A conceptual data model outlines data requirements at a very high level; a logical data model provides in-depth details of data types, lengths, relationships, and cardinality. The DBA uses normalization techniques to deliver sound data models that accurately depict the data requirements of the business. (Of course, if your organization is large enough a completely separate group of data administrators may exist to handle logical database design and data modeling.)

Metadata management and repository usage. The DBA must understand the technical data requirements of the organization. But this is not a complete description of his duties. Metadata, or data about data, also must be maintained. The DBA must collect, store, manage, and provide the ability to query the organization’s metadata. Without metadata, the data stored in databases lacks true meaning. (Once again, if your company has a data administration group then this task will be handled by that group. Of course, that does not mean the DBA can ignore metadata management.)

Database schema creation and management. A DBA must be able to translate a data model or logical database design into an actual physical database implementation and to manage that database once it has been implemented. The physical database may not conform to the logical model 100 percent due to physical DBMS features, implementation factors, or performance requirements. The DBA must understand all of the physical nuances of each DBMS used by his organization in order to create efficient physical databases.

Capacity planning. Because data consumption and usage continue to grow at an alarming pace, the DBA must be prepared to support more data, more users, and more connections. The ability to predict growth based on application and data usage patterns and to implement the necessary database changes to accommodate that growth is a core capability of the DBA.

Programming and development. Although the DBA typically is not coding new application programs, s/he does need to know how to write effective programs. Additionally, the DBA is a key participant in production turnover, program optimization (BIND/REBIND) and management, and other infrastructure management to enable application programs to operate effectively and efficiently.

SQL code reviews and walk-throughs. Although application programmers usually write SQL, DBAs are likely to be blamed for poor performance. Therefore, DBAs must possess in-depth SQL knowledge so they can understand and review SQL and host language programs in order to recommend changes for optimization.

Performance management and tuning. Dealing with performance problems is usually the biggest post-implementation nightmare faced by DBAs. As such, the DBA must be able to proactively monitor the database environment and to make changes to data structures, SQL, application logic, and the DBMS subsystem itself in order to optimize performance.

Ensuring availability. Applications and data are increasingly required to be up and available 24 hours a day, seven days a week. Globalization and e-business are driving many organizations to implement no-downtime, around-the-clock systems. To manage in such an environment, the DBA must ensure data availability using non-disruptive administration tactics.

Data movement. Data, once stored in a database, is not static. The data may need to move from one database to another, from the DBMS into an external data set, or from the transaction processing system into the data warehouse. The DBA is responsible for efficiently and accurately moving data from place to place as dictated by organizational needs.

Backup and recovery. The DBA must implement an appropriate database backup and recovery strategy for each database file based on data volatility and application availability requirements. Without a backup and recovery strategy, system and user errors could render a database inoperable and useless. Furthermore, the backup strategy must be developed with recovery time objectives in mind, so that data is not unavailable for long periods when problems inevitably occur. This is probably one of the most important database administration tasks, if not the single most important one.
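As a simple illustration only (SQL Server syntax, with hypothetical database and file names; every DBMS has its own backup utilities), the strategy ultimately comes down to statements like these, scheduled and regularly tested against your recovery objectives:

-- Full backup, scheduled based on how volatile the data is
BACKUP DATABASE SalesDB
  TO DISK = 'E:\backups\SalesDB_full.bak'
  WITH INIT, CHECKSUM;

-- Restore during recovery; the elapsed time here must fit within
-- the recovery time objective agreed with the business
RESTORE DATABASE SalesDB
  FROM DISK = 'E:\backups\SalesDB_full.bak'
  WITH RECOVERY;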

Ensuring data integrity. DBAs must be able to design databases so that only accurate and appropriate data is entered and maintained. To do so, the DBA can deploy multiple types of database integrity including entity integrity, referential integrity, check constraints, and database triggers. Furthermore, the DBA must ensure the structural integrity of the database. Data integrity is right up there with backup and recovery in terms of importance.
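A minimal DDL sketch, with hypothetical table and column names (exact syntax varies a bit by DBMS), shows how these forms of integrity are typically declared:

-- Entity integrity via the primary key; domain rules via a check constraint
CREATE TABLE customer (
  cust_id     INT          NOT NULL PRIMARY KEY,
  cust_name   VARCHAR(100) NOT NULL,
  credit_code CHAR(1)      NOT NULL CHECK (credit_code IN ('A', 'B', 'C'))
);

-- Referential integrity via a foreign key back to the parent table
CREATE TABLE customer_order (
  order_id INT NOT NULL PRIMARY KEY,
  cust_id  INT NOT NULL,
  CONSTRAINT fk_cust FOREIGN KEY (cust_id) REFERENCES customer (cust_id)
);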

Procedural coding and debugging. Modern databases contain more than just data; they also house program code. The DBA must possess procedural skills to help design, debug, implement, and maintain stored procedures, triggers, and user-defined functions that are stored in the DBMS and used by application systems.
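For example, a simple audit trigger like the following hypothetical sketch (shown in a MySQL-style dialect; procedural syntax differs from DBMS to DBMS) is exactly the kind of in-database code the DBA may be asked to help design, debug, and maintain:

-- Record every change to an employee's salary in an audit table
CREATE TRIGGER trg_salary_audit
AFTER UPDATE ON employee
FOR EACH ROW
  INSERT INTO salary_audit (emp_id, old_salary, new_salary, changed_at)
  VALUES (OLD.emp_id, OLD.salary, NEW.salary, NOW());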

Extensible data type administration. The functionality of a modern DBMS can be extended using user-defined data types. The DBA must understand how these extended data types are implemented by the DBMS vendor and be able to implement and administer any extended data types implemented in their databases.

Data security. The DBA is charged with the responsibility to ensure that only authorized users have access to data. This requires the implementation of a rigorous security infrastructure for production and test databases. Data security comprises both DBMS security (revoke/grant) and security on external resources (file structures, userids, and so on).
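On the DBMS side, the mechanics are the familiar GRANT and REVOKE statements; a hypothetical sketch (the role and user names are made up):

-- Allow the reporting role to read, but not change, customer data
GRANT SELECT ON customer TO report_role;

-- Remove a privilege that is no longer appropriate
REVOKE UPDATE ON customer FROM app_batch_user;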

Database auditing. Being able to report on who did what to which data when, along with how they acted upon that data, is a requirement for many governmental and industry standards and compliance specifications. DBAs need to be involved in terms of setting up and enabling the DBMS for database auditing capabilities.

General systems management and networking. After a database is implemented it will be accessed throughout the organization and interact with other technologies. Therefore, the DBA has to be able to function as a jack of all trades in order to integrate database administration requirements and tasks with general systems management requirements and tasks (like job scheduling, network management, transaction processing, and so on).

Business knowledge. DBAs must understand the requirements of the application users and be able to administer their databases to avoid interruption of business. Without a firm understanding of the value provided to the business by their databases and data, the DBA is not likely to be able to implement strategies that optimize the business’s use of that data.

Data archiving. When data is no longer needed for business purposes but must be maintained for legal purposes, the data needs to be removed from the operational database but stored in such a way that it remains accessible for e-discovery and legal requirements. This is database archiving.
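In its simplest form it looks something like the following sketch (hypothetical table names; real archiving is usually done with purpose-built tools that also preserve the metadata needed for e-discovery):

-- Move rows past the retention cutoff into the archive store...
INSERT INTO order_archive
  SELECT * FROM orders
  WHERE order_date < '2015-01-01';

-- ...then remove them from the operational database
DELETE FROM orders
WHERE order_date < '2015-01-01';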

Enterprise resource planning (ERP). ERP software packages place additional burdens on the DBA. Most ERP applications (SAP, PeopleSoft, etc.) use databases differently than homegrown applications, requiring DBAs to know how the ERP applications impact the business and how the databases used by those packages differ from traditional relational databases.

Web-specific technology expertise. For e-businesses, DBAs are required to have knowledge of Internet and Web technologies to enable databases to participate in Web-based applications. Examples of this type of technology include HTTP, FTP, XML, CGI, Java, TCP/IP, Web servers, firewalls, and SSL. Other technologies that fall into this category include database gateways and APIs.

Storage management techniques. The data stored in every database resides on disk somewhere (unless, perhaps, it is stored using an in-memory DBMS). The DBA must understand the storage hardware and software available for use, and how it interacts with the DBMS being used. As such, DBAs must be able to allocate, monitor, and manage the storage used by databases.

Summing Up…

The bottom line is that the DBA must be a well-rounded staff member capable of understanding multiple facets of the business and technology. The DBMS is at the center of today’s IT organization — so as the one tasked with keeping the DBMS performing as desired, the DBA will be involved in most IT initiatives.

Did I forget anything?


Happy New Year 2022

Just a short post today to wish my readers a very happy 2022! I hope that you have enjoyed the holiday season and that the upcoming year will be filled with joy and prosperity.


It also looks like 2022 will see a return to in-person conferences, and I am looking forward to that. I plan on going to Dallas for SHARE in March and to Boston for IDUG in July, and perhaps others as well. Of course, a lot will depend on the COVID situation and how these events manage social distancing and masking. Personal safety is still much more important than anything else.

And if you are going out tonight to celebrate, please do so safely and courteously. Wear your masks as appropriate and don't impinge on others' space. And don't drink too much, especially if you plan on driving later!

So as we bid adieu to 2021, and look forward to 2022, I hope we will get the chance to see each other again… but safely!

Happy New Year!


Heterogeneous Database Management with Navicat Premium 16

Navicat Premium is a heterogeneous database development and management tool that makes it easy to quickly build, manage, and maintain your databases. With Navicat Premium you can simultaneously connect to heterogeneous database systems. Support is available for MySQL, MariaDB, Microsoft SQL Server, MongoDB, Oracle, PostgreSQL, and SQLite databases, and it is compatible with cloud databases like Alibaba Cloud ApsaraDB, Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud, and MongoDB Atlas.

Figure 1. Main screen of Navicat Premium (Windows)

The product can be used as the hub of operations for managing your disparate database environments, with capabilities for database design and implementation, query building and execution, data migration and synchronization, data visualization, data generation, and even analysis features for improving your databases and queries. So Navicat Premium is an ideal solution for DBAs who are required to manage multiple different DBMS products and installations. One small issue to keep in mind is that Navicat, unfortunately, does not offer support for IBM's Db2.

One of the nice, long-standing features of Navicat Premium is its cross-platform UI support, offering support for Windows, macOS, and multiple Linux distributions (e.g., Debian, Fedora, Ubuntu, and others). Although Navicat has been available for some time now, the latest and greatest version was released in late November 2021. So, let's take a look at what's new.

What’s New

If you have used Navicat Premium before, then one of the first things you'll probably notice is the refreshed user interface (refer to Figure 2 for a screen shot of the Linux UI). All of the buttons and icons have been enhanced and modified to improve the user experience. Importantly, though, the user flow has not changed, so the same sequence of actions and commands can be used. If there is one thing I hate, it is when user flow changes from release to release for no apparent reason, but Navicat has done well here.

Additionally, many existing features, such as Connection Profile, Query Summary, and Value Picker, have been updated to increase the overall efficiency of your database development.

Figure 2. A screen shot of the Linux UI

But there are several additional nice, new features in Navicat Premium 16, such as the new data generation tool. As any database developer will tell you, creating and managing appropriate data for testing applications is one of the more frustrating aspects of database programming and testing. Coming up with reasonable data, especially for brand-new applications where no data exists anywhere, is a chore. And copying production data is not always possible (or legal). So how is test data created? You cannot simply churn out random text; the data has to match the data types defined in the database. Furthermore, referential integrity constraints and business rules must be understood and adhered to in order to create proper test data that fully exercises the application code.
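To see why this is hard to do by hand, consider a hedged sketch (hypothetical tables; this is not how any particular tool works internally, just an illustration of the constraint-aware generation problem): generated child rows must reference keys that actually exist in the parent table, or the load will fail referential integrity checks.

-- Generate one test order per existing customer, so every generated
-- foreign key value points at a real parent row
INSERT INTO orders (order_id, cust_id, order_date)
SELECT 1000 + c.cust_id, c.cust_id, CURRENT_DATE
FROM customers AS c;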

Fortunately, the new data generation tool provided in Navicat Premium offers a comprehensive range of functions to generate a large volume of quality testing data. You can rapidly create realistic data sets with referential integrity based on business rules and constraints.

Figure 3. A screen shot of data generation (macOS UI)

Navicat Premium 16 drives that data generation with a wizard that walks you through the process of choosing tables in the proper order. The test data that it generates will be displayed so that you can view it, edit it if needed, or even regenerate it. Navicat's test data generation capability can save development teams a lot of time and effort.

But that's not all. Collaboration, or working together in teams, has become increasingly important for both database developers and DBAs, especially with the rise of DevOps. Although there are many aspects of DevOps (and the purpose of this piece is not to define it), the core underlying principle of DevOps is to improve the way your team works together throughout the entire software development lifecycle.

Although past versions of Navicat have made it easier for teams to collaborate, Navicat Premium 16 improves upon things by adding support for Charts and Code Snippets to the Navicat Cloud. Using the Navicat Cloud Portal, teams can not only manage their files and projects but also monitor cloud services from a single interface. These types of collaboration features help teams as they embrace DevOps practices and procedures.

The next big advance in Navicat Premium 16 is in the form of improved data visualization capabilities. Again, data visualization is not brand new to Navicat Premium, as the ability to chart data was previously available. But there are additional chart types and new functions included with the new release. Navicat Premium 16 supports more than 20 different types of charts. And you can use it to visualize live data.

Figure 4. Navicat Charts Workspace

You can connect to any data source and also extend your data with customized fields by changing field types, concatenating fields, mapping values, or sorting based on another field's order. Furthermore, Navicat Premium 16 improves the usability and accessibility of charting with the dashboard, where you can share your charts across your organization.

There is also a new approach for resolving file conflicts in the Navicat Cloud solutions. Cloud management is simplified because you can now discard the server file and keep your file, discard your copy without saving changes, or rename your copy to keep both files.

Other helpful new capabilities include:

Connection Profile, which can be used to configure multiple profiles for users who may need to switch between settings based on their location. This is especially useful with the increase in the number of people working from home (or outside of their traditional office).

Figure 5. Connection Profiles

Query Summary, which can be used to produce a detailed summary of each SQL statement. It is an easy-to-read, one-page summary of the health and performance of your queries, with a shortcut to jump to any potential errors.

Figure 6. Query Summary

Field Information, which delivers a quick view of column characteristics, making it easy to review information across columns within the Table Viewer.

Figure 7. Field Information

Summary

The latest version of Navicat offers up some nice features that make it easier to manage, use, and administer your heterogeneous database environment. Keep in mind that this overview examines the new features of version 16 and does not provide comprehensive detail of all the features and functionality of Navicat Premium. It really does offer a bevy of useful capabilities! So, if you are looking for a feature-laden tool for database development and management, you should take a look at Navicat Premium to simplify your effort. You can download a free trial here.
