What is the Autonomous Digital Enterprise?

Modern organizations are transforming their businesses by adopting and integrating digital technologies and services into the fabric of their operations. This is generally referred to as “digital transformation,” and it is an imperative for success as more business is conducted online and on personal devices such as phones and tablets. Businesses must engage with customers using the same technologies and interfaces their customers use every day, or they risk losing revenue.

But digital transformation is not, in and of itself, sufficient to ensure success. BMC Software has identified what they call the autonomous digital enterprise as the next step on the journey. This means embracing and instilling intelligent automation capabilities throughout the organization.

In an autonomous digital enterprise, automation is a complementary business function that works with – not in place of – humans. By exploiting automation, organizations can:

  • Execute with fewer errors
  • Free up employees from mundane tasks
  • Lower costs
  • Improve customer interaction

Note the term “intelligent” paired with the term “automation.” IT professionals have been automating for a long time. Indeed, everything that IT folks do can be considered some form of automation when compared to manual tasks. But intelligent automation takes things further: it relies on data to drive decision-making, reducing reaction time and latency.

When artificial intelligence and machine learning capabilities are coupled with automation, the accuracy and ability of automation to intuit what needs to be done improve, as does the agility to implement improvements and corrective measures effectively.

Intelligence enables organizations to transcend impediments to success. For example, in an autonomous digital enterprise, DevOps practices are integrated throughout the enterprise enabling rapid and continuous delivery of applications and services. This requires technology and intelligent automation, but also a shift in organizational mindset to embrace change and instill it everywhere in the company.

An autonomous digital enterprise will have automation everywhere… intelligent automation that improves the customer experience, speeds up decision-making and implementation, and interacts with customers the way they expect.

This vision of the autonomous digital enterprise is both audacious and compelling – and it is well worth examining for your organization.

Posted in AI, automation, DevOps, digital transformation, enterprise computing

Data Summit Fall 2020 Presentation Now Available for Replay: Modern Database Administration

I was honored to deliver a presentation at this year’s Data Summit conference on the changing world of the DBA. I spoke for about a half hour on Thursday, October 22nd on DBA and database systems trends and issues.

The conference sessions were conducted live, but were recorded as they were delivered. And now my session can be viewed here!

I hope you take a moment to watch the presentation and consider the issues I bring up.

And, if you are interested in useful tools that help with the trends I discuss, stay around after my presentation (which does not talk about any particular vendor tools) to hear a strategist from Quest give their perspective on the issues and their DBA tools that can help.

Finally, I hope you will comment here on the blog if you have any questions, comments, or additional issues you’d like to discuss!

Posted in AI, analytics, Big Data, data, data breach, Data Growth, DBA, DBMS, DevOps, IoT, Machine Learning, review, speaking engagements, tools, trends

Craig Mullins Presenting at Data Summit Fall 2020

Keeping abreast of what is happening in the world of data management and database systems is important for all IT professionals these days. Data is at the center of everything that modern organizations do and the amount of data we have to store, manage, and access is growing at an ever-increasing pace. It can be difficult to keep up with it all.

If you need to get up to speed on everything going on in the world of data, you should plan on attending the Data Summit Fall 2020 virtual event. Held in person in years past, this year the event is offered as a free online webinar series running from October 20 through 22, 2020.


And this year I will be speaking again at the event, and hopefully more of you will be able to attend than in years past, since there is no travel involved! My presentation will be on the changing world of the DBA (Thursday, October 22nd, at Noon Eastern time). I’ll discuss how the DBA’s job is impacted by big data and data growth, NoSQL, DevOps, the cloud, and more.

I hope to see you there!

Posted in DBA

COBOL V4.2 to COBOL V6 Migration – The Cost of Procrastination

Today’s post is a guest article written by Dale Vecchio, IT modernization expert and former Gartner analyst.

While no one can argue that the COBOL language has had tremendous staying power over the last 50-60 years, its biggest attribute these days is best summed up in the expression “leave well enough alone”! Yeah, COBOL is here and the applications still work. But the costs of staying wedded to this 3GL procedural language are increasing. As COBOL V4.2 reaches end-of-support, the conversion to COBOL V6 is a non-trivial exercise. Even IBM admitted as much in a 2018 presentation, “Migrating to Enterprise COBOL v6”. Organizations have been upgrading their COBOL versions for decades, but this jump seems particularly onerous. For example, IBM reports that customers perceived migrating from COBOL V3 to V4 as having a difficulty level of “3”, while upgrading from V4 to V6 had a difficulty level of “20”!

Of course, there are improvements in COBOL V6, but they come at a price. Any mainframe organization that is not at current hardware/software levels may find it needs to upgrade just to be able to support this version of COBOL. COBOL V6, by IBM’s own admission, will require 20x more memory at compile time and will take 5x to 12x longer to compile! But probably most problematic is that 25% of customers migrating to COBOL V6 ran into migration difficulties due to “invalid data”.

One of the many challenges of mainframe modernization is that organizations either “cheated” or simply got away with “unsupported features” in COBOL. Earlier versions of COBOL may have accepted data formats that are no longer “valid” in V6. These problems are the most difficult to find, since the program MAY appear to work but generate wrong results. The best that could happen is that the program will fail, and then your limited development staff can “go fishing” in a 30-40 year old COBOL program trying to figure out what the heck the problem is! IBM’s view on this seems to be, “well, you created the problem, so you fix it!” The amount of effort necessary to migrate to V6 is greatly exacerbated by this data problem, since it is likely to dramatically increase the testing needed.
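
To make the “invalid data” problem concrete, here is a minimal sketch of the kind of validity check involved, written in Java (since that is the direction this article heads) rather than COBOL. A zoned-decimal field is valid only when each digit nibble is 0 through 9 and the zone nibbles are correct; a field full of EBCDIC spaces, which older compilers often tolerated, fails the test. The class and method names are illustrative only – this is the principle, not IBM’s migration tooling:

    // Illustrative sketch: checks whether an EBCDIC zoned-decimal field
    // holds data that older COBOL compilers tolerated but that Enterprise
    // COBOL V6's optimized code paths may treat as invalid.
    public final class ZonedDecimalCheck {

        public static boolean isValidZoned(byte[] field) {
            if (field == null || field.length == 0) return false;
            for (int i = 0; i < field.length; i++) {
                int zone  = (field[i] >> 4) & 0x0F;  // high nibble
                int digit = field[i] & 0x0F;         // low nibble
                if (digit > 9) return false;         // digit must be 0-9
                boolean last = (i == field.length - 1);
                // Non-sign bytes need the 0xF zone; the final byte may carry
                // a sign zone of 0xC (+), 0xD (-), or 0xF (unsigned).
                if (!last && zone != 0x0F) return false;
                if (last && zone != 0x0C && zone != 0x0D && zone != 0x0F) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            byte[] good = { (byte) 0xF1, (byte) 0xF2, (byte) 0xC3 };  // +123
            byte[] bad  = { (byte) 0x40, (byte) 0x40, (byte) 0x40 };  // EBCDIC spaces
            System.out.println(isValidZoned(good));  // true
            System.out.println(isValidZoned(bad));   // false
        }
    }

Finding fields like the second one before the compiler’s optimized code paths do is precisely why the testing burden of a V6 migration balloons.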

Consequently, the entire argument that it’s “safer” to stay on COBOL and just upgrade is a specious one. Perhaps the most common modernization strategy of the last 20 years, procrastination, is no longer a viable choice! Prolonging the usage of a procedural, 3GL language, against the backdrop of a declining skills pool, is increasingly risky. I can assure you that many organizations I have spoken to around the world over the last 20 years have the DESIRE to modernize, on or off the mainframe, but the risks and costs have been seen as simply too high. These migration risks are quickly becoming balanced by the risks of NOT modernizing. The modern IT world is increasingly one of Linux, cloud, open source and Java. The innovation is in these areas. The skills are in these areas. No one is saying anything bad about the mainframe here – only that there are acceptable options for running enterprise workloads that do NOT require the legacy OS or transactional environments of the past.

While Java is not the only path to a modern IT application environment, it is certainly one of the most common. So the trick is to figure out how to move in that direction while mitigating the risks. If you are going to have to invest in some of your COBOL applications, why not evolve to a modern Linux world? There are plenty of issues to deal with when modernizing applications, so reducing the risks in some areas is a good idea. Easing your applications into a modern DevOps environment that is plentiful with skilled developers is a worthwhile investment. You don’t have to modernize every COBOL application any more than you need to upgrade every one to V6! Modernization is a journey, but you’ll never reach your destination if you don’t take the first step. Code transformation solutions that give you decent, performant Java programs that can be managed by a DevOps tool chain and enhanced by Java developers are a worthwhile consideration. Code transformation solutions that are syntactic, line-by-line transformations are NOT the answer – ones that refactor COBOL into Java classes and methods are! Let’s be realistic – some of your COBOL applications have very few enhancements made annually. If you can get them transformed into Java, and they can then take advantage of the cost benefits of these runtime environments, whether on the mainframe (specialty engines) or off, your modernization journey is off to a good start.

To listen to a webinar discussing this topic, go to https://youtu.be/2b8XrOovHn4

Posted in DBA

IBM POWER and SAP HANA: A Powerful and Effective Combination

As organizations look for differentiators to improve efficiency and cost-effectiveness, the combination of IBM Power Systems and SAP HANA can provide a potent platform with a significant return on investment.

Why IBM Power Systems?

IBM Power Systems is a family of server computers based on IBM’s POWER processors. The POWER processor is actually a series of high-performance microprocessors from IBM, each called POWER followed by a number designating its generation: POWER1, POWER2, POWER3, and so forth, up to the latest, POWER10, which was announced in mid-August 2020 and is scheduled for availability in 2021.

What makes IBM Power Systems different from typical x86-architecture servers is the RISC, or Reduced Instruction Set Computer, architecture, based on IBM research that began in the 1970s. POWER microprocessors were designed specifically for servers and their intrinsic processing requirements.

In contrast, x86 CPUs were initially built for and catered to the personal computer market. They are designed as general-purpose processors that can be used for a variety of workloads, even for home PCs. As the processing power of the x86 microprocessors advanced over time, they were adapted for usage in servers.

So, looking at the two alternatives today, both x86 and IBM Power Systems seem to be competitive architectures for servers running enterprise workloads. However, POWER microprocessors were designed to service high-performance enterprise workloads, such as database management systems, transaction processing, and ERP systems. Although x86 microprocessors can be used for those types of workloads too, they are typically not as efficient because of their general-purpose design, as opposed to the POWER processor’s specific design for enterprise computing.

IBM Power Systems deliver simultaneous multithreading (SMT), a technique for improving the overall efficiency of CPUs by permitting multiple independent threads to execute and utilize the resources of the processor architecture. With IBM Power Systems’ SMT8, each core can run eight threads in parallel, which is about four times more than its competitors. Simultaneous multithreading helps to mask memory latency and increase the efficiency and throughput of computations.

Virtualization is another differentiator for POWER, because its processors were built to support virtualization from the get-go. POWER features a built-in hypervisor that operates very efficiently. On the other hand, x86 was not originally designed for virtualization, which means you need to use a third-party hypervisor (e.g., VMware).

Scalability is another issue where IBM POWER excels versus x86. Although you can scale both, x86 scaling typically requires adding more servers. With POWER, the chips themselves are designed to scale seamlessly without having to add hardware (although you can if you so desire).

The bottom line is that the POWER architecture provides benefits for modern workloads, such as big data analytics and artificial intelligence (AI). Which brings us to SAP HANA.

Why SAP HANA?

SAP HANA is an in-memory database management system that delivers high-speed data access. It can offer efficient, high-performance data access due to its use of memory and its storage of data in column-based tables, as opposed to the row-based tables of a traditional SQL DBMS. Such a columnar structure can often deliver faster performance when queries only need to access certain sets of columns.
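
To see why the columnar layout helps, consider this toy Java sketch, which models the storage principle only and is in no way SAP HANA’s actual engine. A query that needs just one column scans a single dense array in a column store instead of touching every row:

    import java.util.Random;

    // Toy model of row-store vs. column-store access patterns.
    public class ColumnVsRow {
        public static void main(String[] args) {
            final int rows = 1_000_000;
            Random rnd = new Random(42);

            // Row store: each row keeps all four of its fields together.
            double[][] rowStore = new double[rows][4];
            // Column store: one field lives in its own contiguous array.
            double[] priceColumn = new double[rows];

            for (int i = 0; i < rows; i++) {
                for (int c = 0; c < 4; c++) rowStore[i][c] = rnd.nextDouble();
                priceColumn[i] = rowStore[i][2];  // column 2 plays the role of "price"
            }

            // A query like SUM(price) must visit every row in the row store...
            double sumRow = 0;
            for (int i = 0; i < rows; i++) sumRow += rowStore[i][2];

            // ...but scans one dense, cache-friendly array in the column store.
            double sumCol = 0;
            for (int i = 0; i < rows; i++) sumCol += priceColumn[i];

            System.out.println(sumRow == sumCol);  // same answer either way
        }
    }

The column scan reads a fraction of the memory, and reads it sequentially, which is the essence of the columnar advantage for analytical queries.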

SAP HANA provides native capabilities for machine learning, spatial processing, graph, streaming analytics, time series, text analytics/search, and cognitive services all within the same platform. As such, it is ideal for implementing modern next-generation Big Data, IoT, translytical, and advanced analytics applications. 

SAP S/4HANA is the latest iteration of the SAP ERP system that uses SAP HANA as the DBMS, instead of requiring a third-party DBMS (such as Oracle or Db2). It is a completely revamped version of their ERP system.

Organizations implement SAP HANA both as a standalone, highly efficient database system and as part of the SAP S/4HANA ERP environment. For both of these HANA applications, IBM Power Systems is the ideal hardware for ensuring optimal performance, flexibility, and versatility.

Why IBM POWER + SAP HANA?

IBM Power Systems are particularly good at powering large computing workloads. Their ability to take advantage of large amounts of system memory and to be logically partitioned makes them ideal for implementing SAP HANA.

If you need something that can take advantage of 64 TB of memory on board and can host up to 16 production SAP HANA LPARs, the high-end POWER E980 is a good choice. Earlier this year (2020), SAP announced support for Virtual Persistent Memory on IBM Power Systems for SAP HANA workloads. This means that, using the PowerVM hypervisor that resides in the firmware, it is possible to support up to 24 TB for each LPAR. Virtual Persistent Memory is available only on IBM Power Systems for SAP HANA.

There are many benefits that can accrue from adopting Virtual Persistent Memory on IBM Power Systems with SAP HANA. For example, it provides faster restart and shutdown processing, which effectively expands the outage window for change control, potentially enabling more work to be done during the outage. Alternatively, the change control window can be shortened, reducing the outage required to make changes.

And let’s not forget to mention the SMT8 capability of IBM Power Systems, which will improve cache per core, thereby improving SAP HANA performance on a Power machine as compared with other machines.

Of course, there are also midrange IBM Power Systems such as the E950 that can be used if your requirements are not at the high-end.

Cost

Of course, a server can be powerful and efficient, but if it is not also cost-effective it will be difficult for organizations to adopt it. Forrester Research conducted a three-year financial impact study and concluded that IBM Power Systems for SAP HANA delivers a cost-effective solution.

The study involved multiple customer interviews and data aggregation, from which Forrester determined the following benefits of running SAP HANA on IBM Power Systems as opposed to other platforms (the percentages show each benefit’s share of the total quantified value):

  • Avoided cost of system downtime (36%) – the composite organization avoided 4 hours of planned and unplanned downtime per month
  • Reduced cost of power and cooling (4%) – the composite organization saved nearly 438,000 kWh of power per year
  • Avoided cost of alternate server architecture (49%) – other architectures required as many as 20 systems, compared to an architecture with only 3 IBM Power Systems servers
  • Reduced cost of managing and maintaining infrastructure (11%) – system administrators regained 60% of their productive time thanks to the reduced management and maintenance burden

The net/net shows a 137% return on investment (ROI) with a 7-month payback.
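
If you are curious about the arithmetic behind a figure like that, ROI is simply net benefit divided by cost. The dollar amounts in this little Java sketch are hypothetical placeholders chosen to reproduce the percentage, not numbers from the Forrester study:

    // Hypothetical inputs for illustration only; the real figures
    // (and their timing) are in the Forrester study.
    public class RoiSketch {
        public static void main(String[] args) {
            double costs    = 1_000_000;   // assumed total 3-year costs
            double benefits = 2_370_000;   // assumed total 3-year benefits

            // ROI = (benefits - costs) / costs
            double roi = (benefits - costs) / costs;
            System.out.printf("ROI: %.0f%%%n", roi * 100);  // prints: ROI: 137%

            // Payback is the month in which cumulative benefits first cover
            // cumulative costs, so it depends on when the cash flows land;
            // that is why it is quoted separately from the ROI.
        }
    }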

It is also important to note that IBM offers subscription-based licensing for Power Systems where you pay only for what you use. With this flexible capacity-on-demand option, your organization can stop overpaying for resources you do not use. A metering system is used to determine your usage, and you are billed accordingly, instead of paying for the entire capacity up front.

Use Cases

There are many examples of customers deploying IBM Power Systems to achieve substantial benefits and returns on their investment.

One example of using IBM Power Systems to reduce footprint and simplify infrastructure is Würth Group. Located in Germany, Würth Group is a worldwide wholesaler of fasteners and tools with approximately 120,000 different products. The company deployed IBM Power Systems and was able to slim down the number of physical servers and SAP HANA instances from seven to one, an 86% reduction, cutting power consumption and operating costs.

Danish Defence has implemented SAP HANA on IBM Power Systems to support military administration and operations with rapid, reliable reporting. As a result, they achieved up to 50% faster system response times, enabling employees to work more productively. Additionally, processes completed 4 hours ahead of schedule, meaning that reports are always available at the start of each day. And at the same time, they achieved a 60% reduction in storage footprint, thereby reducing power requirements and cooling costs.

And perhaps most telling, SAP itself has replaced its existing SAP HANA HEC platform with IBM POWER9. According to Christoph Herman, SVP and Head of SAP HANA Enterprise Cloud, “SAP HANA Enterprise Cloud on IBM Power Systems will help clients unlock the full value of SAP HANA in the cloud, with the possibility of enhancing the scalability and availability of mission-critical SAP applications while moving workloads to SAP HANA and lowering TCO.”

Summary

Whether implementing on-premises, in the cloud, or as part of a hybrid multicloud environment, the combination of IBM Power Systems and SAP HANA can deliver a high-performance, cost-effective environment for your ERP and other workloads.

Posted in analytics, Big Data, business planning, data, DBMS, ERP, IBM, In-Memory, optimization, performance, SAP HANA

Inside the Data Reading Room – Fall 2020 Edition

Welcome to yet another edition of Inside the Data Reading Room, a regular feature of my blog where I take a look at recent data management books. In today’s post we’ll examine three new books on various data-related topics, beginning with data agglomeration.

You may not have heard of data agglomeration, but you’ll get the idea immediately – at least at a high level – when I describe it as gathering data in wireless sensor networks. For more details, you’ll want to read A Beginner’s Guide to Data Agglomeration and Intelligent Sensing by Amartya Mukherjee, Ayan Kumar Panja, and Nilanjan Dey (Academic Press, 2020, ISBN 978-0-12-620341-5). The authors are all professors who specialize in networking, IoT, and data issues.

The book provides a concise treatment of the topic, starting with an overview of the various types of sensors and transducers and how they are used. I always find it easier to learn by example, and this book is nice because the authors provide a variety of good examples.

Reading this book will provide you with descriptions and explanations of pertinent concepts like wireless sensor networks, cloud platforms, and device-to-cloud and sensor-cloud architectures, but more importantly, it also describes how to gather and aggregate data from wireless sensor networks.

If you or your organization are involved in gathering data from sensors, such as in IoT systems, this book will be a great help to you as you design and implement your applications.

Next up from the shelves of the Data Reading Room we have Rohit Bhargava’s Non Obvious Mega Trends (IdeaPress, 2020, ISBN 978-1-64687-002-8).  

For those who do not know about this book series, every year since 2011 Rohit Bhargava has been publishing what he calls The Non Obvious Trend Report. He began writing these reports in response to the parade of annual articles talking about “the next big trends in the upcoming year,” which he found either to be too obvious (e.g. mobile phones still useful) or too self-serving (e.g. drone company CEO predicts this is the year of the drone) to be useful. In response, he created the Non Obvious Trend Report with the goal of being unbiased and digging deeper for nuances and trends missed elsewhere.

To a large extent, he succeeded. So much so that this book represents the 10th in the series. But what makes this particular book a must-have is that not only does it introduce 10 new trends, but it also documents and reviews all of the trends over the past decade.

For readers of this blog, Chapter 11, Data Abundance, will likely be the most useful chapter (although the entire book is great for research). In Chapter 11 he describes what data abundance is, how understanding it can be used to your advantage, as well as the various trends that have led to the evolution of data abundance.

I look forward to each new, annual edition of Non Obvious, but I think this year’s edition stands out as one that you will want to have on your bookshelf long-term.

The final book for today is Systems Simulation and Modeling for Cloud Computing and Big Data Applications edited by J. Dinesh Peter and Steven L. Fernandes (Academic Press, 2020, ISBN 978-0-12-620341-5).

Models and simulations are an important foundation for many aspects of IT, including AI and machine learning. As such, knowledge of them will be beneficial for data professionals and this book provides an education in using System Simulation and Modeling (SSM) for tasks such as performance testing and benchmarking.

The book analyzes the performance of several big data and cloud frameworks, including benchmarks such as BigDataBench, BigBench, HiBench, PigMix, CloudSuite and GridMix.

If you are dealing with big data and looking for ways to improve your testing and benchmarking through simulation and modeling, this book can be of help.

Posted in Big Data, book review, books, data, simulation, trends

Inside the Data Reading Room – Summer 2020 Edition

Regular readers of this blog know that I am an avid reader and that I regularly post quick reviews of the technology books that I have been reading. This year has been a great one for catching up on reading, what with the pandemic and social distancing going on. So here are four interesting books I’ve been devouring this summer.

Introducing Artificial Intelligence: A Graphic Guide by Henry Brighton and Howard Selina (ISBN: 978-184831214-2) published by Icon Books.

If you are looking for a nice, introductory treatment of artificial intelligence look no further than this concise, inexpensive, little book. It offers a trove of useful information that will help you to understand what AI is, the issues that can arise as it is adopted, and how it will change the way we use information systems.

The book is not geared toward a developer looking for in-depth algorithms and such. Instead, it is focused on giving a nice, broad overview to the layperson. The history of AI, its philosophical issues and implications, and the various types of AI (neural networks, machine learning, etc.) are all covered at a high level… and with graphics to support the descriptions and definitions. There’s even a short but reasonably useful index.

You won’t become an AI expert after reading this book, but you will have a reasonable foundation from which to learn more. And you can use the Further Reading section to help you along that path.

I’ll probably re-read this short book several more times (at least portions of it) just to make sure that my foundational AI knowledge is sound. If you want a similar foundation, pick up a copy of Introducing Artificial Intelligence: A Graphic Guide and give it a read today. You won’t be sorry.

Business Knowledge Blueprints: Enabling Your Data to Speak the Language of Business (2nd edition) by Ronald G. Ross (ISBN: 978-0-941049-17-7) published by Business Rules Solutions (BRS)

Anybody who has worked with data over the last several decades should know about the work and books of Ronald G. Ross, who is one of the founding luminaries of the concept of business rules. Ross has written extensively about business rules, including The Business Rule Book, the seminal work in this field, and he has also written classic books on entity modeling and database systems.

At any rate, we now have a new, second edition of his recent book Business Knowledge Blueprints. Herein you will learn the art and science of integrating data discovery and modeling with business communication.

This book contains a wealth of information about designing your data systems with the business in mind. It will be useful for anybody who works with data and needs to be able to communicate about and use the data in a way that is understandable to business and benefits the business.

With chapters like “Defining Things” and “Disambiguating Things”, as well as a whole section on “How to Define Business Terms in Plain English”, Ross takes you on a journey from confusion and messy data to building robust business vocabularies.

This book should be required reading for any professional whose work involves digital transformation and business transformation!

Data Democracy: At the Nexus of Artificial Intelligence, Software Development, and Knowledge Engineering, edited by Feras A. Batarseh and Ruxin Yang (ISBN: 978-0-12-818366-3) published by Academic Press

Not sure what data democracy is? But your interest is piqued because it has “data” in the title? Well, you should be interested, and you’ll understand the term well if you give Data Democracy a thorough read.

The editors claim the book to be a manifesto for data democracy, and it succeeds in that challenge. Everybody is part of the “data republic” and therefore needs to be aware of their data… who has access to it, how they got access to it, how it is being used, how it is protected, and more.

In short, data democracy is the concept of sharing data instead of letting it be monopolized by a few large concerns. Of course, it is not quite that simple, so a book is needed… this book.

If you consume or create data – and at this point who does not – you will benefit by reading Data Democracy.

Data Governance: How to Design, Deploy, and Sustain an Effective Data Governance Program by John Ladley (ISBN: 978-0-12-815831-9) published by Academic Press

John Ladley’s latest book is the second edition of his Data Governance book, first published in 2012. If you know the first edition of this book, you’ll certainly appreciate this updated second edition. Reading this book will provide you with a comprehensive overview of why data governance is needed, how to design, initiate, and execute a data governance program, and how to keep the program sustainable.

There is a ton of new content in this second edition, including new case studies, updated industry details, and updated coverage of the available data governance tools that can help.

The book will be useful to you whether you are a novice or a seasoned professional. At the heart of this book is the framework that Ladley presents, which you can follow to build and maintain successful data governance at your organization. In combination with the use cases that he walks through in the book, you have a powerful guide for launching your data governance program.

Useful for both small and large organizations, this book belongs on your shelf if you are charged with any aspect of data management or data governance within your shop.

Note: you can click on any of the links to purchase the books from Amazon.

Posted in AI, book review, books, data, data governance

Happy Sysadmin Day

In case you did not know it, today is the last Friday in July… and that means it is System Administrator Appreciation Day. And this year, 2020, is the 21st annual celebration of Sysadmin Day!

Let’s face it, most Sysadmins are unsung heroes. If the network is up and running, if performance is optimal, if your transactions are working… that means that the Sysadmin is doing their job. But how often do you thank them just because everything is working well and you can do your job? Probably not very often.

That is what System Administrator Appreciation Day is all about. Take a moment out of your day to thank your local Sysadmins. Yes, plural, because there are probably many of them working tirelessly, day in and day out, to keep the systems running – the hardware and the software that you rely on every day!

And to thank them, why not bring them a nice cup of coffee (instead of that swill in the break room) or a tasty pastry?

They’ll surely appreciate it… and the next time you need to ask them for help that treat you brought them won’t be forgotten!

Posted in DBA

Whatever Happened to The DBA?

Today’s post is a guest article written by Brian Walters, with Percona.


Today is DBA Appreciation Day (@dbaday, https://dbaday.org), where we take the time to celebrate the often-unacknowledged role of database admins. Suggested appreciation gifts are pizza, ice cream, and not deploying a migration that forces your DBA to work over the weekend!

DBA Appreciation Day is a welcome gesture, injecting humor into these difficult times when many DBAs are working incredibly hard. But the celebration also forces us to reflect on how dramatically the world of data storage, retrieval, and information processing has changed in recent times.

When I started my career working with databases in the mid-1990s, relational databases were almost the only game in town. I was trained on Oracle 7.3 and obtained my first certification on Oracle 8i. I was sure then that the demand for relational database management system expertise would never wane. Boy, was I wrong… sort of?!

DBA redefined

Changes in the database technology space have touched every aspect of the data platform. It used to be that a database administrator (DBA) career path included having, or developing, knowledge of the entire application stack. This stretched from storage and infrastructure to the internal workings of the application itself.  If you wanted to progress, then the DBA role could be a stepping-stone to the larger world of full-stack architecture. So, what changed?

Well, to begin with, this career path is no longer so easy to define, as outside influences impact the relevance and value of the DBA role.

The introduction of new technologies such as NoSQL platforms and the rise of cloud computing models have played a part. Data sources have proliferated, including the introduction of mobile, the birth of IoT, and the rapid expansion of edge devices. Software development and the production of code-based intellectual property and services have been revolutionized by the adoption of agile models and the abandonment of less flexible waterfall models. And we cannot discount the effect that changes in deployment models and automation have had. The impact of both infrastructure-as-code and containerization is phenomenal.

The Everything-as-a-Service world we inhabit today is unrecognizable from where many of us started, just a few years before. Gone are the days when a DBA defined the low-level storage parameters for optimal database performance. Gone (or mostly gone) are the days when a data architect was a part of the application development team. Gone are the days when a DBA built their entire career around the configuration and tuning of one database technology.

In the majority of organizations today, the value of this role is no longer self-evident.  While some may disagree with this mentality, and many DBAs may not like the trajectory of the trend, at this point, there is no denying that things have changed.

Does this mean that the value of this skill set has also disappeared? Are database gurus extinct? Certainly not. In many cases these experts have simply moved into consulting firms, extending their skills to those experiencing critical issues, who need in-depth expertise.

There are many factors that played a part in taking us from the world where relational DBAs were indispensable, to where we stand today. The move towards DBaaS and the (false) perception that this will provide companies with a complete managed service certainly plays a part.

Skilled and still in-demand

Many of the companies I work with on a regular basis no longer hire in-house DBAs. Instead, they are increasingly choosing to bring in outside database expertise on a contract basis. This represents a dramatic shift in perception and should provoke wider internal and external discussions on the pros and cons of this policy.

Fundamentally, it is important to remember that solid database performance continues to be based on the quality of the queries, the design of the schema, and a properly architected infrastructure. Proper normalization still matters. Data-at-scale continues to require sound data architecture. Business continuity demands robust, fault-tolerant infrastructure. However, many companies no longer have the internal capacity required to meet these demands in the same way they did in the past.

Database consulting is now a booming market. This is, in part, due to the perceived diminished need for in-house DBA expertise. But, the truth is, there is still a need for that capability and expertise.

With the appetite for employing in-house DBAs gone, filling the expertise gap falls to those few consulting firms that employ people who have these skill-sets.

For the firms who built their stellar reputations on the availability and quality of their DBAs, now made available by the hour and for pre-agreed engagements, it’s a great time to be in database consulting and managed services.

 

Written by Brian Walters
Brian is Director of Solution Engineering with Percona, a leader in providing enterprise-class support, consulting, managed services, training, and software for open source databases in on-premises and cloud environments.
Posted in DBA

Wish Your DBA Glad Tidings on DBA Appreciation Day

It is the first Friday in July, so I wanted to wish DBAs everywhere a Happy DBA Appreciation Day!


Day in and day out your DBAs are working behind the scenes to make sure that your applications have access to the mission-critical data they need to operate correctly and efficiently. Your DBAs are often called on to work over the weekend or late into the night to perform maintenance and support operations on your databases. And their dedication to keeping your databases available and your applications efficient is rarely noticed, let alone appreciated.

If the DBA is doing their job, then you never really notice that they are there…  so take a moment or two today, July 3rd, to thank your database administrators. Maybe buy them a donut or a pastry… get them a good cup of coffee (not that swill from the break room)… or just nod and tell them that you appreciate what they do.

You’ll make their day!

Posted in DBA