Whatever Happened to The DBA?

Today’s post is a guest article written by Brian Walters of Percona.


Today is DBA Appreciation Day (@dbaday, https://dbaday.org), where we take the time to celebrate the often-unacknowledged role of database admins. Suggested appreciation gifts are pizza, ice cream, and not deploying a migration that forces your DBA to work over the weekend!

DBA Appreciation Day is a welcome gesture, injecting humor into these difficult times when many DBAs are working incredibly hard. But the celebration also forces us to reflect on how dramatically the world of data storage, retrieval, and information processing has changed in recent times.

When I started my career working with databases in the mid-1990s, relational databases were almost the only game in town. I was trained on Oracle 7.3 and obtained my first certification on Oracle 8i. I was sure then that the demand for relational database management system expertise would never wane. Boy, was I wrong… sort of?!

DBA redefined

Changes in the database technology space have touched every aspect of the data platform. It used to be that a database administrator (DBA) career path included having, or developing, knowledge of the entire application stack. This stretched from storage and infrastructure to the internal workings of the application itself.  If you wanted to progress, then the DBA role could be a stepping-stone to the larger world of full-stack architecture. So, what changed?

Well, to begin with, this career path is no longer so easy to define, as outside influences impact the relevance and value of the DBA role.

The introduction of new technologies such as NoSQL platforms and the rise of cloud computing models have played a part. Data sources have proliferated, with the introduction of mobile, the birth of IoT, and the rapid expansion of edge devices. Software development and the production of code-based intellectual property and services have been revolutionized by the adoption of agile models and the abandonment of less flexible waterfall models. And we cannot discount the effect that changes in deployment models and automation have had. The impact of both infrastructure-as-code and containerization is phenomenal.

The Everything-as-a-Service world we inhabit today is unrecognizable from where many of us started just a few years ago. Gone are the days when a DBA defined the low-level storage parameters for optimal database performance. Gone (or mostly gone) are the days when a data architect was a part of the application development team. Gone are the days when a DBA built their entire career around the configuration and tuning of one database technology.

In the majority of organizations today, the value of this role is no longer self-evident. While some may disagree with this assessment, and many DBAs may not like the trajectory of the trend, at this point there is no denying that things have changed.

Does this mean that the value of this skill set has also disappeared? Are database gurus extinct? Certainly not. In many cases these experts have simply moved into consulting firms, extending their skills to those experiencing critical issues, who need in-depth expertise.

There are many factors that played a part in taking us from the world where relational DBAs were indispensable to where we stand today. The move towards DBaaS, and the (false) perception that it will provide companies with a complete managed service, certainly plays a part.

Skilled and still in-demand

Many of the companies I work with on a regular basis no longer hire in-house DBAs. Instead, they are increasingly choosing to bring in outside database expertise on a contract basis. This represents a dramatic shift in perception and should provoke wider internal and external discussions on the pros and cons of this policy.

Fundamentally, it is important to remember that solid database performance continues to be based on the quality of the queries, the design of the schema, and a properly architected infrastructure. Proper normalization still matters. Data-at-scale continues to require sound data architecture. Business continuity demands robust fault-tolerant infrastructure. However, many companies now lack the internal capacity required to meet these demands in the same way they did in the past.

Database consulting is now a booming market. This is, in part, due to the perceived diminished need for in-house DBA expertise. But, the truth is, there is still a need for that capability and expertise.

With the appetite for employing in-house DBAs gone, filling the expertise gap falls to those few consulting firms that employ people with these skill sets.

For the firms that built their stellar reputations on the availability and quality of their DBAs, whose expertise is now available by the hour and for pre-agreed engagements, it’s a great time to be in database consulting and managed services.


Written by Brian Walters
Brian is Director of Solution Engineering with Percona, a leader in providing enterprise-class support, consulting, managed services, training, and software for open source databases in on-premises and cloud environments.
Posted in DBA | 1 Comment

Wish Your DBA Glad Tidings on DBA Appreciation Day

It is the first Friday in July, so I wanted to wish DBAs everywhere a Happy DBA Appreciation Day!


Day in and day out your DBAs are working behind the scenes to make sure that your applications have access to the mission-critical data they need to operate correctly and efficiently. Your DBAs are often called on to work over the weekend or late into the night to perform maintenance and support operations on your databases. And their dedication to keeping your databases available and your applications efficient is rarely noticed, let alone appreciated.

If the DBA is doing their job, then you never really notice that they are there…  so take a moment or two today, July 3rd, to thank your database administrators. Maybe buy them a donut or a pastry… get them a good cup of coffee (not that swill from the break room)… or just nod and tell them that you appreciate what they do.

You’ll make their day!

Posted in DBA

What causes SQL Server to have performance issues or run slow?

Today’s post is a guest article written by a friend of the blog, Kevin Kline, an expert on SQL and Microsoft SQL Server.

Image Source: Pixabay

SQL Server can suffer from all sorts of problems if it is not properly optimized, and thankfully with the help of performance tuning, you can overcome common complications.

Of course, the first step to fixing flaws in SQL Server is understanding what causes them, so here is a look at the common symptoms to look out for and what they may indicate.

Hardware hold-ups

One of the first things to consider when addressing server performance imperfections is that the hardware itself may not be adequate to accommodate the kinds of uses you have in mind.

For example, if your storage is reaching its capacity or your CPU is being pushed to its limits in the handling of the queries that are being fired at the server throughout the day, then the only option may be to upgrade the overburdened components.

Of course, there are ways to make better use of the hardware resources you have available without splashing out on expensive new equipment. This can include increasing the maximum amount of memory that SQL Server can use, which is a widely used quick fix that is worth trying out, especially if you have previously stuck to the default settings without doing any tinkering.
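As an illustration, this setting can be adjusted with `sp_configure`; the 12 GB value below is purely illustrative (sized for a hypothetical 16 GB host) and should be chosen to leave headroom for the operating system and any other processes on the server:

```sql
-- Enable advanced options so 'max server memory' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the memory ceiling (value is in MB; 12288 MB = 12 GB here,
-- an illustrative figure -- size yours to fit your host).
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```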

A simple, cost-effective upgrade that can be considered is to migrate data from traditional hard disks to solid state storage, the price of which has fallen significantly in recent years.

Networking imperfections

If you have checked your server’s hardware resources and seen that they are not being taxed, then sluggish performance could instead be down to the network itself.

Analyzing traffic and testing connectivity will allow you to work out whether your SQL database is being hamstrung by the infrastructure which it relies upon to serve end-users.

Software snafus

Before delving any deeper into troubleshooting SQL Server performance, it is worth checking on the software side of the equation to make sure that no errant processes are monopolizing hardware resources unnecessarily.

It is perfectly possible for the OS to throw up unexpected issues of this nature from time to time, and often these can be fixed by simply killing the process in question, so long as it is not a lynchpin of the entire software environment, of course.

Index issues

Well-maintained indexes are key to keeping an SQL database running smoothly, which is why you should aim to look out for index fragmentation if you are experiencing problems, or even if you are not.

Make sure to schedule index maintenance on a consistent basis so that you are not left with seriously fragmented, sub-optimal indexes. Your maintenance schedule should be set according to your own needs, which means you also need to stay on top of server monitoring so that you can make informed decisions.
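As a starting point for that monitoring, fragmentation can be checked with the `sys.dm_db_index_physical_stats` dynamic management function; the 30 percent threshold below is just a common rule of thumb (reorganize below it, rebuild above it), not a hard rule:

```sql
-- List the most fragmented indexes of any real size in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- rule-of-thumb rebuild threshold
  AND ips.page_count > 1000                   -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```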

Query qualms

If particular queries are used very frequently and tend to take up a lot of your I/O throughput and server CPU grunt, it is likely that there is room for improvement here.

There are lots of ways to optimize SQL queries, and it makes sense to focus your attention on the most commonplace queries, since even a minor enhancement to a frequently executed query can yield big aggregate performance gains.
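One way to find those candidates is the `sys.dm_exec_query_stats` dynamic management view, which tracks cumulative statistics for cached query plans; a sketch along these lines surfaces the biggest CPU consumers:

```sql
-- Top 10 cached statements by total CPU time since the plan was cached.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.total_logical_reads,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Note that these statistics reset when a plan leaves the cache or the instance restarts, so they are a snapshot, not a permanent record.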

It is best to see SQL Server maintenance as an ongoing process, rather than one which can be carried out once and then considered as complete. Being attuned to the likely issues that might arise will let you act swiftly when they do emerge, and even let you preemptively prevent them so that performance is always top-notch.

Written by Kevin Kline
Kevin serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell. Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP, and long-time blogger at SentryOne. As a noted leader in the SQL Server community, Kevin blogs about Microsoft Data Platform features and best practices, SQL Server trends, and professional development for data professionals.
Posted in DBA, Microsoft SQL Server, optimization, performance, SQL

The Cost of Virtual Database Conferences

This year, 2020, the year of the COVID-19 pandemic and quarantine, is wreaking havoc on the technical conference industry. So far, we have already seen many conferences postponed or canceled. Those that continue are soldiering on with online, virtual conferences.

I like the general idea of a virtual conference held online, especially when there are no safe, realistic options for holding in-person events this year.

One example of a well-done, online virtual event was the IBM Think 2020 conference. For those who never attended an IBM Think event, it is IBM’s annual, in-person conference that usually attracts over 10,000 attendees. The cost to attend is over $1000, but IBM offers discounts to some customers and VIPs.

This year’s IBM Think 2020 event hosted nearly 90,000 virtual participants, which is incredible! But I think one of the key factors was the cost: it was free. Yes, IBM Think 2020 was provided free of charge to anybody who wanted to attend. Of course, a vendor the size of IBM can bear the cost of a free conference more easily than events run by volunteers and member organizations.

It will be interesting to watch what Oracle does for its currently-still-scheduled-to-be-an-in-person-event, Oracle World. This year the event is moving (or hopes to be moving) from San Francisco, where it has been held for years, to Las Vegas. It is still scheduled for the week of September 21-24, 2020… but who knows if it will still be held. And, if not, will it go virtual? Will it be free of charge? I’m curious… as are (probably) many of you!

Now let’s take a look at some other database-focused events.

First let’s take a look at the IDUG Db2 North American Technical Conference, which will be a virtual event this year. Having postponed the original weeklong conference that was to be held in Dallas this week, IDUG is promoting the virtual event as a kickoff event occurring the week of July 20th, followed by additional labs, workshops, and sessions for three ensuing weeks. That sounds great to me! I’ll be presenting at one of the recorded sessions on the plight of the modern DBA, so if you attend, be sure to look me up and take part in my session!

But the IDUG Virtual Db2 Tech Conference is not free. There is a nominal cost of $199 to participate and attend.

Turning our attention away from Db2 to Microsoft SQL Server, the annual PASS Summit has also gone virtual for 2020. It was originally scheduled as a live conference in Houston (nearby to me, so I’m disappointed that PASS Summit won’t be in-person this year). Instead, the event will be held online, virtually, the week of November 10, 2020, and will offer over 200 hours of content.

Again, though, like IDUG, the PASS Summit 2020 is not free, either. The cost for this event is listed as $499.

But not all of the database events going virtual are charging. The Postgres Vision Conference will be conducted online June 23-24, 2020… and registration and attendance are free of charge.

Now I do not want to be negative about either of these great events. Both of them have histories of providing quality content for their respective database systems (Db2 and SQL Server). And I wish for both of them to survive and thrive both during, and after, this pandemic passes. Nevertheless, I am skeptical that there will be an outpouring of paid attendees for these events. Why do I say this?

Well, first of all, there is already a plethora of educational webinars offered for free every day of the week on a myriad of technical topics. Sure, you may have to endure some commercial content, but frequently that means learning about technology solutions you may not know about. And to reiterate, these are free of charge.

Another complicating factor will be attention spans. It can be difficult to allot time from your schedule to attend a series of presentations over a full day, let alone a full week. Without being in attendance, in person, at an event, there will be many distractions that draw folks’ attention away from the content… the phone ringing, texts and IMs, e-mail, is that the doorbell?, and so on.

So, potential attendees, but also their managers, will have to decide if the cost of a virtual event is worth it. I hope that most people will temper their objections and give these events a try, even the ones that are charging. After all, if you have been to these events in the past, and were planning on attending this year too, the cost will be lower than if you paid a full in-person registration fee plus travel expenses. For that reason alone it should be worth giving these events a shot. I mean, we DO want them to start up again after the pandemic, right? So supporting them now, even if you have reservations, is really the right thing to do!

What do you think? Will you be attending IDUG or PASS this year virtually? Why or why not? Leave a comment below and let us know!

Posted in conferences, DB2, DBA, DBMS, Microsoft SQL Server | 1 Comment

Impact of COVID-19 on Tech and IT

The global COVID-19 pandemic has had a significant impact on all of us as we struggle to stay safe, protect others, and still remain productive. I wrote about one important impact of the pandemic on the technology sector back in March (Coronavirus and Tech Conferences), but there have undoubtedly been many more factors that have arisen.

I don’t want to ignore perhaps the most obvious impact, working from home. But I don’t really want to belabor that point as it has been hashed out in the media quite a bit already; see, for example, Work from Home is Here to Stay (The Atlantic), Why Many Employees are Hoping to Work From Home Even After the Pandemic is Over (CNBC), and Telecommuting Will Likely Continue Long After the Pandemic (Brookings), among many, many others.

But there have been other, perhaps less obvious, ways that tech workers and companies have changed as we deal with the pandemic.

Yellowbrick Data, Inc., a data warehousing provider, recently surveyed over 1,000 enterprise IT managers and executives to see how IT departments were grappling with the impact of the COVID-19 pandemic.  At a high level—and contrary to conventional wisdom—not all IT budgets are being cut. Even with the economic challenges that COVID-19 has posed for businesses, almost 38 percent of enterprises are keeping their IT budgets unchanged (flat) or actually increasing them.

The survey also revealed some interesting statistics on how the pandemic has changed the thinking and lives of IT professionals. For example, it indicates that 95.1% of IT pros believe that COVID-19 has made their lives more centered on technology than ever before. To me, this is not surprising (except for the 4.9% who believe otherwise)!

Cost optimization is another key finding of the survey. 89.1% say their companies will be focused on cost optimization as a result of COVID-19 disruption, while at the same time 66% are accelerating their migration of analytics to the cloud and 63.9% are investing more in their data platform and analytics due to COVID-19. So cost optimization is important to organizations, but not at the expense of ignoring or mismanaging their data. That is good news, in my humble opinion.

Migrating to a cloud computing model also remains an important aspect of IT amid the pandemic. Acceleration of cloud adoption has increased at 43.5% of organizations due to COVID-19 and 84.3% said that cloud computing is more important than workplace disruption. Nonetheless, 58.1% said that legacy computing is more important during workplace disruption. Not surprisingly then, 82% indicated a desire for hybrid multi-cloud options to spread any risk from their cloud investments. 50.5% said that the benefit of the hybrid cloud enables them to scale faster without compromising sensitive data. I’ve written about the hybrid multicloud approach before if you are interested in my thoughts on that.

Jeff Spicer, the CMO for Yellowbrick, provided these insights: “The survey brought to light some trends that we have been noticing recently related to the speed at which companies are moving to the cloud and investing in analytics. In fact, more than half of enterprises are accelerating their move to the cloud in light of COVID-19 challenges to their businesses. But what really stands out is that nearly 55 percent of enterprises are looking at a hybrid cloud strategy with a combination of cloud and on-premises solutions. That clearly shows that a cloud-alone strategy is not what most enterprises are looking for—and validates what our customers are telling us about their own best practices combining cloud and on-prem approaches to their biggest data infrastructure challenges.”

There will undoubtedly be additional impacts that will reverberate across the technology sector as we fully come to understand the long-term impact of the pandemic. Nevertheless, I found these insights to be illuminating and I hope that you do, too.

Feel free to share your thoughts below…

Posted in business planning, cloud, data

A Little Database History: Relational vs. OO

I am a packrat, which means I have closets full of old stuff that I try to keep organized. A portion of my office closet is reserved for old magazines and articles I’ve cut out of IT publications.

For example, here is a binder of really old database-related articles:

db articles

Every few years I try to organize these things, throwing some away, reorganizing others, and so on. And usually, I take some time to read through some of the material as sort of a trip down memory lane.

One area that I was interested in back in the early 1990s was the purported rise of the ODBMS – a non-relational, non-SQL DBMS based on object-orientation. I was rightly skeptical back then, but the industry pundits were proclaiming that ODBMS would overtake the incumbent “relational” DBMSs in no time. Of course, there were some nay-sayers, too.

Don’t remember the OO vs. relational days? Or maybe you weren’t alive or in IT back then… Well, here are some quotes lifted right out of the magazines and white papers of the times:


  • From the pages of the Spring 1993 edition of InfoDB, there is an exchange between Jeff Tash and Chris Date on the merits, definition, and future of ODBMS. As you might guess, Date is critical of ODBMS in favor of relational; Tash counters that relational is defined by the SQL DBMS products more than the theory. Interesting reading; both have valid points, but Date is spot on in his criticism that there was a lack of a precise definition of an object model.
  • In the July/August 1990 issue of the Journal of Object-Oriented Programming, there are several questionable quotes in the article titled “ODBMS vs. Relational” (especially in hindsight): 

    1) “The data types in the relational model are quite constrained relative to the typing capabilities offered by an ODBMS.”  [Note: Today most RDBMS products offer extensible typing with user-defined distinct types.]

    2) “…the (relational) data model is so simple that it cannot explicitly capture the semantics we now expect from an object model.” [Note: The object folks always want to tightly-couple code and data. The relational folks view the separation of the two as an advantage.]

    3) “The apparent rigor of the relational model…” [Note: Not only is it “apparently” rigorous, but it actually is rigorous. This is an example of an object proponent trying to diminish the importance of the sound theoretical framework of the relational model. Of course, it might be reasonable to say that the DBMS vendors kinda did that themselves, too, by not implementing a true relational DBMS.]

  • Finally, we have a July 1992 article from DBMS Magazine titled “The End of Relational?” This type of headline and sentiment was rampant back then. Of course, as I read the article I see a claim that in March 1991 Larry Ellison said that Oracle8 would be an object database. Of course, it was not (O/R is not O — and the O was different IMHO). And then there is this whopper from that same article: “Although it is certain that the next generation of databases will be object databases…” [Note: Certain, huh?]


Perhaps the most interesting piece of data on the object vs. relational debate that I found in my closet is an IDC Bulletin from August 1997. This note discusses Object versus Object/Relational. Basically, what IDC explains in detail over 14 pages is that the marriage of object to relational is less a marriage and more of a cobbling onto relational of some OO stuff. In other words, the relational vendors extended their products to address some of the biggest concerns raised by the OO folks (support for complex data and extensible data types) — and that is basically the extent of it. The ODBMS never became more than a small niche product.

Although this is an interesting dive into a very active timeframe in the history of database systems, I think there is a lesson to learn here. These days similar claims are being made for NoSQL database systems as were being made for object database systems. Of course, the hype is not as blatant and the claims are more subdued. Most folks view NoSQL as an alternative to relational only for certain use cases, which is a better claim than the total market domination that was imagined for object database systems.

Nevertheless, I think we are seeing — and will continue to see — the major RDBMS players add NoSQL capabilities to their database systems. This creates what sometimes is referred to as a multi-model DBMS. Will that term survive? I’m not sure, but these days we rarely, if ever, hear the term Object/Relational anymore.

And over time, we will likely see the market for NoSQL databases consolidate, with fewer and fewer providers over time. Today there are literally hundreds of options (see DB-Engines.com) and most industries cannot support such a diversity of products. Most industries, although they may fluctuate over time, typically consolidate to where the top three providers control 70% to 90% of the market.

After all, history tends to repeat itself, right?

Posted in DBMS, History, OO, relational

Embracing In-Memory Processing for Optimizing Performance

Organizations are always encouraging their IT professionals to obtain the highest level of performance out of their applications and systems. It only makes sense that businesses want to achieve a high level of return on their investment in IT. Of course, there are many ways of optimizing applications, and it can be difficult to apply the correct techniques to the right applications. Nevertheless, one area where most organizations can benefit is making better use of system memory.

Why is this so? Well, there are three primary factors that impact the performance and cost of computer applications: CPU usage, I/O, and concurrency. When the same amount of work is performed using fewer I/O operations, CPU savings occur and less hardware is needed. A typical I/O operation (read/write) involves accessing or modifying data on disk systems; disks are mechanical and have latency – that is, it takes time to first locate the data and then read or write it. Of course, there are many other factors involved in I/O processing that add overhead and can increase costs, all depending upon the system and type of storage you are using.

So, you can reduce the time it takes to process your batch workload by more effectively using memory. You can take advantage of things like increased parallelism for sorts and improve single-threaded performance of complex queries when you have more memory available to use. And for OLTP workloads, large memory provides substantial latency reduction, which leads to significant response time reductions and increased transaction rates.

Storing and accessing data in-memory eliminates mechanical latency and improves performance. Yet many organizations have not taken advantage of the latest improvements in IBM’s modern mainframes, like the z15, with up to 190 configurable cores and up to 40 TB of memory. That is a lot of memory. And even though you probably do not have 40 TB of memory configured on your mainframe system, chances are you have more than you use. And using it can improve the end-user experience!

Therefore, improved usage of memory can significantly improve the performance of applications, thereby enabling business users and customers to interact more rapidly with your systems. This means more work can be done, quicker, resulting in an improved bottom line.

Growth and Popularity of In-Memory Data

Customers continue to experience pain with issues that in-memory processing can help to alleviate. For example, consider the recent stories in the news about the shortage of COBOL programmers. New Jersey (and other states) put out the call for COBOL programmers because many of the state’s systems use mainframes, including their unemployment insurance systems. With the COVID-19 pandemic, unemployment rates have risen dramatically, causing those systems to experience a record demand for services.

Many of these stories focused on the age of the COBOL applications when they should have focused on the need to support and modify systems that run their states. COBOL is still reliable, useful, and well-suited for many of the tasks and systems that it continues to power. It is poor planning when you do not have skilled professionals to tend to mission-critical applications. And in-memory data processing could help to alleviate the large burden on those systems allowing them to respond quicker and process more users.

We also need to consider the modernization of IBM’s mainframe software pricing. Last year (2019), IBM announced Tailored Fit Pricing (TFP) to simplify the traditionally very complex task of mainframe software pricing and billing. This modernization effort strives to establish a simple, flexible, and predictable cloud-like pricing option. Without getting into all of the gory details, IBM is looking to eliminate tracking and charging based on monthly usage and instead charge a consistent monthly bill based on the previous year’s usage (plus growth).

But TFP is an option and many organizations are still using other pricing plans. Nevertheless, a more predictable bill is coveted by most organizations, so TFP adoption continues to grow. Successfully moving your organization to TFP involves a lot of learning and planning to achieve the desired goal of predictable billing at a reasonable cost. That said, it makes considerable sense for organizations to rationalize their software bills to the lowest point possible the year before the move to TFP. And you guessed it, adopting techniques to access data in-memory can lower usage – and possibly your software bill. Optimizing with in-memory techniques before moving to TFP makes a lot of sense if you want lower software bills.

DataKinetics’ tableBASE: An In-Memory Technique

It should be clear that in-memory optimization is a technique that can improve performance and save you money. But how can you go about optimizing your processes using in-memory data?

Well, there are several different ways to adopt in-memory optimization for your applications and systems, but perhaps the best approach, requiring the least amount of time and effort, is to utilize a product. One of the best in-memory data optimization products is DataKinetics’ tableBASE, a proven mainframe technology that manages data using high-performance in-memory tables. The product is ideal for organizations that need to squeeze every ounce of power from their mainframe systems to maximize performance and transaction throughput while minimizing system resource usage at the application level.

Although every customer deployment is different, using tableBASE to optimize in-memory data access can provide a tremendous performance boost. For example, a bank that was running up against its batch window deployed tableBASE, and batch runs that had taken more than 7 hours in total to finish completed in less than 30 minutes afterward. That is an improvement of more than 90 percent!

And tableBASE is a time-tested solution, with many customers having used it to optimize their applications for decades.

The latest news for tableBASE is that IBM has partnered with its vendor, DataKinetics, to deliver an advanced in-memory data optimization solution for Z systems applications. So now you can engage with DataKinetics and implement tableBASE, or work with IBM and their new IBM Z Table Accelerator. Both options can help you implement an advanced in-memory data optimization solution for your Z systems applications.

The Bottom Line

The current resurgence in using in-memory optimization techniques is being driven by organizations that need to improve performance, lower costs, and utilize every bit of their mainframe investment. I have to say, that just sounds like good sense to me!


Posted in data, data availability, In-Memory, performance | 1 Comment

IBM Think 2020: A Digital Event Experience

This week I have been attending the first virtual IBM Think conference. As most of my readers know, I attend the annual IBM Think conference to keep up-to-date on all of the latest news, issues, and trends in IT computing and infrastructure, and specifically IBM’s technologies.

I mean, let’s face it. It can be difficult to keep up with all of the technology shifts and changes going on these days. The IBM Think event helps me because it addresses all of them including things like AI, cloud, analytics, infrastructure, security, and even more nascent trends like quantum computing.

At any rate, in most years Think is an in-person event, attracting more than ten thousand IT executives and practitioners. But with the global COVID-19 pandemic, an in-person event was not practical, so IBM held it online. And I have to say, they did a fantastic job of managing multiple threads of content without experiencing bandwidth or access issues – at least none that I encountered.

I was fortunate enough to be asked to participate in a live podcast meetup to discuss my thoughts on the Think 2020 event with some other great social influencers.


I joined a panel of social influencers including Neil Cattermull, Steve Dickens, Sally Eaves, Tony Flath, Sarbjeet Johal, Antonio Santos, and Melissa Sassi. There I am in the first square from the left on the second line!

Please take a moment (or two) to click on this link for our IBM Think 2020 Digital Meetup and listen to our thoughts related to IBM Think 2020!

And don’t forget that even though the live sessions for IBM Think 2020 are finished, the On Demand sessions are still available for you to watch and enjoy at your own pace and on your own timeframe. Click here –> IBM Think 2020 <– to continue participating in Think 2020 whenever you wish!

IBM Think 2020

Posted in cloud, conferences, data, IBM, Think | Leave a comment

Consider the Cloud: A Quick Recap (Part 6)

Today we conclude our Consider the Cloud blog series with a quick recap and links to the entire series.


In Part 1 we took a look at the growth of cloud computing and examined the benefits that contribute to its phenomenal growth rate.

In Part 2 we reviewed some of the many hyperbolic claims predicting that cloud computing will take over the world… and hinted at our skepticism about that.

Part 3 delved deeper into the reasoning behind our skeptical take, explaining why the cloud won’t be completely taking over everything any time soon.

In Part 4 we talked about data gravity, and how this concept could also contribute to a lower uptake of cloud computing than many pundits believe.

And finally, Part 5 introduced what we believe the future will entail… hybrid multicloud computing.

Cloud computing is here to stay as a vital and viable component of the IT infrastructure. You should learn it, adopt it, and adapt your computing environment to include the cloud. But temper any irrational exuberance you hear… the cloud won’t replace all on-premises computing now, or any time in the near future.


Posted in cloud, data | Leave a comment

Consider the Cloud: A Hybrid Multicloud Future (Part 5)

In this Consider the Cloud blog series we have looked at the benefits of cloud computing, cloud growth predictions, data gravity, and analyzed the accuracy of how fast and complete cloud adoption will be. In today’s installment, I want to discuss what I believe is the future of cloud computing: the hybrid multicloud.

What is needed for a secure, reasonable IT infrastructure of the future (present?) is an architecture that embraces the cloud, but also on-premises workloads, including the mainframe. Mainframes have been at the core of the IT infrastructure of large organizations for a long time, and they continue to drive a significant amount of mission-critical workload for big business.

OK, but what is a hybrid multicloud?


Hybrid implies something heterogeneous in origin or composition; in other words, something composed of multiple other things. Multicloud is pretty simple; it refers to using more than one cloud computing service. So, when you use the term “hybrid” in conjunction with “multicloud,” it implies an IT infrastructure that mixes on-premises systems with private and/or public clouds from multiple providers.

This is a sensible approach for most organizations because it enables you to maintain and benefit from the systems and data that you have built over time. There is a sunk cost to those systems and many (if not most) existing systems are still delivering value. So the next logical step is to couple existing systems with current best practices for reducing cost and scaling with cloud services where and when it makes sense.

No one, single system or technology is the right solution for every project. No matter what the prognosticators are saying, we will not be moving everything to the cloud and abandoning every enterprise computing system we ever built in the past. But the cloud offers economies of scale and flexibility that make it a great addition to the overall IT infrastructure for companies of all sizes.

With a hybrid multicloud approach, you can choose what makes sense for each component, task, and project that you tackle. Maintain existing platforms to benefit from their rich heritage and integrate them with new capabilities and techniques when appropriate.
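To make that "choose what makes sense for each component" idea concrete, here is a toy sketch of a per-workload placement policy. The workload names and platform labels are invented for illustration; a real policy would also weigh cost, latency, data gravity, and compliance:

```python
# Toy illustration of hybrid multicloud workload placement.
# All workload and platform names are invented for illustration only.

PLACEMENT_POLICY = {
    "core-banking-batch": "mainframe",          # mission-critical, stays on-prem
    "customer-web-frontend": "public-cloud-a",  # elastic, bursty traffic
    "analytics-sandbox": "public-cloud-b",      # cost-driven, spot capacity
    "regulated-pii-store": "private-cloud",     # data-residency requirements
}

def place(workload):
    # Default new, uncategorized workloads to the private cloud
    # until they have been reviewed.
    return PLACEMENT_POLICY.get(workload, "private-cloud")

print(place("core-banking-batch"))  # mainframe
```

The point of the sketch is simply that placement is a per-workload decision across multiple platforms, not an all-or-nothing migration.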

It should be obvious that we won’t just abandon existing mission-critical workloads, because our businesses rely on them. In some cases, it can make sense to migrate workloads to public or private clouds; in other cases, your organization will be better served by modernizing and refactoring applications in place, without wholesale recoding on another platform.

Using a hybrid multicloud approach means that you embrace multiple platforms, both remote and on-premises. Of course, deploying this approach means that we need to understand the challenges involved in integrating, managing, and utilizing a complex heterogeneous system of different platforms and technologies. Organizations will need to build practices and procedures to secure, manage, and deliver service across their hybrid multicloud (in conjunction with their cloud service providers).

The bottom line here is simple. Your customers don’t care about the technology you use – they just expect to be able to access your systems easily and for their data to be protected and secure. And that is why most organizations will not rip and replace everything they have built over multiple decades.

Posted in cloud, data, enterprise computing, legacy data, mainframe | 1 Comment