IBM POWER and SAP HANA: A Powerful and Effective Combination

As organizations look for differentiators to improve efficiency and cost effectiveness, the combination of IBM Power Systems and SAP HANA can provide a potent platform with a significant return on investment.

Why IBM Power Systems?

IBM Power Systems is a family of server computers based on IBM’s POWER processors. POWER is actually a series of high-performance microprocessors from IBM, each named POWER followed by a number designating its generation: POWER1, POWER2, POWER3, and so forth, up to the latest POWER10, which was announced in mid-August 2020 and is scheduled for availability in 2021.

What makes IBM Power Systems different from typical x86 servers is their RISC (Reduced Instruction Set Computer) architecture, based on IBM research that began in the 1970s. POWER microprocessors were designed specifically for servers and their intrinsic processing requirements.

In contrast, x86 CPUs were initially built for and catered to the personal computer market. They are designed as general-purpose processors that can be used for a variety of workloads, even for home PCs. As the processing power of the x86 microprocessors advanced over time, they were adapted for usage in servers.

So, looking at the two alternatives today, both x86 and IBM Power Systems seem to be competitive architectures for servers running enterprise workloads. However, POWER microprocessors were designed to service high-performance enterprise workloads, such as database management systems, transaction processing, and ERP systems. Although x86 microprocessors can be used for those types of workloads, too, they are typically not as efficient because of their general-purpose design, as opposed to the POWER processor’s specific design for enterprise computing.

IBM Power Systems deliver simultaneous multithreading (SMT), a technique for improving the overall efficiency of CPUs by permitting multiple independent threads to execute and utilize the resources of the processor architecture. With IBM Power Systems SMT8, every processor core can run eight threads in parallel, about four times as many as its competitors. Simultaneous multithreading helps to mask memory latency and increases the efficiency and throughput of computations.

Virtualization is another differentiator for POWER because it was built to support virtualization from the get-go. POWER features a built-in hypervisor that operates very efficiently. On the other hand, x86 was not originally designed for virtualization, which means you need to use a third-party hypervisor (e.g., VMware).

Scalability is another area where IBM POWER excels versus x86. Although you can scale both, x86 scaling typically requires adding more servers. With POWER, the chips themselves are designed to scale seamlessly without having to add hardware (although you can if you so desire).

The bottom line is that the POWER architecture provides benefits for modern workloads, such as big data analytics and artificial intelligence (AI). Which brings us to SAP HANA.

Why SAP HANA?

SAP HANA is an in-memory database management system that delivers high-speed data access. It offers efficient, high-performance data access because it holds data in memory and stores it in column-based tables, as opposed to the row-based tables of a traditional SQL DBMS. Such a columnar structure can often deliver faster performance when queries only need to access certain sets of columns.
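
To make the columnar point concrete, here is a minimal, hypothetical sketch in SAP HANA SQL (the table and column names are invented for illustration). HANA creates column-organized tables by default, but the storage organization can also be stated explicitly:

    -- Hypothetical example: an explicitly column-organized table.
    CREATE COLUMN TABLE sales_fact (
        sale_id    BIGINT PRIMARY KEY,
        sale_date  DATE,
        region     NVARCHAR(20),
        product_id INTEGER,
        amount     DECIMAL(15,2)
    );

    -- An analytic query like this touches only three of the five columns,
    -- so the columnar layout lets the engine skip the rest entirely.
    SELECT region, SUM(amount) AS total_sales
    FROM   sales_fact
    WHERE  sale_date BETWEEN '2020-01-01' AND '2020-03-31'
    GROUP  BY region;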

SAP HANA provides native capabilities for machine learning, spatial processing, graph, streaming analytics, time series, text analytics/search, and cognitive services all within the same platform. As such, it is ideal for implementing modern next-generation Big Data, IoT, translytical, and advanced analytics applications. 

SAP S/4HANA is the latest iteration of the SAP ERP system that uses SAP HANA as the DBMS, instead of requiring a third-party DBMS (such as Oracle or Db2). It is a completely revamped version of their ERP system.

Organizations implement SAP HANA both as a standalone, highly efficient database system and as part of the SAP S/4HANA ERP environment. For both of these HANA applications, IBM Power Systems is the ideal hardware for ensuring optimal performance, flexibility, and versatility.

Why IBM POWER + SAP HANA?

IBM Power Systems are particularly good at powering large computing workloads. Their ability to take advantage of large amounts of system memory and to be logically partitioned makes them ideal for implementing SAP HANA.

If you need something that can take advantage of 64 TB of memory on board and can host up to 16 production SAP HANA LPARs, the high-end POWER E980 is a good choice. Earlier this year (2020), SAP announced support of Virtual Persistent Memory on IBM Power Systems for SAP HANA workloads. What this means is that, using the PowerVM hypervisor embedded in the firmware, it is possible to support up to 24 TB for each LPAR. Virtual Persistent Memory is available only on IBM Power Systems for SAP HANA.

There are many benefits that can accrue after adopting Virtual Persistent Memory on IBM Power Systems and SAP HANA. For example, it provides faster restart and faster shutdown processing, which frees up more of the change-control outage window, potentially enabling more work to be done during the outage. Alternatively, the duration of the change-control window may be shrunk, thereby reducing the outage needed to make changes.

And let’s not forget the SMT8 capability of IBM Power Systems, which improves per-core cache utilization, thereby improving SAP HANA performance on a Power machine as compared with other machines.

Of course, there are also midrange IBM Power Systems such as the E950 that can be used if your requirements are not at the high-end.

Cost

Of course, a server can be powerful and efficient, but if it is not also cost-effective it will be difficult for organizations to adopt it. Forrester Research conducted a three-year financial impact study and concluded that IBM Power Systems for SAP HANA delivers a cost-effective solution.

The study involved multiple customer interviews and data aggregation, from which Forrester determined the following benefits of running SAP HANA on IBM Power Systems as opposed to other platforms:

  • Avoided cost of system downtime (36%) – the composite organization avoided 4 hours of planned and unplanned downtime per month
  • Reduced cost of power and cooling (4%) – the composite organization saved nearly 438,000 kWh of power per year
  • Avoided cost of alternate server architecture (49%) – other architectures required as many as 20 systems, as compared to an architecture with only 3 IBM Power Systems servers
  • Reduced cost of managing and maintaining infrastructure (11%) – system administrators saved 60% of their time due to a reduced management and maintenance burden

The net/net shows a 137% return on investment (ROI) with a 7-month payback.

It is also important to note that IBM offers subscription-based licensing for Power Systems where you pay only for what you use. With this flexible capacity-on-demand option, your organization can stop overpaying for resources it does not use. A metering system determines your usage and you are billed accordingly, instead of paying for the entire capacity up front.

Use Cases

There are many examples of customers deploying IBM Power Systems to achieve substantial benefits and returns on their investment.

One example of using IBM Power Systems to reduce footprint and simplify infrastructure is Würth Group. Located in Germany, Würth Group is a worldwide wholesaler of fasteners and tools with approximately 120,000 different products. The company deployed IBM Power Systems and was able to slim down the number of physical servers and SAP HANA instances from seven to one, an 86% reduction that cut power consumption and operating costs.

Danish Defence has implemented SAP HANA on IBM Power Systems to support military administration and operations with rapid, reliable reporting. As a result, they achieved up to 50% faster system response times, enabling employees to work more productively. Additionally, processes completed 4 hours ahead of schedule, meaning that reports are always available at the start of each day. And at the same time, they achieved a 60% reduction in storage footprint, thereby reducing power requirements and cooling costs.

And perhaps most telling, SAP itself has replaced the platform underlying its SAP HANA Enterprise Cloud (HEC) with IBM POWER9. According to Christoph Herman, SVP and Head of SAP HANA Enterprise Cloud, “SAP HANA Enterprise Cloud on IBM Power Systems will help clients unlock the full value of SAP HANA in the cloud, with the possibility of enhancing the scalability and availability of mission-critical SAP applications while moving workloads to SAP HANA and lowering TCO.”

Summary

Whether implementing on-premises, in the cloud, or as part of a hybrid multicloud environment, the combination of IBM Power Systems and SAP HANA can deliver a high-performance, cost-effective environment for your ERP and other workloads.


Inside the Data Reading Room – Fall 2020 Edition

Welcome to yet another edition of Inside the Data Reading Room, a regular feature of my blog where I take a look at recent data management books. In today’s post we’ll examine three new books on various data-related topics, beginning with data agglomeration.

You may not have heard of data agglomeration but you’ll get the idea immediately – at least at a high level – when I describe it as gathering data in wireless sensor networks. For more details, you’ll want to read A Beginner’s Guide to Data Agglomeration and Intelligent Sensing by Amartya Mukherjee, Ayan Kumar Panja and Nilanjan Dey (Academic Press, 2020, ISBN 978-0-12-620341-5). The authors are all professors who specialize in networking, IoT, and data issues.

The book provides a concise treatment of the topic, starting out with an overview of the various types of sensors and transducers and how they are used. I always find it easier to learn by example, and this book is nice because the authors provide a variety of good examples.

Reading this book will provide you with descriptions and explanations of pertinent concepts like wireless sensor networks, cloud platforms, device-to-cloud and sensor cloud architectures, but more importantly, it also describes how to gather and aggregate data from wireless sensor networks.

If you or your organization are involved in gathering data from sensors, such as in IoT systems, this book will be a great help to you as you design and implement your applications.

Next up from the shelves of the Data Reading Room we have Rohit Bhargava’s Non Obvious Mega Trends (IdeaPress, 2020, ISBN 978-1-64687-002-8).  

For those who do not know about this book series, every year since 2011 Rohit Bhargava has been publishing what he calls The Non Obvious Trend Report. He began writing these reports in response to the parade of annual articles talking about “the next big trends in the upcoming year,” which he found either to be too obvious (e.g. mobile phones still useful) or too self-serving (e.g. drone company CEO predicts this is the year of the drone) to be useful. In response, he created the Non Obvious Trend Report with the goal of being unbiased and digging deeper for nuances and trends missed elsewhere.

To a large extent, he succeeded. So much so that this book represents the 10th in the series. But what makes this particular book a must-have is that not only does it introduce 10 new trends, but it also documents and reviews all of the trends over the past decade.

For readers of this blog, Chapter 11, Data Abundance, will likely be the most useful chapter (although the entire book is great for research). In Chapter 11 he describes what data abundance is, how understanding it can be used to your advantage, as well as the various trends that have led to the evolution of data abundance.

I look forward to each new, annual edition of Non Obvious, but I think this year’s edition stands out as one that you will want to have on your bookshelf long-term.

The final book for today is Systems Simulation and Modeling for Cloud Computing and Big Data Applications edited by J. Dinesh Peter and Steven L. Fernandes (Academic Press, 2020, ISBN 978-0-12-620341-5).

Models and simulations are an important foundation for many aspects of IT, including AI and machine learning. As such, knowledge of them will be beneficial for data professionals and this book provides an education in using System Simulation and Modeling (SSM) for tasks such as performance testing and benchmarking.

The book analyzes the performance of several big data and cloud frameworks, including benchmarks such as BigDataBench, BigBench, HiBench, PigMix, CloudSuite and GridMix.

If you are dealing with big data and looking for ways to improve your testing and benchmarking through simulation and modeling, this book can be of help.


Inside the Data Reading Room – Summer 2020 Edition

Regular readers of this blog know that I am an avid reader and that I regularly post quick reviews of the technology books that I have been reading. This year has been a great one to catch up on reading, what with the pandemic and social distancing going on. So here are 4 interesting books I’ve been devouring this Summer.

Introducing Artificial Intelligence: A Graphic Guide by Henry Brighton and Howard Selina (ISBN: 978-184831214-2) published by Icon Books.

If you are looking for a nice, introductory treatment of artificial intelligence look no further than this concise, inexpensive, little book. It offers a trove of useful information that will help you to understand what AI is, the issues that can arise as it is adopted, and how it will change the way we use information systems.

The book is not geared toward developers looking for in-depth algorithms and such. Instead, it is focused on giving a nice, broad overview to the layperson. The history of AI, its philosophical issues and implications, and the various types of AI (neural networks, machine learning, etc.) are all covered at a high level… and with graphics to support the descriptions and definitions. There’s even a short, but reasonably useful, Index.

You won’t become an AI expert after reading this book, but you will have a reasonable foundation from which to learn more. And you can use the Further Reading section to help you along that path.

I’ll probably re-read this short book several more times (at least portions of it) just to make sure that my foundational AI knowledge is sound. If you want a similar foundation, pick up a copy of Introducing Artificial Intelligence: A Graphic Guide and give it a read today. You won’t be sorry.

Business Knowledge Blueprints: Enabling Your Data to Speak the Language of Business (2nd edition) by Ronald G. Ross (ISBN: 978-0-941049-17-7) published by Business Rules Solutions (BRS)

Anybody who has worked with data over the last several decades should know about the work and books of Ronald G. Ross, who is one of the founding luminaries of the concept of business rules. Ross has written extensively about business rules, including The Business Rule Book, the seminal work in this field, and he has also written classic books on entity modeling and database systems.

At any rate, we now have a new, second edition of his recent book Business Knowledge Blueprints. Herein you will learn the art and science of integrating data discovery and modeling with business communication.

This book contains a wealth of information about designing your data systems with the business in mind. It will be useful for anybody who works with data and needs to be able to communicate about and use the data in a way that is understandable to business and benefits the business.

With chapters like “Defining Things” and “Disambiguating Things”, as well as a whole section on “How to Define Business Terms in Plain English”, Ross takes you on a journey from confusion and messy data to building robust business vocabularies.

This book should be required reading for any professional whose work involves digital transformation and business transformation!

Data Democracy: At the Nexus of Artificial Intelligence, Software Development, and Knowledge Engineering, edited by Feras A. Batarseh and Ruxin Yang (ISBN: 978-0-12-818366-3) published by Academic Press

Not sure what data democracy is? But your interest is piqued because it has “data” in the title? Well, you should be interested, and you’ll understand the term well if you give Data Democracy a thorough read.

The editors claim the book to be a manifesto for data democracy, and it succeeds in that challenge. Everybody is part of the “data republic” and therefore needs to be aware of their data… who has access to it, how they got access to it, how it is being used, how it is protected, and more.

In short, data democracy is the concept of sharing data instead of letting it be monopolized by a few large concerns. Of course, it is not quite that simple, so a book is needed… this book.

If you consume or create data – and at this point who does not – you will benefit by reading Data Democracy.

Data Governance: How to Design, Deploy, and Sustain an Effective Data Governance Program by John Ladley (ISBN: 978-0-12-815831-9) published by Academic Press

John Ladley’s latest book is the second edition of his Data Governance book, first published in 2012. If you know the first edition of this book, you’ll certainly appreciate this updated second edition. Reading this book will provide you with a comprehensive overview of why data governance is needed, how to design, initiate, and execute a data governance program, and how to keep the program sustainable.

There is a ton of new content in this second edition, including new case studies, updated industry details, and updated coverage of the available data governance tools that can help.

The book will be useful to you whether you are a novice or a seasoned professional. At the heart of this book is the framework that Ladley lays out, which you can follow to build and maintain successful data governance at your organization. In combination with the use cases that he walks through in the book, you have a powerful guide for launching your data governance program.

Useful for small and large organizations alike, this is a book to pick up if you are charged with any aspect of data management or data governance within your shop.

Note: you can click on any of the links to purchase the books from Amazon.


Happy Sysadmin Day

In case you did not know it, today is the last Friday in July… and that means it is System Administrator Appreciation Day. And this year, 2020, is the 21st annual celebration of Sysadmin Day!

Let’s face it, most Sysadmins are unsung heroes. If the network is up and running, if performance is optimal, if your transactions are working… that means that the Sysadmin is doing their job. But how often do you thank them just because everything is working well and you can do your job? Probably not very often.

That is what System Administrator Appreciation Day is all about. Take a moment out of your day to thank your local Sysadmins. Yes, plural, because there are probably many of them working tirelessly, day in and day out, to keep the systems running. The hardware and the software that you rely on every day!

And to thank them, why not bring them a nice cup of coffee (instead of that swill in the break room) or a tasty pastry?

They’ll surely appreciate it… and the next time you need to ask them for help that treat you brought them won’t be forgotten!


Whatever Happened to The DBA?

Today’s post is a guest article written by Brian Walters, with Percona.


Today is DBA Appreciation Day (@dbaday, https://dbaday.org), where we take the time to celebrate the often-unacknowledged role of database admins. Suggested appreciation gifts are pizza, ice cream, and not deploying a migration that forces your DBA to work over the weekend!

DBA Appreciation Day is a welcome gesture, injecting humor into these difficult times when many DBAs are working incredibly hard. But the celebration also forces us to reflect on how dramatically the world of data storage, retrieval, and information processing has changed in recent times.

When I started my career working with databases in the mid-1990s, relational databases were almost the only game in town. I was trained on Oracle 7.3 and obtained my first certification on Oracle 8i. I was sure then that the demand for relational database management system expertise would never wane. Boy, was I wrong… sort of?!

DBA redefined

Changes in the database technology space have touched every aspect of the data platform. It used to be that a database administrator (DBA) career path included having, or developing, knowledge of the entire application stack. This stretched from storage and infrastructure to the internal workings of the application itself.  If you wanted to progress, then the DBA role could be a stepping-stone to the larger world of full-stack architecture. So, what changed?

Well, to begin with, this career path is no longer so easy to define, as outside influences impact the relevance and value of the DBA role.

The introduction of new technologies such as NoSQL platforms and the rise of Cloud computing models have played a part. Data sources have proliferated and include the introduction of mobile, the birth of IoT, and the rapid expansion of edge devices. Software development and the production of code-based intellectual property and services have been revolutionized with the adoption of agile models, and the desertion of less flexible waterfall models. And, we cannot discount the effect that changes in deployment models and automation have had. The impact of both infrastructure-as-code and containerization is phenomenal.

The Everything-as-a-Service world we inhabit today is unrecognizable from where many of us started, just a few years before. Gone are the days when a DBA defined the low-level storage parameters for optimal database performance. Gone (or mostly gone) are the days when a data architect was a part of the application development team. Gone are the days when a DBA built their entire career around the configuration and tuning of one database technology.

In the majority of organizations today, the value of this role is no longer self-evident.  While some may disagree with this mentality, and many DBAs may not like the trajectory of the trend, at this point, there is no denying that things have changed.

Does this mean that the value of this skill set has also disappeared? Are database gurus extinct? Certainly not. In many cases these experts have simply moved into consulting firms, extending their skills to those experiencing critical issues, who need in-depth expertise.

There are many factors that played a part in taking us from the world where relational DBAs were indispensable, to where we stand today. The move towards DBaaS and the (false) perception that this will provide companies with a complete managed service certainly plays a part.

Skilled and still in-demand

Many of the companies I work with on a regular basis no longer hire in-house DBAs. Instead, they are increasingly choosing to bring in outside database expertise on a contract basis. This represents a dramatic shift in perception and should provoke wider internal and external discussions on the pros and cons of this policy.

Fundamentally, it is important to remember that solid database performance continues to be based on the quality of the queries, the design of the schema, and a properly architected infrastructure. Proper normalization still matters. Data-at-scale continues to require sound data architecture. Business continuity demands robust fault-tolerant infrastructure. However, many companies now don’t have the internal capacity required to achieve these demands in the same way they did in the past.
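
As a small, hypothetical illustration of the normalization point (the schema below is invented for this example), compare a denormalized orders table with a normalized design:

    -- Denormalized: customer details repeat on every order row,
    -- inviting update anomalies and wasted space.
    CREATE TABLE orders_denormalized (
        order_id       INTEGER PRIMARY KEY,
        customer_name  VARCHAR(100),
        customer_email VARCHAR(100),
        order_date     DATE,
        total_amount   DECIMAL(11,2)
    );

    -- Normalized: customer facts live in one place and are referenced by key,
    -- so changing an email address is a single-row update.
    CREATE TABLE customers (
        customer_id    INTEGER PRIMARY KEY,
        customer_name  VARCHAR(100),
        customer_email VARCHAR(100)
    );

    CREATE TABLE orders (
        order_id     INTEGER PRIMARY KEY,
        customer_id  INTEGER REFERENCES customers (customer_id),
        order_date   DATE,
        total_amount DECIMAL(11,2)
    );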

Database consulting is now a booming market. This is, in part, due to the perceived diminished need for in-house DBA expertise. But, the truth is, there is still a need for that capability and expertise.

With the appetite for employing in-house DBAs gone, filling the expertise gap falls to those few consulting firms that employ people who have these skill-sets.

For the firms who built their stellar reputations on the availability and quality of their DBAs, now made available by the hour and for pre-agreed engagements, it’s a great time to be in database consulting and managed services.

 

Written by Brian Walters
Brian is Director of Solution Engineering with Percona, a leader in providing enterprise-class support, consulting, managed services, training, and software for open source databases in on-premises and cloud environments.

Wish Your DBA Glad Tidings on DBA Appreciation Day

It is the first Friday in July, so I wanted to wish DBAs everywhere a Happy DBA Appreciation Day!


Day in and day out your DBAs are working behind the scenes to make sure that your applications have access to the mission-critical data they need to operate correctly and efficiently. Your DBAs are often called on to work over the weekend or late into the night to perform maintenance and support operations on your databases. And their dedication to keeping your databases available and your applications efficient is rarely noticed, let alone appreciated.

If the DBA is doing their job, then you never really notice that they are there…  so take a moment or two today, July 3rd, to thank your database administrators. Maybe buy them a donut or a pastry… get them a good cup of coffee (not that swill from the break room)… or just nod and tell them that you appreciate what they do.

You’ll make their day!


What causes SQL Server to have performance issues or run slowly?

Today’s post is a guest article written by a friend of the blog, Kevin Kline, an expert on SQL and Microsoft SQL Server.

Image Source: Pixabay

SQL Server can suffer from all sorts of problems if it is not properly optimized, and thankfully with the help of performance tuning, you can overcome common complications.

Of course, the first step to fixing flaws in SQL Server is understanding what causes them, so here is a look at the common symptoms to look out for and what they may indicate.

Hardware hold-ups

One of the first things to consider when addressing server performance imperfections is that the hardware itself may not be adequate to accommodate the kinds of uses you have in mind.

For example, if your storage is reaching its capacity or your CPU is being pushed to its limits in the handling of the queries that are being fired at the server throughout the day, then the only option may be to upgrade the overburdened components.

Of course, there are ways to make better use of the hardware resources you have available without splashing out on expensive new equipment. This can include increasing the maximum amount of memory that SQL Server can use, which is a widely used quick fix that is worth trying out, especially if you have previously stuck to the default settings without doing any tinkering.
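
For instance, the memory cap can be reviewed and raised with sp_configure; the T-SQL below is only a sketch, and the 8192 MB figure is purely illustrative (size it to leave adequate memory for the operating system and anything else running on the box):

    -- Expose advanced options so the memory setting can be viewed and changed.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Raise the cap on SQL Server's memory usage; 8192 MB is only an example value.
    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;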

A simple, cost-effective upgrade that can be considered is to migrate data from traditional hard disks to solid state storage, the price of which has fallen significantly in recent years.

Networking imperfections

If you have checked your server’s hardware resources and seen that they are not being taxed, then sluggish performance could instead be down to the network itself.

Analyzing traffic and testing connectivity will allow you to work out whether your SQL database is being hamstrung by the infrastructure which it relies upon to serve end-users.

Software snafus

Before delving any deeper into troubleshooting SQL Server performance, it is worth checking on the software side of the equation to make sure that no errant processes are monopolizing hardware resources unnecessarily.

It is perfectly possible for the OS to throw up unexpected issues of this nature from time to time, and often these can be fixed by simply killing the process in question, so long as it is not a lynchpin of the entire software environment, of course.

Index issues

Well-maintained indexes are key to keeping an SQL database running smoothly, which is why you should aim to look out for index fragmentation if you are experiencing problems, or even if you are not.

Make sure to schedule this on a consistent basis so that you are not left with seriously fragmented indexes that are entirely sub-optimal. Your maintenance schedule should be set according to your own needs, which is why you also need to stay on top of server monitoring so that you can make informed decisions.
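
One common way to check fragmentation is to query the sys.dm_db_index_physical_stats dynamic management view and then reorganize or rebuild as needed. The T-SQL below is a sketch only; the 10% threshold and the dbo.OrderDetail table name are illustrative, not prescriptive:

    -- Report average fragmentation for indexes in the current database.
    SELECT OBJECT_NAME(ips.object_id)       AS table_name,
           i.name                           AS index_name,
           ips.avg_fragmentation_in_percent
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN   sys.indexes AS i
           ON  i.object_id = ips.object_id
           AND i.index_id  = ips.index_id
    WHERE  ips.avg_fragmentation_in_percent > 10
    ORDER  BY ips.avg_fragmentation_in_percent DESC;

    -- Common guidance: reorganize lightly fragmented indexes and
    -- rebuild heavily fragmented ones (dbo.OrderDetail is a made-up name).
    ALTER INDEX ALL ON dbo.OrderDetail REORGANIZE;
    -- ALTER INDEX ALL ON dbo.OrderDetail REBUILD;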

Query qualms

If particular queries are used very frequently and tend to take up a lot of your I/O throughput and server CPU grunt, it is likely that there is room for improvement here.

There are lots of ways to optimize SQL queries, and it makes sense to focus your attention on the most commonplace queries, since even if you only make a minor enhancement you should see big performance gains.
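
As a starting point, the plan-cache dynamic management views can surface the heaviest statements. The T-SQL below is a sketch that ranks cached statements by total CPU time; order by total_logical_reads instead to chase I/O rather than CPU:

    -- Top 10 cached statements by total CPU time (total_worker_time is in microseconds).
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           qs.total_logical_reads,
           SUBSTRING(st.text,
                     (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER  BY qs.total_worker_time DESC;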

It is best to see SQL Server maintenance as an ongoing process, rather than one which can be carried out once and then considered as complete. Being attuned to the likely issues that might arise will let you act swiftly when they do emerge, and even let you preemptively prevent them so that performance is always top-notch.

Written by Kevin Kline
Kevin serves as Principal Program Manager at SentryOne. He is a founder and former president of PASS and the author of popular IT books like SQL in a Nutshell. Kevin is a renowned database expert, software industry veteran, Microsoft SQL Server MVP, and long-time blogger at SentryOne. As a noted leader in the SQL Server community, Kevin blogs about Microsoft Data Platform features and best practices, SQL Server trends, and professional development for data professionals.

The Cost of Virtual Database Conferences

This year — 2020 — the year of the COVID-19 pandemic and quarantine is wreaking havoc on the technical conference industry. So far, we have already seen many conferences postponed or canceled. Those that continue are soldiering along with online, virtual conferences.

I like the general idea of a virtual conference held online, especially when there are no safe, realistic options for holding in-person events this year.

One example of a well-done, online virtual event was the IBM Think 2020 conference. For those who have never attended an IBM Think event, it is IBM’s annual, in-person conference that usually attracts over 10,000 attendees. The cost to attend is over $1,000, but IBM offers discounts to some customers and VIPs.

This year’s IBM Think 2020 event hosted nearly 90,000 virtual participants, which is incredible! But I think one of the key factors was the cost, which was free. Yes, IBM Think 2020 was provided for free to anybody who wanted to attend. Of course, a vendor the size of IBM can bear the cost of a free conference more easily than events run by volunteers and user organizations.

It will be interesting to watch what Oracle does for its currently-still-scheduled-to-be-an-in-person-event, Oracle World. This year the event is moving (or hopes to be moving) from San Francisco, where it has been held for years, to Las Vegas. It is still scheduled for the week of September 21-24, 2020… but who knows if it will still be held. And, if not, will it go virtual? Will it be free of charge? I’m curious… as are (probably) many of you!

Now take a look at some other database-focused events.

First let’s take a look at the IDUG Db2 North American Technical Conference, which will be a virtual event this year. Having postponed the original weeklong conference that was to be held in Dallas this week, IDUG is promoting the virtual event as a kickoff event occurring the week of July 20th, followed by additional labs, workshops, and sessions for three ensuing weeks. That sounds great to me! I’ll be presenting at one of the recorded sessions on the plight of the modern DBA, so if you attend, be sure to look me up and take part in my session!

But the IDUG Virtual Db2 Tech Conference is not free. There is a nominal cost of $199 to participate and attend.

Turning our attention away from Db2 to Microsoft SQL Server, the annual PASS Summit has also gone virtual for 2020. It was originally scheduled as a live conference in Houston (nearby to me, so I’m disappointed that PASS Summit won’t be in-person this year). Instead, the event will be held online, virtually, the week of November 10, 2020, and will offer over 200 hours of content.

Again, though, like IDUG, the PASS Summit 2020 is not free, either. The cost for this event is listed as $499.

But not all of the database events going virtual are charging. The Postgres Vision Conference will be conducted online June 23-24, 2020… and registration and attendance is free of charge.

Now I do not want to be negative about either of these great events. Both of them have histories of providing quality content for their respective database systems (Db2 and SQL Server). And I wish for both of them to survive and thrive both during, and after, this pandemic passes. Nevertheless, I am skeptical that there will be an outpouring of paid attendees for these events. Why do I say this?

Well, first of all, there are already a plethora of educational webinars being offered for free every day of the week on a myriad of technical topics. Sure, you may have to endure some commercial content, but frequently that means learning about technology solutions you may not know about. And to reiterate, these are free of charge.

Another complicating factor will be attention spans. It can be difficult to allot time from your schedule to attend a series of presentations over a full day, let alone a full week. Without being in attendance, in person, at an event, there will be many distractions that draw folks’ attention away from the content… the phone ringing, texts and IMs, e-mail, is that the doorbell?, and so on.

So, potential attendees, but also their managers, will have to decide if the cost of a virtual event is worth it. I hope that most people will temper their objections and give these events a try, even the ones that are charging. After all, if you have been to these events in the past, and were planning on attending this year too, the cost will be lower than if you paid a full in-person registration fee plus travel expenses. For that reason alone it should be worth giving these events a shot. I mean, we DO want them to start up again after the pandemic, right? So supporting them now, even if you have reservations, is really the right thing to do!

What do you think? Will you be attending IDUG or PASS this year virtually? Why or why not? Leave a comment below and let us know!


Impact of COVID-19 on Tech and IT

The global COVID-19 pandemic has had a significant impact on all of us as we struggle to stay safe, protect others, and still remain productive. I wrote about one important impact of the pandemic on the technology sector back in March (Coronavirus and Tech Conferences), but there have undoubtedly been many more factors that have arisen.

I don’t want to ignore perhaps the most obvious impact, working from home. But I don’t really want to belabor that point as it has been hashed out in the media quite a bit already; see, for example, Work from Home is Here to Stay (The Atlantic), Why Many Employees are Hoping to Work From Home Even After the Pandemic is Over (CNBC), and Telecommuting Will Likely Continue Long After the Pandemic (Brookings), among many, many others.

But there have been other, perhaps less obvious, ways that tech workers and companies have changed as we deal with the pandemic.

Yellowbrick Data, Inc., a data warehousing provider, recently surveyed over 1,000 enterprise IT managers and executives to see how IT departments were grappling with the impact of the COVID-19 pandemic.  At a high level—and contrary to conventional wisdom—not all IT budgets are being cut. Even with the economic challenges that COVID-19 has posed for businesses, almost 38 percent of enterprises are keeping their IT budgets unchanged (flat) or actually increasing them.

The survey also revealed some interesting statistics on how the pandemic has changed the thinking and lives of IT professionals. For example, it indicates that 95.1% of IT pros believe that COVID-19 has made their lives more centered on technology than ever before. To me, this is not surprising (except for the 4.9% who believe otherwise)!

Cost optimization is another vital finding of this survey. 89.1% say their companies will be focused on cost optimization as a result of COVID-19 disruption, while at the same time revealing that 66% are accelerating their migration of analytics to the cloud due to COVID-19 and 63.9% are investing more in their data platform and analytics due to COVID-19. So cost optimization is important to organizations, but not at the expense of ignoring or failing to manage their data appropriately. That is good news, in my humble opinion.

Migrating to a cloud computing model also remains an important aspect of IT amid the pandemic. Acceleration of cloud adoption has increased at 43.5% of organizations due to COVID-19 and 84.3% said that cloud computing is more important than workplace disruption. Nonetheless, 58.1% said that legacy computing is more important during workplace disruption. Not surprisingly then, 82% indicated a desire for hybrid multi-cloud options to spread any risk from their cloud investments. 50.5% said that the benefit of the hybrid cloud enables them to scale faster without compromising sensitive data. I’ve written about the hybrid multicloud approach before if you are interested in my thoughts on that.

Jeff Spicer, the CMO for Yellowbrick, provided these insights: “The survey brought to light some trends that we have been noticing recently related to the speed at which companies are moving to the cloud and investing in analytics. In fact, more than half of enterprises are accelerating their move to the cloud in light of COVID-19 challenges to their businesses. But what really stands out is that nearly 55 percent of enterprises are looking at a hybrid cloud strategy with a combination of cloud and on-premises solutions. That clearly shows that a cloud-alone strategy is not what most enterprises are looking for—and validates what our customers are telling us about their own best practices combining cloud and on-prem approaches to their biggest data infrastructure challenges.”

There will undoubtedly be additional impacts that will reverberate across the technology sector as we fully come to understand the long-term impact of the pandemic. Nevertheless, I found these insights to be illuminating and I hope that you do, too.

Feel free to share your thoughts below…


A Little Database History: Relational vs. OO

I am a packrat, which means I have closets full of old stuff that I try to keep organized. A portion of my office closet is reserved for old magazines and articles I’ve cut out of IT publications.

For example, here is a binder of really old database-related articles:

(Photo: a binder of old database-related articles)

Every few years I try to organize these things, throwing some away, reorganizing others, and so on. And usually, I take some time to read through some of the material as sort of a trip down memory lane.

One area that I was interested in back in the early 1990s was the purported rise of the ODBMS – a non-relational, non-SQL DBMS based on object-orientation. I was rightly skeptical back then, but the industry pundits were proclaiming that ODBMS would overtake the incumbent “relational” DBMSs in no time. Of course, there were some nay-sayers, too.

Don’t remember the OO vs. relational days? Or maybe you weren’t alive or in IT back then… Well, here are some quotes lifted right out of the magazines and white papers of the times:

 

  • From the pages of the Spring 1993 edition of InfoDB, there is an exchange between Jeff Tash and Chris Date on the merits, definition, and future of ODBMS. As you might guess, Date is critical of ODBMS in favor of relational; Tash counters that relational is defined by the SQL DBMS products more than the theory. Interesting reading; both have valid points, but Date is spot on in his criticism that there was a lack of a precise definition of an object model.
  • In the July/August 1990 issue of the Journal of Object-Oriented Programming, there are several questionable quotes in the article titled “ODBMS vs. Relational” (especially in hindsight): 

    1) “The data types in the relational model are quite constrained relative to the typing capabilities offered by an ODBMS.”  [Note: Today most RDBMS products offer extensible typing with user-defined distinct types.]

    2) “…the (relational) data model is so simple that it cannot explicitly capture the semantics we now expect from an object model.” [Note: The object folks always want to tightly-couple code and data. The relational folks view the separation of the two as an advantage.]

    3) “The apparent rigor of the relational model…” [Note: Not only is it “apparently” rigorous, but it actually is rigorous. This is an example of an object proponent trying to diminish the importance of the sound theoretical framework of the relational model. Of course, it might be reasonable to say that the DBMS vendors kinda did that themselves, too, by not implementing a true relational DBMS.]

  • Finally, we have a July 1992 article from DBMS Magazine titled “The End of Relational?” This type of headline and sentiment was rampant back then. Of course, as I read the article I see a claim that in March 1991 Larry Ellison said that Oracle8 would be an object database. Of course, it was not (O/R is not O — and the O was different IMHO). And then there is this whopper from that same article: “Although it is certain that the next generation of databases will be object databases…” [Note: Certain, huh?]

 

Perhaps the most interesting piece of data on the object vs. relational debate that I found in my closet is an IDC Bulletin from August 1997. This note discusses Object versus Object/Relational. Basically, what IDC explains in detail over 14 pages is that the marriage of object to relational is less a marriage and more of a cobbling onto relational of some OO stuff. In other words, the relational vendors extended their products to address some of the biggest concerns raised by the OO folks (support for complex data and extensible data types) — and that is basically the extent of it. The ODBMS never became more than a small niche product.

Although this is an interesting dive into a very active timeframe in the history of database systems, I think there is a lesson to learn here. These days similar claims are being made for NoSQL database systems as were being made for object database systems. Of course, the hype is not as blatant and the claims are more subdued. Most folks view NoSQL as an alternative to relational only for certain use cases, which is a better claim than the total market domination that was imagined for object database systems.

Nevertheless, I think we are seeing — and will continue to see — the major RDBMS players add NoSQL capabilities to their database systems. This creates what sometimes is referred to as a multi-model DBMS. Will that term survive? I’m not sure, but these days we rarely, if ever, hear the term Object/Relational anymore.

And over time, we will likely see the market for NoSQL databases consolidate, with fewer and fewer providers over time. Today there are literally hundreds of options (see DB-Engines.com) and most industries cannot support such a diversity of products. Most industries, although they may fluctuate over time, typically consolidate to where the top three providers control 70% to 90% of the market.

After all, history tends to repeat itself, right?
