Supporting Multiple DBMS Products with Navicat Premium

DBAs are tasked with various and disparate responsibilities that consume their day. I’ve written about DBA roles and responsibilities (What Does a DBA Do?) here on the blog before, so if you need a refresher, check out the link and meet us back here to hear about automating and simplifying database administration.

With such a varied list of duties, automating DBA tasks makes sense. Add to the mix that many DBAs are required to support multiple DBMS products and the job becomes even more complicated. As such, most DBAs develop scripts and jobs to automate daily tasks. But in-house developed scripts require maintenance and upkeep to stay current, and some tasks are complicated enough to require a lot of effort to develop and maintain. For this reason, and many others, many organizations rely on DBA tools that are developed, maintained, and supported by vendors dedicated to keeping the software current and efficient.

One such tool worth considering for automating and simplifying a myriad of database administration tasks is Navicat Premium from PremiumSoft CyberTech Ltd. The tool is available for Windows, macOS and Linux.

Navicat Premium provides support for simplifying a plethora of DBA tasks, from database design through development and implementation. And it supports a wide range of different database management systems, including MySQL, MariaDB, Oracle Database, PostgreSQL, Microsoft SQL Server, SQLite and multiple cloud offerings (including Amazon, Oracle, Microsoft, Google and Alibaba). You will need to verify specific features, though, because supported features and platforms vary somewhat by database system and operating system.

Perhaps one of the best things that Navicat Premium brings to the table is an easy-to-use GUI that enables users to connect simultaneously to MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. If you are a DBA that has to manage multiple different DBMS products and installations then you know just how handy that can be. Unfortunately, Navicat does not offer support for IBM’s Db2, so keep that in mind if your organization uses Db2.

More important, however, is what you can do using Navicat. Let’s start at the beginning, with database design. Using the Navicat Premium intelligent database designer, DBAs can create, modify and manage all database objects with professional built-in guidance. The tool provides physical database modeling capabilities complete with forward and reverse engineering. That means you can go from model to DDL or from a live database instance to a model. Using Navicat Premium’s graphical database design and modeling tool, DBAs can model, create, and understand complex databases with ease. Automatic layout capabilities make it easy to create and print readable and useful data models, including to PDF files.

And Navicat Premium makes populating and moving data to and from your database a snap. Data can be imported and exported to and from most of the formats DBAs use regularly, including TXT, CSV, XML, JSON, DBF, and ODBC. A data viewer and editor make it possible to view and edit the data in your databases using a grid view or a form, with the ability to filter the data and to find/replace data in the table as needed. And you can even navigate and select data based on your referential constraints.

It is important to note that Navicat Premium adds intelligence to data movement, as DBAs can compare and synchronize data between databases and schemas with a detailed analytical process. This means that DBAs can quickly and easily compare the structure of two database instances to identify differences and generate the needed DDL to make them identical. But it is not just for the database structures, Navicat Premium also lets the DBA compare the data, too, so that you can reliably ensure that two tables that are supposed to have the same data, actually do have the same data!
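
Conceptually, a data comparison like this boils down to a pair of set-difference queries that the tool runs and interprets for you. As a minimal hand-rolled sketch, assuming two copies of a hypothetical customer table in schemas schema_a and schema_b (standard SQL EXCEPT shown; Oracle uses MINUS):

    -- Rows present in schema_a.customer but missing or different in schema_b.customer
    SELECT * FROM schema_a.customer
    EXCEPT
    SELECT * FROM schema_b.customer;

    -- ...and the reverse direction
    SELECT * FROM schema_b.customer
    EXCEPT
    SELECT * FROM schema_a.customer;

Navicat Premium automates this type of comparison (and the subsequent synchronization) across entire schemas, so you do not have to write and interpret such queries table by table.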

Furthermore, Navicat Premium facilitates database backup and restore for your availability and recovery requirements. That means DBAs can use Navicat Premium not only to design, populate, and synchronize database structures, but also to protect them.

Many organizations are adopting cloud databases these days and Navicat Premium can be used to manage cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, SQL Azure, Oracle Cloud, Google Cloud and Alibaba Cloud. Additionally, Navicat Cloud adds collaboration and synchronization features, including the ability to create projects and add members to collaborate, and to synchronize connections, queries, models and virtual groups.

Using Navicat Premium DBAs can set up automated tasks as needed, including scheduling backups, query execution, and data movement tasks like import, export, transfer and synchronization. There are numerous reports that can be generated by Navicat Premium and DBAs can schedule printing reports as needed. And notification emails can be sent automatically as part of the automation.

Application and SQL development tasks are also available using Navicat Premium. With the SQL editor and visual SQL builder you can create and modify efficient, error-free SQL statements to query and modify data across your databases. The SQL console provides an integrated method for testing and executing SQL statements. And Navicat even provides debugging capabilities for PL/SQL and PL/pgSQL stored procedures.

If you are looking for a tool that can assist your DBAs in supporting a complex, multi-DBMS environment, Navicat Premium is worth considering. It delivers an intuitive and well-designed interface that can simplify your database administration and development efforts.


On the High Cost of Enterprise Software

Enterprise Software Should Not Cost an Arm and a Leg

I believe that enterprise software is too costly and, indeed, it would seem that many others agree with me or we wouldn’t see the explosion of open source software being adopted industry-wide these days. Now don’t get me wrong, I am not saying that open source software is free to run – of course there are maintenance and management costs. But if commercial enterprise software provided needed functionality and was reasonably-priced then we wouldn’t really need open source, now would we?

Before the open source zealots start jumping down my throat, yes, I acknowledge that open source is more than just cost. The creativity and group-development aspects also led to its strong adoption rate… but I am getting off topic… so let’s go back to the high cost of enterprise software – that is, the software that runs the computing infrastructure of medium to large businesses.

It is not uncommon for companies to spend multiple millions of dollars on licenses and support contracts for enterprise software packages. This comprises not only operating systems, but database systems, business intelligence and analytics, transaction processing systems, web servers, portals, system management tools, ERP systems, and so on.

Yes, there is intrinsic value in enterprise software. Properly utilized and deployed it can help to better run your business, deliver value, and sometimes even offer competitive advantage. But what is a fair value for enterprise software?

That is a more difficult question to answer.

But let’s look at something simple, like a performance monitor. Nice software, helps you find problems, probably costs anywhere from tens of thousands of dollars to over a million depending on the size of the machines you are running it on. Sometimes the software used to monitor is more expensive than what it is being used to monitor! Why does it cost that much? Well, because companies have been willing to pay that much. Not because the software has to cost that much to develop. I mean, how many lines of code are in that monitor? Probably less than Microsoft Excel and I can get that for a hundred bucks or so. And I can almost guarantee that Excel has a larger development and support team than whatever monitor you choose to mention.

So the pricing is skewed not based on what it costs to develop, but what the market will bear. That is fine, after all we live in a free market economy (depending on where you live, I guess). But I don’t believe that the free market will continue to support such expensive software. And the open source movement is kind of bearing that out. Nevertheless, there are still companies that prefer to purchase commercial software rather than to rely on open source software, at least for some things.

As I think about enterprise software a bit further… In many cases, enterprise software vendors have migrated away from selling new software licenses to selling maintenance and support. For some companies, more than half of their revenue comes from maintenance and support instead of selling new software. This is especially true for some mainframe software vendors.

Viewed another way, you could be excused for thinking that some of these companies are doing little more than asking their customers to pay for the continued right to use the software, because there is little maintenance going on. Sounds like a nice little racket… you know what I’m talking about? So you pay several million for the software and then hundreds of thousands, maybe millions more, for the continued right to use it and get fixes.

Another problem with enterprise software is feature bloat. Enterprise software can be so expensive because vendors want to price it as if all of its features will be used by the enterprise. But usually only a few features are needed and used on a regular basis. Part of the problem, though, is that those few features can be (and usually are) different for each organization. One way vendors deal with this is to offer many separately-priced features enabled by license keys, but that is complicated for the user (as well as the software vendor).

So what is the answer? Gee, I wish I knew… if you have any ideas, please share them in the comments section… I’ve got some ideas and thoughts and perhaps I’ll share them with you all in a later blog post. But I think I’ve babbled on enough for today…


Implementing a Data Governance Initiative?

I recently received the following question and I think it is interesting enough to blog about.

Q: My company is looking to implement a data governance initiative. We want to be careful to avoid merely employing a series of extemporized data quality projects and are taking great pains to involve executive management, so we have formed a structure that segregates the responsibilities and activities into layers: a strategic level, tactical level, and execution level. What specific responsibilities should be assigned to each level?

Here is my answer:

The strategic level needs to involve upper level management and high-level technicians. Of course, this should include the CIO and CTO, but should include their lieutenants as well. Additionally, a senior data administrator or data architect should be included. A high-level consultant could be used to help steer this committee and keep it on track. The strategists will map out the purpose and intent of the data governance initiative. Why is it needed? What problems will it solve? What will its impact be on the business?

The tactical level needs to involve the folks most literate on data and database systems. This should include data architects, database administrators, and technical end users; perhaps senior programmer/analysts, as well. Consultants may be needed to help flesh out the needed tasks and requirements. These folks will outline the necessary components of the initiative to meet the strategy as outlined by the executive committee. Budgeting goals will need to be set as guided by the executive committee, and streamlining or adjusting the tactics may need to occur to stay within the budget guidelines as this group works on its mission.

The execution level needs to be staffed with the appropriate folks who can actually implement the tactics outlined by the tactical committee. This will likely include members of the tactical committee, as well as more junior DBA, DA, and programming staff.

Finally, I would suggest that you should engage the services of a skilled consultant in the area of data governance for advice on setting up your organization. I can recommend Bob Seiner, who is quite knowledgeable on the topic of data governance, as well as his book, Non-Invasive Data Governance: The Path of Least Resistance and Greatest Success (2014), which describes a way to introduce data governance without adopting onerous, intrusive processes.


A High-Level Guide to SQL Tuning

SQL tuning is a complicated task and to cover it adequately requires a full-length book of its own – actually, perhaps several if you use multiple DBMS products. That said, there are some good high-level SQL tuning suggestions that should apply regardless of the DBMS you are using. Well, as long as it supports SQL!

Here are some helpful rules of thumb:

  • Create indexes to support troublesome queries.
  • Whenever possible, do not perform arithmetic in SQL predicates. Use the host programming language (Java, COBOL, C, etc.) to perform arithmetic instead (see the sketch after this list).
  • Use SQL functions to reduce programming effort.
  • Look for ways to perform as much work as possible using only SQL; optimized SQL typically outperforms host language application code.
  • Build proper constraints into the database to minimize coding edit checks.
  • Do not forget about the “hidden” impact of triggers. A DELETE from one table may trigger many more operations. Although you may think the problem is a poorly performing DELETE, the trigger is really the culprit.
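
To make the arithmetic guideline concrete, here is a minimal sketch using a hypothetical EMP table. Wrapping the column in an expression generally prevents the optimizer from using an index on that column; isolating the column on one side of the predicate keeps the index usable:

    -- Arithmetic on the column; an index on SALARY typically cannot be used
    SELECT empno, lastname
    FROM   emp
    WHERE  salary * 1.1 > 50000;

    -- Equivalent predicate with the column isolated; the index can be used
    SELECT empno, lastname
    FROM   emp
    WHERE  salary > 50000 / 1.1;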

Furthermore, a large part of the task of tuning SQL is identifying the offending code. A SQL performance monitor is the best approach to identifying poorly performing statements. Such a tool constantly monitors the DBMS environment and reports on the resources consumed by SQL statements.

Some DBMSs provide rudimentary bundled support for SQL monitoring, but many third-party tools are available. These tools provide in-depth features such as the ability to identify the worst-performing SQL without the overhead of system traces, integration to SQL coding and tuning tools, and graphical performance charts and triggers. If you find yourself constantly being bombarded with poor performance problems, a SQL monitor can pay for itself rather quickly.
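
Even without a dedicated monitor, the bundled support mentioned above can get you started. As a minimal sketch, assuming PostgreSQL 13 or later with the pg_stat_statements extension enabled (other DBMSs expose similar views, such as Oracle’s V$SQL):

    -- Top 10 statements by cumulative execution time
    SELECT query,
           calls,
           total_exec_time,
           mean_exec_time,
           rows
    FROM   pg_stat_statements
    ORDER  BY total_exec_time DESC
    LIMIT  10;

A dedicated SQL monitor layers history, alerting, and analysis on top of this kind of raw data.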

At a high-level then, the guidance I want to provide is as follows:

  1. Ingrain the basics into your development environment. Make sure that not just the DBAs, but also the application developers understand the high-level advice in the bulleted list above.
  2. Build applications with database performance in mind from the beginning.
  3. Make sure that you have an easy, automated way of finding and fixing poorly performing SQL code.

Sound simple? Maybe it is, but many organizations fail at these three simple things…


Data Technology Today in 2017

Here we are in the first week of 2018 so it is time, once again, to look at what happened this past year in the blog. First of all, there were 16 new blog posts this year so I averaged a little more than one a month. That is less than what I would like to accomplish, but also a rate I can live with. I just want to make sure that I have enough new content here to keep you guys interested!

And it seems like there is continued interest. The graphic below shows 2017’s blog activity. There were over 46 thousand page views by over 36 thousand visitors.


The most popular post this year on the blog was one I posted a few years ago titled: An Introduction to Database Design: From Logical to Physical. There were 8,233 views of this particular post. The second most popular was a post on backup and recovery from 2014 that received about half as many views.

Who are you, out there, reading this blog? Well, I know where most of you live! Almost 20 thousand of you from my home country, the United States. But I also have a lot of great readers in India, as well as many others across the world, as can be seen here…


And how did most people find the blog? Unsurprisingly, it was by using a search engine, with terms like ‘types of database design’… ‘logical design to physical implementation’… ‘production data’… and many other data-related search terms. Twitter was the second most popular way for readers to find the blog, followed by my web site, LinkedIn, and Planet Db2.

So to end this brief synopsis of 2017, thank you to all of my regular readers – please keep visiting and suggesting more topics for 2018 and beyond. And if this is your first visit to the blog, welcome. Take some time to view the historical content – there are several informative posts that are popular every year… and keep checking back for new content on data, database, and related topics!


Happy Holidays 2017!

Just a short post to end the year wishing all of my readers everywhere a very happy holiday season – no matter which holidays you celebrate, I hope they bring you joy, contentment, and let you recharge for an even better year next year!


So enjoy the holidays and come back in January as we continue to explore the world of data and database technology…


SQL Coding and Tuning for Efficiency

Coding and tuning SQL is one of the most time consuming tasks for those involved in coding, managing and administering relational databases and applications. There can be literally thousands of individual SQL statements across hundreds of applications that access your many production databases. Although the DBA is ultimately responsible for ensuring performance of the database environment, there is quite a lot that application developers can do to help out. Frequently, developers are only concerned with getting the right answer (which is, of course, required) but not with getting it in the most efficient way.

When coding SQL statements, the following steps need to occur for each and every SQL statement that you write:

  1. Identify the business data requirements
  2. Ensure that the required data is available within existing databases
  3. Translate the business requirements into SQL
  4. Test the SQL for accuracy and results
  5. Review the access paths for performance (see the EXPLAIN sketch after this list)
  6. Tweak or re-write the SQL for better access paths
  7. Possibly code optimization hints
  8. Repeat steps 4 through 7 until performance is within the required range.
  9. Repeat step 8 whenever performance problems arise or a new DBMS version is installed
  10. Repeat entire process whenever business needs change
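
For steps 5 and 6, every major DBMS provides an EXPLAIN facility for reviewing access paths. As a minimal sketch using PostgreSQL syntax and hypothetical CUSTOMER and ORDERS tables (note that EXPLAIN ANALYZE actually executes the statement, so run it against test data):

    EXPLAIN ANALYZE
    SELECT c.custname,
           SUM(o.order_total) AS total_orders
    FROM   customer c
           JOIN orders o ON o.custno = c.custno
    WHERE  o.order_date >= DATE '2018-01-01'
    GROUP  BY c.custname;

The output shows the join methods, index usage, and estimated versus actual row counts, which is exactly the information you evaluate when deciding whether to tweak or re-write the SQL.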

SQL tuning is a complex, time consuming, and sometimes error-prone process. Furthermore, it requires cooperation and communication between the business users and application programmers for the first three steps, and between the application programmers and the DBA for the remaining steps.

It is imperative that developers learn more about SQL performance and take steps to be proactive about coding their programs with performance in mind. This is especially the case in the modern DevOps, continuous delivery, agile world where code is moved to production rapidly and numerous times a week.

If developers are not concerned about performance – or only marginally so – then it is a certainty that your organization will experience performance problems in production. There are simply not enough DBAs and performance analysts available to examine every program before it is moved to production these days.

How can you become a performance-focused developer? Here are a few suggestions:

  • Read the manuals for your DBMS of choice (Oracle, Db2, etc.), especially the ones that focus on performance. Find the SQL-related items and concentrate there, but the more you understand about all elements of database performance the better coder you will be.
  • Purchase books on SQL performance. There are several good ones that talk about performance in a heterogeneous manner and there are also many books that focus on SQL for each DBMS.
  • Talk to your DBAs about SQL techniques and methods that they have found to be good for performance.
  • Learn how to explain your SQL statements and interpret the access path information, either in the plan tables or in a visual explain tool (see the sketch after this list).
  • Use all of the performance tools at your disposal. Again, talk to the DBAs to learn what tools are available at your site.
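
As a minimal sketch of the explain-and-interpret workflow mentioned above, here is the Oracle flavor, using a hypothetical ORDERS table; Db2’s EXPLAIN populates a PLAN_TABLE that you query in a similar spirit:

    -- Populate the plan table for a statement without executing it
    EXPLAIN PLAN FOR
    SELECT custno, SUM(order_total)
    FROM   orders
    WHERE  order_date >= DATE '2018-01-01'
    GROUP  BY custno;

    -- Display the access path the optimizer chose
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);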

And always be tuning!


IT Through the Looking Glass

Sometimes I look for inspiration in what may seem — at first glance — to be odd places.  For example, I think the Lewis Carroll “Alice in Wonderland” books offer sage advice for the IT industry.  I mean, how many times have you watched a salesman grin as he spoke and then expected him to simply disappear the way the Cheshire Cat does?

Which Way Should We Go?

But perhaps that is a bad metaphor.  The Cheshire Cat was actually a pretty smart cookie (no disrespect to salespeople intended)!   Recall the passage where Alice comes to a fork-in-the-road and first meets the Cheshire Cat up in a tree.  She asks, “Would you tell me, please, which way I ought to go from here?”  And the cat responds, “That depends a good deal on where you want to go.”  Alice, in typical end-user fashion replies “It doesn’t much matter where.”  Causing the cat to utter words that we should all take to heart — “Then it doesn’t matter which way you go!”

Of course, you could follow Yogi Berra’s advice.  He said, “When you come to a fork in the road, take it!”  But then where would that leave you?  The bottom line is that planning and understanding are both required and go hand in hand with one another.

If you have no plan for where you want to go, then at best you will just be going around in circles; at worst, you’ll be going backward!  Planning and keeping abreast of the latest technology is imperative in the rapidly changing world of information technology (IT).  As Alice might put it, IT just keeps getting curiouser and curiouser.

It Means What I Mean!

But a true understanding of the IT industry, which is required to accurately and successfully plan, can be difficult to achieve because, invariably we will stumble across Humpty Dumptys.

Humpty Dumpty

You remember Humpty, don’t you?  He’s that egg who sits on the wall and spouts off about everything under the sun, sometimes without the requisite knowledge to back up his statements.

Humpty Dumpty is famous for saying “When I use a term, it means whatever I choose it to mean — nothing more, and nothing less.”  There are too many Humpty Dumptys out there.  Perhaps they have good intentions, but we all know what road is paved with those, don’t we?

There are too many people in the IT world laboring under false impressions and definitions. Whenever a new trend or technology begins to gain traction, then you can bet that almost every vendor will claim that their product should be a part of the trend. Even if the trend is completely new and the product in question was created 30 years ago!

Of course, products can be adapted and trends can change. So what is the point of this little blog post? I guess it would be to keep up with trends, don’t believe everything you read, always be learning and create your plans based on sound research.

Does anybody out there disagree with that?



Gaining Value from Analytics

Data volume and higher transaction velocities associated with modern applications are driving change into organizations across all industries. This is happening for a number of reasons. Customer and end user expectations for interacting with computerized systems have changed. And technology is changing to accommodate these requirements. Furthermore, larger and larger amounts of data are being generated and made available, both internally and externally to our businesses. Therefore, the desire and capability to store large amounts of data continues to expand.

One clear goal of most organizations is to be able to harness all of this data – regardless of its source or size – and to glean actionable insight from it. This is known as analytics. Advanced analytical capabilities can be used to drive a wide range of applications, from operational applications such as fraud detection to strategic analysis such as predicting patient outcomes. Regardless of the applications, advanced analytics provides intelligence in the form of predictions, descriptions, scores, and profiles that help organizations better understand behaviors and trends.

Furthermore, the desire to improve the time-to-value for analytics projects will result in a move to more real-time event processing. Many use cases can benefit from early detection and response, meaning that identification needs to be as close to real time as possible. By analyzing reams of data and uncovering patterns, intelligent algorithms can make reasonably solid predictions about what will occur in the future. This requires being adept enough to uncover the patterns before changes occur, though it does not always have to happen in real time.

Issues in Deploying Advanced Analytics

When implementing an analytics project it is not uncommon to encounter problems along the way. One of the first issues that needs to be addressed when adopting analytics in the cognitive era is having organization leaders who will embrace the ability to make decisions based on data instead of gut feelings based on the illusion of having data. Things change so fast these days that it is impossible for humans to keep up with all of the changes. Cognitive computing applications that rely on analytics can ingest and understand vast amounts of data and keep up with the myriad of changes occurring daily…if not hourly. Armed with advice that is based on a thorough analysis of up-to-date data, executives can make informed decisions instead of what amounts to the guesses they are making today.

However, most managers are used to making decisions based on their experience and intuition without necessarily having all of the facts. When analytics-based decision making is deployed management can feel less involved and might balk. Without the buy-in at an executive level, analytics projects can be very costly without delivering an ROI, because the output (which would deliver the ROI) is ignored.

Another potential difficulty involves managing and utilizing large volumes of data. Businesses today are gathering and storing more data than ever before. New data is created during customer transactions and to support product development, marketing, and inventory. And many times additional data is purchased to augment existing business data. This explosion in the amount of data being stored is one of the driving forces behind analytics. The more data that can be processed and analyzed, the better the advanced analysis can be at finding useful patterns and predicting future behavior.

However, as data complexity and volumes grow, so does the cost of building analytic models. Before real modeling can happen, organizations with large data volumes face the major challenge of getting their data into a form from which they can extract real business information. One of the most time-consuming steps of analytic development is preparing the data. In many cases, data is extracted, and subsets of that data are joined together, merged, aggregated, and transformed to create the analytic data set. In general, more data is better for advanced analytics.

There are two aspects to “more data”: (1) data can increase in depth (more customers, transactions, etc.), and (2) data can grow in width (where subject areas are added to enhance the analytic model). At any rate, as the amount of data expands, the analytical modeling process can take longer. Clearly, performance can be an issue.
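
To make the data preparation step concrete, here is a minimal sketch of building a simple analytic data set, assuming hypothetical CUSTOMER and TRANSACTIONS tables; real preparation pipelines involve many more sources and transformations:

    -- One row per customer summarizing a year of transaction history
    CREATE TABLE analytic_customer AS
    SELECT c.custno,
           c.region,
           COUNT(t.txn_id)   AS txn_count,
           SUM(t.txn_amount) AS total_spend,
           AVG(t.txn_amount) AS avg_txn_amount
    FROM   customer c
           LEFT JOIN transactions t
                  ON t.custno = c.custno
                 AND t.txn_date >= DATE '2017-01-01'
    GROUP  BY c.custno, c.region;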

Real-time analytics is another interesting issue to consider. The adjective real-time refers to a level of responsiveness that is immediate or nearly immediate. Market forces, customer requirements, governmental regulations, and technology changes collectively conspire to ensure that data that is not up-to-date is not acceptable. As a result, today’s leading organizations are constantly working to improve operations with access to, and analysis of, real-time data.

For example, consider the challenge of detecting and preventing fraud. Each transaction must be analyzed to determine its validity, and the organization waits for approval while this is done in real time. But if you err on the side of safety, valid transactions may be declined, which will cut into profits and, perhaps more importantly, upset your customers. The advanced analytics approach leverages predictive analysis to scrutinize current transactions along with historical data to determine whether transactions that may appear suspicious really are out of the ordinary for this customer. The challenge is doing this in real time.
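
As a simplified illustration of checking a transaction against a customer’s history, consider the following sketch, again using a hypothetical TRANSACTIONS table. A real fraud solution would apply a trained predictive model in a real-time scoring engine rather than an ad hoc query, but the underlying comparison is similar:

    -- Flag today's transactions that fall far outside the customer's historical pattern
    SELECT t.txn_id,
           t.custno,
           t.txn_amount
    FROM   transactions t
           JOIN (SELECT custno,
                        AVG(txn_amount)    AS avg_amt,
                        STDDEV(txn_amount) AS stddev_amt
                 FROM   transactions
                 WHERE  txn_date < CURRENT_DATE
                 GROUP  BY custno) h
             ON h.custno = t.custno
    WHERE  t.txn_date >= CURRENT_DATE
      AND  t.txn_amount > h.avg_amt + (3 * h.stddev_amt);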

Nimble organizations need to assess and respond to events in real-time based on up-to-date and accurate information, rules, and analyses. Real-time analytics is the use of, or the capacity to use, all available enterprise data and resources when they are needed. If, at the moment information is created (or soon thereafter) in operational systems, it is sensed and acted upon by an analytical process, real-time analytics have transpired.

As good as real-time analytics sounds, it is not without its challenges to implement. One such challenge is reducing the latency between data creation and when it is recognized by analytics processes.

Time-to-market issues can be another potential pitfall of an advanced analytics project. A large part of any analytical process is the work involved with gathering, cleansing, and manipulating the data required as input to the final model or analysis. As much as 60% to 80% of the effort during a project goes toward these steps. This up-front work is essential, though, to the overall success of any advanced analytics project.

Technology Considerations

From a technology perspective, managing the boatload of data and the performance of operations against that data can be an issue. Larger organizations typically rely on a mainframe computing environment to process their workload. But even in these cases the mainframe is not the only computing platform in use. And the desire to offload analytics to other platforms is often strong. However, for most mainframe users, most of the data resides on the mainframe. If analytics is performed on another platform, moving large amounts of data to and from the mainframe can become a bottleneck. Good practices and good software will be needed to ensure that efficient and effective data movement is in place.

But before investing in a lot of data movement off of the mainframe, consider evaluating the cost of keeping the data where it is and moving the processes to it (the analytics) versus the cost of moving the data to the process. Usually, the former will be more cost effective.

Taking advantage of more in-memory processes can also be an effective approach for managing analytical tasks. Technologies like Spark, which make greater use of memory to store and process data, are gaining in popularity. Of course, there are other in-memory technologies worth pursuing as well.

Another technology that is becoming more popular for analytics is streaming data software. Streaming involves the ingestion of data – structured or unstructured – from arbitrary sources and the processing of it without necessarily persisting it. This is contrary to our common methodology of storing all data on disk.

Although any digitized data is fair game for stream computing, it is most common for analyzing measurements from devices. As the data streams it is analyzed and processed in a problem-specific manner. The “sweet spot” for streaming is situations in which devices produce large amounts of instrumentation data on a regular basis. The data is difficult for humans to interpret easily and is likely to be too voluminous to be stored in a database somewhere. Examples of types of data that are well-suited for stream computing include healthcare, weather, telephony, stock trades, and so on.

By analyzing large streams of data and looking for trends, patterns, and “interesting” data, stream computing can solve problems that were not practical to address using traditional computing methods. To put it in practical terms, think about your home fire detectors. These devices are constantly up and running, waiting for a condition. When fire or smoke is detected, an alarm is sounded. Now if this was to be monitored remotely, you wouldn’t want to store all of the moments in time when there was no fire… but you care a lot about that one piece of data when the fire is detected, right?

Consider a healthcare example. One healthcare organization is using an IBM stream computing product, InfoSphere Streams, to help doctors detect subtle changes in the condition of critically ill premature babies.  The software ingests a constant stream of biomedical data, such as heart rate and respiration, along with clinical information about the babies.  Monitoring premature babies as a patient group is especially important because certain life-threatening conditions, such as infection, may be detected up to 24 hours in advance by observing changes in physiological data streams. The biomedical data produced by numerous medical instruments cannot be monitored manually, nor can a never-ending stream of values for multiple patients be stored long term.

But the stream of healthcare data can be constantly monitored with a stream computing solution. As such, many types of early diagnoses can be made that would take medical professionals much longer to make. For example, a rhythmic heartbeat can indicate problems (like infections); a normal heartbeat is more variable. Analyzing an ECG stream can highlight this pattern and alert medical professionals to a problem that might otherwise go undetected for a long period. Detecting the problem early can allow doctors to treat an infection before it causes great harm.

A stream computing application can get quite complex. Continuous applications, composed of individual operators, can be interconnected and operate on multiple data streams. Again, think about the healthcare example. There can be multiple streams (blood pressure, heart, temperature, etc.), from multiple patients (because infections travel from patient to patient), having multiple diagnoses.

The Bottom Line

There are many new and intriguing possibilities for analytics that require an investment in learning and new technology. But the return on the investment is potentially quite large in terms of gaining heretofore unknown insight into your business, and also in better servicing your customers. After all, that is the raison d’être for coming to work each day!
