Teradata Analytics Universe 2018 and Pervasive Data Intelligence

I spent last week in Las Vegas at the Teradata Analytics Universe conference, Teradata’s annual user conference. And there was a lot to do and learn there.

 

IMG_0182

Attendees heading to the Expo Hall at the Teradata Analytics Universe conference in Las Vegas, NV — October 2018

 

The major message from Teradata is that the company is a “new Teradata.” And the pitch is “Stop buying analytics,” which may sound like a strange message at a conference with analytics in its name!

But it makes sense if you listen to the entire strategy. Teradata is responding to the reality of the analytics marketplace. And that reality centers around three findings from a survey the company conducted of senior leaders from around the world:

  1. Analytics technology is too complex. 74 percent of senior leaders said their organization’s analytics technology is complex; 42 percent said that analytics is not easy for their employees to use and understand.
  2. Users don’t have access to all the data they need. 79 percent said they need access to more company data to do their job effectively.
  3. Data scientists are a bottleneck. Only 25 percent said that, within their enterprise, business decision makers have the skills to access and use intelligence from analytics without the need for data scientists.

 

WhereAreDataScientists_02x600

 

To respond to these challenges, Teradata says you should buy “answers” not “analytics.” And they are correct. Organizations are not looking for more complex, time-consuming, difficult-to-use tools, but answers to their most pressing questions.

Teradata calls its new approach “pervasive data intelligence,” which delivers access to all data, all the time, to find answers to the toughest challenges. This can be done on-premises, in the cloud, and anywhere in between.

A big part of this new approach is founded on Teradata Vantage, which provides businesses the speed, scale and flexibility they need to analyze anything, deploy anywhere and deliver analytics that matter. At the center of Vantage is Teradata’s respected analytics database management system, but it also brings together analytic functions and engines within a single environment. And it integrates with all the popular open source workbenches, platforms, and languages, including SQL, R, Python, Jupyter, RStudio, SAS, and more.

“Uncovering valuable intelligence at scale has always been what we do, but now we’re taking our unique offering to new heights, unifying our positioning while making our software and consulting expertise available as-a-service, in the cloud, or on-premises,” said Victor Lund, Teradata CEO.

Moving from analytical silos to an analytics platform that can deliver pervasive data intelligence sounds to me like a reasonable way to tackle the complexity, confusion, and bottlenecks common today.

Check out what Teradata has to offer at teradata.com

Posted in analytics, data, Teradata, tools | Leave a comment

Data Modeling with Navicat Data Modeler

Data modeling is the process of analyzing the things of interest to your organization and how these things are related to each other. The data modeling process results in the discovery and documentation of the data resources of your business. Data modeling asks the question “What?” instead of the more common data-processing question “How?”

As data professionals, it is important that we understand what the data is and what it means before we attempt to build databases and applications using the data. Even with today’s modern infrastructure that includes databases with flexible schemas that are applied when read (instead of the more traditional method of applying the schema on write), you still need a schema and an understanding of the data in order to do anything useful with it. And that means a model of the data.

The modeling process requires three phases and types of models: conceptual, logical and physical. A conceptual data model is generally more abstract and less detailed than a complete logical data model. It depicts a high-level, business-oriented view of information. The logical data model consists of fully normalized entities with all attributes defined. Furthermore, the domain or data type of each attribute must be defined. A logical data model provides an in-depth description of the data independent of any physical database manifestations. The physical data model transforms the logical model into a physical implementation using a specific DBMS product such as Oracle, MySQL or SQL Server.
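To make the logical-to-physical transformation concrete, here is a minimal sketch of the kind of DDL that forward engineering a couple of logical entities might produce. The STUDENT and DEPARTMENT entities, their attributes, and their data types are hypothetical stand-ins (they are not taken from the model shown in the figures below), and the syntax is generic enough to run on MySQL, PostgreSQL, or SQL Server with little or no change:

-- Hypothetical physical implementation of two logical entities and their relationship
CREATE TABLE department (
  dept_id   INT         NOT NULL,
  dept_name VARCHAR(60) NOT NULL,
  PRIMARY KEY (dept_id)
);

CREATE TABLE student (
  student_id    INT         NOT NULL,  -- surrogate key chosen at the physical level
  last_name     VARCHAR(40) NOT NULL,
  first_name    VARCHAR(40) NOT NULL,
  birth_date    DATE,
  major_dept_id INT,                   -- the logical relationship becomes a foreign key
  PRIMARY KEY (student_id),
  CONSTRAINT fk_student_dept FOREIGN KEY (major_dept_id)
    REFERENCES department (dept_id)
);

Decisions such as index placement, storage parameters, and partitioning are layered on top of this during physical design, and they differ from DBMS to DBMS.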

Navicat Data Modeler

Which brings us to the primary focus of today’s blog post: Navicat Data Modeler. We have looked at other Navicat products in this blog before (1, 2, 3), but those were performance and DBA tools. Navicat Data Modeler is designed to be used by data architects and modelers (but it can, of course, be used by DBAs, too).

A good data modeling tool provides the user with an easy-to-use palette for creating data models, and Navicat Data Modeler succeeds in this area. Navicat Data Modeler provides a rich interface for visually designing and building conceptual, logical and physical data models. Figure 1 shows a portion of a larger logical data model for a university application.

fig1

Figure 1. A Logical Data Model in Navicat Data Modeler

 

The interface enables the user to clearly see the relationships, entities, and attributes at a high level, and to zoom in to see the details (see Figure 2).

fig2

Figure 2. Attribute details for the Student entity

 

The tool offers a lot of flexibility, so you can create, modify, and design models in a user-friendly manner, the way you like. Navicat Data Modeler supports three standard notations: Crow’s Foot, IDEF1x and UML.

Although easy to use, Navicat Data Modeler is a powerful data modeling and database design tool. As already mentioned, it supports conceptual, logical, and physical modeling. Importantly, the tool also manages the migration of models using reverse and forward engineering processes. Using the Model Conversion feature, you can convert a conceptual model into a logical model, modify and further design at the logical level, and then convert it into a physical database implementation. Navicat Data Modeler supports MySQL, MariaDB, Oracle, Microsoft SQL Server, PostgreSQL, and SQLite. See Figure 3.

fig3

Figure 3. Forward engineering to a target database

 

OK, so that covers forward engineering, but what about reverse engineering? You can use Navicat Data Modeler to reverse engineer a physical database structure into a physical model, thereby enabling you to visualize the database to see the physical attributes (tables, columns, indexes, RI, and other objects) and how they relate to each other without showing any actual data.

Furthermore, you can import models from ODBC data sources, print models to files, and compare and synchronize databases and models. The Synchronize to Database function can be used to discover all database differences. You can view the differences and generate a synchronization script to update the destination database to make it identical to your model. And there are settings that can be used to customize how comparison and synchronization works between environments.

It is also worth noting that Navicat Data Modeler is fully integrated with Navicat Cloud. This makes sharing models much easier. You can sync your model files and virtual groups to the cloud for real-time access anytime and anywhere.

Summary

A proper database design cannot be thrown together quickly by novices. Data professionals require domain and design knowledge and powerful tools to implement their vision. Navicat Data Modeler offers one such tool that is worthy of your consideration.

Posted in data modeling, database design, DBA | Leave a comment

10 Rules for Succeeding as a DBA

Being a successful database administrator requires more than just technical acumen and deep knowledge of database systems. You also must possess a proper attitude, sufficient fortitude, and a diligent personality to achieve success in database administration. Gaining the technical know-how is important, yes, but there are many sources that offer technical guidance for DBAs. The non-technical aspects of the DBA job are just as challenging, though. So with that in mind, let’s take a look at ten “rules of thumb” for DBAs to follow as they improve their soft skills.

Rule #1: Write Down Everything – DBAs encounter many challenging tasks and time-consuming problems. The wise DBA always documents the processes used to resolve problems and overcome challenges. Such documentation can be very valuable (both to you and others) should you encounter a similar problem in the future. It is better to read your notes than to try to re-create a scenario from memory.

Rule #2: Keep Everything – Database administration is the perfect job for you if you are a pack rat. It is a good practice to keep everything you come across during the course of performing your job. If you don’t, it always seems like you’ll need that stuff the day after you threw it out! I still own some manuals for DB2 Version 2.

Rule #3: Automate – Why should you do it by hand if you can automate your DBA processes? Anything you can do probably can be done better by the computer – if it is programmed to do it properly. And once it is automated, you save yourself valuable time that is better spent tackling other problems.
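As a tiny illustration of the kind of check worth scripting, the query below flags tables that may need housekeeping. It is only a sketch that assumes a PostgreSQL environment and an arbitrary threshold; on Db2, Oracle, or SQL Server you would query that DBMS’s own catalog or monitoring views instead, but the automation principle is the same:

-- Flag user tables with many dead rows as candidates for vacuuming/reorganization.
-- The 10,000-row threshold is arbitrary; tune it for your environment.
SELECT relname    AS table_name,
       n_dead_tup AS dead_rows,
       last_autovacuum
FROM   pg_stat_user_tables
WHERE  n_dead_tup > 10000
ORDER  BY n_dead_tup DESC;

Schedule something like this to run daily and alert you, and you have automated away one more manual chore.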

Rule #4: Share Your Knowledge – The more you learn the more you should try to share what you know with others. There are many vehicles for sharing your knowledge: local user groups, online forums, web portals, magazines, blogs, Twitter, and so on. Sharing your experiences helps to encourage others to share theirs, so we can all benefit from each other’s best practices.

Rule #5: Focus Your Efforts – The DBA job is complex and spans many diverse technological and functional areas. It is easy for a DBA to get overwhelmed with certain tasks – especially those tasks that are not performed regularly. Understand the purpose for each task you are going to perform and focus on performing the steps that will help you to achieve that purpose. Do not be persuaded to broaden the scope of work for individual tasks unless it cannot be avoided. Analyze, simplify, and focus. Only then will tasks become measurable and easier to achieve.

Rule #6: Don’t Panic! – Problems will occur. There is nothing you can do to eliminate every possible problem or error. Part of your job is to be able to react to problems calmly and analytically. When a database is down and applications are unavailable, your environment will become hectic and frazzled. The best thing you can do when problems occur is to remain calm and go about your job using your knowledge and training.

Rule #7: Measure Twice, Cut Once – Being prepared means analyzing, documenting, and testing your DBA policies and procedures. Creating simple procedures in a vacuum without testing will do little to help you run an efficient database environment. And it will not prepare you to react rapidly and effectively to problem situations.

Rule #8: Understand the Business – Remember that being technologically adept is just a part of being a good DBA. Technology is important but understanding your business needs is more important. If you do not understand the business reasons and impact of the databases you manage then you will simply be throwing technology around with no clear purpose.

Rule #9: Don’t Be a Hermit – Be accessible; don’t be one of those “curmudgeon in the corner” DBAs that developers are afraid to approach. The more you are valued for your expertise and availability, the more valuable you are to your company. By learning what the applications must do you can better adjust and tune the databases to support the business.

Rule #10: Use All of the Resources at Your Disposal – Remember that you do not have to do everything yourself. Use the resources at your disposal. Many times others have already encountered and solved the problem that vexes you. Use your DBMS vendor’s technical support to help with particularly thorny problems. Use internal resources for areas where you have limited experience, such as network specialists for connectivity problems and system administrators for OS and system software problems. Build a network of colleagues that you can contact for assistance. Your network can be an invaluable resource and no one at your company even needs to know that you didn’t solve the problem yourself.

Achieve DBA Success!

The job of the DBA is a challenging one – from a technological, political and interpersonal perspective. Follow the rules presented in this blog post to improve your success as a DBA.

Posted in DBA | Leave a comment

How Much Data Availability is Enough?

I bet that some of you reading the title of this blog post scoffed at it. I mean, in this day and age, isn’t round-the-clock availability for every application and user just a basic requirement?

No, it shouldn’t be. Let’s discuss.

Availability is traditionally discussed in terms of the percentage of total time that a service needs to be up. For example, a system with 99% availability will be up and running 99% of the time and down, or unavailable, 1% of the time.

Another term used to define availability is MTBF, or mean time between failure. More accurately, MTBF is a better descriptor of reliability than availability. However, reliability has a definite impact on availability. In general, the more reliable the system the more available it will be.
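In fact, availability is often calculated as MTBF / (MTBF + MTTR), where MTTR is the mean time to repair: a system that rarely fails but takes a long time to recover can still post a disappointing availability number.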

In this Internet age, the push is on to provide never-ending uptime, 365 days a year, 24 hours a day. At 60 minutes an hour, that means 525,600 minutes of uptime a year. Clearly, achieving 100% availability is a laudable goal, but just as clearly an unreasonable one. Why? Because things break, human error is inevitable, and until everybody and everything is perfect, there will be downtime.

The term five nines is often used to describe highly available systems. Meaning 99.999% uptime, five nines describes what is essentially 100% availability, but with the understanding that some downtime is unavoidable (see the accompanying table).

Table 1. Availability versus Downtime

Availability     Approximate downtime per year
                 In minutes           In hours
99.999%          5 minutes            0.08 hours
99.99%           53 minutes           0.88 hours
99.95%           262 minutes          4.37 hours
99.9%            526 minutes          8.77 hours
99.8%            1,052 minutes        17.5 hours
99.5%            2,628 minutes        43.8 hours
99%              5,256 minutes        87.6 hours
98%              10,512 minutes       175.2 hours (or 7.3 days)
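The downtime figures in the table are just arithmetic on those 525,600 minutes in a year: multiply by the unavailable fraction. As a quick sketch, a query like the following reproduces them (give or take rounding); it is written for PostgreSQL or Db2, and other DBMSs may need minor syntax tweaks:

-- Downtime per year = unavailable fraction x 525,600 minutes
SELECT availability_pct,
       ROUND((100 - availability_pct) / 100 * 525600, 0)      AS downtime_minutes_per_year,
       ROUND((100 - availability_pct) / 100 * 525600 / 60, 2) AS downtime_hours_per_year
FROM   (VALUES (99.999), (99.99), (99.95), (99.9), (99.8), (99.5), (99.0), (98.0))
       AS avail(availability_pct);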

Even though 100% availability is not reasonable, some systems are achieving availability approaching five nines. DBAs can take measures to design databases and build systems that are created to achieve high availability. However, just because high availability can be built into a system does not mean that every system should be built with a high-availability design. That is so because a highly available system can cost many times more than a traditional system designed with unavailability built into it. The DBA needs to negotiate with the end users and clearly explain the costs associated with a highly available system.

Whenever high availability is a goal for a new system, database, or application, careful analysis is required to determine how much downtime users can really tolerate, and what the impact of an outage would be. High availability is an alluring requirement, and end users will typically request as much as they think they can get. As a DBA, your job is to investigate the reality of the requirement.

The amount of availability that should be built into the database environment must be based on service level agreements and cost. How much availability does the business require? And just as importantly, how much availability can the business afford to implement?

That is the ultimate question. Although it may be possible to achieve high availability, it may not be cost-effective, given the nature of the application and the budget available to support it. The DBA needs to be proactive in working with the application owner to make sure the cost aspect of availability is fully understood.

Posted in availability, DBA, SLA | Leave a comment

Database Performance Management Solutions

Performance management, from a database perspective, comprises three basic components:

  1. Monitoring a database system and applications accessing it to find problems as they arise. This is typically referred to as performance monitoring.
  2. Analyzing performance data (logs, trace records, etc.) from the system to determine the root cause of the problem.
  3. Assembling a corrective action to implement a fix to the problem.

Database performance software can aid in all three areas. But some simply monitor systems or fix problems, while others deliver combined functionality.

Database performance management software can also be broken down by the type of database performance issues it addresses. Database performance problems can arise in any of the following three areas:

  • The DBMS itself, which must interact with other system software and hardware, requiring proper configuration to ensure it functions accurately and performs satisfactorily. Additionally, there are many database system parameters used to configure the resources to be used by the DBMS, as well as its behavior. This includes important performance criteria such as memory capacity, I/O throughput and locking of data pages.
  • Database schema/structures. The design of databases, tables and indexes can also impact database performance. Issues include the physical design of the database, disk usage, number of tables, index design and data definition language parameters. How the data is organized must also be managed. And as data is modified in the database, its efficiency will degrade. Reorganization and defragmentation are required to periodically remedy disorganized data.
  • SQL and application code. Coding efficient SQL statements can be complicated because there are many different ways to write SQL that return the same results. But the efficiency and performance of each formulation can vary significantly. DBAs need tools that can monitor the SQL code that’s being run, show the access paths it uses and provide guidance on how to improve the code.

Database performance tools can identify bottlenecks and points of contention, monitor workload and throughput, review SQL performance and optimization, monitor storage space and fragmentation, and view and manage your system and DBMS resource usage. Of course, a single tool is unlikely to perform all of these tasks, so you may need multiple tools (perhaps integrated into a functional suite) to perform all of your required database performance management tasks.

Without proactive tools that can identify problems as they occur, database performance problems are most commonly brought to the attention of the DBA by end users. The phone rings and the DBA hears a complaint that is usually vague and a bit difficult to interpret… things like “my system is a bit sluggish” or “my screen isn’t working as fast as it used to.” In such cases, the DBA needs tools that can help uncover the exact problem and identify a solution. Database performance management tools can help to find the problem as well as to put together and deploy a solution to the problem.
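Even without a dedicated tool, a DBA fielding a “my system is sluggish” call will usually start with the DBMS’s own instrumentation. As a minimal sketch, assuming a PostgreSQL server, something like the following shows what is running right now and what it is waiting on; every major DBMS offers an analogous monitoring view or trace facility:

-- Active sessions, how long they have been running, and what they are waiting on
SELECT pid,
       now() - query_start AS running_for,
       wait_event_type,
       wait_event,
       LEFT(query, 80)     AS current_sql
FROM   pg_stat_activity
WHERE  state = 'active'
ORDER  BY running_for DESC;

A dedicated performance management tool essentially builds on this same instrumentation, adding history, baselines, alerting, and advice.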

A lot of organizations use more than one production DBMS. Frequently, the same DBA team (and sometimes even the same exact DBA) will have to assure the performance of more than one DBMS (such as Oracle and SQL Server… or Db2 and MySQL). But each DBMS has different interfaces, parameters and settings that affect how it performs. Database performance tools can mitigate this complexity with intelligent interfaces that mask the complexity, making disparate components and settings look and feel similar from DBMS to DBMS.

There are many providers of database performance management tools, including the DBMS vendors (IBM, Microsoft and Oracle), large ISVs like BMC, CA and Quest, and a wide array of niche vendors that focus on DBA and database performance software.

What database performance tools do you use and recommend? Share your experiences with us in a comment here on the blog.

Posted in DBA, performance, tools | Leave a comment

SQL Basics

It is hard to imagine a time when SQL was unknown and not the lingua franca it is today for accessing databases. That said, there are still folks out there who don’t know what SQL is… so for them, here is an introductory place to start…

SQL is an acronym for Structured Query Language. It is often pronounced “sequel,” but is also spelled out as letters, like ess-cue-ell. SQL is a powerful tool for accessing and manipulating data. It is the de facto standard query language for relational database management systems, used by all of the leading RDBMS products including Oracle, SQL Server (natch), Db2, MySQL, Postgres, SAP Adaptive Server, and more.

Interestingly enough, NoSQL database systems are increasingly being adapted to allow SQL access, too! So SQL is ubiquitous, and it makes sense for anybody with an interest in data management to learn how to code SQL.

SQL is a high-level language that provides a greater degree of abstraction than do procedural languages. Most programming languages require that the programmer navigate data structures. This means that program logic needs to be coded to proceed record-by-record through data elements in an order determined by the application programmer or systems analyst. This information is encoded in programs and is difficult to change after it has been programmed.

SQL, on the other hand, is fashioned so that the programmer can specify what data is needed, instead of how to retrieve it. SQL is coded without embedded data-navigational instructions. The DBMS analyzes the SQL and formulates data-navigational instructions “behind the scenes.” These data-navigational instructions are called access paths.

By having the DBMS determine the optimal access path to the data, a heavy burden is removed from the programmer. In addition, the database can have a better understanding of the state of the data it stores, and thereby can produce a more efficient and dynamic access path to the data. The result is that SQL, used properly, can provide for quicker application development.
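Most DBMSs will show you the access path the optimizer chose. As a minimal sketch, in PostgreSQL you simply prefix the statement with EXPLAIN (Db2 and Oracle use EXPLAIN PLAN FOR together with plan tables, but the idea is the same); the EMP table here is the same sample table used in the example below:

-- Ask the optimizer to describe its access path instead of executing the query
EXPLAIN
SELECT LASTNAME
FROM   EMP
WHERE  EMPNO = '000010';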

Another feature of SQL is that it is not merely a query language. The same language used to query data is used also to define data structures, control access to the data, and insert, modify, and delete occurrences of the data. This consolidation of functions into a single language eases communication between different types of users. DBAs, systems programmers, application programmers, systems analysts, and end users all speak a common language: SQL. When all the participants in a project are speaking the same language, a synergy is created that can reduce overall system-development time.

Arguably, though, the single most important feature of SQL that has solidified its success is its capability to retrieve data easily using English-like syntax. It is much easier to understand the following than it is to understand pages and pages of program source code.

SELECT LASTNAME
FROM   EMP
WHERE  EMPNO = '000010';

Think about it; when accessing data from a file the programmer would have to code instructions to open the file, start a loop, read a record, check to see if the EMPNO field equals the proper value, check for end of file, go back to the beginning of the loop, and so on.

SQL is, by nature, quite flexible. It uses a free-form structure that gives the user the ability to develop SQL statements in a way best suited to the given user. Each SQL request is parsed by the DBMS before execution to check for proper syntax and to optimize the request. Therefore, SQL statements do not need to start in any given column and can be strung together on one line or broken apart on several lines. For example, the following SQL statement is equivalent to the previously listed SQL statement:

SELECT LASTNAME FROM EMP WHERE EMPNO = '000010';

Another flexible feature of SQL is that a single request can be formulated in a number of different and functionally equivalent ways. One example of this SQL capability is that it can join tables or nest queries. A nested query always can be converted to an equivalent join. Other examples of this flexibility can be seen in the vast array of functions and predicates. Examples of features with equivalent functionality are:

  • BETWEEN versus <= / >=
  • IN versus a series of predicates tied together with AND
  • INNER JOIN versus tables strung together in the FROM clause separated by commas
  • OUTER JOIN versus a simple SELECT, with a UNION, and a correlated subselect
  • CASE expressions versus complex UNION ALL statements

This flexibility exhibited by SQL is not always desirable, as different but equivalent SQL formulations can result in wildly different performance. The ramifications of this flexibility, along with guidelines for developing efficient SQL, are a topic for another post.
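To illustrate with the simplest of these equivalences, the two statements below return exactly the same rows, yet depending on the DBMS and the available indexes they may be optimized differently. (SALARY is a hypothetical column added to the EMP sample table for the sake of the example.)

-- Functionally equivalent: BETWEEN versus a pair of range predicates
SELECT LASTNAME
FROM   EMP
WHERE  SALARY BETWEEN 50000 AND 70000;

SELECT LASTNAME
FROM   EMP
WHERE  SALARY >= 50000
  AND  SALARY <= 70000;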

As mentioned, SQL specifies what data to retrieve or manipulate, but does not specify how you accomplish these tasks. This keeps SQL intrinsically simple. If you can remember the set-at-a-time orientation of a relational database, you will begin to grasp the essence and nature of SQL. A single SQL statement can act upon multiple rows. The capability to act on a set of data coupled with the lack of need for establishing how to retrieve and manipulate data defines SQL as a non-procedural language.

Because SQL is a non-procedural language, a single statement can take the place of a series of procedures. Again, this is possible because SQL uses set-level processing and the DBMS optimizes the query to determine the data-navigation logic. Sometimes one or two SQL statements can accomplish tasks that otherwise would require entire procedural programs to do.
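As a quick sketch of what set-level processing means in practice, the single statement below changes every qualifying row at once; a procedural program would have to open a cursor or file, loop, test each record, and update rows one at a time. (BONUS and WORKDEPT are hypothetical columns on the EMP sample table, used here only for illustration.)

-- One set-level statement; no loop is coded anywhere
UPDATE EMP
SET    BONUS = BONUS * 1.10
WHERE  WORKDEPT = 'D11';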

Summary

Of course, this brief introduction does not constitute an education in SQL and it will not make you a SQL programmer. For that, you will need education and practice. A good place to start is with a SQL book or two. I can recommend these:

After reading through some good books, practice writing some SQL and keep learning… move on to more advanced texts and, if you can, attend a class on SQL. Because learning SQL makes sense in this day and age of analytics!

Posted in DBA, SQL | Leave a comment

DBA Corner

Just a quick blog post today to remind my readers that I write a regular, monthly column for Database Trends & Applications magazine called DBA Corner.

The DBA Corner is geared toward news, issues, and technologies that will be of interest to database administrators. Sometimes the material is in-depth and technical (well, as much as 700 or so words allows) and sometimes it will be more philosophical or newsy.

If you are not a DBA, do not worry, as the column regularly expands to focus on issues of interest to data architects, data analysts and even programmers and developers. Issues addressed recently in my column include data modeling, database design, database standards, SQL coding, DBA practices and procedures, performance, application development, optimization techniques, data governance, regulatory compliance with regard to data, industry trends, and more.

So I hope you will check back each month to read the DBA Corner column at the DBTA web site… and if you have any ideas or topics that you’d like me to address, add them as a comment to this blog post.

Posted in DBA, NoSQL, performance, SQL | Leave a comment

Managing MongoDB Databases with Navicat

I’ve written about Navicat tools for managing data in this blog before (performance monitoring, heterogeneous database administration), so I thought I’d take a look at their most recent offering, which provides many useful DBA features for MongoDB.

MongoDB is a NoSQL, open-source, cross-platform, document-oriented database management system. MongoDB uses JSON-like documents with optional schemas. Use cases for which MongoDB excels include web commerce applications, content management, blogs, real-time analytics, and social networking. It is not particularly well-suited for systems with high transaction rates.

But I don’t really want to discuss MongoDB in-depth here. As a proponent of performing database administration as a management discipline, though, I will note that the world of NoSQL database systems lacks the in-depth management and administration tooling enjoyed by relational database systems. That has to change, and Navicat has obviously recognized this fact with its new Navicat for MongoDB offering.

Navicat for MongoDB delivers a GUI interface for MongoDB database management, administration and development (see Figure 1). You can use it to connect to local and remote MongoDB servers, and it is compatible with MongoDB Atlas.

Navicat for MongoDB

Figure 1. Navicat for MongoDB GUI – main screen

Navicat for MongoDB offers many useful features for DBAs to manage, monitor, query, and visualize MongoDB data. It supports adding, modifying, and deleting documents using built-in editors, including a tree view, a JSON view, and the classic spreadsheet-like grid view.

One of the bigger headaches of using a new database technology like MongoDB can be moving data around. Navicat for MongoDB makes this easier as it comes with an Import Wizard that can be used to transfer data into and out of your MongoDB databases. It supports multiple, diverse formats like Excel, Access, CSV and more. You also can ingest data from ODBC after setting up a data source connection. It provides strong data transfer and synchronization capabilities to enable the migration of your data. You can transfer data across databases, compare the data in your databases, and synchronize the data.

Querying data in MongoDB is a snap with the Navicat Visual Query Builder. It can be used to create, edit and run queries without having to worry about syntax and proper usage of commands. Additional features, like Code Completion and customizable Code Snippets, simplify your coding efforts by providing suggestions for keywords and eliminating repetitious coding.

For DBAs, Navicat for MongoDB provides an Intelligent Object Designer. It enables you to create, modify and manage all database objects using built-in professional advice and guidance. You can preview results on each step and debug the sampled data before running your jobs. And the Smart Schema Analyzer can be used to help you visually discover and explore your schema. With it, you can analyze your documents and display the structures within your collections, thereby making it easy to understand your data’s schema, find schema anomalies and inspect outliers.

Navicat for MongoDB even provides an intuitive Backup Utility that can be used to automate your backup process and reduce the potential for errors. You can set up a repeatable deployment process for job and script execution at a specific time or day.

Security is built into the product, too. It uses SSH Tunneling and SSL to ensure every connection is secure, stable, and reliable. You can use different database server authentication methods, such as Kerberos and X.509 authentication.

Finally, you can use Navicat for MongoDB in the cloud to enable collaboration with your co-workers. Share your connection settings, queries and virtual groups with your coworkers anytime and anywhere.

All-in-all, Navicat for MongoDB goes a long way toward making your MongoDB environment as manageable as your relational environments.

You can download and try Navicat for MongoDB here: https://www.navicat.com/en/download/navicat-for-mongodb.

Posted in DBA | Tagged , , , | 1 Comment

My Computer Mug Collection, Part 4

Recently I tweeted a mug a day from my collection of coffee mugs given to me over the years by computer and software companies. Since I finished up tweeting all of my mugs, I have been posting them here to my blog, as well. So far I have posted three previous posts showcasing my computer mug collection. Today, as promised, here are the remaining mugs I have yet to blog…

First up is a mug I forgot to include in Part 2 (mugs from computer conferences). This one is from SHARE, and I received it as a Best Session Award for my DB2 for z/OS Performance Roadmap presentation. It is clear glass, so it is a bit difficult to see:

IMG_1075

Next up is a series of mugs from German software vendor Software Engineering. I think I was lucky enough to collect all of the mugs in the series:

IMG_0865

And here is a mug from ComputerWorld with one of their IT cartoons on it. I sure hope the ESC key worked!

IMG_0853

And this mug is back from the days when Oracle actually developed and marketed a version of their DBMS for MVS! Sure, you can run Oracle on a mainframe today, but it has to be in a Linux partition.

IMG_0858

Here are several mugs from IBM in my collection. The first one says “Tame your Data Monster with IBM” – and that is a good overall summation of what I’ve done my entire career! And then there is the newest of these mugs, the IBM Champion mug. I use this one every day as a pen and pencil holder! And the last one is a joke, of sorts. Documentation and memos that are not meant to be shared are often marked “Internal Use Only,” as is this mug, probably referring to the coffee it will hold.

Next we have a mug from Memorex. Some of you might question whether it is actually a “computer” mug, but it is! This is from back in the day when Memorex was a big manufacturer of floppy disks.

IMG_0874

Here is a nice little mug from Peppers & Rogers that I think I got when I took a product management class from them back in the mid-1990s:

IMG_0871

And finally, here is a mug from Software Marketing Journal. I only subscribed to this magazine for a short time in the late 1990s when I was VP of marketing for Platinum’s database solutions… so I’m pretty sure that it is from that timeframe:

IMG_0870

And that concludes my cavalcade of computer mugs… I think. There may be another mug or two hiding around here somewhere… if I discover any more I’ll be sure to share them with you.

So what next? I have an extensive button/pin collection from various computer companies and conferences. Anybody want me to start sharing those? Or have you had enough?

Posted in DBA | Leave a comment

My Computer Mug Collection, Part 3

So far I have published two posts showing the computer-related mugs in my collection.  Over several weeks I first tweeted a mug a day from my collection, and now I am blogging them for those that missed any of the tweets.

In part 1 of my computer mug collection I highlighted the mugs from companies where I worked; and in part 2 I showed the mugs I received from conferences and user groups. Today’s post I call the dearly-departed — mugs from companies that have been acquired or gone out of business.

First up, we have Goal Systems. This is not the current company named Goal Systems (transport industry software), but the company that sold the performance monitor Insight for Db2 before CA acquired them.

IMG_0714

And here are mugs from two more companies that were bought by CA: Pansophic and Platinum (yes, I just had to include a Platinum mug again).

Then we have BGS Systems, the maker of mainframe capacity planning and performance software that I believe was acquired first by Boole and Babbage, and then by BMC Software.

IMG_0771

And here is a nice mug from Easel Corporation, which was a popular software development firm for GUI design. It was acquired by VMARK in the mid-1990s.

IMG_0847

Here we have a mug from R&O, the original makers of the Rochade repository. They have since been acquired by Allen Systems Group.

IMG_0855

Then we have this mug, from Cogito, the makers of EZ-Db2… they have since been acquired by Syncsort.

IMG_0860

And then there is XA Systems, which was acquired by Compuware back in the early 1990s.

IMG_0859

And finally, here is a mug from Sablesoft. They made the Compile/QMF (which at some point was renamed to Compile/QQF) product. Sablesoft was acquired by Platinum technology, inc., and then CA.

IMG_0856

That concludes today’s post… but there are still a few more mugs I have yet to blog. Stay tuned for the final post in this series coming next week!

 

Posted in data, DB2, mugs | Leave a comment