Database Access Auditing: Who Did What to Which Data When?

As just about anyone in business these days knows, there is a growing list of government and industry regulations that organizations must understand and comply with. This compliance pressure is particularly intense for data stored in corporate databases. Companies need to be ever more vigilant in the techniques they use to protect their data, as well as in monitoring and ensuring that sufficient protection is in place. Such requirements are driving new and improved software methods and techniques.

One of these techniques is database auditing, sometimes called data access monitoring (or DAM). At a high level, database auditing is basically a facility to track the use of database resources and authority. When auditing is enabled, each audited database operation produces an audit record that captures details such as what database object was impacted, who performed the operation, and when. The comprehensive audit trail of database operations that is produced can be maintained over time, allowing DBAs, auditors, and other authorized personnel to perform in-depth analysis of access and modification patterns against the data in the DBMS.

Database auditing helps to answer questions like “Who accessed or changed the data?”, “When was it changed?” and “What was the old content prior to the change?” Your ability to answer such questions is very important for regulatory compliance. Sometimes it may be necessary to review certain audit data in greater detail to determine exactly how, when, and by whom the data was changed.

Why would you need to ask such questions? Consider HIPAA, the Health Insurance Portability and Accountability Act. This legislation contains language specifying that health care providers must protect individuals’ health care information, even going so far as to state that the provider must be able to give an individual a list of everyone who so much as looked at their information. Think about that. Could you produce a list of everyone who looked at a specific row or set of rows in any database under your control?

Industry regulations, such as the PCI DSS (Payment Card Industry Data Security Standard), control the protective measures that must be undertaken to secure personally identifiable information (PII). Organizations that fail to comply run the risk of losing their ability to accept payments using credit cards and debit cards… and that can quickly ruin a company.

Tracking who does what to which piece of data is important because there are many threats to the security of your data. External agents trying to compromise your security and access your company data are rightly viewed as a threat to security. But industry studies have shown that the majority of security threats are internal – within your organization. Indeed, some studies have shown that internal threats comprise 60% to 80% of all security threats. The most typical security threat comes from a disgruntled or malevolent current or ex-employee who has valid access to the DBMS. Auditing is crucial because you may need to uncover unauthorized access emanating from an authorized user.

But keep in mind that auditing tracks what a particular user has done once access has been allowed. Auditing occurs post-activity; it does not do anything to prohibit access. Audit trails help promote data integrity by enabling the detection of security breaches, also referred to as intrusion detection. An audited system can serve as a deterrent against users tampering with data because it helps to identify infiltrators.

There are many situations where an audit trail is useful. Your company’s business practices and security policies may dictate a comprehensive ability to trace every data change back to the initiating user. Perhaps government regulations (such as the Sarbanes-Oxley Act) require your organization to analyze data access and produce regular reports. You may be required to produce detailed reports on an ongoing basis, or perhaps you just need the ability to identify the root cause of data integrity problems on a case-by-case basis. Auditing is beneficial for all of these purposes.

A typical auditing facility permits auditing at different levels within the DBMS – for example, at the database, database object, and user levels. One of the biggest problems with existing internal DBMS audit facilities is performance degradation. The audit trails that are produced must be detailed enough to capture before- and after-images of database changes. But capturing so much information, particularly in a busy system, can cause performance to suffer. Furthermore, this audit trail must be stored somewhere, which is problematic when a massive number of changes occur. Therefore, a useful auditing facility must allow for the selective creation of audit records to minimize performance and storage problems.

There are several different names used for database auditing. You may have heard database auditing capabilities referred to as any of the following:

  • Data Access Auditing
  • Data Monitoring
  • Data Activity Monitoring

Each of these is essentially the same thing: monitoring who did what to which piece of data when. In addition to database auditing, you may wish to include database authorization auditing, which is the process of reviewing who has been granted what level of database access authority. This typically is not an active process, but is useful for regularly reviewing all outstanding authorizations to determine whether they are still required. For example, database authorization auditing can help to identify ex-employees whose authorization has not yet been removed.
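A periodic authorization review can often be driven with nothing more than queries against the system catalog. As a hedged sketch, the following uses Oracle’s DBA_TAB_PRIVS view (other DBMSs expose similar information in their own catalog views, such as SYSCAT.TABAUTH in DB2 for LUW); the PAYROLL schema is purely illustrative:

  SELECT grantee, owner, table_name, privilege, grantable
  FROM   dba_tab_privs
  WHERE  owner = 'PAYROLL'       -- hypothetical schema under review
  ORDER  BY grantee, table_name;

Comparing the output of a query like this against the current employee roster is a simple way to catch authorizations that should have been revoked.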

Database Access Auditing Techniques

There are several popular techniques that can be deployed to audit your database structures. Let’s briefly discuss three of them and highlight their pros and cons.

The first technique is trace-based auditing. This technique is usually built directly into the native capabilities of the DBMS. Commands or parameters are set to turn on auditing and the DBMS begins to cut trace records when activity occurs against audited objects. Although each DBMS offers different auditing capabilities, some common items that can be audited by DBMS audit facilities include:

  • login and logoff attempts (both successful and unsuccessful attempts)
  • database server restarts
  • commands issued by users with system administrator privileges
  • attempted integrity violations (where changed or inserted data does not match a referential, unique, or check constraint)
  • select, insert, update, and delete operations
  • stored procedure executions
  • unsuccessful attempts to access a database or a table (authorization failures)
  • changes to system catalog tables
  • row level operations

The problems with this technique include a high potential for performance degradation when audit tracing is enabled, a high probability that the database schema will need to be modified, and insufficient granularity of audit control, especially for reads.
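To make the trace-based technique a bit more concrete, here is roughly what enabling native auditing can look like. The object names are purely illustrative, and the exact commands, prerequisites, and audit classes vary widely by DBMS and release, so treat these as hedged sketches rather than recipes for your environment:

  -- Oracle (traditional auditing): audit logons and all DML against one table
  AUDIT SESSION;
  AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;

  -- DB2 for z/OS: mark the table as auditable, then start an audit trace
  ALTER TABLE HR.EMPLOYEES AUDIT ALL;
  -START TRACE(AUDIT) CLASS(1,2,4,5) DEST(SMF)

Note that the DB2 sketch illustrates the schema-modification drawback mentioned above: the table itself must be altered to AUDIT CHANGES or AUDIT ALL before the audit trace will record activity against it.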

Another technique is to scan and parse the database transaction logs. Every DBMS uses transaction logs to capture every database modification for recovery purposes. Software exists that interprets these logs and identifies what data was changed and by which users. The drawbacks to this technique include the fact that reads are not captured on the logs, that there are ways to disable logging that will cause modifications to be lost, the performance issues involved in scanning volumes and volumes of log files looking for only the specific information to audit, and the difficulty of retaining logs over long periods for auditing when they were designed for short-term retention to support database recovery.

Additionally, third party vendors offer products that scan the database logs to produce audit reports. The DBMS must create log files to assure recoverability. By scanning the log, which has to be produced anyway, the performance impact of capturing audit information can become a non-issue.

The third database access auditing technique is proactive monitoring of database operations at the server. This technique captures all SQL requests as they are made. It is important that all SQL access is audited, not just network calls, because not every SQL request goes over the network. Proactive audit monitoring does not require transaction logs, does not require database schema modification, and should be highly granular in terms of specifying what to audit.

The Questions That Must be Answerable

As you investigate the database access auditing requirements for your organization, you should compile a list of the types of questions that you want your solution to be able to answer. A good database access auditing solution should be able to provide answers to at least the following questions:

  1. Who accessed the data?
  2. At what date and time was the access?
  3. What program or client software was used to access the data?
  4. From what location was the request issued?
  5. What SQL was issued to access the data?
  6. Was the request successful, and if so, how many rows of data were retrieved?
  7. If the request was a modification, what data was changed? (A before and after image of the change should be accessible)

Of course, there are numerous details behind each of these questions. A robust database access auditing solution should provide an independent mechanism for the long-term storage and access of audit details. The solution should offer canned queries for the most common audit questions, but the audit information should also be accessible using industry standard query tools, making it easier for auditors to customize queries as necessary.
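To illustrate what “accessible using industry standard query tools” might look like in practice, here is a hedged example of the kind of query an auditor could run. The AUDIT_LOG table and its columns are invented for the sake of the example – every product stores its audit trail differently:

  SELECT audit_ts, user_id, client_program, client_location,
         sql_text, rows_affected
  FROM   audit_log                         -- hypothetical audit-trail table
  WHERE  object_name = 'PATIENT_RECORDS'   -- hypothetical audited table
    AND  audit_ts BETWEEN TIMESTAMP '2016-01-01 00:00:00'
                      AND TIMESTAMP '2016-01-31 23:59:59'
  ORDER  BY audit_ts;

A query of this sort answers questions 1 through 6 directly; answering question 7 also requires that the before- and after-images of changed data be stored (or be derivable from the audit details).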

Summary

Database auditing can be a crucial component of database security and compliance with government regulations. Be sure to study the auditing capabilities of your DBMS and to augment these capabilities with third party tools to bolster the auditability of your databases.


Posted in auditing, compliance | Leave a comment

What Does the Latest Salary Survey Say About Data Professionals?

In the latest Computerworld IT Salary Survey (2016) 71% of IT workers who took the survey reported that they received a raise in the past year. That is a nice healthy number.

For those of us who specialize in data and database systems though, it may be bad news that we are not as “in demand” as our application development brethren: 45% expect their organizations to hire new application developers this upcoming year whereas only 17% expect to hire new database analysis and development folks. Of course, this may not be all bad news because organizations need many more developers than they do DBAs, right?

In terms of compensation, the national average for DBAs was $98,213 in the 2016 survey, up 1.9% over 2015. This figure includes both a base salary and bonus.

Let’s compare that to the application developer compensation. AppDev folks averaged $91,902 in 2016, up 4.4% over 2015. So DBAs are still out-earning application developers, but not by much. And the application folks are getting bigger raises!

That said, these types of surveys are always skewed somewhat because of the multitude of titles and jobs out there that fall into multiple categories. For example, the national average compensation for database analysts was $90,370 in 2016, up 3.5% over 2015. And the national average for database architects was $128,242 in 2016, up 3.3% over 2015. I think this sends a clear message to DBAs: it is time to ask for your title to be changed to database architect!

You can go to the web site and search on the various categories to uncover the compensation figures for your favorite profession. I was curious, for example, about data scientists, but there were only 13 respondents rendering the results not significant.


Posted in DBA, salary | Leave a comment

Evaluating Data Warehouse Platforms

I just completed a four part series of articles for TechTarget on data warehousing and the platforms that are used to implement data warehouses. The purpose of the series is to help inform potential data warehouse buyers of the considerations and decision points when choosing data warehousing products.

With that in mind, I am going to share links to these articles with my blog readers today. The first article, published in November 2015, defines data warehousing and puts it in context within the modern IT infrastructure that includes data lakes and big data analytics – things that did not exist for most of the lifespan of data warehousing. This article is titled The benefits of deploying a data warehouse platform and you should read it first to understand the overall theme of the series of articles.

Part 2 was published in January 2016, and it focuses on answering this question: Does your organization need a data warehouse? The article is titled Evaluating your need for a data warehouse platform.

Next I took a look at the most important features of data warehousing technologies. Choosing between different technologies and vendors can always be a challenge unless you know your requirements and focus on platforms that can best achieve the results you desire. Part 3 of this series, Evaluating the key features of data warehouse platforms, can help you do just that!

And finally, in Part 4, I focused on the actual platforms and highlighted important considerations to evaluate when choosing a data warehouse platform. Check this article out here –> Five factors to help select the right data warehouse product.

The series of articles was accompanied by multiple, high-level product descriptions of some of the leading data warehouse platforms that you might consider. They are bookmarked here for your easy consumption:

Hope you enjoy!

Posted in data warehouse, DBA | 2 Comments

Building A Basic IT Library

Printed words on dead trees are still my preferred way to read about IT topics. I am a huge fan of technical books and I house a library of hundreds of them in my home office. I believe that there is no better mechanism for digging in and learning a new subject than to wrap your hands around a book and start reading.

I know that many of you have embraced ebook readers (like the Amazon Kindle) — and so have I. Just not for tech books. I’ll read a novel or a bio on my Kindle, but I still find it hard to navigate and utilize a tech book in any other format than an actual printed book on paper.


But back to the topic at hand — and that is to discuss a core set of books that should be in any IT professional’s library. It doesn’t really matter whether you have access to these books in print or as an ebook, just that you have access to them.

So here it is, my version of a good, basic library of IT books that every computer professional should own. These books are classics in the field, or should be. I have excluded books on narrow topics like specific programming languages, operating systems, and database management systems. All of the following books are useful to anyone who is employed as a professional in the field of Information Technology.

The first book any IT professional should own is The Mythical Man-Month (Addison-Wesley Pub Co; ISBN: 858-0001065793) by Fred Brooks. Fred Brooks is best known as the father of the IBM System/360, the quintessential mainframe computer. This book contains a wealth of knowledge about software project management including the now common sense notion that adding manpower to a late software project just makes it later. The 20th anniversary edition of The Mythical Man-Month, published in 1995, contains a reprint of Brooks’ famous article “No Silver Bullet” as well as Brooks’ reflections on the twenty years since the book’s publication. If creating software is your discipline, you absolutely need to read and understand the tenets in this book.

Another essential book for technologists is Peopleware (Dorset House; ISBN: 0932633439) by Tom DeMarco and Timothy Lister. This book concentrates on the human aspect of project management and teams. If you believe that success is driven by technology more so than people, this book will change your misconceptions.

And if you ever are going to write a line of code, you really should be familiar with Donald Knuth’s multi-volume opus, The Art of Computer Programming. This multi-volume reference is certainly the definitive work on programming techniques. Knuth covers the algorithmic gamut in this set, with the first volume devoted to fundamental algorithms (like trees and linked lists), a second volume devoted to semi-numerical algorithms (e.g. dealing with polynomials and primes), a third to sorting and searching, and a final volume dealing with combinatorial algorithms. Even though a comprehensive reading and understanding of this entire set can be daunting, all good programmers should have these techniques at their disposal.

Transaction processing is at the heart of many computerized systems, but not everybody has a firm understanding of the concepts and techniques that underlie transactional systems. I think the best place to start to acquire such knowledge is with Transaction Processing: Concepts and Techniques by Jim Gray and Andreas Reuter (Morgan Kaufmann, ISBN: 1-55860-190-2). This book expounds on implementing high-performance, high-availability systems for conducting transactions. Understanding the material here will make you a better programmer or DBA because you will understand the basics of how transactions work.

A more recent classic is Nicholas Negroponte’s Being Digital (Vintage Books; ISBN: 0679762906). First published in 1995, “Being Digital” challenged the reader to rethink reality. The book weaves the history of media technology and ponders the future of the human interfaces to technology. The book is not technologically challenging, and some of the discussion is outdated, but “Being Digital” is a book that will encourage you to think differently about the world and technology’s place within it.

Information is the cornerstone of IT, so we should understand the difference between data, information, and knowledge. Information Anxiety 2 (Hayden/Que; ISBN: 978-0789724106) by Richard Saul Wurman does a great job of highlighting the angst that occurs when you feel that there is just too much information out there that you should know, but don’t. It is also worthwhile to pick up the now-out-of-print earlier edition of the book — Information Anxiety (Doubleday; ISBN 0-385-24394-4) — if you can find it. This is where Wurman first defined the term “information anxiety” as “the ever-widening gap between what we understand and what we think we should understand.” Reading either (or both) of these books on information anxiety will open your eyes and make you think twice about all of the data you receive on a daily basis.

Finally, because I am a database proponent, I think every IT professional needs a basic understanding of database technology. And the seminal text for accomplishing this feat comes from Chris Date. An Introduction to Database Systems, 8th edition is not an easy text to dive into, but it does provide the most in-depth coverage of important data and database management concepts and capabilities. If this book is too theoretical or challenging then you might want to try another of Date’s books: Database in Depth: Relational Theory for Practitioners.

Summary

If your job is in IT, then the books highlighted in this short post should be on your bookshelf. Well, actually, they should be in your hands and you should be reading them. The knowledge contained in these books will make each and every one of you a better IT professional.

Of course, I don’t mean to suggest that these are the only books you should buy. You will need others that help with your chosen career niche. For example, if you are a DBA you’ll want to own Fleming and von Halle’s Handbook of Relational Database Design (Addison-Wesley Pub Co; ISBN: 0201114348), a book on data modeling, and maybe my DBA book. And you’ll also want books about the specific DBMS you are using (DB2, Oracle, etc.).

But the bottom line is that books promote knowledge. And knowledge helps us do our jobs better. So close down that web connection and pick up a book. You’ll be glad you did.

Posted in books, DBA | Tagged | 1 Comment

Inside the Data Reading Room – New Year 2016 Edition

Regular readers of my blog know that I periodically take the time to review recent data-related books that I have been reading. This post is one of those blogs!

Today, I will take a quick look at several books that I think you will enjoy, starting with Repurposing Legacy Data: Innovative Case Studies by Jules J. Berman (Elsevier, ISBN 978-0-12-802882-7). This short book offers up a quick read and delivers on the promise of its title. It leads the reader through example case studies showing how organizations can take advantage of their “old” data. In this day and age of Big Data and Data Science, the techniques and tactics explored in this fine book are worth investigating further.

Next up is a book that tackles MDM, titled Multi-Domain Master Data Management: Advanced MDM and Data Governance in Practice by Mark Allen and Dalton Cervo (Morgan Kaufmann, ISBN 978-0-12-800835-5). Allen and Cervo offer up practical implementation guidance using hands-on examples and guidelines for ensuring productive and successful multi-domain MDM. Along the way you’ll learn how to improve your data quality, lower your maintenance costs, reduce risks, and improve data governance. There is a complimentary companion site for the book that offers additional MDM reference and training materials.

I’ve also enjoyed reading Cognitive Computing and Big Data Analytics by Judith Hurwitz, Marcia Kaufman, and Adrian Bowles (Wiley, ISBN 978-1-118-89662-4). The book does a good job of instructing readers on cognitive computing, from the basics of what it is, its various components (e.g. machine learning, natural language processing, etc.), and its growth due to the rise of big data analytics, to examples of projects showing how it works and its promise. As an IBM supporter I particularly enjoyed the chapter on IBM Watson. But really, the entire book is worthwhile, and if you have any interest at all in how computers can gain cognitive capabilities, you should pick up a copy of this book.

Finally, for today, we have a DevOps book by the title of DevOps: A Software Architect’s Perspective by Len Bass, Ingo Weber and Liming Zhu (Addison Wesley, ISBN 978-0-13-404984-7). DevOps is a somewhat new movement espousing collaboration and communication between software developers and those providing operational and administrative IT support. The word is a combination of DEVelopment and OPerations, and there is a lot of hype out there about DevOps. This book does a reasonable job of explaining the concept of DevOps (frankly, I am not one of the people who thinks it is really a monumental change) and how it can benefit your organization. If you’ve been in IT for some time, do not expect to be wowed with new information. Instead, the authors do a credible job of explaining DevOps and a lot of development/administration best practices.

That’s it for today. If you’ve read any of these books please leave a comment with your thoughts… and let me know if there are any books you’d like to see reviewed in future editions of Inside the Data Reading Room here on the Data & Technology Today blog!

Posted in analytics, Big Data, book review, books, Data Quality, legacy data, MDM | 1 Comment

Keeping Up With the DBMS


One of the more troubling trends for DBAs is keeping up with the latest version of their DBMSs. Change is a fact of life and each of the major DBMS products change quite rapidly. A typical release cycle for DBMS software is 18 to 24 months for major releases with constant bug fixes and maintenance delivered in between major releases. Indeed, keeping DBMS software up-to-date can become a full-time job.

The troubling aspect of DBMS release migration these days is that increasingly, the majority of organizations are not on the most recent version or release of the software. Think about it. The most recent version of Oracle is Database 12c, but many organizations have yet to migrate to it even though it was released in July 2013. Things are much the same for Microsoft SQL Server and IBM DB2 users, too. For example, many mainframe organizations are running DB2 10 for z/OS (and even older, unsupported versions) instead of being up-to-date on DB2 11 for z/OS (which was released in October 2013).

This happens for many reasons including the desire to let others work out the inevitable early bugs, the lack of compelling new features that would drive the need to upgrade immediately, and lack of time to adequately upgrade as often as new releases are unleashed on us.

The DBA team must develop an approach to upgrading DBMS software that conforms to the needs of their organizations and minimizes the potential for disrupting business due to outages and database unavailability.

You may have noticed that I use the terms version and release somewhat interchangeably. That is fine for a broad discussion of DBMS upgrading, but a more precise definition is warranted. Versions typically are very broad in scope, with many changes and new features. A release is typically minor, with fewer changes and not as many new features. But DBAs must meticulously build implementation plans for both.
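Whatever terminology your vendor uses, the starting point for any upgrade or maintenance plan is knowing precisely what level you are running today. The queries differ by DBMS; the Oracle and DB2 for LUW examples below are representative sketches, not an exhaustive list:

  -- Oracle
  SELECT banner FROM v$version;

  -- DB2 for Linux, UNIX, and Windows
  SELECT service_level, fixpack_num
  FROM   TABLE(sysproc.env_get_inst_info()) AS info;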

In many cases, upgrading to a new DBMS version can be treated as a special case of a new installation. All of the procedures required of a new installation apply to an upgrade: you must plan for appropriate resources, you need to reconsider all system parameters, and you need to ensure that all supporting software is appropriately connected. But there is another serious issue that must be planned for, and that is existing users and applications. An upgrade needs to be planned so as to cause as little disruption to the existing users as possible. Therefore, upgrading can be a tricky and difficult task.

Keeping the DBMS running and up-to-date without incurring significant application outages requires an ongoing effort that will consume many DBA cycles. The approach undertaken must conform to the needs of the organization, while at the same time minimizing business impact and avoiding the need to change applications.

Upgrading to a new DBMS release offers both rewards and risks. By moving to a newer DBMS release developers will be able to use the new features and functionality delivered in the new release. For purchased applications, you need to be cognizant of the requirements of application releases on specific DBMS versions. Additionally, new DBMS releases tend to deliver enhanced performance and availability features that can optimize existing applications. Often the DBMS vendor will provide better support and respond to problems faster for a new release of their software. DBMS vendors are loath to allow bad publicity to creep into the press about bugs in a new and heavily promoted version of their products. Furthermore, over time, DBMS vendors will eliminate support for older versions and DBAs must be aware of the support timeline for all DBMSs they manage.

An effective DBMS upgrade strategy will balance the benefits against the risks of upgrading to arrive at the best timeline for migrating to a new DBMS version or release. An upgrade to the DBMS almost always involves some level of disruption to business operations. At a minimum, as the DBMS is being upgraded databases will not be available. This can result in downtime and lost business opportunities if the DBMS upgrade has to occur during normal business hours (or if there is no planned downtime). Other disruptions can occur including the possibility of having to convert database structures, the possibility that previously supported features were removed from the new release (thereby causing application errors), and delays to application implementation timelines.

The cost of an upgrade can be a significant barrier to DBMS release migration. First of all, the cost of the new version must be planned for (price increases for a new DBMS version can amount to as much as 10 to 25 percent). You also must factor in the costs of planning, installing, testing, and deploying not just the DBMS but also any applications using databases. Finally, be sure to include the cost of any new resources (memory, storage, additional CPUs, etc.) required by the DBMS to use the new features delivered by the new DBMS version.

Also, in many cases the performance benefits and improvements implemented in a new DBMS release require the DBA or programmers to apply invasive changes. For example, if the new version increases the maximum size for a database object, the DBA may have to drop and re-create that object to take advantage of the new maximum.

Another potential risk is the possibility that supporting software products may lack immediate support for a new DBMS release. Supporting software includes the operating system, transaction processors, message queues, purchased applications, DBA tools, development tools, and query and reporting software.

And we haven’t even touched on applying maintenance to the DBMS. Maintenance and fixpacks occur frequently and can consume a LOT of DBA time and effort. Some companies have even begun to contract with DBA services companies to handle their maintenance and fixpack planning and implementation.

The bottom line is that keeping up with new DBMS releases and functionality has become a very significant component of the DBA’s job.

Posted in change management, DBMS, fixpacks, maintenance | 1 Comment

Data Technology Today – 2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 54,000 times in 2015. If it were a concert at Sydney Opera House, it would take about 20 sold-out performances for that many people to see it.

Click here to see the complete report.

Posted in DBA | 1 Comment

Happy Holidays 2015

Just a short post to end the year wishing all of my readers everywhere a very happy holiday season – no matter which holidays you celebrate, I hope they bring you joy, contentment, and let you recharge for an even better year next year!


So enjoy the holidays and come back in January when we continue to explore the world of data and database technology…

Posted in DBA | 1 Comment

Using SQL to Count Characters


If you write SQL on a regular basis, it is very important to know the functions that are supported by your DBMS. In general, there are three types of built-in functions that can be used to transform data in your tables:

  • Aggregate functions, sometimes referred to as column functions, compute a single value for a designated column or expression from a group of rows.
  • Scalar functions are applied to a column or expression and operate on a single value.
  • Table functions can be specified only in the FROM clause of a query and return results resembling a table.

Understanding the built-in functions available to you can make many coding tasks much simpler. Many times, a built-in function can be used instead of coding your own application program to perform the same task. You can gain a significant advantage using built-in functions because you can be sure they will perform the correct tasks with no bugs… as opposed to your own code, which requires time to write, stringent debugging, and in-depth testing. This is time you can better spend on developing application-specific functionality.
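As a quick, hedged illustration of the first two categories (the EMPLOYEE table and its columns are invented for the example; table functions are too product-specific to show generically):

  -- Aggregate (column) function: one value per group of rows
  SELECT dept_id, AVG(salary) AS avg_salary
  FROM   employee
  GROUP  BY dept_id;

  -- Scalar function: applied to each individual value
  SELECT last_name, UPPER(last_name) AS last_name_upper
  FROM   employee;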

At any rate, I was recently asked how to return a count of specific characters in a text string column. For example, given a text string, return a count of the number of commas in the string.

This can be done using a combination of two scalar functions, LENGTH and REPLACE, as shown here:

SELECT LENGTH(TEXT_COLUMN) - LENGTH(REPLACE(TEXT_COLUMN, ',', ''))

The first LENGTH function simply returns the length of the text string. The second LENGTH function in the expression returns the length of the text string after replacing the target character (in this case a comma) with an empty string – that is, after removing every comma from the string.

So, let’s use a string literal to show a concrete example:

SELECT LENGTH('A,B,C,D') - LENGTH(REPLACE('A,B,C,D', ',', ''))

This translates into 7 – 4… or 3. And, indeed, there are three commas in the string.
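The same expression can be applied row by row to a real column, and adding one to the comma count yields the number of items in a comma-delimited list. Here is a sketch assuming a hypothetical ORDERS table with a comma-delimited TAGS column (the +1 assumes the column is never empty):

  SELECT order_id,
         LENGTH(tags) - LENGTH(REPLACE(tags, ',', ''))     AS comma_count,
         LENGTH(tags) - LENGTH(REPLACE(tags, ',', '')) + 1 AS tag_count
  FROM   orders;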

When confronted with a problem like this it is usually a good idea to review the list of built-in SQL functions to see if you can accomplish your quest using SQL alone.

Posted in DBA, functions, SQL | 1 Comment

New IT Salary Details from TechTarget

TechTarget conducts an annual IT Salary and Careers Survey regarding salaries for IT technicians and executives, and their most recent salary survey for 2015 shows some heartening results for those of us who toil in the IT ranks. The survey was conducted from June to September 2015 and there were 1,783 U.S. respondents.

The average base salary for all respondents, regardless of position, came in at $100,333, and the average total compensation (salary plus bonuses) was about 10 percent higher at $110,724. So the average base salary of IT professionals is a six-figure number, which is a whole lot better than in many other industries these days.

What about the details? Well, I leave it to you to click over to the detailed article on the TechTarget site… but since many of the readers of this blog are DBAs, here are the TechTarget results for database administrators:

  • DBA average base salary 2015: $102,437
  • DBA average total compensation (salary+bonus) 2015: $108,661

Of course, as with all salary details, the exact salary numbers will vary by geography and experience level.

Posted in DBA, salary | 1 Comment