Information vs. Misinformation in Software Development Teams

In a world where everything is represented in numbers, how should we measure software metrics? How can we tell information from misinformation or disinformation? How can we approach these questions scientifically? In this article I’ll explain the meaning of the words data, information, knowledge, misinformation, and more; explain the difference between correlation and causation and why it matters; and finally, talk about how we can take a scientific approach to extracting knowledge in a software development team.

Definitions:

Data: In common usage, data is a collection of discrete or continuous values that convey information, describing quantity, quality, facts, statistics, or other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. Data is raw, unorganized facts that need to be processed. Data can be simple and seemingly random and useless until it is organized, e.g. the number of pull requests of developer X in October, November, and December of 2023.

Information: When data is processed, organized, structured, or presented in a given context so as to make it useful, it is called information. For example, the mean, min, and max number of pull requests made by developer X over a three-month period.

Knowledge: Knowledge means the familiarity and awareness of a person, place, events, ideas, issues, ways of doing things, or anything else, gathered through learning, perceiving, or discovering. It is the state of knowing something with cognizance, through the understanding of concepts, study, and experience. For example, the trend in the average number of pull requests across the developers of a certain team or company. Knowledge and information can be used interchangeably in many cases.

Misinformation: “mis-” as in mistaken. Misinformation is “false information that is spread, regardless of intent to mislead,” with extra emphasis on the mistake being unintentional. For example: Mr. Z mistakenly says developer X submits, on average, 20 pull requests per month when the actual number is 22. That’s misinformation.

Disinformation: Disinformation is “deliberately misleading or biased information; manipulated narrative or facts; propaganda,” with extra emphasis on the mistake being intentional. I have also heard disinformation used to refer to something that looks like information but isn’t informative, e.g. the many videos on social media that are neither entertaining nor, despite appearances, actually informative.
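
To make the difference between data and information concrete, here is a minimal Python sketch (the pull-request counts are made-up numbers):

```python
# Raw data: monthly pull-request counts for developer X
# (hypothetical numbers).
pull_requests = {"Oct 2023": 18, "Nov 2023": 25, "Dec 2023": 23}

counts = list(pull_requests.values())

# Information: the same data, processed into summary statistics.
mean_prs = sum(counts) / len(counts)
print(f"mean: {mean_prs:.1f}, min: {min(counts)}, max: {max(counts)}")
```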

How can I tell information from misinformation or disinformation?

You can’t easily! There’s no straightforward answer to this. If it’s a statement based on other sources, you need to check the original source, or check multiple credible sources. If it’s new knowledge, you should apply logic: check whether it’s too good to be true, check whether it simply confirms your existing biases, and, if possible, take a scientific approach to the information at hand.

In the context of software development teams, I have rarely heard of disinformation. Besides some office gossip, and some companies using strange excuses during layoffs to avoid paying employees’ severance packages, disinformation is rare at the workplace. After all, software development companies are team-oriented businesses, and that helps with more straightforward and transparent communication. Personally, I believe people are good by nature, so that also helps, I guess.

What about “misinformation” at software development companies? We measure many things in a software company, e.g. performance, reliability, uptime, downtime, storage space, experience, productivity, infrastructure costs, sales costs, marketing costs, etc.; you get the idea. And of course it’s possible to make a mistake with any of these topics (usually not with the money-related ones, as that would directly translate to prison time). For every other topic, a scientific approach helps us make responsible decisions.

Why does it matter?

Obviously, most arguments and discussions in a software company affect someone, somewhere. Being able to tell information from misinformation matters because information is the basis of decision making. Misinformation can lead to wrong decisions that cost us money directly (by allocating resources, whether money or people, to the wrong problems) or indirectly (by hurting the user experience). It can also hurt the employee experience, by putting too much workload on people or affecting their mental health, which makes it ethically wrong as well.

Scientific Approach to Extracting Knowledge in Software Development Teams

We have all heard of “statistics” and “data science,” of statisticians and “data scientists.” The problem is that when we are presented with information, we cannot always run to or depend on data scientists. So we should be able to extract and evaluate information ourselves. Let’s imagine we want to measure the reliability of our software. Before we talk about reliability, we need to understand some other topics.

Correlation vs. Causation

IMO it is extremely important to understand this topic clearly, even though it can be somewhat confusing.

Correlation means there is a statistical association between variables. When one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables. These variables change together: they covary. But this covariation isn’t necessarily due to a direct or indirect causal link.

Causation means that a change in one variable causes a change in another variable. There is a cause-and-effect relationship between the variables: the two variables are correlated with each other, and there is also a causal link between them.

Remember that correlation doesn’t imply causation, but causation always implies correlation. But why doesn’t correlation imply causation? Because the relationship between the variables could be due to a third variable (the third-variable problem) or simply a coincidence. Correlation vs. causation is a big and confusing topic by itself; if you are interested in the definitions, more examples, and how to determine causation reliably, I suggest reading Correlation vs. Causation: What’s the Difference? from Coursera.

An example from the aforementioned article: If you were to collect data on the sales of “ice cream cones” and “swimming pools” throughout the year, you would likely find a strong positive correlation between the two as sales of both increase during the summer months. If you make the mistake of assuming correlation implies causation, you would incorrectly claim that an increase in ice cream cone sales causes people to buy swimming pools. However, this isn’t the case since you can attribute the increase in both to another variable—likely the warmer weather people experience during the summer. So although a correlation is present, you can’t support causation.
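
Here is a minimal Python sketch of that effect, with made-up numbers: both sales series are generated from temperature alone, yet they end up strongly correlated with each other:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# The hypothetical "third variable": monthly average temperature.
temperature = np.array([2, 4, 9, 14, 19, 24, 27, 26, 21, 15, 8, 3], dtype=float)

# Both sales series are driven by temperature (plus noise),
# not by each other.
ice_cream_sales = 50 * temperature + rng.normal(0, 40, size=12)
pool_sales = 3 * temperature + rng.normal(0, 5, size=12)

# The Pearson correlation between the two series comes out strongly
# positive, even though neither one causes the other.
r = np.corrcoef(ice_cream_sales, pool_sales)[0, 1]
print(f"correlation(ice cream, pools) = {r:.2f}")
```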

Feature Selection

There’s a lot of data we can gather about our software. In data processing, each column of data in a dataset is called a “feature.” Among all the data you have access to, you need to select the correct subset; otherwise you’ll end up with misinformation. There are different methods of feature selection, but they are beyond the scope of this article.

An exaggerated example that I usually use: if you wanted to measure the performance of a car, you wouldn’t combine its fuel consumption per 100 kilometers with its total sales from last year! I know, it sounds absurd, but now you get the idea if you hadn’t already. A topic to think about as an exercise would be determining how productive a developer is as an individual. What data would you collect? What features would you select? Take “number of pull requests” as a feature: does it have a causal relationship with productivity, or merely a correlation? You need to understand your features clearly and engineer them carefully. In a software development team, features will often correlate very closely, but causation is not easy to prove. A sketch of one simple way to inspect candidate features follows.
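
Just to make the idea concrete (a proper treatment of feature-selection methods is, again, beyond this article), here is a very simple “filter”-style sketch, with hypothetical feature names and numbers:

```python
import numpy as np

# Hypothetical measurements for six developers, plus a hypothetical
# "productivity" score we want to understand.
features = {
    "pull_requests": np.array([20, 35, 15, 40, 25, 30], dtype=float),
    "lines_changed": np.array([900, 2500, 400, 3000, 1100, 1800], dtype=float),
    "meeting_hours": np.array([30, 10, 35, 8, 25, 15], dtype=float),
}
productivity = np.array([3.1, 4.5, 2.8, 4.8, 3.5, 4.0])

# Filter-style check: rank candidate features by correlation with the
# target. A high value is a reason to look closer -- it is NOT proof
# of a causal relationship.
for name, values in features.items():
    r = np.corrcoef(values, productivity)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```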

Data Collection

You might think that gathering data after feature selection sounds a bit strange, as it’s usually done the other way around in data science projects. But in a company, this data does not come for free; it needs to be gathered and stored by different tools and people, and the process takes time and money. Thinking in advance about the data we need for our specific purpose helps us be more efficient. The downside is that if you later need something more, you have to repeat the process for the new features.

In our imaginary example, we can think of the following raw data: number of failures (NF), total operating time (TOT), total down time (TDT), total defects, and the size of the software.

Normalization

Convert each metric to a common scale so they can be compared and combined. For example, you might normalize metrics to a scale from 0 (worst) to 1 (best), where the worst and best values are defined based on historical data, industry standards, or target goals. Remember that normalization has its own pitfalls; for example, two independent features might, after normalization, exhibit the “spurious correlation of ratios” problem.
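
A minimal sketch of such a 0-to-1 (min-max) normalization, with hypothetical bounds:

```python
def min_max_normalize(value, worst, best):
    """Map a metric onto a 0 (worst) to 1 (best) scale.

    `worst` and `best` are chosen from historical data, industry
    standards, or target goals; they are inputs, not magic numbers.
    """
    normalized = (value - worst) / (best - worst)
    # Clamp, in case an observed value falls outside the chosen bounds.
    return max(0.0, min(1.0, normalized))

# Example: availability of 99.2%, where 95% is the worst acceptable
# value and 99.95% is the target (hypothetical bounds).
print(min_max_normalize(99.2, worst=95.0, best=99.95))  # ~0.85
```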

Extracting Information

Now that we have selected our features and collected our data, we can process the data into information. Here are some example calculations for reliability:

  • Mean Time Between Failures (MTBF): MTBF is a measure of the average time between system failures. The formula is: MTBF = TOT / NF
  • Mean Time to Repair (MTTR): MTTR measures the average time it takes to repair a system after a failure. The formula is: MTTR = TDT / NF
  • Availability: Availability is the percentage of time that a system is operational. The formula is: Availability = ((TOT - TDT) / TOT) * 100
  • Failure Rate: The failure rate is a measure of how often a system fails. The formula is: Failure Rate = 1 / MTBF
  • Defect Density: This measures the number of known defects divided by the size of the software (usually in lines of code or function points). Defect Density = Total Defects / Size of Software
  • Probability of Failure on Demand (PFD): For safety-critical systems, this is a measure of the likelihood that the system will fail when a demand is placed upon it: PFD = Number of failed demands / Total demands

Which one is better? There’s no single answer. We can put all of these on a dashboard as indicators of reliability, and based on the trend of each indicator, conclude whether or not we need to take action.
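
As a minimal sketch with made-up numbers, the formulas above translate directly into code:

```python
# Hypothetical raw data collected over one quarter.
NF = 12             # number of failures
TOT = 2160.0        # total operating time, in hours
TDT = 18.0          # total down time, in hours
total_defects = 45
size_kloc = 120.0   # software size, in thousands of lines of code

mtbf = TOT / NF                              # Mean Time Between Failures
mttr = TDT / NF                              # Mean Time to Repair
availability = (TOT - TDT) / TOT * 100       # % of time operational
failure_rate = 1 / mtbf                      # failures per hour
defect_density = total_defects / size_kloc   # defects per KLOC

print(f"MTBF: {mtbf:.0f} h, MTTR: {mttr:.1f} h, "
      f"availability: {availability:.2f}%, "
      f"failure rate: {failure_rate:.4f}/h, "
      f"defect density: {defect_density:.2f}/KLOC")
```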

Building a composite index

Creating a single index to represent the knowledge in question (in this case, software reliability) is theoretically possible, but in practice it can be quite challenging due to the multidimensional nature of the concept. Different metrics capture different aspects of your problem (in our case, reliability), and the relevance of each aspect can vary based on context and purpose. So it is not recommended. Instead, you can look at the metrics separately and extract patterns, trends, and so on from them.

But if you insist on doing so, you should first assign a weight to each metric based on its importance (which has its own pitfalls). The weights should sum up to 1. You can then multiply each weight by its metric and sum the results into a composite index. (Just to emphasize: this is generally not recommended.)
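
A minimal sketch of such a weighted composite, with hypothetical weights and values:

```python
# Hypothetical normalized metrics (0 = worst, 1 = best) and weights.
normalized = {"availability": 0.92, "mtbf": 0.75, "mttr": 0.60}
weights = {"availability": 0.5, "mtbf": 0.3, "mttr": 0.2}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"

# Weighted sum: one convenient number, at the cost of hiding
# which underlying metric actually moved.
composite = sum(weights[name] * normalized[name] for name in normalized)
print(f"composite reliability index: {composite:.2f}")  # roughly 0.80
```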

Conclusion

In this article, I talked about information vs. misinformation and the difference between correlation and causation, explained why it matters to tell information from misinformation, and finally showed how you can take a scientific approach to data in a software development team. I hope you enjoyed reading this article!
