Wednesday 19 November 2014

Make Concepts Explicit when Fixing Bugs

Dear Junior

To find implicit concepts and make them explicit is a very powerful way to improve code. But, sometimes it is hard to judge beforehand which concept is important enough to make explicit. We try to make good judgements, but sometimes we miss.

But there is no reason to despair; we can take Time as our helpful aid. Instead of making everything perfect from the start, we continuously perfect the code whenever we find a reason to. Two of the topmost reasons are a bug appearing and further development of the code.

Let me for now focus on the case when a bug appears. If the code does not behave the way we intended it to, then there is most often a place in the code that was too convoluted. In that convoluted state, subtle misunderstandings and simple mistakes can hide. The solution is to clarify the code until it is obvious what the mistake consists of.

Things get clearer with examples, so let me show what I mean with a very minimal one: the story of a bug and its fix.

In real life, more or less a decade ago, I worked on a project where we did a batched import of records coming from an external source. The code in the class ImportCron looked roughly like the following.

public class ImportCron {

    private RecordImporter importer;
    private Logger logger;

    void runImport()  {
        List records = importer.listWaitingRecords();
        logger.log(Level.INFO, "Importing records: " + records.size());
        for ( Object rec : records) // imports each of the records in turn
   …
    }

Now, the import was run once a minute, but most of the time there were no records waiting. The problem was that the import log became filled with rows saying "Importing records: 0", so we had to grep for the non-zero lines each time we wanted to see what was actually happening.

One of those days I got tired of this, fired up the editor, and wrapped the logging in an if statement to filter out all those zero-import writes.

    void runImport()  {
        List records = importer.listWaitingRecords();
        if (records.size() == 0)
            logger.log(Level.INFO, "Importing records: " + records.size());
        for ( Object rec : records) // imports each of the records in turn
   …
    }

A few days later we released during lunch, which was the best time for our users. The deploy went smoothly, the apps started up as expected, and soon it was time for the first import "tick".

Within a minute, import was run the first time, and the log line read:
<timestamp> Importing records: 0
While standing confused, a minute went by, and the log line read:
<timestamp> Importing records: 0
Another minute went by … and no log line.
While firing up my editor to check the code, another minute went by and a third log line appeared:
<timestamp> Importing records: 0
A colleague of mine looking in the database calmly reported "We just imported three records".
It only took me a split second to realise my mistake once I had the code in front of me.

    void runImport()  {
        List records = importer.listWaitingRecords();
        if (records.size() == 0)
            logger.log(Level.INFO, "Importing records: " + records.size());
        for ( Object rec : records) // imports each of the records in turn

    }

Obviously my spinal reflex for coding "check for zero" made me write the boolean expression the wrong way around. By the way, did you catch that mistake at first glance when I first showed the code? Confirmation bias is a nasty thing.

Now, I must say this kind of bug is pretty uncommon - bugs that are typos or simply mispunched keys. Most bugs, in my opinion, rather come from pieces of code subtly misunderstanding each other.

Well, there are at least two ways of fixing this code. The obvious, and fastest, would be to quickly change "==" to "!=". However, humbled by the mistake I had made, I realised that that kind of hasty coding was what got me into trouble in the first place.

One of my coding mantras has long been "Code should mean something, not just do something". So, a better way out would be to make the code more meaning-ful. From Eric Evans I learned the phrase "Make implicit concepts explicit", which says the same thing in this context, but gives better guidance on the direction forward.

I took on the challenge of fixing the bug by finding what implicit concepts had been missed, and making them explicit until it was blatantly obvious that the code was wrong. An added benefit would be being able to write a test proving the code wrong.

Extracting the boolean condition to a method of its own would force me to spell out the meaning of that piece of code.

    void runImport()  {
        List records = importer.listWaitingRecords();
        if (containsRecords(records))
            logger.log(Level.INFO, "Importing records: " + records.size());
        for ( Object rec : records) // imports each of the records in turn

    }

    static boolean containsRecords(List records) {
        return records.size() == 0;
    }

Granted, this is more code than I started with - but I think the code is more "to the point" (a phrase inspired by Rickard Öberg, another of the great programmers).

At least this enabled me to write a test:

    @Test
    public void shouldConsiderEmptyListNotContainingRecords() {
       Assert.assertFalse(ImportCron.containsRecords(Collections.EMPTY_LIST));
    }

Well, at least this is what the test would look like today - at the time JUnit looked slightly different.

Anyway, now I have a failing test. The code claims that the empty list contains records, which is simply false - as the failing test points out. We can now safely update the code to fix the bug.

    static boolean containsRecords(List records) {
        return records.size() != 0;
    }

Upon which the test switched to green as expected.
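
For good measure, a sibling test could also pin down the positive case. This is just a sketch along the same lines - not something I actually wrote at the time - assuming it sits next to the previous test:

    @Test
    public void shouldConsiderNonEmptyListContainingRecords() {
        // A list with a single (arbitrary) element should count as containing records
        Assert.assertTrue(ImportCron.containsRecords(Collections.singletonList(new Object())));
    }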

During this "refactor -> put under test -> fix" cycle something interesting has happened. The concept "import list contains records", which hitherto had been implicitly represented by the technical expression "records.size() != 0", has now been made explicit and given the name "containsRecords".

It might be claimed that the main benefit in this example was derived from the drive to put code under test before making a change. Undeniably, that is a point, but I think it only tells half the tale. 

Simply putting things under test can be done in a myriad of ways, and I have seen several very technology-based efforts. I do not think those efforts have paid off handsomely. Mostly the code gets cut along technical lines so that tests can be pushed into the gaps. But the code does not get more comprehensible over time.

Focusing on "make implicit concepts explicit" has been a more productive course for me when working with code.

Yours

   Dan

PS My team-mates did not force me to skip lunch to fix the bug. We went out together and, when we came back satisfied and rested, I sat down and fixed the code. We released the fix during lunch-time the day after.






Monday 17 November 2014

Make Implicit Concepts Explicit in Code

Dear Junior

In our letters on software development we have touched upon the idea several times, but I think it might be worth spelling it out - the idea of making implicit concepts explicit; something I see as one of the central messages of Domain Driven Design applied to coding.

Let me pick that phrase apart for a moment. When coding we have a lot of concepts in mind, e g when writing code that handles people I might have a class named Person. In this case the concept of person has an explicit representation in the code - i e an explicit concept.

A person might have a birth date, represented in code as a data-field Date dayofbirth in the Person class; another example of an explicit concept.

We might have business logic restrictions that are based on the age of the person, e g you have to be at least fifteen years old to access some content. So there will be code calculating this somewhere.

    void contentRequest() {
        ... 
        boolean access = ((new Date().getTime() - customer.dayofbirth.getTime()) / msPerYear) >= 15;

Again we have a concept represented in code, this time age. However, this time the representation is not explicit - there is nothing in the code saying "age". Age is an implicit concept.

There is nothing strange about having implicit concepts. In the short code snippet above we have several: the current point in time, the point in time of the customer's birth, and the age limit for content are just three obvious ones. This is not a problem. The programmer reading the code will intuitively re-construct the relevant concepts.

However, the design guideline from Domain Driven Design advises us to keep an eye open for important concepts - and if we find them represented implicitly, then make those implicit concepts explicit.

Let us look at the code snippet again.

        boolean access = ((new Date().getTime() - customer.dayofbirth.getTime()) / msPerYear) >= 15;

Of the implicit concepts dwelling in this line, which of them seem important enough to make explicit?

The concept of "current point of time"? Probably not.

The concept of "milliseconds since birth"? Nah, not that either.

The concept of "milliseconds since birth expressed as years, rounded downwards"? Hey! I know this! We already have a word for it - we call it "age"! And we talk about it all the time!

Well, that sounds important enough. Let us start with giving that value a name. 

    void contentRequest() {
        ... 
        long age = (new Date().getTime() - customer.dayofbirth.getTime()) / msPerYear;
        boolean access = age >= 15;

By the way: this is where I really like the IntelliJ shortcuts "cmd-w" to widen the selection until the expression is selected, then "cmd-alt-v" for the refactoring to extract the selection as a variable - naming it "age" in the pop-up. It literally takes less than 10 seconds.

Now, we actually have made the implicit concept age into an explicit concept. Mission completed.

Well, not completely. We need to find the concept a good home where it can lead a good life. Right now it is stranded in the middle of some access computation. At least we can give it a method of its own.

Applying another of my favourite refactorings, "Replace Temp with Query", yields this code.

    void contentRequest() {
        ... 
        boolean access = customerAge(customer) >= 15;

    private long customerAge(Person customer) {
        return (new Date().getTime() - customer.dayofbirth.getTime()) / msPerYear;
    }

Better, but it can be improved to make the concept clearer. If we talk about the "age of the customer" it will vary from time to time, and that might cause confusion: do we mean the age when the customer requested access to the content, the age when the content was first accessed, when it was last accessed, or the age right now? Better to clarify.

Of course we could rename the method to "customerAgeNow". However, I do not feel comfortable having my concepts depend implicitly on external things like the system clock. I prefer to make those dependencies explicit. So we make the time-in-question a parameter. 

    void contentRequest() {
        ... 
        boolean access = customerAgeAt(customer, new Date()) >= 15;

    private long customerAgeAt(Person customer, Date timepoint) {
        return (timepoint.getTime() - customer.dayofbirth.getTime()) / msPerYear;
    }
 
This design also improves testability drastically. The tests can now use explicit test-data, and need not rely on checking, or changing, the system clock.
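
To illustrate, such a test could feed the method explicit dates instead of touching the system clock. The sketch below is mine and not part of the original code: it assumes customerAgeAt is made static and package-visible (just as containsRecords was in the previous letter), that its enclosing class is called ContentAccess (the snippets above never name it), that the test lives in the same package so it can set the dayofbirth field directly, and that msPerYear is roughly the number of milliseconds in a year.

import static org.junit.Assert.assertEquals;

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

import org.junit.Test;

public class CustomerAgeTest {

    @Test
    public void shouldComputeCustomerAgeAtAnExplicitPointInTime() {
        // Explicit test data: born 15 June 2000, asking for the age on 1 January 2014
        Person customer = new Person();
        customer.dayofbirth = new GregorianCalendar(2000, Calendar.JUNE, 15).getTime();
        Date timepoint = new GregorianCalendar(2014, Calendar.JANUARY, 1).getTime();

        // 13 whole years have passed - regardless of what the system clock says right now
        assertEquals(13, ContentAccess.customerAgeAt(customer, timepoint));
    }
}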

Kudos once again to the lovely refactoring support of modern IDEs. Another less-than-10-seconds-refactoring: two cmd-W to select "new Date()", cmd-alt-P to extract as parameter, change name to "timepoint" in pop-up. Enter. Done.

By now it is pretty obvious that the method "customerAgeAt" suffers from feature envy. The most relevant data it operates upon belongs to the customer, but the method resides somewhere else. We should consider moving it.

Should we move it, the Person class would get a method "ageAt" (taking the liberty to rename it slightly). That method certainly makes sense in this context, but does it in other contexts? As we do not have the rest of the codebase at hand, we will just have to pretend that "ageAt" makes sense in other contexts as well - and might actually be useful there.

This is really one of my favourite refactorings: Move Method. Again extremely smooth thanks to modern IDEs. Literally two keystrokes (F6, Enter - in IntelliJ) for the move, and a few for renaming.

    void contentRequest() {
        // ...
        boolean access = customer.ageAt(new Date()) >= 15;

class Person {
    Date dayofbirth;
    // ...
    long ageAt(Date timepoint) {
        return (timepoint.getTime() - dayofbirth.getTime()) / TimeUtil.msPerYear;
    }
}

You might not believe me, but I remember the days of yore when you actually had to do all the involved steps manually: cut-n-paste the method body and header, change the list of parameters, update the code to use the fields of "this" instead of the argument, and change all client calls (of which there might be several).

Now we have finally ended up with a version I feel comfortable with. The concept "person" now has a conceptual attribute "age at" which has an explicit representation in code. In code we have "enhanced" the language of what we can talk about directly. So, when we discuss the age of a customer with business people, it is likely that we consistently mean the same thing - even if we can, for sure, still misunderstand each other.

As a side effect the code checking the access has become a lot clearer.

        boolean access = customer.ageAt(new Date()) >= 15;

This is a line of code that can be shown to someone from the business side and explained by reading it out: "access is granted if the customer's age at 'now' is at least fifteen years". Given that support, they can see it for themselves - something that really builds trust.

The time invested was not huge. Coding-wise all refactorings took less than a minute together. The time to slowly realise that "age" is an important concept in this particular domain is not counted. However, that insight will have to dawn on the programmer sooner or later, and when that happens, that extra minute is well invested.

Making concepts explicit does not only apply to code; it can be fruitfully applied to other areas as well, e g capturing and writing requirements/specifications.

As we all stand on the shoulders of giants, I must also give credit where credit is due. The phrase "Make Implicit Concepts Explicit" I picked up from Eric Evans, the thought leader of Domain Driven Design.

In conclusion: making implicit concepts explicit helps us, over time, to close the gap between the code and our understanding of the business domain. It is not uncommon to find bugs or crucial misunderstandings in the process, which can then be addressed in a proactive manner instead of popping up as nasty surprises later on.

Yours 

   Dan





Wednesday 12 November 2014

Agile Projects should have Effect Targets

Dear Junior

Measuring projects and setting targets for them is a tricky business. And, it does not get easier when agile projects enter the scene. This is not really because agile projects are strange per se, but because they are different from non-agile projects.

Setting targets and measuring projects is also an important business. I have seen several agile initiatives fail. Most of them have failed because they did not succeed in setting goals for projects and monitoring their progress. And if you cannot do that, you quickly lose the confidence and support of upper management. Agile initiative terminated, or left to dwindle away. End of story.

However, it need not be so. Agile projects can have measurable targets, and they can be monitored - you just have to do it right.

There is some bad news and some good news.

The Well-Established Chaos Rod for Measuring Success

The bad news is that agile projects cannot be measured using the de-facto established standard rod for project success.

The standard rod for measuring projects is "on-time and on-budget, with all features and functions as initially specified", as used by e g Standish Group in their much-cited Chaos Reports. Let us call this the "Chaos rod". Most organisations use some version of the Chaos rod for measuring their projects.

As we have discussed earlier it is pretty obvious that "all features implemented" is not a good way to measure "project success" - not for any project. Nevertheless, this is the standard rod.

Damned if you do, damned if you don't

Of course agile projects will fail if measured using the Chaos rod. The reason is simple - agile projects do not manage to keep their hands from fiddling with the original specification. Agile projects remove stuff, change stuff, add stuff - agile projects are in a constant state of scope-creep.

This is no coincidence - it is how agile projects are designed.

Think of it this way: If we learn something during the run of the project - shall we let that insight affect the plan of what we intended to do? Or shall we ignore that insight, sticking to the original plan even though it is inferior? Of course we will adapt! Anything else would be ridiculous.

An agile project anticipates that such insights will emerge, so its processes are designed to harness and leverage those insights: demos, retrospectives, re-prioritisation of the product backlog etc. These practices are all aimed at constantly refining and redefining the scope. But then we have deviated from the original specification, "all features and functions as initially specified".

Thus, any agile project longer than a sprint will fail - by definition. That is, if you use the Chaos rod for measuring success.

To put it bluntly, if you measure an agile project using the Chaos rod, it will fail. Either the agile process will fail, or the project as defined will fail.

Either the project adapts to its measuring rod and blindly follows the "functions as specified", throwing aside whatever insights are gained. Then the "project" will succeed, but "agile" will fail.

Or the project will work according to its agile-minded processes, and then the result of the project will certainly not be "all features and functions as initially specified". Then "agile" will succeed, but the "project" will fail. I e the project will fail as measured by the Chaos rod.

Damned if you do, damned if you don't.

A New (or Old) Hope

The good news is that the Chaos rod is not the only way to measure projects. As we have discussed earlier, there have always been two ways to measure projects: by feature list (the Chaos rod) or by measuring business effect, what we can call the Effect rod.

Let us take our earlier example with an on-line book store. They had an idea that recommendations of the type others-have-bought would increase their sales, so they set aside some money for that project.

Using the Chaos rod they would have created a feature list around others-have-bought recommendations. The project would later be evaluated on how well it implemented the list. But let us look at a better idea.

Using the Effect rod they might set a target that customers will buy 0.35 more books per checkout on average. The Effect rod is used to state the value of a project. Those 0.35 books can probably be converted to money, which makes the project worth the effort.

The project might start out with the idea that others-have-bought recommendations would do the trick. After the first small stories are deployed to production and put into the hands of customers, the team learns that some kind of categorisation would be helpful. To facilitate this they implement a simplified "search similar" as one of their features.

Suddenly the number of books in the checkout carts rises to a level 0.4 above the pre-project baseline. And the level is sustained; it was not just random noise - the change is statistically significant.

The project has fulfilled its target according to the Effect rod, and is declared a success.

Measured using the Chaos rod - "on time, on budget, and to specification" - the project would have been deemed a failure, because it did not deliver the initially specified others-have-bought recommendations.

So, agile projects can be measured. You just have to use a measurement that is well suited - and, to be honest, better anyway.

A sad side-note is that I know of several projects which have worked in an agile manner, and delivered enormous business benefit - but in the project report the project lead has had to apologise repeatedly for not meeting the project target as defined in the project specification - a feature list.

Granted, to measure projects on business effect is not a new idea in any way at all. The idea was certainly there before the Agile Manifesto was written around the turn of the millennium. What is new is that this way of measuring is essential to provide rigour to agile projects.

For Agile to succeed at larger scale, we certainly need lots of practices around agile-minded processes. Measuring agile projects on effect is certainly one of them.

The way to measure agile projects is to set targets for business effect.

Yours

   Dan

PS Interestingly enough, there is a sub-category "agile projects" in the later Chaos Reports, but the rate of success is not 0%, it is around 40%. I wonder what is going on here.

PPS Within the agile community, the practice of projects is debated - at least in the sense of the common description "temporary organisation with limited time, resources, and ambition". So, it would probably be better to use some other term, e g talking about "initiatives". However, for convenience, I have kept the often-used and familiar term "project". Check out the twitter hashtag #NoProjects.



Thursday 16 October 2014

All Features Implemented does not equal Project Success

Dear Junior

Setting goals for agile projects is trickier than setting goals for waterfall projects. As I mentioned in an earlier letter, for waterfall projects it is possible to set the goal in the form of a feature list to be delivered. This approach has several drawbacks - it does not guide the thousands of micro-decisions that are made, it gives no sense of purpose to support the drive or inner motivation, and it gives no guidance for evaluating whether the money, time, and effort were well spent.

However, despite all these problems, and although the approach is not advisable, it is still possible. The project management mindset and tools for waterfall projects are applicable for reporting project status and following up vis-à-vis a feature list. It is also possible to evaluate success by checking whether everything on the feature list has been implemented.

Well, I still think it would be better to evaluate whether the project created some value.

Now, ridiculous as this seems, it is still the industry standard.

The 1995 CHAOS report from the Standish Group might be one of the most cited reports in system development. According to its statistics only 16% of software projects succeed. In the 2012 report this has been updated to 39%. And they collect data from tens of thousands of projects, so the figures should be pretty reliable.

Now, this seems scary and low, but only until you check out their definition of success.
"Resolution Type 1, or project success: The project is completed on-time and on-budget, with all features and functions as initially specified."

OK. Not a word about actually getting value for the effort. Remember the on-line bookstore where they wanted others-have-bought recommendations to increase the number of books customers buy. Now, imagine they implemented the others-have-bought recommendations, but the customers did not buy more books anyway. Is this project a success?

Well, you have spent lots of money building something, but made no money from it. To me that sounds like money, time, and effort down the drain - not like a success.

To Standish Group, it is considered a success.

On the flip side, imagine the project realising that a simplified "search similar" would increase the number of books each customer buys. Imagine further that this feature would be much easier to implement. So, the project decides to implement that feature instead. And the number of books sold per customer increases by 0.4 on average.

Spending less money, time, and effort on something other than what was originally envisioned, but still getting the benefit - that sounds like success to me.

To Standish Group, it is considered a failure.

Thus, the figures 16% and 39% actually say nothing at all about the state of software projects.

Now, the Standish Group is probably filled with smart people. So why did they not evaluate software projects according to some more meaningful metric instead? Most probably "generated business value as specified upon funding" would have been more interesting.

My guess is that too few projects actually stated an envisioned business effect, so analysing those projects would give so little data that it would not be possible to draw any significant conclusions.

So, instead of measuring something that would actually matter ("fulfilled envisioned business effect") they just measured something that could be measured.

Now, from the second example, where the team implemented a different feature than originally envisioned, it is also clear that this approach is an absolutely worthless way of evaluating agile projects.

Yours

   Dan

PS The way to measure agile projects is to set a target for a business effect.

PPS The 1995 CHAOS report has been republished and is openly available for academic purposes. It can be found at http://www.projectsmart.co.uk/docs/chaos-report.pdf. The 2013 version can be found at http://www.versionone.com/assets/img/files/CHAOSManifesto2013.pdf


Thursday 9 October 2014

Project Goals using Effects or Feature List

Dear Junior

There have always been two ways to set goals for a project: either you can define the goal of the project to be a list of features to be implemented, or you can define some effect you want to see. Both of these have been around for a long time, and effect goals have always been superior.

Setting goals as a feature list is actually pretty simple - you simply state "these features I want to see before we consider this project closed". An online bookstore could say that it wants search-by-author, categories, and others-have-bought recommendations.

Following up during the project will simply consist of checking how far along this check-list the development team has progressed.

Effect goals are a little bit harder to set. You have to figure out and express why you want to have the work done, what purpose it fulfils, and in what way it makes your world better. Said bookstore must then express that it hopes each customer will buy 0.35 more books per checkout on average, or that it will increase its customer base by 10 000 new customers.

Following up during the project using this approach will consist of measuring the business parameters and watching the values change. A tricky part is that the full impact of the project might not be seen until the development work is "finished" (whatever that means), or even until some time thereafter.

To put things a little bit into context, it is not the "bookstore" that starts a project - it is always a person, in this case probably the product owner of the online store. And this is the person who should explain why she is about to spend a lot of people's time and a lot of the organisation's money.

I have always found that measuring success on effect, or impact, is the more intellectually honourable approach. To be frank, setting goals as a feature list really says "this much work I want to have done" or, rephrased, "this much money I want to spend" - nothing about what value it should result in. And if you want a group of people to work to make your ideas real, the least you can do is explain to them the value you think it will bring. Setting goals on effect is really about that - bringing purpose to the work.

Now, these two ways of setting goals and measuring success have a fundamental impact on how to manage projects in an agile manner, but that will have to wait until a later letter.

Yours

   Dan

PS An agile project cannot be feasibly measured using feature lists; agile projects should be measured by setting a target for a business effect.

Tuesday 17 June 2014

CDE and Big Brother

Dear Junior

What can a sleazy soap opera teach us about coaching agile teams and their managers?

It came as somewhat of a surprise to me when I realised that the reality TV show Big Brother makes a good metaphor for explaining one of my favourite models for understanding self-organised teams: the CDE model.

The Big Brother TV show is a non-scripted soap opera where a bunch of people are locked up in a house together and followed by cameras day and night. The "thrill" of the show is how the people interact and react to each other.

The CDE model is a system-theoretic model for reasoning about how teams self-organise as a reaction to their surroundings. Of course these reactions are very non-linear and hard to predict.

What on earth do these two things have in common?

Let us put ourselves in the shoes of the Big Brother producers. Say that the events in the locked house have become a little bit dull lately. The people locked up have settled into a pace of life where they all get by without unnerving each other. Nice for them, but not thrilling TV. We need something to happen.

The problem for us is that Big Brother is non-scripted. We cannot direct Susan to "go and snuggle up to Jonathan", even if we think it would be an interesting turn of events. We need to find other ways to change the behaviour inside the house without giving explicit directions.

So we decide to shake them up a bit by changing the conditions under which they live. A rough partitioning separates three types of conditions: containers, differences, and exchanges.

Containers

One type of condition we can change is the containers.

We can lock the yard so that they are locked indoors. That will unsettle Bob, who is used to taking his morning strolls there. He might have to spend his mornings in the kitchen together with Alice, who has a terrible temper before getting her coffee. Or we can open up an extra room, one that is a little bit hidden away, with very little visibility into it - apart from the camera of course. Or, in the middle of the night, we could push in an extra wall dividing the house into two parts; let us ensure that Tom and Lisa are locked up in one part, and Tom's rival Jerry in the other. Or we could simply remove all doors. That could be fun.

All these are examples of changing the containers that contain the contestants - making the containers bigger, or smaller, or less connected, or more connected, or even dissected. 

Containers need not be physical; there are other ways to divide a group. We can create a competition between two teams, and then the teams become sociological containers. The teams can follow some obvious partitioning like city of birth or gender, or some arbitrary dissection.

The important part is that containers affect the patterns of interaction, so changing containers will change the behaviour.

Of course we could interpret "containers" literally and put them all in a freight container. That is an idea.

Differences 

Another type of condition we can change is differences and how they are resolved.

We can stir up events by introducing differences. For example, if we have a group of contestants with the same ethnicity we can send in a new contestant of a different ethnicity, for example a white guy when there are only Hispanics in the house. Or send a professor of philosophy into a house of high-school drop-outs. You get the idea.

Sending in someone fair-haired when all are brown-haired will probably not make a lot of fuss. In Big Brother, hair colour does not make a difference to how people treat each other - at least not in a way that makes a significant change. We say that we only consider "significant differences".

What counts as a "significant difference" differs from situation to situation, but in general the nuance of hair colour is not one, whereas gender is - men and women are (sadly enough) treated differently. The same goes for ethnicity, sexual orientation and lots of other traits. All of those are significant differences.

Introducing or enhancing differences is a sure way to induce change, but sometimes reducing differences can also unsettle the state of things.

Let us say we want to send in one person when there are four men and one woman left in the house. Sending in a man would enhance the difference from 4-1 to 5-1. Sending in a woman would reduce the gender difference to 4-2, but would probably be more interesting.

Apart from the differences as such, we have the issue of how these differences are resolved.

One contestant might enjoy a particular kind of music, preferably at high volume. Another contestant might not be so fond of that particular kind of music. However, they might be able to stand each other on a day-to-day basis. The difference is manageable; handling it is not difficult.

Now we can amplify the difficulty of managing differences, for example by introducing a large amount of liquor. Should either of them get drunk, or both, it will be harder to resolve the difference in taste in music, and we will probably see some interesting conflicts.

Another difference is food preferences; one way to amplify the effect of this difference is to insist that everybody in the house agrees on what should be served for dinner. It can be fun to see how Mark "must have meat" tackles Vegan-Lisa, and even more interesting when he gets hungry.

As differences and resolving differences are a major driver for interaction, obviously changing those differences will change behaviour.

Exchanges

The third and last type of condition is the exchanges with the outside.

If we let the contestants interact with the outside, things will happen. We might let each of them have a (filmed) phone call with a friend. Or we can have a small room where one contestant at a time is allowed to speak to the audience. Or we might take that room away. We could put up a big TV showing news from the outside. We could fake the news we show. Surely things will happen.

Exchange need not be communication. We can change the way food is delivered to the house: instead of small deliveries every day we make one big delivery once a week. That will cause some interesting effects at the end of the week when someone has eaten all the goodies on day one. If we are diabolical we can give them slightly too little to eat. Surely things will happen.

As exchanges are the connection between the very limited system in the house and the very large system outside, it is not surprising that how the inside and outside are connected will affect the behaviour on the inside.

CDE

Of course there are a multitude of ways to unsettle the status quo in the house. However, considering containers, differences, and exchanges (CDE) gives us a good start for thinking about what levers we can pull to make the people in the house behave differently. Or, phrased in systems lingo: to make the system reconfigure itself into another configuration.

System theory teaches us that lots of systems are dynamic and non-linear. This means that it is very hard to predict the exact outcome of a change.

As producers of Big Brother we know this. When we nudge the system in the house, we know that something will happen, but we do not know exactly what. We can have a guess, but we do not know. So we need to be on our toes, watch out if things go in another direction, and take compensating actions - or actions we hope are compensating.

Well, obviously neither you nor I are producers of Big Brother. And most probably we never will be. But the same ideas can be applied when thinking about agile software teams that have self-organised, and when coaching them or their managers. However, that is a subject for a letter of its own.

Yours

   Dan

PS To be honest, my description does not perfectly fit how CDE was described by Glenda Eoyang in her PhD thesis. But I think I am faithful to the main idea.

PPS Glenda Eoyang’s thesis can be read in full at http://www.hsdinstitute.org/about-hsd/dr-glenda/glendaeoyang-dissertation.pdf. A briefer introduction can be found at http://wiki.hsdinstitute.org/cde.

PPPS I think the first time I came across CDE was when Mike Cohn introduced it to me during his tour when he released Succeeding with Agile - a good book for a lot of reasons, which also includes an introduction to CDE.






Thursday 5 June 2014

The Three Elements of Agile

Dear Junior

Not that I necessarily claim to be an old hip-hopper, but an interesting part of the hip hop culture is the concept of "the four elements". 
  • Breaking - more commonly known as breakdance
  • MC - more or less what rap is about
  • DJ - you know, disk jockey
  • Graffiti - public painting
An interesting thing is that within the hip hop community it is commonly agreed that all four elements are needed. If you were to take away any of them, the hip hop culture would not be whole; it needs all four of them to be complete.

Also, all the elements are on the same level. That MCing should be "higher" than graffiti, or vice versa, is an idea that would be refuted as ridiculous. All the elements are of equal importance and each is indispensable.

Obviously, the individual hip-hopper might not practice all of the elements. He or she will most probably immerse themselves in one of them. A few might practice two or three. Some might practice all four, but are certainly not expected to master all of them.

Still, there is a lot of respect between practitioners of the different elements: a talented MC easily gives public cred to a vicious graffiti painter. They are both needed, and they would both feel poorer without the other.

The four elements must be in balance for the hip-hop culture to thrive.

Well, that sounds all nice and cosy. But what does that have to do with agile?

I think the Agile community has a similar situation. Cutting the cake with a blunt knife, I see three elements within the "agile culture": tech, process, and organisation. OK, the knife is blunt so the cuts might not be perfectly clean, but I think it suffices to make my point.

Agile Tech consists of the technologies we have developed and use for the direct building of software. In this category we find frameworks such as Hystrix for resilience; we find products such as NoSQL databases; we find tools such as Chaos Monkey. Aside from the code we also have close-to-code practices like Test-Driven Development, Domain-Driven Design, Continuous Integration, Build Pipelines etc. All these things directly enable us to write awesome software.

With "process" in Agile Process I simply mean "the way we work". We work in sprints à la Scrum, or in a WIP-limited continuous flow à la Kanban; we demo at regular intervals; we use information radiators and stand-ups to synchronise work; we make forecasts to synchronise our work with other departments; we run retrospectives after sprints and projects for structured learning etc; we set targets and measure progress vis-à-vis effect goals, e g using Impact Mapping.

Finally, Agile Organisation is about how we structure ourselves. Within teams we self-organise, sure, but this is what we do on a larger level as well. We must ensure that information propagates across the organisation in a sound way; we want the architecture to stay reasonably consistent, preferably without a command-and-control chief architect. We must also synchronise and prioritise initiatives across the organisation, so here we also find portfolio management and finance/funding practices like Beyond Budgeting. Last but not least we find Agile HR/Agile Talent Management as part of this element.

Each agile practitioner might not do all the elements. On the contrary, most will stay mostly within one element, or have their foothold in one and cross over to the others.

Comparing with the hip-hop culture, I claim that the agile community is not complete without all three elements. Doing only tech does not help us; doing only process does not help us; doing only org does not help us. Skipping any of them would hurt us.

Nevertheless, during the history of agile I have seen competition between the elements, sometimes even hostility between practitioners, where one side has claimed to be the "true agile".

I think it is unnecessary and harmful.

Furthermore, I think it is unworthy of us, of a community praising "humans and interactions" over all things, a community that thinks "cross functional teams" are the superior model - should we not strive for a "cross functional community"?

In the same way as the hip-hoppers give cred across element borders, I would love to see the kick-ass developer praise the agile-minded HR Director for "getting it"; I want the beyond-budgeting-inspired CFO to thank the engaged and engaging scrum master for the fantastic project retrospective; I want to see the impact-mapping business analyst thank the devops people for coming up with insightful real-time metrics about what customers actually do.

OK, I might be guilty of hippieism here, but I want the three elements of agile to share love.

Yours

   Dan

PS One of the reasons I care so much is that deep in my heart I feel that Agile is different.
PPS There is a video in Swedish from the conference Agila Sverige 2014 where I do a blitz presentation on this subject and some related stuff.

Thursday 27 March 2014

Deep Learning as a Strategy for Information Age

Dear Junior

Last Friday we had the pleasure of having CS students Sofie Lindblom and Anton Arbring as guests visiting the monthly competence day at Omegapoint.

After this visit, Sofie has done me the honour of musing on the theme of our letters by writing an open letter "Dear Senior, Letter to a Senior Programmer".

That post is so full of interesting topics that it would take a day just to discuss them briefly. But those topics are also way too important to leave uncommented. So, to do something, let us pick one important thing and discuss it. I pick the topic of "what to learn".

Too much information out there - as there always has been

Sofie writes:
But there is too much information out there to know where to start. I am not stupid, I did very well in all programming courses and is a fast learner. But I feel exhausted by the amount of information available.

To start somewhere, let us start with the vast amount of information, technology, frameworks, etc that is out there. Obviously there is no way to take in all of it. If we want to use metaphors, it does not suffice to say "drinking from the fire-hose"; it is rather like trying to gulp down the Nile.

The good part is that the situation is not new. Of course there is more information out there now than 15 years ago when I left university. But even then the amount of information available was too much for any individual to comprehend. And the situation is even older than that. The proverb "so many books, so little time" is not fresh off the presses.

For those leaving university today, there will be truckloads of technology you will be using at work that you did not learn in class. But that situation is not new either. When I left university, I had not used a relational database in any single class or lab. Still, most systems (not all) I have worked with professionally have included SQL databases of some sort. Actually, one of my first jobs was to teach a class on the Java database API JDBC. How did I manage?

The obvious solution is to replace "know everything" with "able to comprehend". We cannot know everything beforehand, but we need to be able to understand any technology we come across by spending just a reasonable amount of effort.

Killing a meme

There is a meme around in this information age that basically goes "you do not need to know, you need to be able to find information". I want to kill that meme in the context of system development.

The meme might very well be true: with Wikipedia and the rest of the web at our fingertips we will be able to find data like "the first historically recorded solar eclipse" (5th of May 1375 BC in Ugarit). But true as it may be, it is worthless to us as software professionals. Because what we need is not data or information, but understanding.

Deep knowledge feeds deep knowledge

Now, this is only my own meandering experience, but I have found it invaluable to know a few things really well. Deep knowledge has interesting side effects. Suddenly you see some pattern apply in a new domain. It seems like no domain of knowledge is an island. Even if facts do not carry across borders, some structures of thinking and reasoning do.

This is really vague, so let me throw in some examples to clarify. When studying law I suddenly found that my studies of formal logic really helped me. I studied negotiation theory and found how it applies to finding a good architecture for a software system. I studied compiler technology and found it helpful when studying linguistics. Through my lifelong studies of math, I see wonderful aspects of beauty in the world every day. (OK, the last one is a little bit off topic - but it makes my life richer, and that is worth something.)

The strategy I try to apply myself is to study subjects in depth, to the level where I have to think hard about them. The specific knowledge might not be immediately applicable - I will probably not have any specific use for knowing, e g, how to count the number of ways to paint a cube using several colours. However, thinking hard has probably etched new traces in my brain - and those ways of thinking will probably turn out to be applicable in some new domain.

To fall back on metaphors again: as software developers we need to dig deep to understand a new technology. To get down to that depth we are not much helped by having dug a metre deep over a large area. But if we have dug a few 20 m deep holes in other places, there is a good chance that we can dig a short tunnel at the 20 m level from the bottom of one of those holes.

How did I survive that first gig teaching an API I had never used myself? Well, having studied functional programming in depth (using e g ML) had made me comfortable with the idea of abstract datatypes, so the idea of an API was not unfamiliar. Having studied linguistics I was very familiar with formal grammars, so SQL syntax was not strange. Having studied compiler technology I could understand the semantics of SQL. Having studied algebra and set theory I could easily pick up how SELECT and JOIN worked.

It took me a few days to read the JDBC API and specification, combined with some small hacks to validate that I had got it right. And after those few days I not only knew about JDBC, I actually understood it well enough to be able to teach it in a reasonable way. Not an expert, but reasonably competent.
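
If you wonder what such a "small hack" might have looked like, here is a minimal sketch in today's Java - back then there was no try-with-resources and you had to load the driver with Class.forName - assuming some in-memory database such as H2 on the classpath, just to have something to connect to:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcHack {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL - any JDBC-capable database would do
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE person (name VARCHAR(50), born DATE)");
            stmt.execute("INSERT INTO person VALUES ('Ada', DATE '1815-12-10')");
            // A SELECT is, loosely speaking, a filtered projection over a set of rows
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM person WHERE born < DATE '1900-01-01'")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}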

Without the deep knowledge in some very obscure subjects (linguistics, set theory, compiler technology etc) I would have been utterly lost. No skill in "searching information on the web" would have helped me in the least.

Sofie writes

The more I learn, the more I realize how little I know. It creates contradictory feelings towards the field I love. To twist it even further the part I love the most is that you can never be fully learned and that there is never a ”right” answer.

I understand the frustration. But I am not sure I would want a field where there was a right answer, a proven best practice - though many in our field dream of such a thing.

However, to me a large portion of the beauty of the field is that we are constantly pacing uncharted terrain. The challenge is to constantly search your tool-box for something that seems applicable, to adapt, to improvise, to search, to try, to fail, to back up, to learn, to grow, to try again, to discuss, to exchange ideas, to finally nail it.

This is nothing but my own personal experience, but if I were to offer one piece of advice for handling the world of information around us, it would be the following:

Find things to learn that you find interesting and that challenge your intellect. Take the time, pain, and pleasure to learn a few of those things in depth. The deep thinking will etch your brain in ways that will help you enormously whenever you approach a new field. And enjoy the pleasure of deep understanding when it dawns on you.

Yours

  Dan

PS Should you come across Sofie and Anton, take the time to have a discussion with them. And do not stop at a chat about everyday things - they have really interesting ideas to delve into.