Aterny Blog

Latest From Our Blog

The success of a change programme depends on many things. One of them is how the organisation deals with change: how it learns what needs changing, and whether the change was effective once introduced. Continue reading
Methodology (n) 1. The system of methods and principles used in a particular discipline. 2. The branch of philosophy concerned with the science of method and procedure. 3. An organised, documented set of procedures and guidelines for one or more phases of the software development lifecycle. 4. A pretentious way of saying Method. Continue reading
In their iconic book "Peopleware", DeMarco and Lister said, "I can't tell you what to do to get a team to gel, but I can tell you 100 things that will prevent it." A similar thing applies to Agile. A lot has been written about how to become agile – what practices to use, and what principles and behaviours to adopt. Little, however, has been written about what can stop an agile transformation in its tracks. There are a number of things that can undermine or derail an agile team or company, especially one in transition. One day I might write that book, but the focus of this post is just one such factor – individual KPIs (Key Performance Indicators). Continue reading
What if you have a great new business idea, but don't know whether it will work? You prototype it, right? Of course. But what do you prototype? How do you define the minimum viable product (MVP) that will prove whether your business idea is a great one or a non-starter? As Steve Blank says in this article, it's not necessarily a smaller, cheaper version of the final product. The important thing is to define what you don't know about the product, and work out the quickest, cheapest way of finding the answers. It's about smart learning. Continue reading
The Standish Group have been publishing the Chaos report into the state of software development annually since at least 1995, and the figures have been reproduced all over the place, especially when trying to explain why Agile is a good idea. If they are to be believed, the software development industry is in serious crisis, with only 32% of projects completing successfully (in 2009). But some people have been questioning the figures. Ask yourself: how many failed projects are you aware of? I haven't seen too many, to be honest. Interestingly, the Standish Group have never published their data, nor the names of the companies that completed their surveys, which raises questions about the integrity of those surveys. But more to the point, the definition of what constitutes a failed or challenged project is seriously open to question. The University of Amsterdam and the IEEE have highlighted problems with the way the Chaos study has been conducted and challenged the results as unrealistic: "…Standish defines a project as a success based on how well it did with respect to its original estimates of cost, time, and functionality". In other words, they are judging not the projects themselves, but merely the estimation of those projects. And that is something entirely different. The idea that project success can be meaningfully defined as the accuracy of estimation is clearly nonsense. Scott Ambler regularly conducts surveys on a host of IT topics, especially but not limited to Agile. According to these surveys, in 2011 traditional projects were perceived to be successful about 50% of the time, and Agile or iterative projects about 70%. Yet in 2010, agile projects were perceived to be successful just 55% of the time and challenged 35% of the time, with 10% failing. Interestingly, 33% of respondents said that no agile projects had failed.
While these figures are better than those published by Standish, I think there is some way to go before we properly understand how successful projects are, especially in an agile context. Are we all measuring the same thing? In my recent experience, I can think of only two project failures in the last couple of years – less than 5 percent! One was stopped part way through; the other was completed but had to be 'switched off' afterwards. Perhaps we need a new definition of success, failure and what constitutes a challenged project, one that can be applied to all IT projects regardless of method. Then we can realistically assess how successful Agile methods are. What does success mean in your organisation, and what are your success rates? Answers in the comments box below, please. Continue reading
I was on one of the LinkedIn groups the other day, reading through a discussion around testing, and one of the contributors stated that Agile "mandates specific practices" and that if the "real world" precludes you from "doing agile", then you shouldn't pretend you're doing agile. I was sorely tempted to comment, but the discussion had already become a little heated, as measured by the length of each successive comment. So instead of further stoking the flames, I sat back and had a think about it, reading through all of the opinions in the discussion as it drifted further and further off-topic. The argument boiled down to each person's perception of the nature of "Agile". Some see it as a collection of practices which are, in an ideal world, inter-related. I wrote a while ago about this, saying that "the processes work best as a whole". But if you do not have the infrastructure, the software architecture or the tools to implement continuous integration, for example, does that mean you are not agile and should come up with a new name for whatever it is you are doing? Interestingly, the same fella also said that Scrum is more of a management/planning framework than a practice framework, which almost made me choke on my lunch. So I am stating my opinion here quite clearly: "Agile", as defined by the Agile Manifesto, does not mandate any practices at all. It does, however, provide guiding principles. Mandating practices is the arena of Agile Methods, of which Scrum, Kanban and DSDM are but a few. Even then, most practices are strongly encouraged, but hardly mandated. Perhaps the exception is XP, which is heavily defined by a few (very good) practices. It is my belief that "Agile" is not a binary state. One cannot say that a person or an organisation is or is not Agile, only how Agile they are; how far along their agile journey they are.
I believe that at the root of all that is Agile there is just one fundamental, defining necessity: frequent feedback. Most of the accepted agile practices are there to promote feedback. Think of stand-ups, user stories, sprints (iterations), retrospectives, pairing, continuous integration and so on: all exist to promote feedback. We can think of this question of our 'Agile-ness' as being defined by how good we are at obtaining and acting on feedback – feedback gained through iterative development, collaboration with the customer, frequent integration, automated testing and the like. So, if you don't have a fully-engaged Product Owner available whenever you need her, does that mean you're not Agile? What if you don't use user stories? If, to be considered Agile, one had to implement (and be good at) all of these practices at once, transitions would fail far more often. In fact, trying to do so may be the reason for some failures in agile adoption. The important thing is to understand why Agile works, and to use those practices that can successfully be adopted in order to further your 'agility', progressively and at a pace that the organisation can cope with. And to constantly strive to do better. Continue reading
The DWP has ceased all agile software development work for the Universal Credit (UC) programme. This is disappointing, but not entirely unexpected. Warning bells should have been clamouring loudly when the Major Projects Authority announced that "conventional contracts with large suppliers" were in effect. Yet in July 2011, we were told that addenda had been written to the contracts in place favouring an Agile approach and "incentivising velocity". Failure was, in fact, predicted back in April 2011 by IT lawyer Alistair Maughan, who argued that:

1. Under Agile projects… you can't guarantee a specified outcome for a specific price.
2. Government is legally required to follow open procurement rules.
3. Agile offers insufficient means of remedy if things go wrong.
4. Agile is not suited to public sector management structures.

Maughan concluded: "You can have an ICT project with a watertight contract, clear deliverables, openly and legally procured, with a fixed price and appropriate remedies if you don't get what you want. Or you can have an Agile project. You can't have both." I disagree. Here's my take on those four factors:

1. Agile does not necessarily mean you don't have a fixed scope. It is perfectly reasonable to fix high-level requirements and guarantee delivery of the essential elements of those high-level requirements, while retaining flexibility in the details through prioritisation and in the solution. This creates the contingency needed.

2. I don't see why procurement should be an issue if the tender process specifies high-level requirements and requires those tendering to estimate their delivery dates and costs. If the delivery date is fixed in the contract, with contingency in the features also guaranteed, then I can't see a problem. But then, I don't fully understand the procurement rules.

3. There are other ways of constructing an agile contract than just fixed-scope or time-and-materials, as Susan Atkinson is aware.
I suspect the problem lay at least partly in the fact that the 'agile nature' of the contracts was contained only in an addendum, which may have been at odds with the main body of the contract. It would be interesting to see them.

4. No, Agile is not suited to public sector management structures, but did the consultants involved really neglect to mention this to the DWP? Or did they simply "Keep Calm and Carry On", ignoring the obvious flaw in the plan? What, I wonder, was the nature of the contract for their consultancy?

Agile (or a flavour thereof) can and does scale, if you do it in a disciplined and well-controlled fashion, as SITA (Société Internationale de Télécommunications Aéronautiques) are doing with their five-year, $155m programme – and their contracts were essentially time-and-materials. I am not suggesting that the approach SITA have adopted is the only one, but the balance between central management, governance and control on the one hand, and distributed, empowered iterative delivery on the other, is clearly a key factor in making Agile work at this sort of scale. I sincerely hope that other government departments learn that a) a £2bn programme is not the place to pilot a completely new delivery philosophy, and b) Agile at scale does work, but it requires certain prerequisites. Continue reading
The 'Iron Triangle' known to all project managers is one of the cornerstones of the profession. Essentially, it defines the major project constraints of time, cost and scope (sometimes quality is shown in the middle). On most traditional projects, scope is fixed through definition of the requirements at the start, and the PM spends a lot of time trying to manage cost and schedule. If and when the project overruns, it is often quality that suffers. But how does this apply to an agile project? Do we look at it differently? The iron triangle is not specific to any method; it is an expression of real project constraints, whatever method you are using. Agilist Scott Ambler, Chief Methodologist at IBM Rational, has written about this, urging people to respect these constraints and actively plan for how to deal with them, even on an agile project, and arguing that fixing all three elements is possibly unethical. DSDM expressly turns the typical triangle on its head, fixing time and cost, with scope the variable corner. This means DSDM is perfectly suited to fixed-cost (fixed-budget, in Scott Ambler's terms) projects, something that 'Agile' (i.e. Scrum) has long struggled with, judging by the debates on the forums. But what if the project really does demand a fixed scope? What does DSDM do then? After all, "Deliver On Time" is its second principle. Can we really fix scope and time? I wouldn't. Fixed-cost projects are numerous (according to Scott Ambler's survey in 2010, 62% of agilists prefer to deliver on time, and 34% prefer to deliver when the system is ready), and for those, using DSDM in its current state is fine. Fixed-scope projects require a different approach. I have come across business sponsors who are happy for a project to take longer "provided I get everything I ask for". In this situation, you can argue till you're blue in the face why the scope shouldn't be fixed.
But then you are working for the Sponsor, not the other way around. The best solution is to fix the scope, baselining it at a high enough level that you know the list of requirements is unlikely to change, and then create feature contingency by decomposing and re-prioritising. Or you accept that the scope is fixed, in which case, what do you do with the second principle? Yes, DSDM works best when you fix the delivery date at the end of Foundations, but if you do have a fixed scope, fixing the schedule as well is dangerous! Unless, of course, you have a large pool of skilled people sitting 'on the bench' waiting to be called upon to contribute. You haven't? No, I didn't think so. So if the scope is fixed, the easiest thing to do is to prioritise using a ranking system (since MoSCoW isn't applicable without a timescale) and just run one timebox after another until the project has delivered the scope. That doesn't mean we abandon DSDM; we just tweak it a little, adapting it – as you would any other framework – to the needs of the project. I think that, like any other project, a DSDM one should consider the possibility that the iron triangle can rotate so that any one of the three corners is variable, while stressing that the preferred configuration is fixed time and (people) cost, with variable scope. Perhaps the second principle should be "Deliver within agreed project constraints", or something similar. Continue reading
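The fixed-scope approach described above – rank the whole backlog, then run one timebox after another until everything is delivered – can be sketched in a few lines of code. This is purely illustrative; the story names, point sizes and velocity are invented, not taken from any real project:

```python
# Illustrative sketch: with scope fixed, rank the backlog and pack it
# into successive timeboxes until everything has been delivered.
# Names and numbers below are invented for the example.

def plan_timeboxes(ranked_backlog, velocity):
    """Group a rank-ordered backlog of (story, points) pairs into
    timeboxes, each holding at most `velocity` points."""
    timeboxes, current, spent = [], [], 0
    for story, points in ranked_backlog:
        # Start a new timebox when the next story won't fit.
        if spent + points > velocity and current:
            timeboxes.append(current)
            current, spent = [], 0
        current.append(story)
        spent += points
    if current:
        timeboxes.append(current)
    return timeboxes

backlog = [("customer search", 5), ("export to PDF", 3),
           ("audit trail", 5), ("bulk import", 8)]
print(plan_timeboxes(backlog, velocity=8))
```

The point of the sketch is that with a ranked (rather than MoSCoW'd) backlog, the schedule falls out of the scope and the velocity, rather than the scope falling out of the schedule.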
I know of an IT department with just three full-time developers. Yes, just three. So what do you do with a team that size? What they are doing is developing lots of small enhancements and defect fixes, and not much else; they don't have enough people for more. Which got me thinking… Any decent IT department needs to consider three planning horizons – what I call the three 5s. Five days: you need someone resolving the backlog of production glitches, the occasional emergencies that occur from time to time, and all those minor enhancements that have been irritating the business for months. Call this the Production Support Team. Five months: the Project Teams work in this area, tackling the prioritised portfolio of projects. Five years: someone needs to be thinking about what your platform will look like in five years' time and, more importantly, doing something about it. For this you need a Strategy Team. If you don't have three separate groups of people dedicated to those three horizons, something is missing in your organisation, because you can't survive long without all three. Without the Production Support Team, project teams are forced to include production bugs and small changes in their project scope, which means they take longer to deliver. A lot longer. When it comes to prioritisation, the project sponsor cares a lot more about his project than about someone else's small enhancements, so those enhancements go to the bottom of the stack and are frequently not delivered at all. Without the Project Teams, you are limited to having the Production Support Team work on only the smallest of requests (like the department I mentioned at the start of this post). Anything over a few weeks is too big to handle, because it would require more people than you have, and besides, nothing else would get done during that time.
Without the Strategy Team, the other teams are always thinking short-term, not caring about maintainability, architecture, or upgrades in coding standards, languages, design patterns and so on. You start building up technical debt to the point where you have an inflexible legacy system that is ponderous to maintain. Having three separate, dedicated groups of people means you can simultaneously deliver the urgent fixes and small enhancements, as well as the bigger, more valuable projects, while defining long-term technical strategy and monitoring adherence and progress along the way. So, does your IT department have three separate areas? Or, if you have another way of solving this problem, I'd like to hear it. Continue reading
We are working with a corporate partner who is providing us with detailed requirements for changes to their customer documents. They supply the text; we apply it to the document and get it tested. During the planning workshop for our last timebox, the conversation got interesting:

Business Ambassador: "We may not get the final text for that document in time."
Project Manager: "How much of a problem is that?"
Ambassador: "Well, if they don't get it to us in time, we can't complete it, can we?"
PM: "So, we can call that a Should Have…?"
Ambassador: "Err… no. I'm not going back to them telling them this isn't a Must Have."
PM: "But by definition it isn't. We wouldn't stop or delay the project if we didn't deliver this, would we?"
Ambassador: "No, but if they do send us the text in time, we have to deliver it. So it's a Must. If it's not a Must, it won't get delivered."
PM: "That's not true!"

The story was marked as a Must, but it made me think. Strictly speaking, that story was, by definition, a Should. But the Ambassador wouldn't accept that because, thus far, our focus has been almost exclusively on the Must Have stories and little else has been delivered. The point he's missing, though, is that the statement "If it's not a Must, it won't get delivered" becomes a self-fulfilling prophecy. The greater the proportion of your requirements prioritised as Must Have, the less chance you'll be able to work on anything else. This story was also a Must Have only if the partner provided the text that comprised the detailed requirements in time for the team to code it up and get it tested. A conditional Must Have requirement, in other words. Awkward. Continue reading
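One way to take the heat out of arguments like the one above is to make the condition explicit instead of arguing over the label. Here is a hedged sketch of a 'conditional Must Have': the story's effective priority depends on whether its external dependency (the partner's text) has arrived in time. The function name, priority labels and dates are invented for illustration, not part of MoSCoW or DSDM itself:

```python
from datetime import date

# Illustrative sketch: a story is only a Must Have if its external
# dependency arrives by the agreed date. Names and dates are invented.

def effective_priority(base_priority, dependency_due, dependency_received):
    """Downgrade a Must while its dependency is outstanding or late."""
    if base_priority == "Must" and dependency_received is None:
        return "Should"   # can't commit to what we don't yet have
    if dependency_received is not None and dependency_received > dependency_due:
        return "Could"    # arrived too late for this timebox
    return base_priority

# Text not yet received: the story is, for now, a Should.
print(effective_priority("Must", date(2013, 3, 1), None))
# Text received in time: it becomes a genuine Must.
print(effective_priority("Must", date(2013, 3, 1), date(2013, 2, 20)))
```

Writing the condition down like this keeps the Ambassador's commitment to the partner honest without inflating the Must Have count before the dependency is met.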