Jason Gorman's Software People Inspiring
06/25/2017 12:32 AM
I've long recommended running requirements documents (e.g., acceptance tests) through tag cloud generators to create a cheap-and-cheerful domain glossary for developers to refer to when we need inspiration for a name in our code.
But, considering today how we might assess the readability of code automatically, I imagined what we could learn by doing this for both the requirements and
our code, and then comparing the resulting lexicons to see how much conceptual overlap there is.
I'm calling this overlap Conceptual Correlation
, and I'm thinking this wouldn't be too difficult to automate in a basic form.
The devil's in the detail, of course. "Noise" words like "the", "a", "and" and so on would need to be stripped out. And would we look for exact word matches? Would we want to know the incidence of each word and include that in our comparison? (e.g., if "flight" came up often in the requirements for a travel booking website, but was mentioned only once in the code, would that be a weaker correlation?)
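A basic version of this wouldn't be hard to automate. Purely as an illustration, here's a deliberately naive Python sketch (the stop-word list and the tokenising are simplistic placeholders, not a real implementation):

```python
import re
from collections import Counter

# Deliberately tiny stop-word list; a real tool would use a fuller one.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "for"}

def lexicon(text):
    """Build a frequency count of the meaningful words in a body of text."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

def conceptual_correlation(requirements, code):
    """Proportion of the requirements vocabulary that also appears in the code."""
    req, src = lexicon(requirements), lexicon(code)
    if not req:
        return 0.0
    shared = set(req) & set(src)
    return len(shared) / len(set(req))

requirements = "The customer books a flight. The flight must have free seats."
code = "def book_flight(customer, flight): check_seats(flight)"
print(round(conceptual_correlation(requirements, code), 2))  # → 0.43
```

Note how "books" in the requirements fails to match "book" in the code - the exact-match question above is precisely where it starts getting tricky.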
I'm thinking that something like this, coupled with a readability metric similar to the Flesch-Kincaid index, could automatically highlight code that might be harder to understand.
Lots to think about... But it also strikes me as very telling that tools like this don't currently exist for most programming languages. I could only find one experimental tool for analysing Java code readability. Bizarre, when you consider just what a big deal we all say readability is.
06/20/2017 12:35 AM
I'm currently updating a slide deck for an NUnit workshop I run (all the spiffy new library versions, because I'm down with the yoof), and got to the slide on fluent assertions.
The point I make with this slide is that - according to the popular wisdom - fluent assertions are easier to understand because they're more expressive than classic NUnit assertions.
So, I took a bunch of examples of classic assertions from a previous slide and redid them as fluent assertions, and ended up with this.
Compared next to each other like this, suddenly somehow my claim that fluent assertions are easier to understand looks shaky. Are they? Are they really?
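To make the contrast concrete outside of NUnit, here's the same comparison in Python, with a hand-rolled (entirely hypothetical) fluent helper standing in for a fluent assertions library:

```python
class Expect:
    """A minimal, hand-rolled fluent assertion helper (purely illustrative)."""
    def __init__(self, actual):
        self.actual = actual

    def to_equal(self, expected):
        assert self.actual == expected, f"expected {expected}, got {self.actual}"
        return self  # returning self is what makes chaining "fluent"

    def to_be_greater_than(self, lower):
        assert self.actual > lower, f"expected > {lower}, got {self.actual}"
        return self

total = 42

# Classic style: one line, the comparison is right there.
assert total == 42

# Fluent style: reads like a sentence - but is it actually clearer?
Expect(total).to_equal(42).to_be_greater_than(0)
```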
A client of mine, some time back now, ran a little internal experiment on this with fluent assertions written with Hamcrest and JUnit. They rigged up a dozen or so assertions in fluent and classic styles, and timed developers while they decided if those tests would pass or fail. It was noted - with some surprise - that people seemed to grok the classic assertions faster.
What do you think?
06/09/2017 11:20 PM
Could You Be A Mentor To An Aspiring Software Developer?
I've been beavering away these last few weeks putting together the basis for an initiative that will enable experienced software developers to mentor new programmers looking to become developers one day.
It'll take the form of a Software Developers' Guild
- a sort of clearing house that helps talented new programmers find old hands who can provide "light-touch" guidance over the long-term (4-6 years).
I see it working along similar lines to what I've been doing with my "apprentice" Will Price (who's just finished his final exams for his CS degree, and has turned out pretty spiffy as a developer, too). I've been pairing with Will regularly for a couple of hours every fortnight or so, working on the skills formal education tends to leave out (using version control, test automation, TDD, refactoring, design principles and other practical aspects of code craft).
I've also been nudging him towards certain sources of information: books, blogs, conferences, and so forth, and generally giving him a steer on what he would find most useful to know as a software developer.
Reflecting on how it's gone, both Will and I feel it's been of immense value - and not just for Will. Mentoring someone new to this field has spurred me to learn new things, too (like Python, for example) and reinvigorated my enthusiasm for learning. So, after twelvety-stupid years as a developer, I feel renewed. And looking forward to doing it again.
The industry is also up on the deal by one potentially great developer.
My thinking of late has been that this could be a workable route to avoiding the Groundhog Day that our profession seems stuck in, where new developers have to go through the same long process of rediscovery, with all the false leads and dead ends I wasted years on.
And so, this year, I tentatively begin the process of trying to scale this approach up. You can find out a bit more by visiting the Software Developers' Guild
holding page. And, maybe, you'd be interested in becoming a mentor?
I'm looking for experienced developers who've "been around the block at least twice" (call it my Rule of Two
), and who'd be willing and able to provide a similar kind of light-touch guidance to someone at university, or from a code club, or returning to work after raising children or caring for a relative, or retraining for a career change, etc.
Could that be you?
06/04/2017 07:19 PM
The Codemanship TDD "Driving Test" - Initial Update
A question that gets asked increasingly frequently by folk who've been on a Codemanship TDD workshop is "Do we get a certificate?"
Now, I'm not a great believer in certification, especially when the certificates are essentially just for turning up. For example, a certificate that says you're an "agile developer", based on sitting an exam at the end of a 2-3 day training course, really doesn't say anything meaningful about your actual abilities.
Having said all that, I have pioneered programs in the past that did seem to be decent indicators of TDD skills and habits. First of all, to know if a juggler can juggle, we've got to see them juggle.
A TDD exam is meaningless in most respects, except perhaps to show that someone understands why
they're doing what they're doing. Someone may be in the habit of writing tests that only ask one question, but I see developers doing things all the time that they "read in a book" or "saw their team doing" and all they're really doing is parroting it.
Conversely, someone may understand that tests should ideally have only one reason to fail so that when they do fail, it's much easier to pinpoint the cause of the problem, but never put that into practice. I also see a lot of developers who can talk the talk but don't walk the walk.
So, the top item on my TDD certification wish-list would be that it has to demonstrate both practical ability and theoretical understanding.
In this respect, the best analogy I can think of is a driving test
; learner drivers have to demonstrate a practical grasp of the mechanics of safe driving as well as a theoretical grasp of motoring and the highway code. In a TDD "driving test", people would need to succeed at both a practical and a theoretical component.
The practical element would need to be challenging enough - but not too challenging - to get a real feel for whether they're good enough at TDD to scale it to non-trivial problems. FizzBuzz just won't cut it, in my experience. (Although you can weed out those who obviously can't even do the basics in a few minutes.)
The Team Dojo I created for the Software Craftsmanship conference seems like a viable candidate. Except it would be tackled by you alone (which you may actually find easier!) In the original dojo, developers had to tackle requirements for a fictional social network for programmers. There were a handful of user stories, accompanied by some acceptance tests that the solution had to pass to score points.
In a TDD driving test, I might ask developers to tackle a similar scale of problem (roughly 4-8 hours for an individual to complete). There would be some automated acceptance tests that your solution would need to pass before you can complete the driving test.
Once you've committed your finished solution, a much more exhaustive suite of tests would then be run against it (you'd be asked to implement a specific API to enable this). I'm currently pondering and consulting on how many bugs I might allow. My instinct is to say that if any
of these tests fail, you've failed your TDD driving test. A solution of maybe 1,000 lines of code should have no bugs in it if the goal is to achieve a defect density of < 0.1/KLOC. I am, of course, from the "code should be of high integrity" school of development. We'll see how that pans out after I trial the driving test.
So, we have two bars that your solution would have to clear so far: acceptance tests, and exhaustive testing.
Provided you successfully jump those hurdles, your code would then be inspected or analysed for key aspects of maintainability: readability, simplicity, and lack of duplication. (The other 3 goals of Simple Design, basically.)
As an indicator, I'd also measure your code coverage (probably using mutation testing). If you really did TDD it rigorously, I'd expect the level of test assurance to be very high. Again, a trial will help set a realistic quality bar for this, but I'm guessing it will be about 90%, depending on which mutation testing tool I use and which mutations are switched on/off.
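For readers unfamiliar with mutation testing, the principle can be shown in miniature: mutate the code under test, re-run the tests, and see whether they notice. A toy Python sketch (a real tool generates the mutants automatically):

```python
def add(a, b):           # original code under test
    return a + b

def add_mutant(a, b):    # a mutant: '+' replaced with '-'
    return a - b

def weak_test(fn):
    return fn(0, 0) == 0      # passes for BOTH versions - the mutant survives

def strong_test(fn):
    return fn(2, 3) == 5      # fails for the mutant - the mutant is killed

# A surviving mutant reveals a blind spot in the test suite.
print("weak test kills mutant:", not weak_test(add_mutant))     # False
print("strong test kills mutant:", not strong_test(add_mutant)) # True
```

The mutation score - the proportion of mutants killed - is the "level of test assurance" being measured here.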
Finally, I'd be interested in the "testability" of your design. That's usually a euphemism for whether or not dependencies between your modules are easily swappable (by dependency injection). The problem would also be designed to require the use of some test doubles, and I'd check that they were used appropriately.
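For instance, a minimal sketch of constructor injection with a swappable test double (all the class and method names here are hypothetical):

```python
class FlightBooking:
    """The collaborator is injected via the constructor, so tests can swap it out."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def book(self, price):
        return self.payment_gateway.charge(price)

class StubGateway:
    """A test double standing in for the real payment service."""
    def charge(self, price):
        return True  # canned response; no network calls, no real money

# The test exercises FlightBooking entirely against the double.
booking = FlightBooking(StubGateway())
assert booking.book(99.0)
```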
So, you'd have to pass the acceptance tests to complete the test. Then your solution would be exhaustively tested to see if any bugs slipped through. If no bugs are found, the code will be inspected for basic cleanliness. I may also check the execution time of the tests and set an upper limit for that.
First and foremost, TDD is about getting shit done
- and getting it done right. Any certification that doesn't test this is not worth the paper it's printed on.
And last, but not least, someone - initially me, probably - will pair with you remotely for half an hour at some random time during the test to:
1. Confirm that it really is you who's doing it, and...
2. See if you apply good TDD habits, of which you'd have been given a list well in advance to help you practice. If you've been on a Codemanship TDD course, or seen lists of "good TDD habits" in conference talks and blog posts (most of which originated from Codemanship, BTW), then you'll already know what many of these habits are.
During that half hour of pairing, your insights into TDD will also be randomly tested. Do you understand why you're running the test to see it fail first? Do you know the difference between a mock and stub and a dummy?
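For anyone checking their own understanding of that last question, the distinction can be sketched with hand-rolled doubles in Python (all names here are hypothetical):

```python
class DummyLogger:
    """Dummy: passed only to satisfy a signature; never actually used."""

class StubRates:
    """Stub: feeds the code under test canned answers."""
    def rate_for(self, currency):
        return 1.25

class MockAuditTrail:
    """Mock: records interactions so the test can verify them afterwards."""
    def __init__(self):
        self.entries = []
    def record(self, message):
        self.entries.append(message)

def convert(amount, currency, rates, audit, logger=None):
    converted = amount * rates.rate_for(currency)
    audit.record(f"converted {amount} {currency}")
    return converted

audit = MockAuditTrail()
result = convert(10, "USD", StubRates(), audit, DummyLogger())
assert result == 12.5                          # the stub drove the answer
assert audit.entries == ["converted 10 USD"]   # the mock verified the interaction
```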
Naturally, people will complain that "this isn't how we do TDD", and that's fair comment. But you could argue the same thing in a real driving test: "that's not how I'm gonna drive."
The Codemanship TDD driving test would be aimed at people who've been on a Codemanship TDD workshop in the last 8 years and have learned to do TDD the Codemanship way. It would demonstrate not only that you attended the workshop, but that you understood it, and then went away and practiced until you could apply the ideas on something resembling a real-world problem.
Based on experience, I'd expect developers to need 4-6 months of regular practice at TDD after a training workshop before they'd be ready to take the driving test.
Still much thinking and work to be done. Will keep you posted.
05/29/2017 10:45 PM
Do You Write Automated Tests When You Spike?
So, I've been running this little poll on Twitter asking devs if they write automated tests when they're knocking up a prototype (or a "spike", as Extreme Programmers call it).
The responses so far have been interesting, if not entirely unexpected. About two thirds of respondents rarely or never write automated tests for a spike.
Behind this is the ongoing debate about the limits of usefulness of such tests (and of TDD, if we take that a step further). Some devs believe that when a problem is small, or when they expect to throw away the code afterwards, automated tests add no value and just slow us down.
My own experience has been a slow but sure transition from not bothering with unit tests for spikes 15 years ago, to almost always writing some
unit tests even on small experiments. Why? Because I've found - and I've measured myself doing it, so it's not just a feeling - I get my spike done faster when I have a bit of test scaffolding holding it up.
For sure, I'm not as rigorous about it as when I'm working on production code. The tests tend to be at a higher level, and there are fewer of them. I may break a few of my own TDD rules and have tests that ask more than one question, or I may not refactor the test code quite as fastidiously. But the tests are there, nevertheless. And I'm usually really grateful that I wrote some, as the experiment grows and maybe makes some unexpected twists and turns.
And if - as can happen - the experiment becomes part of the production code, I'm confident that what I've produced is just about good enough to be released and maintained. I'm not in the business of producing legacy code... not even by accident.
An example of one of my spikes, for a utility that combines arrays of test data
for use with parameterised tests, gives you an idea of the level of discipline I might usually apply. Not quite production quality, but not that far off.
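The spike's code itself isn't reproduced here, but the core idea - combining arrays of test data into rows for a parameterised test - is essentially a cartesian product. A hypothetical Python sketch of that kind of utility:

```python
from itertools import product

def combine(*value_sets):
    """Combine arrays of test data into rows for a parameterised test.

    (An illustrative sketch, not the actual spike - essentially the
    cartesian product of the input arrays.)
    """
    return list(product(*value_sets))

# Each resulting tuple becomes one parameterised test case.
cases = combine([0, 1], ["GBP", "USD"])
assert cases == [(0, "GBP"), (0, "USD"), (1, "GBP"), (1, "USD")]
```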
The spike took - in total - maybe a couple of days, and I was really grateful for the tests by the second day. In timed experiments, I've seen me tackle much smaller problems faster when I wrote automated tests for them as I went along. Which is why, for me, that seems to be the way to go. I get done sooner, with something that could
potentially be released. It leaves the door open.
Other developers may find that they get done sooner without writing automated tests. With TDD, I'm very much in my comfort zone. They may be outside it. In those instances, they probably need to be especially disciplined about throwing that code away to remove the temptation of releasing unreliable, unmaintainable code.
Alternatively, they could rehabilitate it, writing tests after the fact and refactoring the code to give it a production sparkle. Some people refer to this process as "spike & stabilise". But, to me, it does rather sound like "code and fix". Because, technically, that's exactly what it is. And experience - not just mine, but a mountain of hard data going back decades - strongly suggests that code and fix is the slow route to delivery.
So I'm a little skeptical, to say the least.
05/29/2017 05:20 PM
20 Dev Metrics - 20. Diversity
The final metric in my series 20 Dev Metrics is Diversity.
First of all, we can have diversity of people: their ages, their genders, their sexual orientations, their ethnic backgrounds, their nationalities, their abilities (and disabilities), their socio-economic backgrounds, their educational backgrounds, and so on.
But we can go beyond this and also consider diversity of ideas
. The value of diversity is essentially more choice
. A team with 10 different ideas for improving customer retention is in a better position for solving their problem than a team with only one.
Nurturing diversity of people can lead to a greater diversity of ideas, but I believe we shouldn't take that effect for granted. Teams made up of strikingly different people are still quite capable of group-think. Culture is susceptible to homogenisation, because people tend to try to fit in. A more diverse group of people may just take a bit longer to reach that uniformity. Therefore, diversity is not a destination, but a journey; a process that continually renews itself by ingesting new people and new ideas.
For example, on your current product or project, how many different ideas were considered? How many prototypes were tried? You'd be amazed at just how common it is for dev teams to start with a single idea and stick to it to the bitter end.
What processes and strategies does your organisation have for generating or finding new ideas and testing them out? Where do ideas come from? Is it from anyone in the team, or do they all come from the boss? (The dictatorial nature of the traditional hierarchical organisation tends to produce a very narrow range of ideas.)
What processes and strategies does your organisation have for attracting and retaining a diverse range of people? Does it have any at all? (Most don't.)
How outward-looking are the team? Do they engage with a wide range of communities and are they exposed to a wide range of ideas? Or are they inward-looking and insular, mostly seeking solutions in their own backyard?
The first step to improving diversity is measuring it. Does the makeup of the team roughly reflect the makeup of the general population? If not, then maybe we need to take steps to open the team up to a wider range of people. Perhaps we need to advertise jobs in other places? Perhaps we need to look at the team's "brand" when we're hiring to see what kind of message we're sending out? Does "Must be willing to work long hours" put off parents with young children? Does "Regular team paintballing" exclude people with certain disabilities? Does "We work hard, play hard" say to the teetotaller "You probably won't fit in"?
Most vitally, is your organisation the kind that insists on developers arriving fully-formed (and therefore are always drawing from the narrow pool of people who are already software developers)? Or do you offer chances for people who wouldn't normally be in that pool to learn and become developers? Do you offer paid apprenticeships or internships, for example? Are they open to anyone
? Are you advertising them outside
of the software development community? How would a 55-year-old recently forced to take early retirement find out about your apprenticeship? How would an 18-year-old who can't afford to go to university hear about your internship? These people probably don't read Stack Overflow.
05/28/2017 07:35 PM
Software Craftsmanship 2017
It's early days, but planning is underway for this year's Software Craftsmanship conference.
Since it returned last year, SC20xx has evolved from a conference where talks were banned, to a conference with no fixed sessions at all. We stripped it down to the bare essentials of what folk said made previous SC conferences fun and worthwhile. Basically, we want a chance to meet likeminded code crafters, socialise, exchange ideas, and - most of all - code.
Instead of scheduled sessions, we'll have the space and the resources needed to tackle interesting and challenging programming projects. Work in pairs, work in groups, work by yourself (although, really, why come all that way to work alone?) Build a bot. Write a tool. Start a tech business. Put your Clean Code skills to the test. It's all good.
SC2017 will be on Saturday Sept 16th, and last year we had events being hosted in London, Manchester, Bristol, Munich and Atlanta, GA. This year, we're hoping for even more hosted events all around the world. It doesn't require much organisation, so drop me a line
if you're interested in running one where you are.
Unofficially, the London event is already open for registration
(there's a small fee to cover costs - SC2017 is a non-profit event), and more details will be posted soon.
05/21/2017 12:09 PM
20 Dev Metrics - 19. Progress
Some folk have - quite rightly - asked "Why bother with a series on metrics?" Hopefully, I've vindicated myself with a few metrics you haven't seen before. And number 19 in the series of 20 Dev Metrics
is something that I have only ever
seen used on teams I've led.
When I reveal this metric, you'll roll your eyes and say "Well, duh!" and then go back to your daily routine and forget all about it, just like every other developer always has. Which is ironic, because - out of all the things we could possibly measure - it's indisputably the most important.
The one thing that dev teams don't measure is actual progress
towards a customer goal. The Agile manifesto claimed that working software is the primary measure of progress. This is incorrect
. The real measure of progress is vaguely alluded to with the word "value". We deliver "value" to customers, and that has somehow become confused with working software.
Agile consultants talk of the "flow of value", when what they really mean is the flow of working software. But let's not confuse buying lottery tickets with winning jackpots. What has value is not the software itself, but what can be achieved using the software. All good software development starts there.
If an app to monitor blood pressure doesn't help patients to lower their blood pressure, then what's the point? If a website that matches singles doesn't help people to find love, then why bother? If a credit scoring algorithm doesn't reduce financial risk, it's pointless.
At the heart of IT's biggest problems lies this failure of almost all development teams to address customers' end goals. We ask the customer "What software would you like us to build?", and that's the wrong question
. We effectively make them responsible for designing a solution to their problem, and then - at best - we deliver those features to order. (Although, let's face it, most teams don't even do that.)
At the foundations of Agile Software Development, there's this idea of iterating rapidly towards a goal. Going back as far as the mid-1970s, with the germ of Rapid Development, and the late 1980s with Tom Gilb's ideas of an evolutionary approach to software design driven by testable goals, the message was always there. But it got lost under a pile of daily stand-ups and burndown charts and weekly show-and-tells.
So, number 19 in my series is simply Progress
. Find out what it is your customer is trying to achieve. Figure out some way of regularly testing to what extent you've achieved it. And iterate directly towards each goal. Ditch the backlog, and stop measuring progress by tasks completed or features delivered. It's meaningless.
Unless, of course, you want the value of what you create to be measured by the yard.
05/18/2017 09:10 PM
A Clean Code Language?
Triggered by a tweet about the designers of Python boasting that functions can now accept 255 parameters - I mean, why, really? - my magpie mind has been buzzing with the notion of how language designs (and compilers) could enforce clean code.
For example, what if the maximum number of parameters was just three
? Need more data for your method than that? Maybe a parameter object is required. Or maybe that method does too much, and that's why it has so many parameters. You would have to fix the underlying problem.
And what if the maximum number of branches or loops in a method was one
? Need another branch? You'd have to create another method for it and compose your conditionals that way. Or maybe replace conditionals with a polymorphic solution.
And what if objects that aren't at the top of the call stack weren't allowed to instantiate other objects? You'd have to pass its collaborators in through a constructor or other method.
And what if untested code caused a compile error?
And so on. Hopefully, you get the picture. Now, I'm just thinking out loud, but I think this could be at the very least a valuable thought experiment. By articulating design rules in ways that a compiler (or pre-compiler) might be able to enforce, I'm clarifying in my own mind what those rules really are.
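To make the thought experiment concrete, here's a rough Python sketch of how a pre-compiler might enforce two of those rules using the standard ast module (the limits and rule names are my own placeholders):

```python
import ast

MAX_PARAMS = 3    # "what if the maximum number of parameters was just three?"
MAX_BRANCHES = 1  # "what if the maximum number of branches or loops was one?"

def clean_code_violations(source):
    """Return a list of rule violations found in the given source code."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if len(node.args.args) > MAX_PARAMS:
                violations.append(f"{node.name}: too many parameters")
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                           for n in ast.walk(node))
            if branches > MAX_BRANCHES:
                violations.append(f"{node.name}: too many branches/loops")
    return violations

source = """
def book(flight, customer, seat, meal):
    if seat:
        pass
    if meal:
        pass
"""
print(clean_code_violations(source))
```

A real pre-compiler would fail the build when the list is non-empty, forcing you to fix the underlying design problem rather than suppress the warning.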
What rules would your Clean Code language have?
05/18/2017 06:20 PM
20 Dev Metrics - 18. External Dependencies
18th in my series 20 Dev Metrics
is External Dependencies
If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)
In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.
I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.
It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.
The simplest metric is just to count the number of dependencies. The more there are, the more unstable
our code will become.
It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.
You should aim to have as little of your code directly depend on 3rd-party APIs as possible, and as few different APIs as you can use to build the software you need to.
(And, yes, I'm including GUI frameworks etc. in my definition of "external dependencies".)
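As a rough illustration, counting the distinct external modules a piece of Python code depends on can be sketched with the ast module (the "internal" package prefix is an assumption you'd configure for your own codebase):

```python
import ast

def external_dependencies(source, internal_prefixes=("myapp",)):
    """Collect the distinct top-level modules imported by some source code,
    excluding our own packages (as named in internal_prefixes)."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return {d for d in deps if not d.startswith(internal_prefixes)}

source = """
import requests
from myapp.booking import Flight
from facebook_sdk import GraphAPI
"""
print(sorted(external_dependencies(source)))  # → ['facebook_sdk', 'requests']
```

Running this over every file also gives you the second number mentioned above: the proportion of modules that directly reference a given external API.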