Thursday, 30 July 2015

Code Blocks and Pictures of Dogs! Markdown Blogs from Here On Out!

Hey readers!

Exciting news! Remember all of my old, terribly formatted posts with their hideous lack of code syntax highlighting? Well, my dearest and most loyal readers, it thrills me to announce that I am finally making the switch to a medium of expression that my inner geek deems worthy.

MARK DOWN FOR WHAT!

Being far too lazy to learn enough HTML to make any kind of visually flexible post for my dearest readers, I trust the switch to Markdown will come as a warm welcome to all relevant stakeholders. Thanks, StackEdit!

Expect code blocks!

public class Shrek {
    public boolean isSexyFlyBrownBaldGuyWithBeard() {
        return true;
    }
}

More code blocks!

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

public class ShrekTest {
    private Shrek shrek = new Shrek();

    @Test
    public void testShrekIsSexyFlyBrownBaldGuyWithBeardTrue() {
        assertThat(this.shrek.isSexyFlyBrownBaldGuyWithBeard()).isTrue();
    }
}

And maybe once in a while even a PICTURE OF A DOG!

Exciting times! Keep an eye out for some new Tech Blogging from yours truly in the very near future!

Cheers,

Shrek

UPDATE: This markdown tutorial is awesome!

Thursday, 19 February 2015

Gojko Selfie!! A Brief Introduction to Impact Mapping

Hey all,

Check out this badass selfie:

Earlier this week, Orion Health's Continuous Delivery guru, Richard Paul, and I had the pleasure of attending Gojko Adzic's brilliant course on Impact Mapping!  For those of you who don't know who Gojko is, he's considered one of the game-changing minds in the software industry today.  He travels the world consulting for software firms in trouble and running courses on Agile, BDD and more!

He is also the author of the hallowed tome of BDD, Specification by Example!

Impact Mapping is all about mapping from an organisational goal to a solution by identifying the impact the solution has on the relevant users.  Gojko argues that the only reasonable impact to observe is a change in behaviour in your users.

One great example he presented us with was a Facebook games development company he consulted for.  They wanted to introduce a whole host of features, including the posting of levels and achievements and invites to friends, in the hopes of increasing their player base.  Before they went ahead and shipped all of these features, Gojko had them draw an impact map from their goal to their solutions as follows:

From this they identified slices through the map, one such slice being:
  • Goal: 1 Million Players
  • Actors: Players
  • Impact on/change in behaviour of Actors: Players Posting more about the game on Facebook
  • Solution: Ability to post Level-ups to Facebook wall
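That slice can be captured in a tiny value class.  This is a sketch of my own, purely for illustration; none of these names come from Gojko's materials:

```java
// A hypothetical model of one slice through an impact map:
// Goal -> Actor -> Impact (a change in the actor's behaviour) -> Solution.
public class ImpactMapSlice {
    private final String goal;
    private final String actor;
    private final String impact;
    private final String solution;

    public ImpactMapSlice(String goal, String actor, String impact, String solution) {
        this.goal = goal;
        this.actor = actor;
        this.impact = impact;
        this.solution = solution;
    }

    @Override
    public String toString() {
        return goal + " -> " + actor + " -> " + impact + " -> " + solution;
    }

    public static void main(String[] args) {
        ImpactMapSlice slice = new ImpactMapSlice(
                "1 Million Players",
                "Players",
                "Players posting more about the game on Facebook",
                "Ability to post level-ups to Facebook wall");
        System.out.println(slice);
    }
}
```

The point of writing it down this way is that each solution is justified by the chain above it, not the other way around.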
Using the divine powers of Agile, the company pushed out just this feature and paid close attention to the pertinent metrics:
  • Are people posting more about the game to Facebook? (The Impact)
  • Are we closer to having 1 million players on Facebook? (The Goal)
What they found was that there was no marked change in behaviour:  no one wanted to spam their friends' news feeds when they became a Level 7 Wizard.  No impact was observed and obviously, no progress towards the goal was made.

Following this, they sliced the Semi-automated invites feature and observed that there was indeed a large number of players inviting their friends.  But unfortunately, none of their friends were accepting the invites and the goal was no nearer.

Effectively, Impact Mapping is a decision making paradigm for Agile software development companies that focusses on prioritisation based on company goals, not features.  When implemented, it allows for informed business decisions to be made.  The over-arching philosophy is to deliver impacts, not software!

To pull it off effectively you need:
  • A really really fast release process (I'm looking at you, Richard :) )
  • Metrics that mean something to the business
I can only say so much in a blog post.  I have a million thoughts as to how this can be applied at my workplace and for my own projects and after reading this I'd imagine you would too.  If you would really like to dig deep into the topic, I'd strongly suggest checking out Gojko Adzic's Impact Mapping book or heading to one of his classes when he's in town!

Let me know what you think of all this below!

Cheers,

Shrek

Sunday, 8 February 2015

The Most Irritating Software Development Attitude

Hey all,

Over my years as a software guy, I've developed some serious irritations with the ways in which development is approached.  Be it BDD, Agile or object-oriented programming, it seems no development paradigm has ever managed to avoid being completely misinterpreted and abused.

One of the typical issues I've seen with the uptake of a paradigm is developers thinking they're doing BDD just because they're using Cucumber.  I wrote a blog post about this.

The term "REST" seems to be thrown around whenever a service is written using JAX-RS.  There doesn't seem to be much consideration for the fact that REST is an architectural style prescribing sound semantics around the use of HTTP, which you could find out by simply looking up what REST stands for and why.

Object oriented programming seems to have deteriorated to "I'm liberally using Java-isms", and the blight of 1000-line classes with names like "GenericUtility" continues to spread.

I could go on.

So what anti-pattern can we distill from this mess of paradigm-abuse?

From what I have observed, it all seems to boil down to this:

"Using the tool means I'm following the paradigm".

This may sound sensible at first glance.  Clearly, tools like Java, JAX-RS, Cucumber, etc. were written to support particular paradigms.  All I have to do to follow these paradigms is use the tooling that supports them, right?

I'd like to make the case for why this is horribly wrong.

Let's take the REST example.  REST (Representational State Transfer) is an architectural style for the clean use of HTTP semantics to support the design of fluent CRUD-based web services.  JAX-RS is a set of Java APIs that supports the development of REST web services.

Say our business problem is to create a web service API for managing documents.  JAX-RS can be used to create a "/documentCreate" service with a POST method for the purpose of creating documents.  It can be used to create a "/documentRetrieve" service with a GET method that allows you to retrieve a document.  However, as "representational state transfer" suggests, URL paths should represent resources, not actions, and HTTP methods should represent changes in the state of a given resource.

The above is clearly not a RESTful approach; the RESTful approach is to create a single "/document" resource with the appropriate POST and GET methods.  JAX-RS facilitates the creation of RESTful interfaces, but it cannot ensure that it is used correctly.  As a result, it is time and again used to create semantically unsound, noisy-looking web services with the label of "REST" slapped onto them.
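To make the resource-oriented shape concrete, here is a plain-Java sketch of a "/document" resource.  The JAX-RS annotations it would carry are shown only in comments so the example compiles without any JAX-RS dependency, and all names here are illustrative, not from a real service:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// One resource, "/document", whose state is manipulated via HTTP
// methods, rather than "/documentCreate" and "/documentRetrieve"
// action endpoints.
public class DocumentResource {            // @Path("document")
    private final Map<Long, String> documents = new HashMap<>();
    private final AtomicLong nextId = new AtomicLong(1);

    // @POST -- create a new document; the new resource's id is returned
    public long createDocument(String content) {
        long id = nextId.getAndIncrement();
        documents.put(id, content);
        return id;
    }

    // @GET @Path("{id}") -- retrieve the current state of a document
    public String getDocument(long id) {
        return documents.get(id);
    }

    public static void main(String[] args) {
        DocumentResource resource = new DocumentResource();
        long id = resource.createDocument("patient summary");
        System.out.println(resource.getDocument(id));
    }
}
```

Note that the URL names a thing, and the HTTP method supplies the verb; that is the whole distinction the action-style endpoints miss.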

And on that note, I return to my point.  The attitude of "using the tool means I'm following the paradigm" is plainly damaging and not conducive to writing good software.  But let's give this attitude a bit of credit: I think it stems from the urge to get things done, a pragmatic mindset perhaps.  My take on a solution is to encourage a healthy balance between pragmatic and academic thinking.  Don't just get things done; hit the books regularly too.

So in summary: using Cucumber doesn't mean you're BDDing; using JAX-RS doesn't mean you're following REST; using Java doesn't mean you're doing OO; and so on and so forth.  Using the tool does not equate to following the paradigm, so learn the paradigm before you start using the tools.  Hit the books, develop an opinion on any given paradigm, then get to using the appropriate tooling!

Let me know what you think below,

Cheers,

Shrek

Monday, 8 September 2014

Big Testing in Cucumber vs. JUnit

Hey all,

My team and I recently found ourselves in the interesting position of having to Big Test Rhapsody Routes written by another team.  Rhapsody is Orion Health's Integration Engine, used to create customized data workflows in the Clinical setting.  In this case it was being used as an alternative for some work we otherwise would have done in Java.  "Big Test" is a way of saying "End to End test of a live, running application".

On top of the usual acceptance criteria tests that you'd expect to perform at a "Big" level, we needed more granular, edge-case-style tests.  In Java land, you get this kind of coverage with unit-level tests, but we don't have the ability to write these for Rhapsody routes.  As far as I know, there is no automated way of testing how each piece of Rhapsody configuration strings together or how an embedded JavaScript behaves.

As a result, we had to Big Test/black-box test granular edge cases.  We decided to call this type of test the Big Little Test.

There's a general association drawn between Big Tests and Cucumber, so naturally, we considered writing these Big Little Tests in Cucumber.  I insisted we write this type of test using JUnit instead, for two reasons:
  1. Tests that don't directly back a high-level business criterion have no place in a Cucumber test suite.
  2. As much as I see the value Cucumber provides in enabling BDD, I actually dislike it a lot as a testing framework.
To elaborate on point 2, I find Cucumber tests incredibly ugly. Readability of tests is absolutely critical for me, and because using Cucumber requires splitting up scenarios into one method per step, reading your tests becomes difficult. The step definitions for a single scenario are often dispersed across a number of classes. You can't look at one method and immediately tell the intent of a test. Figuring out what's going on usually involves starting at your feature file and systematically drilling into each step to read the code. It works, but it's not fun. On top of this, the idea of passing test context between steps has always made me nauseous.

In my mind, I have always pictured our Big Tests looking a lot prettier as JUnit tests, and our decision to pursue edge case testing in the form of Big JUnit tests became the opportunity I had been looking for to actually compare Cucumber and JUnit as testing frameworks.

To make this happen, we needed to separate out "framework" type code from our Cucumber testing Maven project into its own Maven project.  The framework code is everything we need to run our particular type of Big Test.  Our acceptance tests send in documents through a SOAP service and verify that some data appeared using a REST service.  The SOAP and REST clients required to do this verification from Java code constitute our framework code.  This same code was required for our Big Little Tests, so it needed to live somewhere that both the Cucumber and Big Little Test projects could access it.

Once this was all done we got onto writing up our JUnit Big Little tests. What we ended up with was pretty interesting.

Our JUnit tests were concise and readable. The intent of each test could easily be read from the method name and test variables could easily be traced from input to output. These Big tests did everything a Cucumber Test would otherwise do but in a much prettier, coder-friendly manner.

To attempt to illustrate this difference (without showing you any code :( ), each of our Cucumber tests comprises:
  • A Scenario from a Feature File.  You can consider this the text description of a test
  • A Step definition class for the Scenario.  We stick with one class per scenario for the relevant step definitions.
  • A class for common steps.  A number of our scenarios share certain steps. We decided to stick these steps in one class
On the other hand, each of our JUnit Big Little Tests is a single method roughly 15 lines long.
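Since I can't show the real code, here is a heavily simplified, entirely hypothetical sketch of the shape of one such test.  The SOAP/REST clients from our framework project are replaced by a trivial in-memory stand-in so the example is self-contained, and every name below is invented:

```java
import java.util.HashMap;
import java.util.Map;

public class BigLittleTestSketch {
    // Trivial stand-in for the shared framework clients: in the real
    // suite, submit goes through a SOAP service and retrieve through
    // a REST service against a running system.
    static class FakeWorkflowClient {
        private final Map<String, String> records = new HashMap<>();

        void submitDocument(String patientId, String document) {
            records.put(patientId, document);
        }

        String retrieveRecord(String patientId) {
            return records.get(patientId);
        }
    }

    // In the real suite this would be a JUnit @Test method; it is a
    // plain method here so the sketch runs without JUnit present.
    // Input and expected output are visible in one place, top to bottom.
    static boolean submittedDocumentCanBeRetrievedByPatientId() {
        FakeWorkflowClient workflow = new FakeWorkflowClient();
        workflow.submitDocument("patient-1", "<doc><id>patient-1</id></doc>");
        String record = workflow.retrieveRecord("patient-1");
        return record != null && record.contains("patient-1");
    }

    public static void main(String[] args) {
        System.out.println(submittedDocumentCanBeRetrievedByPatientId());
    }
}
```

The readability win is that the whole scenario lives in one method, rather than being dispersed across a feature file and several step definition classes.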

Considering the intended uses of Cucumber and JUnit are completely different, this isn't a comparison intended to sway you into suddenly writing your Big Tests in JUnit.  I am a big fan of Cucumber as a collaboration tool.

I suppose what we can distil from this is further evidence that Cucumber should only be used for the purpose of verifying high-level acceptance criteria.  Cucumber is often shoe-horned into service as a testing tool.  Not only does this lead to an ugly set of acceptance criteria, but also to a large set of ugly Cucumber tests.

Putting careful thought into what actually belongs in your Cucumber suite can save you a lot of hassle in having to deal with Cucumber's rather unfriendly nature. In most cases, granular edge case testing should be pushed into unit tests. In our unique case, shoe-horning JUnit for Big Testing gave us a more concise and readable set of tests than what we would have had if we shoe-horned Cucumber.

Let me know your thoughts below!

Cheers,

Shrek

Wednesday, 7 May 2014

BDD the Testing Paradigm?

Dear Readers,

I've come to realise that many developers see BDD with Cucumber solely as a means to achieve an automated testing framework.  Sure, the advantages in this regard are clear.  Automated testing frees testers from the burden of regression testing and allows them to focus on exploratory testing.  It also helps to facilitate a Continuous Delivery model, with features being pushed out rapidly and overall functionality being guarded by the automated testing suite.  Sounds great.

However, this particular drive for adopting BDD presents the danger of misinterpreting its original intention.  Let's start with a definition of BDD:
"Behaviour Driven Development is about implementing an application by describing its behaviour from the perspective of its stakeholders"
The primary issue that BDD was created to address is poor communication between primary stakeholders and developers.  It is influenced by the idea that creating a conversational, common language between developers and stakeholders and using this language to specify requirements is the best way of developing the right product. 

Developing automated tests based on this conversational set of specifications is a secondary concern of BDD.

Unfortunately, the drive to create a replacement for regression testing suites has blinded many developers, myself included, from the original purpose of BDD.

The result is a view that does not consider primary stakeholders.  Since the automated tests simply replace regression tests, only developers and testers need to be able to understand their descriptions in the Cucumber Scenario files.  The original purpose of Behaviour Driven Development, as stated in its very name, is lost.

My team at work has suffered from this very issue.  Development had a particular understanding of a feature we were writing; our Business Analyst had another.  It was only during a review of our feature files that we realised that our views of the product had diverged.  If the very first thing we had done was to define a common language between us and to define the behaviour of the product accordingly, these divergences would have been addressed in step 1.

So what's my solution?  Firstly, read up on your BDD theory.  Writing up notes on this lecture by Gordon Force was a great start for me.

Before starting the development of a feature: Start with your product's stakeholders.
  1. Define who your stakeholders are
  2. Define the interests of your stakeholders
  3. Define a common language between you and your stakeholders that you can use to write a set of requirements
  4. Using this, figure out a template for your Feature files and Scenarios
  5. Figure out your development process around BDD.  Write your scenarios and decide their priority.  Have developers pull these scenarios in priority order for development, and have your managers/BA decide which of these scenarios comprise your minimum viable product.  Feed this into your sprint planning.
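As a concrete illustration of step 4, a scenario written in that shared language might look something like the following.  The feature and wording here are entirely invented for this post, not from any real product:

```gherkin
Feature: Document sharing
  As a clinician, I want to share a document with a colleague
  so that they can review my findings.

  Scenario: Sharing a document with a named colleague
    Given I have saved a document called "Discharge Summary"
    When I share "Discharge Summary" with "Dr Jones"
    Then "Dr Jones" sees "Discharge Summary" in their worklist
```

Nothing in it mentions screens, services or databases; every line should read naturally to the stakeholders you defined in steps 1 and 2.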
My team have recently implemented this approach and so far it is working out quite well!  I first performed the above analysis for a feature we planned to develop.  The following sprint was spent fleshing out a complete set of Scenario descriptions.  This gave us a granular set of sub-features that we could sensibly pick and choose from for the planning of the next sprint.

Although BDD recommends an outside-in approach, a decision was made to develop the automated tests in parallel with the code for the sake of getting the work done efficiently.  I haven't decided how cynical I should be about this approach, but in any case, since both the developers of the functionality and the developer of the automated tests were working directly from the scenario descriptions, the two ended up marrying quite happily.

My take on all of this is that yes indeed Automated Regression suites are important and that BDD is a great way of achieving them.  However, testing coverage should come as a result of requirement definitions that can be communicated easily with stakeholders, so always start with those!

Cheers,

Shrek

Thursday, 20 March 2014

Swt Cucumber Eclipse Plugin!

Hello hello, readers!

I've been finding Cucumber development with Eclipse a bit painful, namely with regards to the lack of linkage between feature files and step definitions, as well as between executed tests and step definitions.  Having to navigate to your step definition Java classes and perform text searches for the scenario step you're interested in is just plain inefficient.  So what's the solution?

Luckily for me, my colleague Tom Whitmore had recently dug up a Cucumber plugin from the internet which seems to work brilliantly.  It does not link executed tests to step definitions :'(  However, it does link feature files to step definitions quite beautifully.  All you have to do is hold the 'Ctrl' key and mouse over your feature steps.  If there's a matching step, click to go to it!  It also has all sorts of cute syntax highlighting for feature files.  Makes life easier :)

Anyway, it's called Natural, by Roberto Lo Giacco.  Find it here: Natural on Github.

Happy BDD-ing, people,

Shrek

Sunday, 8 December 2013

The Growing Pains of TDD : Developing the Bare Minimum to Pass a Test

Hello Readers!

The first post in my TDD blog series is about developing the bare minimum to pass a test!

According to the TDD paradigm, you must only develop the absolute bare minimum required to get a single test passing.  The motivation for this is that you never write anything that is left untested.  Edge cases and specifics should be dictated by a test before they are developed.  In many cases, this can feel trivial and nonsensical.  Say you're writing a boolean function 'isUserAuthorized(UserInfo userInfo)'.  Your first test might check that a user is authorized to perform some action because their role is 'top dog'.  However, the absolute bare minimum to pass this is basically just to return 'true' without any other logic.

This kind of thing leaves me itching all over.  I can appreciate that maintaining this now-passing test during further iterations of TDD is at some point going to result in proving some meaningful functionality.  However, I can't help but implement something reasonable before I can truly say the test has 'passed'.  In the example above, I might actually look into the userInfo object for the role 'top dog', return 'true' if it is present and 'false' otherwise.  It frustrates me to deliberately write code I know is actually useless, just for the sake of passing a test.
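To make the distinction concrete, here is what my 'reasonable minimum' for that example might look like.  The UserInfo class is stubbed out purely so the sketch compiles; it is not from any real codebase:

```java
import java.util.Set;

public class AuthorizationExample {
    // Minimal stand-in for the UserInfo type in the example.
    static class UserInfo {
        private final Set<String> roles;

        UserInfo(Set<String> roles) {
            this.roles = roles;
        }

        Set<String> getRoles() {
            return roles;
        }
    }

    // The strict-TDD bare minimum would be a bare "return true;".
    // This pragmatic minimum actually inspects the role, so the
    // passing test proves something that will survive into the
    // final solution.
    static boolean isUserAuthorized(UserInfo userInfo) {
        return userInfo.getRoles().contains("top dog");
    }

    public static void main(String[] args) {
        System.out.println(isUserAuthorized(new UserInfo(Set.of("top dog"))));
    }
}
```

The test that drove this still passes, but now it passes for the right reason.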

Rather than getting a test to pass for the sake of getting a test to pass, I'd rather have the test pass knowing that the actual logic that the test is supposed to prove is working is in fact working.  In more complex scenarios, I have found huge value in developing as per this more pragmatic definition of the 'bare minimum'.  It allows you to understand many of the technical details you will need to know about in order to correctly implement the rest of the requirements of the class/interface you are testing.

To illustrate my point, let's say you're auditing a web service query that includes a SAML assertion.  Your auditing feature needs to record all relevant information about the user performing the query as well as the query itself.  Your first test might ensure that a query with a single required parameter, 'dataId', is audited.  In order to implement the bare minimum required to pass this test, you need to address the following concerns:
  • How do I get the relevant User information from the incoming query?  Generally, SAML headers are used to create a Session for the current web service transaction, so how do I get access to this Session?
  • How do I go about retrieving the 'dataId' query parameter from the request?
  • How do I appropriately format this dataId query parameter in my audit log?  Perhaps in its present state, it is not very readable to someone viewing the log.
  • How do I even audit any of this information once I've gotten it?  Is there some library or service I can make use of to do this for me?
Complex situations like this are where you see the greatest value in developing the minimum.

A great way to solve a big problem is to narrow it down to the smallest possible sub-problem and then to solve it in a quick and dirty manner before moving on.  In the case of this auditing feature, the smallest sub-problem addresses many issues that are relevant to the remaining set of requirements.  Attempting to write the whole feature set at once while also trying to address these concerns would be vastly more difficult.

Thus, I have decided to let pragmatism dictate how I follow the 'develop the bare minimum' step of TDD.  I prefer writing the minimum amount of code that I know will become a part of the solution, rather than a bare minimum that I know is destined to be thrown away.  In difficult scenarios such as the above, there is huge value in developing this way as we focus development on the simplest version of a complex problem.  What are your thoughts, readers?  I'd like to update this post with any interesting discussion points.

Next up is a post on the Refactoring pains of TDD!

Shrek