The Generalization Myth

Generalization is beautiful and exciting – and offers many glorious health benefits.  It reduces the amount of code we have to write, and solves problems before they even arise.  With just a tiny amount of forethought, we can make any future work we do trivial, simply by achieving the right generalization.  It makes you more attractive to the opposite sex – and if done *just* right, even grants you eternal life.

[Image: Elsa reaching for the Grail – “I can almost reach it, Indie”]

No wonder that, like Elsa in Indiana Jones and the Last Crusade, we neglect our own lives to attain it.  “I can almost reach it, Indie”…

The Holy Grail is, of course, a myth.  Though it makes for a good tale – sprinkle in some Nazis, some betrayal, some family tension.  What a story.

In reality, the generalization that we all grasp for is also a myth.  There is no way to know ahead of time what generalization will meet all of (or even one of) our upcoming use-cases – mostly because we don’t even know what our upcoming use-cases are, let alone what kind of structure will be necessary to meet them.  And the generalization you choose ahead of time, if it doesn’t match what you need in the future, is wasteful, because it limits the moves you can make.  That is what generalization does: it limits your expressive options to the abstraction you select.  And since you can never know ahead of time which generalization you’ll need, this always results in negative ROI.  (Not as bad as falling into a bottomless pit, but still not exactly what we are going for.)

The challenge is that it always SEEMS so obvious that generalizing will result in nothing but advantage in the context we are working in.  This instinct is good – if you let it push you toward moderate flexibility in design, and toward refactoring (after you’ve solved a use-case) to a more generalized structure.  Generalization that you arrive at AFTER you’ve learned what the use-cases will be (that is, after you’ve tested and coded them), and moderate flexibility in your design, are both highly profitable.  But they are both the result of disciplined, after-the-fact thinking – not of the magical thinking that we can somehow avoid work by divining the right generalization before-the-fact.

This is another reason that before-the-fact generalization seems so appealing: it appears to give us something for nothing.

[Image: Indiana Jones and the Last Crusade – “Let it go…”]

After-the-fact generalization – the kind that results in clean, easy-to-maintain code with a very positive return – tends to seem simply like the diligence of a mature adult.  The before-the-fact kind, while maybe not tied to reality, is obviously far more Rock-n-Roll.

As mature Craftsmen, we should do like Mr. Jones and listen to the advice of his dad: “let it go, Indie, let it go…”

Once we’ve let this temptation go, we can take the following methodical approach – which will satisfy our impulse to generalize, but do it in a way that will result in a powerful, positive outcome.

  1. Solve the use-case(s) at hand, directly, with the simplest possible code.  Use a test (or tests) to prove that you’re doing this.
  2. Solve with designs that are SOLID.  SOLID leads to flexibility – and flexible systems are systems that are easier to change.
  3. Refactor: remove anything creating a lack of clarity, generalize where there is unnecessary duplication.
  4. Rinse and Repeat
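
To make steps 1 and 3 concrete, here’s a minimal Scala sketch (the pricing domain and the rates are invented for illustration): each use-case is solved directly first, and only the duplication the solutions reveal gets generalized.

```scala
// Step 1: each use-case solved directly, proven by its own test.
def totalWithTax(prices: List[Double]): Double =
  prices.map(_ * 1.08).sum

def totalWithDiscount(prices: List[Double]): Double =
  prices.map(_ * 0.90).sum

// Step 3: with both use-cases in hand, the duplication is visible, and the
// generalization the code actually needs falls out of the refactoring:
def adjustedTotal(prices: List[Double])(adjust: Double => Double): Double =
  prices.map(adjust).sum

// The original use-cases become one-liners over the general form.
def totalWithTaxRefactored(prices: List[Double]): Double      = adjustedTotal(prices)(_ * 1.08)
def totalWithDiscountRefactored(prices: List[Double]): Double = adjustedTotal(prices)(_ * 0.90)
```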

If we do this we will be creating amazing software!

Happy Coding!

Kyle


On The Nature of Reason

WHAM!

That was supposed to be the final nail on the final board of your son’s new tree house.  Instead, it was your thumb.

Colorful language passes through your mind.  The sensation in your thumb evolves, seeming to regenerate a new, fascinating kind of pain with every passing second.  You lose your grip on the hammer…and it falls 20 feet from your perch.

The pain, while it should be driving you down the ladder and probably to the emergency room, seems to be making you reflective.  Finding yourself sitting cross-legged on the floor of your creation, you start to think about how amazing it is that you know exactly how fast that hammer accelerated toward the ground: 9.8 m/s^2.  In an effort to reconstruct the pain-driven, highly-scientific experiment, you pick up the baseball sitting next to you and drop it as well.  What do you know – it hits the ground in about the same amount of time.
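
(A quick worked check, using the 20 feet from the story, so h ≈ 6.1 m: from h = (1/2)·g·t^2, the fall time is t = sqrt(2h/g) = sqrt(2 × 6.1 / 9.8) ≈ 1.1 s – and since mass appears nowhere in the formula, it’s the same 1.1 seconds for the hammer and the baseball.)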

Rockets, airplanes, buildings, circus acts, and numerous other things and activities reason from and rely upon this principle that you, in your intuition, grasp quickly and re-prove to yourself in short order.  This intuition is powerful because it’s been built up over a lifetime of such experiments.  In fact, one of the first things I’ve seen little kids do is start to arbitrarily drop things.  In doing that, they’re not spelling out the mathematics and the precise nature of this behavior, but they are creating intuition – which leads to an ability to reason and act in alignment with the way this thing happens.  If, as a 10-year-old, I fall out of a newly built tree house, I know roughly the rate I’ll be traveling by the time I hit the ground, and I’ll have intuition I can reason from that will hopefully prevent me from trying that experiment in the first place.

Imagine for a second, though, that we met an alien freshly arrived from an alternate universe where this pattern didn’t occur with the same regularity – and who thus didn’t have the intuition built up around it.  As a new arrival to Earth, they begin to comment on the fact that when you drop something, it always accelerates to the ground at the same rate.

They know all about electrons and protons and the various interactions that draw them together and push them apart.  They know that past the atomic level, we can’t even really observe things without changing them.  Atoms and their constituent parts are constantly in motion, heading in every direction.

How could it possibly be that, at higher levels of abstraction, there is this consistent behavior?

It turns out it’s an emergent behavior based on the curvature of space around highly massive objects (in our universe, anyway – not in the alien’s, apparently).  It would be VERY difficult to predict the motion of an atom, and impossible to precisely predict that of an electron, but as a collection, the “object” (whatever _that_ is) moves with uniform acceleration.

This emergent behavior has particular characteristics and applies broadly – even if we don’t fully understand the dynamics that create it.  We humans have an instinctive capability for handling things like this, called generalization – we notice the emergent behavior, understand its characteristics, and then apply another of our powerful instincts, reason, to it.  And we do all of this without even being conscious of it most of the time.

When we look at groups of people creating software, this same thing happens.  Humans are nearly impossible to predict.  At a low level of abstraction – when one human will schedule a meeting with another, what one human will say or do to another – behavior is fairly difficult to predict.  It’s like the electron.

But as humans apply themselves to working together….working together on software, behaviors emerge.  Ones that we can generalize, and thus reason about.

This is important to realize as we deal with delivering software – since it is absolutely essential to exercising the best possible craft that we understand and reason about the world with every available tool.  How we work together can create space for craft, or it can destroy it.  The tools and techniques in the marketplace – Agile, Scrum, Kanban, to name a few – work to the extent that they leverage the “forces” (emergent behaviors) toward ends that we like – e.g. making space to craft great software and thus meeting needs.

Which forces are in play, and how they interact, is highly sophisticated – so a given situation requires a deep understanding to focus and manipulate them toward specific goals.  Out-of-the-box tools help with this, but they aren’t the last word.

Ultimately, though, the point is that this isn’t something we can delegate to someone else, or assume has been covered by the larger organization.

It is fully in our hands.

Here’s to creating great software (together)!

Kyle

The Superfluous Story Point

The exercise of putting “point” values on stories is a hallowed one in Scrum circles.  And rightly so – it is a powerful exercise.  Because of its marked simplicity and the underlying wisdom it embodies – it yields three important fruits, while avoiding some common but deadly software-delivery traps.

Story Pointing is first and foremost a discussion around the details of creating a particular piece of software.  The Story Point (and I’m assuming some familiarity with this exercise here) is a metric of relative complexity.  A team giving itself the mandate to arrive at this metric will instantly create deep conversation around details.

This brings impressions about the impending implementation to the forefront of peoples’ minds, turns intuition into explicit discussion, and generally drives powerful group learning.

Secondly, it pushes us toward understanding what amount of scope (i.e. what size of story) is meaningful to entertain in this kind of discussion.  So, for example, a story-point value of over 60 (or whatever the number is for a specific team) may mean that the story needs to be broken down into smaller parts in order to have meaningful discussions around the details of the implementation.

And lastly, the number of points in a sprint can begin to give a rough prediction of future throughput.  This allows a certain degree of anticipation and planning to start happening for stakeholders.

It does all of this while avoiding setting unrealistic expectations (which happens a lot when estimating with time values), and while not assuming or mandating the specific individuals working on the story.

Story Pointing is awesome.  But what I really want to do with this post is to save you a little time and effort.  And I want to do this by suggesting something ostensibly radical, but that I believe if you look a little deeper is only the next logical progression.  I’d like to suggest that you…

Do the Story Pointing Exercise but get rid of the points.

Huh?  Have you finally lost it, Kyle?

No – well I don’t think so – but follow me on this for a sec…

The usual series of point values available goes something like: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100, and Infinity.  Not quite Fibonacci – but it captures the idea that the bigger something is, the less we can think specifically about small steps in complexity.  Great – so far so good.  If something is “Infinity”, we need more information, and it needs to be broken down to make sense of it.  It’s easy to see how the exercise works: we assign points to stories.  And it’s easy to see how the advantages listed above follow.

Now what if we took the 100 and Infinity cards and threw them out, and just accepted as a new rule that if something is more complex than a 40, we have to break it down smaller before we can make sense of it.  Does that meaningfully alter the advantages we noted above?  No – discussion will still be driven, velocity still predicted, and all without triggering any of the pitfalls.

Practically speaking, the last few teams I’ve worked on went further.  We never really used anything beyond 13.  And even 13 is typically looked upon skeptically, in terms of being able to analyze the story in a meaningful way.  So what if we throw out 20 and 13 as well?  Anything over an 8 needs to be broken down smaller.  Have we lost out on any of the advantages yet?

Before we go any further, I’d like to highlight that the act of breaking a story down is as potent at driving conversation as putting complexity numbers on stories.  If you need to understand the details as a group to put a complexity estimate on a story, you need that understanding even more to break a story down smaller.

So – if we would otherwise have had any stories larger than an 8, we will have broken them down, and thus driven the conversation around them to a greater degree than if we’d put those higher point values on them.  Not only have we lost nothing by reducing our potential point values to 0, 1, 2, 3, 5, 8…we’re actually getting higher-quality discussion because of the breakdown of the larger things.

And if we remove 8 – have we lost anything?  Nope – again we gain.

5? Same.

3? Same.

2? Same.

Now we’re down to 0 and 1.  0 is trivial – we know when something doesn’t involve work, and there’s no reason to talk about that. Which leaves us with 1….if something isn’t a 1 we break it down further until it is.

Our pointing exercise is now: what point value is it?  Oh, it’s more than a 1 – let’s break it down again.

Though that wording is confusing – I’d suggest we make a slight semantic transformation and simply ask, “Is this a small, nearly trivial story, or is it not?”  If it’s not, we break it down further.

It follows – but to make it explicit – that velocity planning is now simply counting stories, because they only have one possible value.
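
To make the bookkeeping explicit, here’s a tiny sketch (the sprint counts are invented): once every story carries the same single value, velocity is literally a story count.

```scala
// With one-value-only stories, a sprint's velocity is just how many
// stories were completed, and a forecast is just an average of counts.
case class Sprint(completedStories: Int)

val history  = List(Sprint(11), Sprint(9), Sprint(12))
val forecast = history.map(_.completedStories).sum.toDouble / history.size  // ≈ 10.7
```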

The time-saver with this is the up-front savings of not having to teach people the story-pointing exercise (if it’s a shop that hasn’t done Scrum before), and the ongoing savings of simply giving a thumbs-up/thumbs-down on every story when you groom, as opposed to trying to arrive at a point value and also break the story down.

And thus we have story pointing…without the points.  And with all the same advantages.  And streamlined for the new millennium.  Story-Pointing++, if you will.  All the same great taste, half the calories.  Ok, I’ll stop.

Here’s to writing great software!

Kyle


The Great Convergence: Part I

In my journey as a software craftsman, I’ve learned a few things.  One of them is that software is an art – and an aspect of that art is the interesting tension of serving two distinct audiences, whose interests sometimes, but not always, align.  I’m talking, of course, about the audience that puts your software to use, and the audience of builders who will come behind you and make your software do new things.

In the normal world of software – the second audience is the one most neglected.  Serving this audience, though, is the measure of true craft.  It shows that an individual has developed the ability to hear the quiet impulse that has driven the greats of the past to their heights of achievement – the lust to build something great, just because we can.  And it shows an ability to think beyond the instant gratification of a pat on the back from a boss, or relieving the pressure of a driven project manager.

The other thing that I’ve learned is that, as with any art, finding the underlying principles is generally a matter of trying a thing, observing the aesthetic quality of the result, and then judging based on that if you should use that thing again in the future.  The thing you try may come from a flash of your insight – though – as I like to say – good artists borrow, great artists steal (which, incidentally, I’m pretty sure I stole from someone).

Using that approach, I’ve applied the SOLID principles to my art and I’ve discovered an intensely aesthetically pleasing result – in terms of just looking and reading the resulting code, and the ability to adjust it and modify it as situations and needs change.  These results both directly apply to the two audiences mentioned above.

Recently, a wise craftsman brought an interesting aspect of one of the SOLID principles to my attention.  He pointed out that, with regard to the Open-Closed Principle, if a class exposes a public member variable, it is necessarily not Closed in the OCP sense.  This is because the purpose of a member variable is to be used by methods to keep state between method calls.  That is to say, the purpose of a member variable is to alter the behavior of a method – so the behavior of the method is no longer pre-determined; it can be changed at an arbitrary point in time.

Now technically, if the class – and more specifically the particular member variable – is simply acting as data, and there is no behavior dependent on the member variable, then changing it doesn’t alter behavior.  But, practically speaking, the expectation of member variables is that they’re used by methods in the class.

I had personally always thought of “closed for modification” in a strictly compile-time sense.  That is, “Closed” referred specifically to source code.  But as I thought about my friend’s assertion more, a question occurred to me: what is SOLID good for?  Returning to what I arrived at in an experimental fashion – it is good for making source code aesthetically pleasing and easy to change.  And then a second question occurred to me: how does OCP contribute to that?  It contributes by allowing the reader to ignore the internals of existing code, which brings focus to the overall structure and to the overarching message it is communicating.  This is artistically more compelling, and it makes the overall code-base easier to understand.

So I would suggest that changes at run-time as well as compile-time are important to eliminate in this respect.  And as such – OCP does in fact include run-time closure in “closed for modification”.

Having this settled in our minds, another interesting question arises: does “encapsulating” the change of the member variable – by making it private and only modifying it through a method call – make the class “closed”?  There are two differences between setting a member variable directly and encapsulating it.  The first is that you don’t actually use an assignment operator.  But this does nothing to eliminate the fact that you’re changing the variable.  The second is that you might limit the values the state may take on, and thus have a better idea about the nature of the behavior.  While this may be true, the fact that the state can change at an arbitrary time means that the internals of the class can no longer be ignored – since a given method may have more than one potential behavior.  This means we clearly don’t have a “closed” class.
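
Here’s a small Scala sketch of that argument (a hypothetical Greeter, not anyone’s production code): hiding the assignment behind a setter doesn’t close the class, because a method’s behavior can still change at an arbitrary time – pinning the state at construction does.

```scala
// Encapsulated, but not closed: greet's behavior can change at any time.
class Greeter {
  private var prefix: String = "Hello"
  def setPrefix(p: String): Unit = prefix = p          // state still mutates
  def greet(name: String): String = s"$prefix, $name"  // behavior not fixed
}

// Closed: behavior is fixed at construction; variation comes from
// composing in a different prefix (extension), not from modification.
class ClosedGreeter(prefix: String) {
  def greet(name: String): String = s"$prefix, $name"
}
```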

To take this just one step further – another thing I’ve discovered is that the more SOLID I make my code, the more my OO code looks like FP code.  Because of this, I’ve said for some time that the two paradigms are converging.  That has been based primarily on the experimental approach I’ve talked about here.  But if we look at this situation with OCP, what we’ve basically shown is that a class isn’t SOLID if it maintains state (again, barring strictly-data types from this discussion).  A class with just behavior is very close to being just a namespace with a set of functions in it.
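
For instance – a hedged sketch, domain invented – once the state is gone, a “class” is barely distinguishable from a namespace of pure functions:

```scala
// No member variables left to mutate -- just behavior, grouped under a name.
object Taxes {
  def withheld(gross: BigDecimal, rate: BigDecimal): BigDecimal = gross * rate
  def net(gross: BigDecimal, rate: BigDecimal): BigDecimal      = gross - withheld(gross, rate)
}
```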

All this being said, I believe even more strongly that the paradigms are converging.  Furthermore, I’m fairly convinced that there are underlying principles that dictate this.  Both paradigms seek to make code “easy to reason about” (to use the FP mantra), though they come at it from different angles.  But in the end, they’re shooting to engage the same mechanism – our human instinct to reason – in the most efficient way possible.  After all, what’s more aesthetically pleasing than that which fully engages the instincts it targets in us?

Do You Even ScrumMaster, Bro?

Language is rough sometimes.  It messes things up, and changes the way we perceive things in substantive ways.

This is certainly true with the title “ScrumMaster”.  Just by virtue of being a title, it implies that it is a role – a full-time job, even.

What would happen if we assigned a title of “UnitTester”?  Imagine the implications – we’d start isolating individuals, having them do nothing but write unit tests; they’d start to defend their turf and prevent others from doing it.  Managers would write job postings.  Recruiters would be all like “Ya – lookin’ for a Senior UnitTester to fill a really transformative role” ….. ok, well something like that.

The mechanisms that scrum uses to balance all the natural, competing, and complementary forces that arise during the course of software delivery are brilliant.  But the mechanisms need to be facilitated – groups of people aren’t good at maintaining momentum without someone focused on that.  Because of Conway’s law – and we can get into the mechanics of this in another post – methodology and the structure of the software are very closely related.  The individuals best equipped to facilitate these mechanisms are the members of the team delivering the software.  Further, having a finger on the pulse of the methodology gives every developer a sense of when it starts to head in a direction that’s not aligned with the collective vision for the architecture (again, tightly connected via Conway).

So the language that we’ve historically used to designate the person facilitating scrum mechanisms is the very thing that’s driven facilitation in a very inappropriate direction.

For action items, I’ve got two things for you:

#1 – everyone on the team should regularly facilitate.  It’s not hard.  Just do it.

#2 – to use language to our advantage here – we can refer to the activity, “facilitating”, rather than some imaginary role “ScrumMaster”.  It will create the correct perception that this is just an activity that everyone does.  Just like unit testing.

Partial Mocking and Scala

I don’t know how to get this across with the level of excitement it brought to me as I discovered it.  But I’ll try.

I love TDD – the clarity, focus, and downright beauty it brings to software creation is absolutely one of my favorite parts of doing it.  I love the learning that it drives, and I love the foundation it lays for a masterpiece of simplicity.  It’s unparalleled as a programming technique – I only wish I had caught on sooner.

I love Scala.  I can’t seem to find the link, but there was a good list going around about how to shoot yourself in the foot in various languages – in Scala, you stare at your foot for two days and then shoot it with a single line of code.  The language is amazing in its ability to let you get your meaning across in a radically concise, but type-safe, way.  I often find myself expressing a thorough bit of business logic in one or two lines – things that would have taken 20-30 lines in a typical C-derivative language.  It’s a fantastic language.
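
As a hedged illustration of the kind of one-liner I mean (the Order type and the 1000 cutoff are invented; groupMapReduce exists as of Scala 2.13): total each customer’s orders and keep only the big spenders, in a single expression.

```scala
case class Order(customer: String, amount: BigDecimal)

// Group by customer, sum the amounts, keep totals over 1000 -- one expression.
def bigSpenders(orders: List[Order]): Map[String, BigDecimal] =
  orders.groupMapReduce(_.customer)(_.amount)(_ + _).filter { case (_, total) => total > 1000 }
```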

Writing Scala, I’ve gotten into a situation that finally, powerfully crystallized in an experience this morning.  I spent probably an hour struggling to get to the most understandable, most flexible solution.

The situation is this – I have a class that’s definitely a reasonable size – 30-50 lines or so.  In this case, most of the methods were one-liners.  And they were one-liners that built on each other.  The class had one Responsibility, one “axis of change”.  I liked it as it was.

One problem that arose was that one of the methods was wrapping some “legacy code” (read: untestable – and worse, unmockable).  In my Java days, this wouldn’t even have arisen as a problem, because the method using the legacy code would probably have warranted its own class, and thus I could have easily just mocked that class.  As it was, I considered it.  But as I said, the class was very expressive, and said as much as it should have without saying any more.  To cut a one-line method and make it a one-line class would have bordered on the ridiculous – it would have been far too fine-grained at any rate.

So what’s a code-monkey to do?  Well, I tripped across this idea of a partial mock – which I would have derided as pointless in my Java days; and in fact, the prevailing wisdom on the interwebs was that partial mocking is bad.  I don’t want to do bad.  By the way, if you haven’t googled it already: partial mocking is simply taking a class, mocking out some of its methods, but letting the others keep their original behavior (including calling the now-mocked methods on the same class).

Anyway – the more I stared at the problem and balanced the two forces at play, the more I realized how right the solution really is.  In my experience, in Scala, the scenario I just laid out is common, and the only real way to solve for it is with partial mocking.
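
Here’s roughly what that looks like with Mockito’s spy() – a minimal sketch, with the ReportBuilder class and its legacy call invented for illustration.  One method gets stubbed; the rest of the class runs for real, including its internal call to the stubbed method.

```scala
import org.mockito.Mockito.{doReturn, spy}

// Stand-in for the untestable, unmockable legacy code.
object LegacyLedger { def fetchTotal(): Int = sys.error("talks to a real legacy system") }

class ReportBuilder {
  def legacyTotal: Int = LegacyLedger.fetchTotal()  // the one method to stub
  def summary: String  = s"total=$legacyTotal"      // should run for real
}

val builder = spy(new ReportBuilder)

// doReturn(...).when(...) stubs without invoking the real legacy method.
// (The extra Nil sidesteps Mockito's varargs overload ambiguity in Scala.)
doReturn(42, Nil: _*).when(builder).legacyTotal

assert(builder.summary == "total=42")  // real summary, stubbed legacyTotal
```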

(Big thanks to Mockito for providing this capability – so awesome!)

Why Patterns Suck


I was at a jazz jam session a few years back. I didn’t know it then – but I was learning a valuable lesson about design patterns.

Music, in its similarity to software, has a class of design patterns called scales (think Sound of Music – “doe re mi fa so…”).  Scales help musicians have an intuitive understanding of the harmonic relationships between different pitches.  These relationships have mathematical connections that lead to a lot of simplification, and they let lessons learned in one context carry over to another.  There is a lot of value in deeply thinking about these relationships and getting them “under your fingers” – getting an intuitive feeling for them by playing them.

The jazz solo is an interesting thing – it’s a time for a musician to attempt to convey feeling to the listeners while following as few rules as possible.  Though there are a lot of underlying laws to creating certain feels, most musicians, in order to be able to express feeling in real time, work hard to have an intuitive grasp of those laws.  Thinking through the specific details of the performance while performing would be impractical, and it would destroy the original inspiration.  Hence, musicians have located patterns (such as scales) that allow them to work on that grasp when not performing.

After stepping down from my solo (which was undoubtedly life-changing for the audience) … another soloist took the stage.  He played scales.  For the whole solo.

A fellow listener leaned over and whispered in my ear about the ineffectiveness of the approach….in more colorful language.

Scales, like design patterns in any domain, are for developing intuitive understanding of the space.  They are NOT to be included for their own sake, thoughtlessly, in the actual creation.

I’ve seen this a couple of times, at grand scale, in software.  In the early 2000s, I can’t remember how many singletons I saw cropping up all over the place (yeah, I may have been responsible for a few of those)…many, many times very unnecessarily.

These days there are a number of patterns that get picked up and used wholesale (with little thought) – MVC, Monad, Lambda, Onion, etc.  This is not how great software is written.  Like music, the domain has to be well understood, and then the thing created from that understanding.  When we pick up design patterns, whether they’re scales or singletons, and pass them off as creation instead of using them in private to gain understanding, we are using them in exactly the most wrong (and harmful) way.

It will make our software worse – decreasing our understanding, and increasing the complexity of our software by creating code that doesn’t match the problem.


Oxygen


“I would sooner destroy a stained glass window than an artist like yourself.  However, since I can’t have you follow me either…” – The Dread Pirate Roberts (shh – actually it’s Westley)

Westley proceeds to bonk Inigo over the head (saber-whip him?) rather than kill him.  It’s fortunate for Inigo that Westley had such an appreciation for his art and for the calibre of craftsman he was fighting.  A lesser man might have gone ahead and destroyed the stained glass window.

In software it’s not so dramatic (at least in my experience) – we don’t find ourselves in life-or-death situations based on the level of our craft.  But an understanding and recognition of the level of our craft is an important and powerful thing.  It’s almost like oxygen to our sense of contentedness with the world, to our self-worth, to the level of fun we’re having crafting software.

This is important for two reasons.  Reason number one is that the craftspeople we work with share this need – and as we grow and progress in the craft, we are able to provide it for more and more people.  The catch embedded here, though, is that we are only able to provide this oxygen to people whose level of craft we understand and can truly appreciate – folks at or below our own level.  And we should take every opportunity to do this, because it’s good to do for our fellow humans, and because it increases the effectiveness of those around us by untold amounts.

The second reason is that many times we will find ourselves going without oxygen.  We need to recognize this, because if we are not careful it can have massive negative effects on every part of our being – even our physical health.

What can we do about this?

First – be aware that it is a thing. And be ready to remedy it when it happens. Second – know what some of the remedies are.

They include…

1) Holding your breath – we can go without oxygen for a time without permanent effects.  Know your limits, but be prepared to hold your breath.

2) Surrounding yourself with craftspeople at or ahead of your level.  They are the only ones who will recognize your craft – and thus the only ones who can provide the much-needed oxygen.  This is a hard one, though – it may mean leaving comfort for an ultimately better situation in a number of different ways: choosing a different team, engaging people you don’t have a natural affinity for, or leaving an organization.

The Case for Scala

Why take sides if you don’t have to?  It’s uncomfortable; people get upset with you for not seeing things their way; you find yourself spending a lot of time thinking on and defending your position.  Thinking is hard, am I right?!

In technology, things change so much – new tools rise and fall; new techniques rapidly give way to newer techniques.  And large swaths of tech are based on the particular preferences or biases of their creators and/or the communities that use them.  Many times choosing one tool over another is only a matter of taste.  Don’t get me wrong – informed taste is the underpinning of great software.  A set of tools, though, all sharing the same basic properties but offering different ways of going about them – sharing substance, differing in style – makes the idea of picking tech less like “picking the right tool for the job” and more like “do you want a red handle on your hammer, or a pink one with sparkles?”

It is INSANE to expend significant energy on the handle of your hammer.  Pick what you like and use it – don’t try to convince anyone to come along, and ignore anyone who says red is better than pink.

There are, though, tools that are demonstrably better.

Scala is one of these.

Being a better tool depends on what kind of job you are trying to accomplish.

If you are a software craftsman – focused on creating software that is elegant, just because you can…..if you know that clean, clear code leads to better software in the hands of your users…..if you respect the next developer to look at the codebase enough to leave them a work of art rather than a work of excrement…..then Scala is one of the most premium tools available today.

Why?

There are several reasons – they are fairly subtle – but they add up to granting a deeper ability to craft great software to the person interested in doing such a thing.

#1 – It has a type-system.  Type-systems are like free unit tests.  Unit tests are good.

#2 – It leverages both the object metaphor and the functional/mathematical metaphor – giving you a great degree of expression.  It subtly encourages good stateless, immutable, referentially transparent structuring of your code, but gives you the flexibility to not be that way if you judge the situation to warrant it.

#3 – It (while having great static typing) offers an extremely malleable language, with plenty of syntactic sugar to help you get your point across in the most concise way possible.
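
A small sketch tying those three points together (the Shape domain is invented): an immutable data type whose handling the compiler checks for exhaustiveness – point #1’s “free unit test” – written with the concision of point #3.

```scala
sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Rect(width: Double, height: Double) extends Shape

// One immutable expression; forgetting a case is a compile-time warning.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}
```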

All of these come down to the fact that the language is more expressive than anything else available.  And all of this while running on the JVM – providing a battle-tested runtime environment and a time-tested community of brilliant developers that consistently create the most advanced ecosystem around.


Choose Scala.  You will write better software.


Increase your Code Covfefe with these three easy steps…

Test First

Plenty has been said on this particular topic.  But I constantly have conversations with folks who aren’t quite convinced of the value this brings, especially with regard to code covfefe.  There are only two things I can do to convince you on this – and one of them is entirely in your hands.

The first is to make the argument.  Of all the solutions to a particular problem, the crisply-defined, highly-modular ones that make testing easy are only a small subset of the larger solution space.  If you feel your way around intuitively to the solution, as many of us do, the statistical likelihood that you will trip onto this subset is small.  Most of the time, you will find yourself doing significant refactoring in order to accommodate tests after the fact.  This is lame, and feels like an utter waste of time – which makes it less likely that we will continue to do it.

The second thing that I can do is just to urge you to try it – earnestly try it.  You will love it.
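
If it helps to picture what “earnestly trying it” looks like, here’s a minimal test-first sketch using MUnit (the slugify example is invented): the test gets written first, then the simplest code that makes it pass.

```scala
import munit.FunSuite

// Written second: the simplest code that makes the test below pass.
object Slug {
  def slugify(title: String): String =
    title.trim.toLowerCase.replaceAll("\\s+", "-")
}

// Written first: the test that defines what "done" means.
class SlugSpec extends FunSuite {
  test("slugify lowercases and hyphenates") {
    assertEquals(Slug.slugify("Hello World"), "hello-world")
  }
}
```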

To increase your code covfefe – drive your development with testing.

Write Modular Code

We used to think that modularity was all about reuse – but two weeks into a software engineering career you realize how ridiculous the promise of reuse really is (at least in the unplanned way suggested by our CompSci professors).  But modularity is still super important.  Modularity means flexibility, and flexibility means three things: speed, expressiveness, and testability.  You want your systems to grow and come to life faster – write modularly.  You want the person who follows you in a code base to love you instead of hate you – write modularly.  You want high levels of code covfefe – write modularly.

Why does modular code lead to more covfefe?  Because it’s easier – many times WAY easier – to test.  Every level of nested anything, whether nested conditionals or loops, multiplies the complexity of the test code that’s covering (covfefing? not quite sure about the conjugation) it.  That is, the complexity of a test is almost exponentially related to the complexity of the production code it’s testing (this is totally not any kind of scientific analysis – it’s based only on years of doing this myself).  When a module (class, method, function, whatever) violates good modularity principles, the tests get hard to write.  And thus, again, hard-to-write tests mean less covfefe.
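
A hedged before-and-after sketch of that claim (the validation rules are invented): the nested version forces tests to cover combinations of branches, while the modular version lets each rule be proven on its own.

```scala
// Nested: every test must thread through both conditions at once.
def validateNested(email: String, age: Int): Boolean =
  if (email.contains("@")) {
    if (age >= 18) true else false
  } else false

// Modular: each rule is one tiny, independently testable function.
def validEmail(email: String): Boolean = email.contains("@")
def isAdult(age: Int): Boolean = age >= 18
def validate(email: String, age: Int): Boolean = validEmail(email) && isAdult(age)
```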

As a side note – this can play to your advantage – because, aside from understanding good design principles (like SOLID), a good way to keep your code modular is to “listen to the tests”.  If the tests get difficult to write, your code is probably getting a little monolithic.

To increase your code covfefe – write modular code.

Make Human Space

It behooves all of us to understand the iron triangle well: features, time, and quality.  You can lock in any two of these at a time.  One of the most oppressive, evil, and seemingly unintentional things that happens in corporate software development settings is extreme pressure to lock in features and time.  The iron triangle is a law of the universe: putting intense pressure on two of the arms necessarily leads to compromising the third.  Which plays out in terms of overwork, less test covfefe, sloppier coding, etc.

Fortunately, we have some nascent tools in our tool chest to deal with this situation (viz. agile, Scrum, etc.).  The wise engineer will apply these techniques to make the space to apply their craft in a professional manner – with high quality, and much test covfefe.  The challenge, though, is that removing some of the impediments to really embedding agile approaches in an organization cannot be done entirely “bottom up”.

To increase your code covfefe – make the space for your team to focus on quality, using agile, scrum and any other methodological tool you can find.

–Kyle