Sometimes called “1, 2, refactor”, the “rule of three” is a code refactoring rule of thumb for deciding when a replicated piece of code should be replaced by a new procedure. It states that code can be copied once, but that when the same code is used a third time, it should be extracted into a new procedure.
“Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior” ― Martin Fowler
This is a rule that has served me well in recent years when I’ve come up against decisions about when something should be refactored.
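As a hypothetical sketch of the rule in action (the function and call sites are invented for illustration), the third occurrence of a copied snippet is the trigger to extract it:

```javascript
// Before: this date-padding logic had been copy-pasted in two places.
// The third time it was needed, the rule of three says: extract it.
function formatAsIsoDate(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`;
}

// All three call sites now reuse the one implementation.
const invoiceDate = formatAsIsoDate(new Date(2016, 10, 5)); // months are 0-based
const orderDate = formatAsIsoDate(new Date(2016, 0, 31));
```

The benefit isn’t just less typing: when the format inevitably changes, there is exactly one place to change it.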
Now and again I come across patterns that I see in codebases and pull requests, especially when the project or team is maturing. This covers one of them.
I’ve found that when developers have picked up more advanced patterns and become familiar with a codebase full of the compromises made during its organic growth, they begin to apply those same patterns with a broad stroke, sometimes at the risk of not fixing the underlying issue.
One such pattern is the try-catch. Once the power of a try-catch is learned, it starts being used everywhere. I’ve seen the whole contents of a method wrapped in a try-catch, catching anything it can. When all you have is a hammer, everything looks like a nail.
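To illustrate the difference (with invented function names), here is a hypothetical method-wide try-catch next to a narrower alternative that catches only the operation that can legitimately fail:

```javascript
// Anti-pattern (hypothetical): the whole method swallowed in one try-catch,
// hiding any failure, for any reason.
function loadConfigBad(json) {
  try {
    const config = JSON.parse(json);
    return { retries: config.retries, timeout: config.timeout };
  } catch (e) {
    return {}; // a typo'd property, a parse error — all silently the same
  }
}

// Narrower alternative: only the parse can throw here, and that one
// failure is handled deliberately with a meaningful error.
function loadConfig(json) {
  let config;
  try {
    config = JSON.parse(json);
  } catch (e) {
    throw new Error(`Config is not valid JSON: ${e.message}`);
  }
  return { retries: config.retries ?? 3, timeout: config.timeout ?? 1000 };
}
```

The second version makes the failure mode explicit rather than masking every bug in the method behind the same catch-all.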
There’s no getting away from it, quality is a whole team responsibility. If you’re aiming for Continuous Delivery, then you’ll recognise one of the core principles of Continuous Delivery is to “Build quality in”.
If you’ve heard of lean development, then you will no doubt have heard of the principle of “The Toyota Production System” of “building quality into” software.
It’s inevitable, then, that there will be a tension between filling that “QA” role and building a team of “T-shaped people” who treat quality as a first-class citizen.
Recently I’ve been asked why switch-case statements should be avoided. It turns out to be a pretty common question, and although I’m pretty happy with the reasoning that switch statements are less readable and maintainable, I had forgotten the source of this wisdom, so I thought it would be worth revisiting.
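As a hypothetical sketch of one common refactoring (the regions and prices are invented), a switch that grows with every new case can be replaced by a lookup table, turning new branching logic into a data change:

```javascript
// A switch statement that tends to grow a new case with every new region:
function shippingCostSwitch(region) {
  switch (region) {
    case "uk": return 3.5;
    case "eu": return 7.0;
    case "us": return 12.0;
    default: throw new Error(`Unknown region: ${region}`);
  }
}

// The same behaviour as a lookup table: adding a region means adding
// one line of data, not another branch.
const shippingCosts = { uk: 3.5, eu: 7.0, us: 12.0 };

function shippingCost(region) {
  if (!(region in shippingCosts)) {
    throw new Error(`Unknown region: ${region}`);
  }
  return shippingCosts[region];
}
```

In a larger codebase the same idea scales up to polymorphism: each case becomes a type with its own behaviour, and the dispatch disappears entirely.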
I first heard of the term “serverless” in about 2015, probably around the time that the “Serverless Framework” launched, October 2015.
I next heard about “serverless” about a year later, only this time it was used for the much broader topic of “Serverless Computing”. Around June/July 2016 there seemed to be a huge push from InfoQ on this topic, in particular at QCon London.
According to many, the concept of “serverless” really only became a reality in 2014 when Amazon Web Services (AWS) launched their functions-as-a-service (FaaS) offering, “Lambda”, allowing you to run Node.js code in the cloud on demand, without any real knowledge of, or care for, the servers it runs on.
Around 2016 there was lots of talk of FaaS, PaaS (platform as a service), and the benefits of serverless architecture, which was really encouraging and began to feel like it was ready for “prime time”.
In November 2016, I wrote an article entitled “Will the last person to leave turn the LAMP off?”, which was a play on words, a nod to serverless and an introduction to the concept of “stackless”.
Think about it: the “stack” is becoming less important. Who cares what hardware you’re running on, what platform you’re on, what services you’re using, or even what operating system, so long as it does what you need it to do?
Gone are the days when you needed to buy or rent part or all of a physical server in a data centre somewhere, perhaps near the London Docklands, where financially viable. The concept of “serverless” only helps to cement this idea.
So when it came to rewriting a legacy API, the idea of using AWS API Gateway and Lambda felt like it might be the right way to go.