Category Archives: Application Development

Windows file LastAccessTime (faceplant)

I am probably the only one left who didn’t know, but I just found out that Microsoft, in its infinite wisdom, has disabled the updating of LastAccessTime in NTFS ever since the Vista days.  It affects both desktop and server OS versions. This means the access time you see is effectively just the LastModifiedTime. Which is pretty useless if you want to find files which have not been used recently, or conversely, want to find files that have been accessed at a time you would not have expected (e.g. for forensics).

To check your systems, you can use (at a command prompt run as Administrator)

fsutil behavior query disablelastaccess

To re-enable last-access time updates, use

fsutil behavior set disablelastaccess 0
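Once last-access updates are enabled, finding files that haven’t been touched recently is straightforward. Here is a minimal sketch in Python (the 90-day cutoff is just an illustrative choice); note that until `disablelastaccess` is set to 0, the access times it reads on Windows will be stale.

```python
import os
import time
from pathlib import Path

CUTOFF_DAYS = 90  # illustrative threshold, not a recommendation

def stale_files(root, cutoff_days=CUTOFF_DAYS):
    """Yield files under root whose last-access time is older than the cutoff.

    On Windows, st_atime is only meaningful once last-access updates
    are enabled (fsutil behavior set disablelastaccess 0).
    """
    cutoff = time.time() - cutoff_days * 86400
    for p in Path(root).rglob("*"):
        if p.is_file() and p.stat().st_atime < cutoff:
            yield p

if __name__ == "__main__":
    for f in stale_files("."):
        print(f)
```

The same loop, with the comparison reversed, would flag files accessed *more recently* than expected, which is the forensics case.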

But as I said, I am probably the last person to find this out. And when I found out, I did a full-on face plant…


Is it methodology or people that make the biggest difference?

I seem to be thinking and reading a lot about tools, methods and effectiveness this month.

Earlier, I got enmeshed in a discussion on LinkedIn about whether email is still an effective tool.   In one post to that discussion, I wrote “As a third-generation woodworker, I have a (probably unhealthily) large collection of tools. Some of them I rarely use, some much more often. Why do I keep them all? Because that way, I don’t have to use a hammer for everything – I can generally use the right tool for the job.”

Elsewhere, I read a piece by Greg Jorgensen which asked “Why don’t software development methodologies work?”, in which he concluded that software development methodologies ultimately fail to deliver predictable, repeatable successful results.  Like me, he has been through the waterfall/BDUF (big design up front), structured programming, top-down, bottom-up, modular design, components, agile, Scrum, extreme, TDD, OOP, rapid prototyping, RAD etc. toolboxes. And like me, he has seen them sometimes succeed and sometimes fail to deliver projects on time and on budget.  Amongst other reasons, he ascribes this to blind adherence to the method:

“Once a programming team has adopted a methodology it’s almost inevitable that a few members of the team, or maybe just one bully, will demand strict adherence and turn it into a religion. The resulting passive-aggression kills productivity faster than any methodology or technology decision.”

He then concludes that ultimately, it is the people using the method (or no method) that make the bigger difference – how they work together, whether they share a vision of where the project is going, how they communicate, how skilled they are.

Nik Silver, however, argues that although people are important, method is by far the greater determinant of success.   He says “Change the methodology and you change the culture”, and describes “the same people working together much more effectively than ever before to deliver impressive results”.

I do believe in using the right tool for the job, and that it is easier to do a job right if you are skilled at using the tools you have.

At the risk of sounding like a consultant, I am given to thinking that Greg and Nik are both right and both wrong.  Because it takes both – skilled, communicative, intelligent and committed people to use (ideally) the most appropriate tools to do the best possible job.

Having lived through, and sometimes led, method and organisational change processes – does anyone else remember TQM? – I wonder whether it is not the novelty of the “new” method that makes the difference. By making people think in greater detail about what they are doing and how they do it, is it  not more likely that they will also give more attention to actually doing all the things they should have done anyway?

The big thing most software development methodologies have in common is that they define methods, processes and frameworks for communication between the various parties – the customers, the stakeholders, analysts, designers, developers and project governance apparatus.

Which made me review my own experience and observe that:

  • A group of talented, determined developers who don’t communicate outside their circle will develop something, but that something may not be the right thing (i.e. what the customer wants), and if they don’t communicate with each other, it probably won’t work very well
  • A group of highly communicative developers without a driving force (vision? delivery plan? stakeholder involvement? methodology?) will spend a lot of time talking but will never finish developing anything
  • A group of communicative, talented, determined developers, who know what they are doing and why (because they know and share the vision), are more likely to achieve success regardless of the method employed than a group of less-talented and/or less communicative developers applying any chosen development method.

Sifting through the myriad development languages, tools, environments and  methods that I have experienced over the years, I can’t escape the conclusion that new methods and languages and tools are invented, popularised, discredited and discarded not necessarily because they don’t work, but all too often because they don’t work any better than what they were meant to replace.  And all too often because they are just the wrong hammer to crack the particular nut in front of us.

Is that the fault of the tool or of the user of the tool? Probably at least a little of both.

Re-engineering legacy systems

I know it might sound like the flaming obvious, but the overarching driver to re-engineering a legacy system, and most particularly one that is core to the business, is that the risk and cost of NOT doing it must be greater than the risk and cost of doing it.

Despite the power of modern development tools and methods, it will almost certainly take longer and cost more to develop the replacement system than it did to build the legacy one. There are many reasons for this – undocumented features, algorithms and exceptions, convoluted and arcane logic, architectural challenges (e.g. business logic embedded at every layer from the UI to the controller to the database or even the lack of any distinction between these layers), entrenched thinking amongst all the key stakeholders (including IT), and the engineer’s resolve to ‘do it right this time’.

Big Bang – re-engineer in peace

The big bang might be the purest and simplest approach from the engineer’s perspective, but it offers challenges in terms of keeping pace with necessary changes to the legacy system, managing stakeholder expectations (they will wait a long time before being able to see much, and even longer to use anything), and team motivation (nobody wants to work on the old mess; they want to make the new one). Those managing the budget will look at the cool new technology being built and demand that rather than waiting for the new system, the new features should be retrofitted into the old system ‘to gain immediate payback on the re-engineering investment’. This of course takes people and attention away from the work on the re-engineered system, which extends the timescale, adds pressure to enhance the old version, etc.

And when the new system is ready, parallel running will often be a massive challenge, involving staff levels, training, support and contingency costs (read ‘disaster recovery’) that are difficult to predict at the start of the project.

Hybrid – build out the old

A hybrid approach involves re-developing the existing application in modules, gradually replacing the old with the new. This inevitably means that there will be an integration effort, making old creaky components and data work smoothly with the shiny, slick and well-designed new bits and data. A hybrid approach carries the risk and cost of maintaining the old system, plus maintaining the new components being used to replace or wrap parts of the old one, whilst simultaneously developing the new system.  If new components are run alongside the old ones, maintaining a new, almost certainly richer and better-designed data model alongside the old one will add considerably to the effort, increasing the duration and/or resource requirements of the project.

It is also challenging to make it a fully Agile project when parts of the new system have become ‘set in stone’, or at least much harder to modify, by being put into production. And you will almost inevitably find yourself having to resolve issues in the hybrid system that would not have occurred in either the old or the new system operating separately.  It might be wisest to apply a hybrid Agile approach to a hybrid re-engineering project.  Creating user stories will be less about establishing what users want to do and more about what they want to change when moving to the new system. You might call them “iterations” but really they will be more like sprints focusing on one or more functional features.  There are many ways in which the Agile approach will be applicable – self-organising teams, story cards, burn-down and the like. But because there should be less uncertainty about the overall functional requirements, the project is likely to feel rather more waterfall than Agile – it will take some experimentation and thought to arrive at the right set of methods.

All projects have risks, but re-engineering projects have some unique risks that will need to be carefully assessed and managed. And as I said at the beginning, an important consideration is the cost of doing it v. not doing it at all.  Doing it will mean that a significant amount of effort/cost/time will go into doing something that, once finished, will put you exactly where you were when you started.  OK, maybe cleaner, shinier and with a more maintainable undercarriage, but not really doing anything that wasn’t happening before. That takes some serious stakeholder management – again the flaming obvious but too often overlooked.