I've used a variety of tools to produce sequence diagrams over the years: many of them very poor, some of them satisfactory, none of them what you'd call "good", usually because they force you to do battle with a plastic mouse for several hours to make anything other than the simplest of interactions come to life.
However, being forced to ditch my old favourites (Rational Rose and Rational Software Architect) in favour of an open-source alternative (Papyrus) has not been as fruitful as I'd have liked. Papyrus does a good job overall but is just as bad as all the commercial variants at drawing sequence diagrams, reducing you to micro mouse movements to get just the right connection in just the right position. I don't get it: sequence diagrams are more like code than any other diagram, and it's easier to write them than it is to draw them. So with this in mind, and having tried websequencediagrams, which is cool but doesn't seem to handle life-lines properly (and I'm not paying for the commercial version to find out), I knocked up my own tool in a couple of lonely evenings just to see how bad it is...
Turns out a reasonable approximation isn't too hard, and since I can make it do what I want with (relative) ease in the browser, so be it. It doesn't do a lot, but then I don't want it to do much. Anyway, try it out if you get some time at The Mighty Stellarmap.com sequence diagramming tool thingy... hummm...
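For the curious, this is roughly what "writing" a sequence diagram looks like; a minimal sketch in a websequencediagrams-style text notation (my own tool's syntax differs in the details, and the participants here are made up for illustration):

```
title Place Order
Customer->Shop: submit order
Shop->Warehouse: reserve stock
Warehouse-->Shop: stock reserved
Shop-->Customer: order confirmation
```

Solid arrows for calls, dashed arrows for returns, and the tool works out the life-lines and layout for you. No mouse required.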
2014/09/28
2014/09/25
Shellshock
I'm sure this is going down well across the globe right now...
Details over at NIST.
As I understand it, environment variables are propagated to child processes, and where a variable's value starts with the particular string "() {", bash parses it as a function definition and will execute any commands trailing the function body. Nice. It will mainly affect CGI-based servers, which are many, though typically older websites these days... I suspect 500 million sites affected is overdoing it a little, but it doesn't overplay the seriousness of this bug.
... off to find whatever servers I have vulnerable to this little bugger...
Update: This guy is scanning the net for the vulnerability...
http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html#.VCQSaC5dVnI
Update: And Red Hat have a very good article on this one, including a nice command to test whether your installation is affected, on their security blog.
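For reference, the widely circulated test is a one-liner along these lines: the exported variable is parsed as a function definition and the trailing command runs as a side effect. On a vulnerable bash it prints "vulnerable" before running the harmless command:

```
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

On a patched bash you should just see "this is a test" (possibly with a warning about the ignored function definition attempt).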
2014/09/17
Longevity
Longevity - an availability requirement I rarely see... i.e. how long will the system need to run before you expect it to be replaced or retired? For short-term projects it's an obvious one, but for stuff we expect to last it seems we often default to forever...
Eternity is a very long time...
2014/09/15
Documents v Wikis
I used to spend a significant proportion of my time working with documents. Nasty 100-page beasties where 80% of the content felt like generic copy-text designed principally to obscure the 20% of new content you were really interested in. Consequently I developed a severe dislike of any document more than a few pages long.
The agile manifesto came along and suggested we focus on “working software over comprehensive documentation”, which some have unfortunately taken to mean “no documentation”. Let’s just say there’s a significant grey area between the extremes of “comprehensive” and “none at all”.
Personally, I’m aware that I fall more into the “comprehensive” camp than the other, though I put this down to the fact that, for me, putting things down on paper is a great way of helping me think things through. For me, documentation is a design tool.
On the other hand, wikis…! I used to see wikis as a saviour from the hours/days/weeks spent reviewing documents and trying to keep content consistent across multiple work-products. Finally, a tool where I can update in one place, link to other content and focus on the detail without the endless repetition. Something in support of the agile manifesto, which endeavours to provide enough documentation without going overboard on it. Unfortunately, recent experience has exposed a few issues with this.
The table below compares the two.
| | Documents | Wikis |
| --- | --- | --- |
| For | Good formatting control. Easy to provide a historic record. Provides a point of contract sign-off. Easy to work offline. Generally accessible. | Highly used critical content is maintained. Good sharing within the team. Hyperlinking – easy to reference other content. Promotes short/succinct content. Good historic record (one is kept automatically). |
| Against | You have a naïve hope that someone will read it. Promotes bloat and duplication. Promotes the MS monopoly. Poorly maintained. Rapidly goes stale. | Promotes the illusion of quality. Poor formatting control. Requires online connectivity. Low-usage detail becomes rapidly out of date. Poor sharing outside of the team. Hyperlinking – nothing is ever in one place. Poor historic record (too many fine-grained changes make finding a version difficult). |
Hyperlinks, like the connections in relational databases, are really cool and to my mind often more important than the content itself. That wikis make such a poor job of maintaining these links is, in my view, a significant flaw in their nature. The web benefits from such loose coupling through competition between content providers (i.e. websites), but wikis – as often maintained by internal departments – don’t have the interested user base or competitive stimulus to make this model work consistently well. Documents just side-step the issue by duplicating content.
So wikis should work well to maintain operational documentation where the user base has a vested interest in ensuring the content is maintained. Just so they don’t get called out at 3am on Sunday morning because no-one knows the location of a database or what script to run to restart the system.
Documents on the other hand should work better as design products and contractual documents which all parties need to agree on at a fixed point and which aren’t going to be turned to very often. Ten commandments stuff.
The problem is the bit connecting the two, and the passage of time and decay turning truth into lies. The matrix connecting requirements and solution, architecture and operations, theory and practice, design and build, dogma and pragmatism. Where the “why?” is lost and forgotten, myth becomes currency, and the rain dance needed each time the database fails over originates. Not so much the whole truth and nothing but the truth as truth in parts, assumptions throughout and lies overall.
The solution may be to spend more effort maintaining the repositories of knowledge – whether it’s documents, wikis or stone tablets sent from above. It’s just a shame that effort costs so much and is seen as being worth so little by most whilst providing the dogmatic few with the zeal to pursue a pointless agenda.
2014/09/01
Heavy Handed?
Is it really heavy-handed to give users a slightly second-rate experience because they use an out-of-date browser?
Me thinks not really... effort spent should be proportional to the size of the user base.
Just a pity they didn't go further and send any user of IE off to the 1999 edition and throttle their download to the 28kbps they deserve... 80% of the effort for 20% of the users.