Word of the Day: Plutoed

[Photo: Pointe de Penhir looking back to Pointe de Dinan and Plage de la Palud]

How often does that happen… I just finish writing a post and something else comes up to support it.

I just wrote a piece about pseudo-words, and up on the BBC pops a news article stating:

“Plutoed” has been chosen as word of the year for 2006 by the American Dialect Society, beating “climate canary” in a run-off vote.

If you have been “plutoed” you have been demoted or devalued, just as happened to the former planet Pluto when its status was downgraded.



Word of the Day: Real words and pseudo-words

[Photo: Dovedale]

Every now and then I create a post titled “Word of the Day”. Most of the time it’s because I’ve come across some new word (or pseudo-word) in something I’ve read.

Sometimes these are real words that I’ve never come across.

More often they are pseudo-words: they look like real words, but they are actually something that someone has invented.

Every now and then I come across a word that I am sure is a pseudo-word, only to find out it’s a real word; “burglarized” was my favourite example of this.

Dilbert summarised this phenomenon wonderfully this weekend:

[Dilbert comic]

Speaking as someone who is paid to think about how people collaborate, I think there are many reasons why we see the use of pseudo-words.

I’m sure for some people they are an attempt to assert their thinking into a situation or organisation. It’s a demonstration of your influence within an organisation if you can invent a word, spit it out, and have others using it. I once had a manager who invented a new phrase every week; he would use it for a week and see how long it took before it was said back to him. He would also see how obscure he could make it, and whether anyone had the balls to ask what the phrase meant.

I suspect for others the issue is actually laziness. Rather than trying to construct a proper sentence they try to create a word for it. By creating a short-hand the concept becomes easier to communicate. The most recent example of this would be the pseudo-word “de-portalize”. Everyone within the IT architecture community knows exactly what it means, but it’s not a real word. It’s short-hand for something that those who need to understand will understand.

Once upon a time… ah no, I won’t say that; I’ll leave it for the “5 things” that Stuart has tagged me for.

Is the Shared File Server Dead? (Part 2)

[Photo: Mum evicts the cat from the sofa]

In the dim and distant past of 2006 I wrote an article on the death of the shared file server, and Steve responded.

It seems I was ahead of my time and I’ve seen a few articles on the subject recently.

Yesterday the Microsoft SharePoint Team pitched in. They seem quite upbeat about the level of penetration they are going to be able to achieve but, as with all such posts, they are realistic about the places where file storage is going to carry on being used. Their list:

  • Product Distribution (Product packages like Office)
  • SMS distribution point (desktop patches and hot fixes)
  • NT Backups, Backup Servers and Desktop Backups (backups)
  • Database Storage (.mdb, .ldf, .ndf, .pst, .ost)
  • Large Audio/Video, Streaming Media and other large read-only archive media such as DVDs and CDs (.iso, .wmv, .ram, .vhd)
  • Developer Source Control 
  • Batch, Command Scripts, Executables (.exe, .vbs, .cmd, .bat)
  • Application Server… Client Application Storage, Linked Files and File Dependencies (.lnk, .lck)
  • Archives and Dumps (.arj, .rar, .zip, .dmp, .bak)

The challenge here is highlighted in their summary:

Collaborative file shares can be replaced with SharePoint deployments.  Product distribution and database storage will continue to persist as valid scenarios.  End users will need training to understand where to save their files.  With most file sharing scenarios for the most common file sizes SharePoint lists will be the Microsoft recommended way of sending files inside the corporation and with collaborative SharePoint site extranet deployments, it’s the way to share with partners.  Most non technical end users scenarios such as the most common HR, Sales, and Marketing teams can say goodbye to using file shares for file sharing.  Some groups and divisions like IT SMS/Product Distribution, Data Warehousing (SQL), Media, and Development groups won’t be saying good bye to file servers in Windows 2003 and in code name “Longhorn” with key scenarios leveraging cheap NTFS file storage.

Analyzing your current file servers by server or share or folder may allow you to group them by purpose.  Here are some examples of common classifications: Collaborative File Sharing, Historical Archive, Media Server, Dump/Desktop Backup, Source Control Servers/Databases, Personal Storage, Product Distribution, and Application Servers.

(Highlighting mine)
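That classification exercise is easy to get started on mechanically. Here’s a rough sketch of grouping a share by purpose using the extension buckets from the list above – the share path and the bucket names are illustrative assumptions, not anything from the SharePoint team:

```python
# Bucket a file share's contents by extension, using categories
# derived from the SharePoint team's list above. Paths and bucket
# names are illustrative only.
import os
from collections import Counter

BUCKETS = {
    "Database Storage":    {".mdb", ".ldf", ".ndf", ".pst", ".ost"},
    "Media/Archive":       {".iso", ".wmv", ".ram", ".vhd"},
    "Scripts/Executables": {".exe", ".vbs", ".cmd", ".bat"},
    "Linked Files":        {".lnk", ".lck"},
    "Archives and Dumps":  {".arj", ".rar", ".zip", ".dmp", ".bak"},
}

def classify(share_root: str) -> Counter:
    """Walk the share and count files per bucket."""
    counts = Counter()
    for dirpath, _, filenames in os.walk(share_root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            bucket = next((b for b, exts in BUCKETS.items() if ext in exts),
                          "Collaborative File Sharing (SharePoint candidate)")
            counts[bucket] += 1
    return counts

# Hypothetical UNC path - point it at a real share to try it.
for bucket, n in classify(r"\\fileserver\share").most_common():
    print(f"{n:8d}  {bucket}")
```

Anything that falls into the default bucket is the collaborative material the SharePoint team are talking about; everything else is the stuff they concede will stay on file servers.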

The challenge for most enterprises is this:

  • It’s incredibly difficult to change end-users’ working processes.
  • It’s incredibly expensive to get a good understanding of the data that already exists.

Since replacing shared file storage with SharePoint requires both of these things, it will happen very slowly.


Word of the Day: Ideation

[Photo: Grandad's had a long day]

Ideation:

“the process of forming ideas or images.”

You’ve probably used it loads of times, but it was a new one on me today. All I need to do now is work out how it would be said here in Lancashire.

Microsoft Software Assurance Tipping Point

[Photo: Acorns by Emily]

There’s no new news here; I’m just bringing together a number of pieces of information that I’m not sure many people have put together.

Microsoft released its Software Assurance licensing programme some time ago now. Many customers looked at it but struggled to see the benefit in it. Their perception was that it worked for people who were going to adopt every release of Microsoft software as soon as it came out, but that was about it. The thing is that many customers don’t upgrade quickly, nor do they take every release. Many enterprise customers skip versions of Office, taking every other one; many did the same with Windows, skipping Windows 2000 and moving straight from NT or 98 to Windows XP.

Microsoft is steadily changing the landscape on Software Assurance though. A number of recent announcements have increased the pressure on customers to take Software Assurance, or made it more valuable, depending on your point of view.

Desktop Optimisation Pack

Microsoft have made the Desktop Optimisation Pack available to Software Assurance customers. This pack includes four interesting components:

  • Microsoft SoftGrid – formerly Softricity SoftGrid
  • Microsoft Asset Inventory – formerly AssetMetrix
  • Microsoft Diagnostic and Recovery Toolset – formerly Winternals IT Admin Pack
  • Microsoft Advanced Group Policy Management – formerly Desktop Standard GPOVault

As you can see, this is a bundling of recent acquisitions – and only available to Software Assurance customers. These products will be available for a period as separate products with perpetual licenses, but eventually the plan is that they will only be available to Software Assurance customers (source: Gartner).

Yes, that’s right, if you want these capabilities you need to have Software Assurance.

These products are the type of products around which you build a whole process; you don’t just deploy them for some added value. In other words, they are the type of products which you get locked in to.

If you have spent a load of money packaging and deploying applications via SoftGrid you aren’t going to change to anything else easily.

If you have invested a lot in getting a GPO management process which relies upon the capabilities of AGPM then you aren’t going to replace it easily.

The Desktop Optimisation Pack now puts a cost on exit from Software Assurance.

Vista Enterprise

There will be one edition of Vista which will only be available to Software Assurance or Enterprise Agreement Customers – Vista Enterprise.

The primary benefit of Vista Enterprise is the availability of BitLocker without all of the cost of Vista Ultimate. If you are an enterprise customer you probably don’t want all of the Media Center capabilities that Ultimate brings anyway, nor do you want the heavier hardware footprint that comes with them, because that just pushes cost up. As an enterprise customer you probably do want desktop hard disk encryption.

The need for a licensing agreement to use Enterprise Edition puts another cost on the exit from the agreement. How many customers would want to deploy clients as Enterprise Edition, only to then downgrade to Business Edition? If you’ve made extensive use of BitLocker it’s going to be very expensive to change.

Conclusion

These two activities provide a benefit to Software Assurance customers, which will make it preferable to more customers, but they also add a cost of exit from the agreement which will make those customers more cautious.

(I’ve also learnt that writing a complicated post while blowing into a tissue every 2 minutes is hard work. So if you saw an unfinished version of this post – sorry. If this post doesn’t make much sense – sorry.)


Gadgets’ Impact on the Family

[Photo: Jimmy brings the dog food]

If you are addicted to your Blackberry, or any other gadget that you use outside the office, you should really read this:

BlackBerry Orphans – Wall Street Journal

The refusal of parents to follow a few simple rules is pushing some children to the brink. They are fearful that parents will be distracted by emails while driving, concerned about Mom and Dad’s shortening attention spans and exasperated by their parents’ obsession with their gadgets. Bob Ledbetter III, a third-grader in Rome, Ga., says he tries to tell his father to put the BlackBerry down, but can’t even get his attention. “Sometimes I think he’s deaf,” says the 9-year-old.

These things all have one really important button – “Off”.

I haven’t fallen into complete addiction, yet, but I know what these kids are talking about. The other day I found myself at the dinner table with friends searching for something on my phone. Sue pointed out in very clear terms how rude this was, and she was 100% right. I won’t be doing that again.


"Wait for Service Pack 1" – Valid?

[Photo: Grandad's had a long day]

The release of Exchange 2007 last week prompted me to re-evaluate, again, a long-held mantra in IT – "Wait for Service Pack 1".

The basic premise goes like this: new software, especially software from Microsoft, is normally so buggy on its release that it is far more sensible to wait for Service Pack 1 (SP1). This way, others will have gone through the pain that’s bound to be there.

But is that really still valid?

Does recent history from Microsoft support the premise? Is there any evidence?

The "Wait for SP1" situation can’t be one that Microsoft wants to persist because it delays that adoption of their newer software and potential stifles a revenue stream for them. But perhaps I’m wrong, perhaps they prefer the damper effect this has on demand. Let’s face it, if everyone upgraded fast they would have a problem.

There are a number of pieces of evidence that are available to us, but do they actually answer the question?

Service Pack History

Does Service Pack history help us out here? Is there evidence that the number of fixes in Service Pack 1 is substantially higher than the number of fixes in later Service Packs? Does the number of fixes in a Service Pack change depending on the maturity or generation of a product? Did Windows 2003 Service Pack 1 have fewer issues than Windows 2000 Service Pack 1, for instance?

Here are some numbers:

Number of Issues Resolved by Service Pack

                      Service Pack 1   Service Pack 2   Service Pack 3   Service Pack 4
Windows 2000                     287              470             1014              675
Windows XP                       321              826                –                –
Windows Server 2003             1012                –                –                –
Exchange 2000                    129               25               25                –
Exchange 2003                     40              131                –                –

About the only thing that you can say about those numbers is that there is no correlation between the age of a product or the product generation and the number of issues that need to be resolved.

On these numbers Windows 2000 Service Pack 1 looked like a safe product, and so did Windows XP Service Pack 1 – yet both products needed a huge number of fixes later in their lives.

I had wondered whether there might be some stronger correlation between the number of issues resolved and the time between Service Packs, but I don’t have that much time in my life.
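The simpler version of that check – service-pack number against fix count – really is only a few lines, though. Here’s a minimal sketch using nothing but the numbers in the table above (standard library only; statistics.correlation needs Python 3.10+):

```python
# Pearson correlation between service-pack ordinal and issues resolved,
# using only the figures from the table above.
from statistics import correlation  # available from Python 3.10

# (product, service pack number, issues resolved)
data = [
    ("Windows 2000", 1, 287), ("Windows 2000", 2, 470),
    ("Windows 2000", 3, 1014), ("Windows 2000", 4, 675),
    ("Windows XP", 1, 321), ("Windows XP", 2, 826),
    ("Windows Server 2003", 1, 1012),
    ("Exchange 2000", 1, 129), ("Exchange 2000", 2, 25),
    ("Exchange 2000", 3, 25),
    ("Exchange 2003", 1, 40), ("Exchange 2003", 2, 131),
]

sp_numbers = [sp for _, sp, _ in data]
issue_counts = [n for _, _, n in data]
print(f"Pearson r: {correlation(sp_numbers, issue_counts):.2f}")
```

Extending it to time-between-Service-Packs would just mean swapping the ordinals for release dates – which is exactly the data collection I don’t have time for.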

These numbers do highlight one significant issue, though: we are trying to make a judgement of quality based on quantity, and that’s normally not a good thing to do. The quantity of fixes probably doesn’t relate to the quality (or impact) of those fixes.

All it takes is one big issue and the quality is a problem. I suppose I could have gone through and tried to measure the number of "major" issues or something like that, but again, I don’t have the time.

There is some indication of quality in the numbers, but you need to understand the back story. The loads of fixes around the time of Exchange 2000 Service Pack 1 and Windows 2000 Service Pack 3 demonstrate the first awakening to security as an issue within Microsoft – the emergence of spam, Sasser and the like. The high numbers for Windows XP Service Pack 2 and Exchange 2003 Service Pack 2 demonstrate the second security awakening within Microsoft and the famous Bill Gates announcement. The high number of fixes in Windows Server 2003 Service Pack 1 demonstrates a shift towards more continuous update and less frequent Service Packs, which resulted in that particular Service Pack being released a long time after the release of Windows Server 2003.

Testing Process

We’ve already seen that we can’t make quality judgements on the basis of quantity. Perhaps, therefore, we need to look at the way that quality is built. In the case of software this is best demonstrated (in my view) by the level of good testing that occurs prior to release.

In the case of Exchange 2007 the number of live testers appears to be as follows:

We’ve bet the company on this product. Here at Microsoft, we have over 120,000 mailboxes running in production on Exchange 2007 – exceeding our SLA of 99.95% availability. Likewise, over 200 Technology Adoption Partners and Rapid Deployment Partners have over 55,000 mailboxes in production operating within their enterprise SLA’s.

You Had Me At EHLO… (the Microsoft Exchange team blog)

Yes, I know I’m mixing quality and quantity again, and that’s the problem: every time I try to assess quality I end up back at quantity. But taking these quantity numbers on their own, does 175,000 mailboxes amount to enough testing?

There are apparently somewhere around 130 million corporate Exchange users, so 175,000 represents roughly 0.13% of the user population (175,000 ÷ 130,000,000)!

Is that really representative, and if it isn’t, what would be? The 120,000 internal users are certainly significant if you are going to deliver Exchange service the way that Microsoft does; if you aren’t, I’m not sure what it tells you.

I’ve been involved in a few TAP and RDP programmes, and the testing hasn’t really been representative of the real requirement. By this I mean that the testing wasn’t really done in a "production" environment and wasn’t really subject to the corporate SLA.

Continuous Update blunting the Bleeding Edge

We have moved a long way towards continuous update these days, and this has a tendency to blunt the bleeding edge. Waiting for Service Pack 1 used to mean waiting for the first set of roll-up fixes; today we are used to an almost constant stream of updates. If a major problem is found, a fix can be obtained very quickly and applied quickly too.

In the continuous update world it seems anomalous to talk about waiting for Service Pack 1, because that may be some time away.

My Personal Conclusion

My personal point of view is that there is no point in waiting for Service Pack 1 of Exchange 2007 specifically, but there is value in waiting a few months before actual deployment, just in case. I would extend this view to other software too.

Likewise, there is no safety in staying with the current product. The current product may have more undiscovered issues than the new product.

The Effect of Measurement

[Photo: Jimmy plays hide-and-seek with baby]

One of the first rules of problem solving is to define a set of measures for the problem. The trouble is that taking those measures normally requires you to change something, which lands you in the classic problem of the measurement affecting the thing being measured.

I was reminded of this today.

My home router has been dropping connections and generally misbehaving, so I decided to enable logging and have the logs sent to my e-mail. This way I could get some idea of what the router was doing prior to having problems.
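For anyone whose router can forward logs via syslog rather than e-mail, the counting is easy to script. Here’s a minimal sketch of a listener that tallies drop messages – the port and the keywords are assumptions about the router’s log format, so adjust to taste:

```python
# Listen for router syslog messages over UDP and count the ones that
# look like a dropped connection. Assumes the router has been told to
# forward syslog to this machine; "disconnect"/"link down" are guesses
# at what the log lines contain.
import socket
from datetime import datetime

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))  # standard syslog port; needs root on Unix

drops = 0
while True:
    message = sock.recv(4096).decode("utf-8", errors="replace")
    if "disconnect" in message.lower() or "link down" in message.lower():
        drops += 1
        print(f"{datetime.now():%H:%M:%S}  drop #{drops}: {message.strip()}")
```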

  • Yesterday, without logging, the router dropped more than 10 times.
  • Today, with logging, the router hasn’t dropped once.

IT systems can be very infuriating.

It’s nearly as bad as trying to understand Schrödinger’s Cat and the Copenhagen Interpretation.

PS: While I was writing this post, the router decided that it would drop the connection.

ITIL Thoughts

[Photo: Formby Beach]

Last week I was on an ITIL Foundation course.

If you work in IT and you don’t know what ITIL is, you probably soon will. ITIL, which stands for “IT Infrastructure Library”, is an emerging standard for the operation of IT; it started in the UK but is rapidly being adopted as a global standard.

I’ve worked in and around IT operations for nearly 20 years now (a scary thought), and the framework that ITIL sets out makes a lot of sense. It’s not dissimilar to the one most organisations operate. What it does do is clarify a number of roles and processes in a way that will help a lot of organisations to assess the effectiveness of their operations. I suspect many organisations will look at the list of processes and know immediately which is the one they struggle with.

If ITIL manages to create a common understanding of roles and processes, or even just a common taxonomy for things like change, problem, incident and capacity, then it will have achieved a lot. Speaking as a Solution Architect, it will be great to be able to plug into a known set of operational processes during the implementation phase of projects.
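To illustrate what a shared taxonomy buys you, here’s a toy sketch – nothing from ITIL itself beyond the record-type names – of how tooling could exchange operational records once everyone agrees what the types mean:

```python
# Toy model of a shared operational taxonomy. Only the record-type
# names come from ITIL; the classes and the example are invented.
from dataclasses import dataclass
from enum import Enum

class RecordType(Enum):
    INCIDENT = "incident"  # an unplanned interruption to a service
    PROBLEM = "problem"    # the underlying cause of one or more incidents
    CHANGE = "change"      # a controlled alteration to the environment

@dataclass
class OpsRecord:
    record_type: RecordType
    service: str
    summary: str

ticket = OpsRecord(RecordType.INCIDENT, "Exchange", "Mail queue backing up")
print(f"[{ticket.record_type.value}] {ticket.service}: {ticket.summary}")
```

Once a project team and an operations team agree on definitions like these, handover becomes a data exchange rather than a negotiation.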

The problem with all frameworks is that the problems are in the detail – and I don’t know enough detail to comment on that.


Windows Live Search for mobile beta

[Photo: Our Beach (as it became known), La Palud]

Over the last few days Microsoft have made a beta of Windows Live Search for mobile available (download). There have been loads of comments on how good it is, so I thought I would give it a go – I’m almost amazed.

It’s one thing being able to see down to minute detail from a PC; being able to do the same thing from a mobile device opens up a whole new set of possibilities. As an example, I searched for my church and it returned a whole load of really useful details – great. Clicking on the “map” bar then showed a remarkable combined map and aerial view. I then asked for directions to my house, which were faultless (as you’d expect these days); what I found particularly nice, though, was the ability to click another button and get a map and aerial view of the next turn. It’s not GPS, but it’s still fabulous.

The category searches don’t seem to work for my area, but the beta is only advertised as a US-based beta so I’m not surprised.

I was going to try both Google and Live, but as the Gizmodo review said it was a clear win for Microsoft, I haven’t bothered with the Google offering.

All of my testing (playing) has been done over a WiFi network from my iMate SP5. I’ve not tried it on GPRS because the person paying the bill is a bit twitchy about data charges. I suspect it’s not quite as impressive over GPRS because of the bandwidth limitations.



ITIL Foundation Course

[Photo: Tramway]

I’ve been out for a few days training – ITIL Foundation.

I haven’t done any classroom training for years and I’d forgotten what an enjoyable and frustrating experience it is.

It’s very enjoyable to have the time to interact with others and to learn from each other. The people dynamics can be frustrating too.

I did the ITIL Foundation exam straight after the course and passed (92% – swot).

I can now confidently say that I understand the difference between an “incident” and a “problem”.


User Experience Nightmare – In Hospital

[Photo: Océanopolis, Brest, France]

Yesterday I went into hospital as a day case. When the nurses who were doing all of the pre-operation checks found out what my job was, they decided to show me the software that they were using.

It was very interesting.

The software was very complex, and the nurses had to know some amazing tricks to get it to do what it was supposed to do. At my initial check-in they asked me a number of basic questions; when it came to the pre-operation checks they asked me the same questions again. Both times they asked my height and weight, and both times the software was supposed to convert the numbers that I gave them. I deliberately gave the same answers both times just to see, and both times the conversion failed, even though the two nurses entered the details differently.
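The irritating thing is how trivial that conversion is. Here’s a sketch of the kind of thing the software was presumably failing at – imperial in, metric out – with entirely hypothetical signatures, since I never saw the code:

```python
# Imperial-to-metric conversions of the sort a pre-op check-in system
# should handle. Function names and inputs are hypothetical.
def height_to_cm(feet: int, inches: float) -> float:
    return round((feet * 12 + inches) * 2.54, 1)

def weight_to_kg(stones: int, pounds: float) -> float:
    return round((stones * 14 + pounds) * 0.45359237, 1)

# Same answer in, same answer out - however the nurse types it.
assert height_to_cm(5, 10) == height_to_cm(5, 10.0)
print(height_to_cm(5, 10), "cm,", weight_to_kg(12, 7), "kg")  # 177.8 cm, 79.4 kg
```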

The lack of intelligence in the questions being asked by the system was mostly masked by the nurses. Every now and then they would skip over a load of questions; when I asked what those questions were, they said they were ones only pertinent to a woman, or to a person under 16, and so on.

I was also struck by how impersonal it was that the nurses and I were both facing the computer to deal with personal questions. If it were me, I would introduce tablets for this reason alone. Using a tablet would let the nurse face the patient, in the same way that paper used to.

Clearly no-one had thought about the user experience here.