User Experience Thinking: Office Capabilities in Notes

Think the slide might be a bit big

Ed Brill points to a document in The Boston Globe which is reporting on the inclusion of Office capabilities and ODF into Notes.

How is this improving the experience of the user of the system?

Well, I’m not sure exactly, and that’s my problem with the premise that it’s a good idea. To be a good idea it has to make the experience of the end-user better.

I don’t see anyone ditching Office altogether in favour of an ODF alternative at this point. The problem is the interconnections between individuals and organisations. Microsoft Office is the standard, because Microsoft Office is the standard.

If anyone creates a Word document they can be confident that whoever they send it to will be able to read it. Very few people communicate only within an organisation (where a change of standard is relatively simple). As soon as the communication leaves an organisation you need to go for the highest level of confidence, which is Word, Excel and PowerPoint. The next level of confidence is achieved by using Acrobat, but that has certain restrictions that are sometimes a benefit and sometimes not (the ability to edit).

The highest level of confidence equates to the best user experience. Using ODF may be free, but it probably gives the person receiving the communication a problem, and therefore a poor user experience.

Organisations could choose to dual-skill their staff in using two different editors but that’s not a great user experience either.

The World of Me

Watch out for those Nettles Jimmy

Matt Deacon (Microsoft Architect) has an interesting diagram that has come out of the meetings at WAX. It represents the significant drivers for the user and presents them as ‘The World of Me’. Unfortunately the web version of the post is a bit of a mess, so you have to scroll down to find it.

Make sure you also read this article so that it all makes sense.

Monitoring and Troubleshooting

Jimmy gets stranded

I have been catching up on some reading; today’s reading was Monitoring and Troubleshooting, a really interesting article on how Microsoft has constructed its organisation and technology to tackle the operation of one of the world’s busiest Internet sites.

A few things struck me.

They obviously have the same monitoring problems as the rest of us:

“Left to their default configurations, most monitoring systems generate an excessive number of alerts that become like spam to administrators. Especially with large systems, it is important for organizations to carefully define what should be monitored and what events or combination of events should be raised to the attention of operations personnel. An organization must also plan to learn from the data collected. As with alert planning, this aspect of the solution is a significant undertaking. It requires creating data retention and aggregation policies, and combining and correlating all of the data into a data warehouse from which administrators can generate both predefined and impromptu reports.”
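The quote’s point about defining what gets escalated, and aggregating the rest for later reporting, can be sketched in a few lines. This is a hypothetical illustration, not Microsoft’s actual tooling; the alert names, sources and the `ESCALATE` set are all invented.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    source: str
    event: str

# Only explicitly defined (source, event) pairs are worth paging someone for;
# everything else is counted and retained for reporting, not escalated.
ESCALATE = {("web01", "disk_full"), ("db01", "replication_lag")}

def triage(alerts):
    """Split raw alerts into those needing attention and a summary of the rest."""
    escalated, background = [], Counter()
    for a in alerts:
        if (a.source, a.event) in ESCALATE:
            escalated.append(a)
        else:
            background[a.event] += 1  # aggregated for the data warehouse
    return escalated, background

raw = [Alert("web01", "cpu_high")] * 500 + [Alert("web01", "disk_full")]
paged, summary = triage(raw)
print(len(paged), summary["cpu_high"])  # 1 500
```

The interesting work, as the article says, is deciding what belongs in that escalation set and what the retention policy for the background counts should be.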

But have got to a point where:

“The overall system processes over 60,000 alerts a day, conducts approximately 11.5 million availability tests a day, parses 1.7 terabytes of IIS log data a day, and collects 185 million performance counters a day at a sampling rate of 45 seconds. However, to reach this degree of monitoring sophistication was a long process and required significant effort and cross-organizational coordination.”

I’m not sure whether those numbers indicate ‘monitoring sophistication’ or not.

The other thing was the ability of Microsoft to leverage internal resources and to operate a continuous improvement methodology that genuinely improved things. These things are incredibly difficult in large organisations.

“After implementing and stabilizing the asset management and reactive monitoring systems, the focus of the operations team shifted to proactive testing of applications and defining proactive monitoring events.”


“The testing process also helps to determine what events are meaningful, and what corrective actions are appropriate in the case of those events. All of the information learned from transactional and stress testing is thoroughly documented as part of the release management process of the Microsoft Solutions Framework (MSF) that many of the development teams use.”


“The operations team wants to create a common eventing and logging class, based on recommendations from the Microsoft Patterns and Practices group, with deep application tracing.”

It’s very easy to implement something and then leave it alone because it’s working, that is, until it stops working. When it stops working, that’s when the problems start, because people expect things to be as they left them when they implemented them, and they never are. Changes occur; the best thing you can do is make sure the changes contribute to improvement rather than to service entropy.

Microsoft 'Motion'

Jimmy and Grandad struggle to get back into the house

Channel 9 today has a video on Motion, it was one of the topics at the Architecture Insight Conference.

There are also a couple of ARCasts too.

If you are an IT Architect then Motion will be of interest to you. If you’re a purely technical person it won’t be.

Motion is about building a bridge between business architecture and IT architecture; it does this by building a bridge between business services and IT services. That’s right: Microsoft doing business architecture.

It looks really interesting as an approach but there isn’t that much collateral available online today because it’s still in incubation and ‘motion’ is still a code name.


What is Architecture? The return

A family outing - with Grandad driving. Oh dear!!!

I commented on a piece by Michael Platt the other day looking at the definition of architect.

Since then Steve has commented.

Craig Andera has commented on Michael’s initial post and Michael has responded.

Michael has also added another document.

The chasing of definitions can be a wonderful tool for procrastination and I’m in danger of doing just that so I’m not going to comment anymore. If I ever produce something as wonderful as Sir Christopher Wren (Architect) or as useful as Sir Joseph Bazalgette (Engineer) I’ll be more than happy.

“My definition of an expert in any field is a person who knows enough about what’s really going on to be scared.” P. J. Plauger, Computer Language, March 1983

Getting Control of the Infrastructure: Autonomic, WSDM, DSI, SDM, etc.

Grandad finds a snow drift

In my previous post I talked about the problem of the complex infrastructure.

Is there anything going on in the industry to try and resolve these issues?

One of the first things that should be clear is that this isn’t an issue for a single company, and thankfully a number of companies are working together to resolve it (Microsoft, IBM, HP).

As with most technologies that are early in their development cycle many names are being used and there is no clear taxonomy yet. Most people seem to recognise the term ‘Autonomic’, which was originally conceived by IBM (I think); each vendor has its own initiative, but they are also coming together under the ‘WS-DM’ banner. The problem with the word ‘Autonomic’ is that it has another perfectly good use in biology. I’m not sure that WS-DM helps either, as it links the issue to Web Services, which is a bit limiting when the major elements are infrastructure, and infrastructure does lots of things which aren’t really Web Services.

The basic concept is that a service and all of its elements can be described, starting with the business requirements and working down into technical requirements; Microsoft calls this the Service Definition Model. Each of the service elements is then told to follow the document: if the document updates, they update. Likewise, changes made to the elements are assessed against the document and can only be applied if they don’t have an impact; they then update the document.
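The document-driven idea can be sketched very roughly: the definition is the single source of truth, elements are reconciled against it, and an ad-hoc change is only accepted if it is assessed as impact-free, after which the document is updated too. This is an invented, minimal illustration of the concept, not the actual Service Definition Model format.

```python
# A hypothetical service definition: the document describes the desired
# state of each element, from business requirement down to technical detail.
definition = {"web_tier": {"instances": 4, "port": 443}}

def reconcile(defn, element, actual):
    """Return the settings an element must adopt to match the document."""
    return {k: v for k, v in defn[element].items() if actual.get(k) != v}

def propose_change(defn, element, key, value, impact_free):
    """A change is applied only if assessed as having no impact, and is
    then written back into the document so both stay in step."""
    if not impact_free:
        raise ValueError("change rejected: would impact the defined service")
    defn[element][key] = value

# An element has drifted from the document: it must scale back up to 4.
drift = reconcile(definition, "web_tier", {"instances": 2, "port": 443})
print(drift)  # {'instances': 4}

# An impact-free change is accepted and recorded in the definition.
propose_change(definition, "web_tier", "instances", 6, impact_free=True)
```

The hard part, which the technology still has to solve, is the impact assessment itself; here it is just a boolean handed in by the caller.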

The technology has a long way to go, but the concept seems to work.


One of the questions that the CFO did ask, one of the few repeatable ones, was this:

“What changed to cause the problem? It was working fine, so it must have been caused by a change.”

It’s one of those questions that cuts through to the issue; “who knows” is the real response. There are lots of changes going on all the time: patches, fixes, configuration. If someone did change something, how were they supposed to know it was impacting a service that was using the element of the infrastructure they were changing?
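If every change were logged against the infrastructure element it touched, and every service declared which elements it depends on, “what changed?” would become a query rather than guesswork. A hypothetical sketch, with entirely invented data and names:

```python
import datetime as dt

# A change log: every patch, fix or configuration change, recorded against
# the element it touched and when it happened (all data invented).
changes = [
    {"element": "switch-07", "what": "firmware patch",
     "when": dt.datetime(2006, 3, 1, 2, 0)},
    {"element": "web01", "what": "config tweak",
     "when": dt.datetime(2006, 3, 1, 9, 30)},
]

# Each service declares the infrastructure elements it depends on.
service_dependencies = {"order-entry": {"web01", "db01"}}

def suspects(service, since):
    """Changes to any element the service depends on, since a given time."""
    deps = service_dependencies[service]
    return [c for c in changes if c["element"] in deps and c["when"] >= since]

found = suspects("order-entry", dt.datetime(2006, 3, 1))
print([c["what"] for c in found])  # ['config tweak']
```

The dependency map is the part nobody has: without it, the person changing switch-07 has no way of knowing which services sit on top of it.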

Until we can answer this question categorically and precisely, and preferably with the answer “nothing that’s had an impact”, we haven’t finished.

Tags: WSDM