More on Questions

Jimmy and Grandad take a walk on the wild side

Questions – it’s becoming a bit of a theme. Perhaps I should actually write something that helps people rather than pointing them somewhere else? But then that would be encouraging answers rather than questions.

Adrian Savage in his Breakthrough Manifest (Part 2) highlights the importance of questions with these two encouragements:

  • Forget looking for answers. Questions are so much more useful. Questions lure you on, poke and prod you to discover more. Questions are like bits of grit in a bed: they stop you from resting comfortably with what you think you already know. Answers are a dead end. If you know the answer, there’s nowhere else to go.   
  • Become a specialist in asking stupid questions. They’re the very best ones. Worry about the answer, not the question. Lots of people never get beyond an initial state of confusion because they’re afraid to ask what seems to be a foolish question. Innocent people with a true desire to learn have the greatest chance of spectacular success. Who learns best and fastest? Little children. Your target must be to go through life learning at the same rate as an infant.  

There’s more great stuff over there so go and have a look.

Is the Shared File Server Dead – Steve Responds

Grandad tries the fireman's pole

Steve has written a few responses to my post on the Shared File Server being dead:

There are some good comments there.

In response – my article was written about Shared File Servers specifically and much of what Steve has written relates to the File System as a much broader concept.

The one element I would challenge, though, is that ‘everyone knows how to navigate a file system’. Recent experience has shown me how limited that statement is. People know how to post something to the place where their application has been configured to put it, but I’m becoming less convinced that they actually think about it in terms of a structure to be navigated. I have recently seen directories with hundreds (and thousands) of files in a flat structure which would have been far more productive if they had been split into directories. No-one was thinking about the file system as a structure; they were treating it more like a set of buckets to put things into, and the buckets were defined by the applications.
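To make the point concrete, here’s a minimal sketch of imposing some structure on one of those flat buckets. The extension-based scheme is just an invented example; grouping by date, project or customer would serve the same purpose:

```python
import tempfile
from pathlib import Path

def bucket_by_extension(flat_dir: Path) -> dict:
    """Move every file in a flat directory into a per-extension subdirectory."""
    moved = {}
    for entry in list(flat_dir.iterdir()):
        if not entry.is_file():
            continue
        ext = entry.suffix.lstrip(".") or "no_extension"
        target = flat_dir / ext
        target.mkdir(exist_ok=True)
        entry.rename(target / entry.name)
        moved[ext] = moved.get(ext, 0) + 1
    return moved

# Demo on a throwaway directory containing a few empty files.
demo = Path(tempfile.mkdtemp())
for name in ["report.doc", "minutes.doc", "budget.xls"]:
    (demo / name).touch()
counts = bucket_by_extension(demo)
```

Trivial stuff, but it’s exactly the kind of thinking-about-structure that the bucket mentality never gets to.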

Fundamentally, though, I agree with Steve in his conclusion that the alternatives have got a long, long way to go.

User Experience Thinking: Flickr Upgrade

Adventures in Teenbed-Ageroom: Jimmy scales the mighty obelisk called Guitar

Flickr has been upgraded.

Did they add in loads of new features to make me happy – not really.

Did they sit back and think about how people use the service and make me smile with the way they have thought about the user experience – oh yes.

FlickBlog has the details.

Loads of things which I used to have to do through two pages I can now do through a drop-down. It’s still two clicks of the mouse, but it’s only one page load. Much, much nicer.

“Your Photos” is now dramatically cleaner and shows more of what the service is really about – photos.

They have put the number of photos and the number of views near the top of the screen, which is just catering to our megalomaniac tendencies – but I’m sure I’m not the only one who spends a lot of time looking at these numbers.

Moving the product away from being a ‘beta’ product also makes me feel happy. It was only a title, but it made me feel uncomfortable especially when I’m paying for it. Who buys a beta product?

The 10% Myth

Time to cut the grass Grandad

There is a myth that surrounds the technology arena. The latest place I read it was in a Boston Globe article on Notes upgrades.

According to The Boston Globe:

Bisconti admitted that the Lotus office software won’t have all the advanced features of Microsoft Office, but most people rarely use these tools, he added. “Most customers tell us that 90 percent of my users use 10 percent of the functions,” Bisconti said.

I’d love to be able to say that I have managed to do the research and find out where this myth came from, but I can’t. I used to know, but it’s one of those examples where search has a long way to go. If my memory serves me correctly, it was some research done by the Microsoft user interface team, and it started them down the road of hiding functions that people weren’t using so that they could get to the ones they were using more quickly.

My experience of the functions that people use is this: users use a variable amount of the capabilities of large applications like Microsoft Word, and most of them only use a small proportion of the capabilities available to them. But the capabilities they use are different to the capabilities used by the person in the cubicle next to them. The way that they do something is different to the way I do it. Add all of those capabilities together and you get a set in which every capability is used by someone.
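A toy illustration of that point, with invented numbers: each user touches only a small, different slice of a large feature set, yet the union of those slices covers far more than any single user’s slice ever would.

```python
# Each set is one (hypothetical) user's slice of a 100-feature application.
user_slices = [
    set(range(0, 12)),     # user A uses features 0-11
    set(range(8, 20)),     # user B overlaps A a little
    set(range(40, 55)),    # user C works in a different area entirely
    set(range(90, 100)),   # a power user exploring a remote corner
]

per_user = [len(s) for s in user_slices]   # each uses only 10-15 features
combined = len(set().union(*user_slices))  # 45 distinct features in use
```

No individual gets past 15 features, but cut any of those 45 and somebody loses the capability they depend on.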

My other experience is that the 10% of users – the power users not in the 90% – use significantly more of the capabilities. It is these individuals who make the other 90% productive and keep encouraging them to increase their productivity.

The Microsoft Office 12/2007 team chose to change the user interface for all of the Office applications because a huge majority of the capabilities they were asked for in Office 12 already existed in Office 2003. It was just that people didn’t know where to find them.

User Experience Thinking: Office Capabilities in Notes

Think the slide might be a bit big

Ed Brill points to an article in The Boston Globe reporting on the inclusion of Office capabilities and ODF in Notes.

How is this improving the experience of the user of the system?

Well I’m not sure exactly, and that’s my problem with the premise that it’s a good idea. If this is going to be a good idea it has to make the experience of the end-user better.

I don’t see anyone ditching Office altogether in favour of an ODF alternative at this point. The problem is the inter-connects between individuals and organisations. Microsoft Office is the standard, because Microsoft Office is the standard.

If anyone creates a Word document they can be confident that whoever they send it to will be able to read it; very few people only communicate within an organisation (where a change of standard is relatively simple). As soon as the communication leaves an organisation you need to go for the highest level of confidence, which is Word, Excel and PowerPoint. The next level of confidence is achieved by using Acrobat, but that has certain restrictions that are sometimes a benefit and sometimes not (the ability to edit).

The highest level of confidence equates to the best user experience. Using ODF may be free, but it probably gives the person receiving the communication a problem, giving them a poor user experience.

Organisations could choose to dual-skill their staff in using two different editors but that’s not a great user experience either.

The World of Me

Watch out for those Nettles Jimmy

Matt Deacon (Microsoft Architect) has an interesting diagram that has come out of the meetings at WAX. It represents the user’s significant drivers as ‘The World of Me’. Unfortunately the web version of the post is a bit of a mess, so you have to scroll down to find it.

Make sure you also read this article so that it all makes sense.

Monitoring and Troubleshooting Microsoft.com

Jimmy gets stranded

I have been catching up on some reading. Today’s reading was Monitoring and Troubleshooting Microsoft.com, a really interesting article on how Microsoft have constructed their organisation and technology to tackle the operation of one of the world’s busiest Internet sites.

A few things struck me.

They obviously have the same monitoring problems as the rest of us:

“Left to their default configurations, most monitoring systems generate an excessive number of alerts that become like spam to administrators. Especially with large systems, it is important for organizations to carefully define what should be monitored and what events or combination of events should be raised to the attention of operations personnel. An organization must also plan to learn from the data collected. As with alert planning, this aspect of the solution is a significant undertaking. It requires creating data retention and aggregation policies, and combining and correlating all of the data into a data warehouse from which administrators can generate both predefined and impromptu reports.”

But they have got to a point where:

“The overall system processes over 60,000 alerts a day, conducts approximately 11.5 million availability tests a day, parses 1.7 terabytes of IIS log data a day, and collects 185 million performance counters a day at a sampling rate of 45 seconds. However, to reach this degree of monitoring sophistication was a long process and required significant effort and cross-organizational coordination.”

I’m not sure whether those numbers indicate ‘monitoring sophistication’ or not.
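A quick back-of-envelope check on those figures (the inputs come from the quote above; the arithmetic is mine) suggests the counter number is less outlandish than it first looks:

```python
# How many distinct counters does "185 million a day at 45-second
# sampling" actually imply?
SECONDS_PER_DAY = 24 * 60 * 60                             # 86,400
SAMPLE_INTERVAL = 45                                       # seconds
samples_per_counter = SECONDS_PER_DAY // SAMPLE_INTERVAL   # 1,920 per day

counters_collected = 185_000_000                           # samples per day
distinct_counters = counters_collected // samples_per_counter  # ~96,000

# And the alert volume, spread across the day:
alerts_per_day = 60_000
alerts_per_minute = alerts_per_day / (24 * 60)             # ~42 a minute
```

Roughly 96,000 distinct counters, and an alert landing somewhere every second and a half. Whether a human ever looks at most of them is another question entirely.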

The other thing was the ability of Microsoft to leverage internal resources and to operate a continuous improvement methodology that genuinely improved things. These things are incredibly difficult in large organisations.

“After implementing and stabilizing the asset management and reactive monitoring systems, the focus of the operations team shifted to proactive testing of applications and defining proactive monitoring events.”

and

“The testing process also helps to determine what events are meaningful, and what corrective actions are appropriate in the case of those events. All of the information learned from transactional and stress testing is thoroughly documented as part of the release management process of the Microsoft Solutions Framework (MSF) that many of the development teams use.”

and

“The operations team wants to create a common eventing and logging class, based on recommendations from the Microsoft Patterns and Practices group, with deep application tracing.”

It’s very easy to implement something and then leave it alone because it’s working – that is, until it stops working. When it stops working, that’s when the problems start, because people expect things to be as they left them when they implemented them, and they never are. Changes occur; the best thing you can do is make sure the changes contribute to improvement rather than to service entropy.

Microsoft 'Motion'

Jimmy and Grandad struggle to get back into the house

Channel 9 today has a video on Motion; it was one of the topics at the Architecture Insight Conference.

There are also a couple of ARCasts too.

If you are an IT Architect then Motion will be of interest to you. If you’re a technical person it won’t.

Motion is about building a bridge between business architecture and IT architecture; it does this by building a bridge between business services and IT services. That’s right: Microsoft doing business architecture.

It looks really interesting as an approach but there isn’t that much collateral available online today because it’s still in incubation and ‘motion’ is still a code name.


Architecture Insight Conference Presentation

The presentations from the Architecture Insight Conference are available here.

What is Architecture? The return

A family outing - with Grandad driving. Oh dear!!!

I commented on a piece by Michael Platt the other day looking at the definition of architect.

Since then Steve has commented.

Craig Andera has commented on Michael’s initial post and Michael has responded.

Michael has also added another document.

Chasing definitions can be a wonderful tool for procrastination, and I’m in danger of doing just that, so I’m not going to comment any more. If I ever produce something as wonderful as the work of Sir Christopher Wren (Architect), or as useful as that of Sir Joseph Bazalgette (Engineer), I’ll be more than happy.

“My definition of an expert in any field is a person who knows enough about what’s really going on to be scared.” P. J. Plauger, Computer Language, March 1983

Getting Control of the Infrastructure: Autonomic, WSDM, DSI, SDM, etc.

Grandad finds a snow drift

In my previous post I talked about the problem of the complex infrastructure.

Is there anything going on in the industry to try and resolve these issues?

One of the first things that should be clear is that this isn’t an issue for a single company, and thankfully a number of companies are working together to resolve it (Microsoft, IBM, HP).

As with most technologies that are early in their development cycle, many names are being used and there is no clear taxonomy yet. Most people seem to recognise the title ‘Autonomic’, which was originally conceived by IBM (I think), but each vendor has its own initiative; they are also coming together under the ‘WS-DM’ banner. The problem with the ‘Autonomic’ word is that it has another perfectly good use in biology. I’m not sure that WS-DM helps either, as it links the issue to Web Services, which is a bit limiting when the major elements are infrastructure, and infrastructure does lots of things which aren’t really Web Services.

The basic concept is that a service and all of its elements can be described, starting with the business requirements and working down into technical requirements; Microsoft call this the Service Definition Model. Each of the service elements is then told to follow the document: if the document updates, they update. Likewise, changes made to the elements are assessed against the document and can only be applied if they don’t have an impact; they then update the document.
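A heavily simplified sketch of that desired-state idea, with every name invented for illustration: the service definition is just a document, and a change to an element is checked against the document before it is allowed through.

```python
# The "service definition document": desired state for each element.
desired_state = {
    "web_tier": {"instances": 4, "patch_level": "SP2"},
    "db_tier":  {"instances": 2, "patch_level": "SP2"},
}

actual_state = {"web_tier": {}, "db_tier": {}}

def apply_change(element, setting, value):
    """Apply a change only if it agrees with the service definition document."""
    wanted = desired_state.get(element, {}).get(setting)
    if value != wanted:
        return False                    # conflicts with the document: reject
    actual_state[element][setting] = value
    return True                         # agrees with the document: apply

ok = apply_change("web_tier", "patch_level", "SP2")       # matches the document
rejected = apply_change("db_tier", "patch_level", "SP1")  # does not
```

The real models are far richer than an accept/reject gate (impact assessment, document updates flowing both ways), but the core loop of reconciling elements against a single description is the bit that matters.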

The technology has a long way to go, but the concept seems to work.

————

One of the questions that the CFO did ask, one of the few repeatable ones, was this:

“What changed to cause the problem? It was working fine, so it must have been caused by a change.”

It’s one of those questions that cuts through to the issue; “who knows” is the real response. There are lots of changes going on all the time: patches, fixes, configuration. If someone did change something, how were they supposed to know it was impacting a service that was using the element of the infrastructure they were changing?

Until we can answer this question categorically and precisely, and preferably with the answer “nothing that’s had an impact”, we haven’t finished.
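Answering it takes two things most organisations don’t have: every change recorded against the element it touched, and a dependency map tying elements back to the services that rely on them. A sketch, with all names invented:

```python
# Which services does each infrastructure element support?
dependency_map = {
    "order_service": {"web01", "sqldb03", "firewall02"},
}

# A record of recent changes, each tied to the element it touched.
change_log = [
    {"element": "sqldb03",    "change": "index rebuild window moved"},
    {"element": "proxy09",    "change": "security patch applied"},
    {"element": "firewall02", "change": "rule added for new partner"},
]

def changes_affecting(service):
    """List only the changes that touched an element this service uses."""
    elements = dependency_map.get(service, set())
    return [c for c in change_log if c["element"] in elements]

suspects = changes_affecting("order_service")
```

Two of the three changes touched the service’s elements; the proxy patch can be ruled out immediately. Keeping that map accurate is, of course, exactly the hard part.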

Tags: WSDM

Getting control of the Infrastructure: The Problem

Grandad decides to sweep up

Once upon a time a developer sat down and started work on an application he had been commissioned to write. He looked at the specification: it required screens, which required inputs. So he started coding. He decided what the data file structure should look like and created it on the local computer; he decided what the screens should look like and created them; and he decided the logic and coded it. There was no network involved, there was no database involved, and there was only going to be one client on one computer. The whole project required a team of one.

But that was then and this is now:

An organisation has a process they want to automate; it’s a completely new process because they have already automated all of the others. A group of architects get together to try and understand the data inputs, the work-flow, the security requirements, the resilience requirements, the flexibility requirements, the extensibility requirements, the interfaces to other applications and the diverse client base. They decide that this process can be automated by linking together a number of existing systems and by using a browser-based interface, which will be developed using an off-the-shelf application. The people who require access to the interface include people from a variety of different partners and suppliers, each of them with a variety of browsers. In order for the process to complete, a plethora of applications, databases, networks and servers are involved. This service makes this customer the bulk of their money; it is critical to them, and the faster it runs the faster they get paid.

About nine months after the service has been created it develops a problem: it’s running slowly and people are starting to notice. The CFO makes an angry call to the person responsible for the service.

“Where is the problem and why haven’t you fixed it?” is the question the CFO starts with.

“Well” says the service provider “this is the first I have heard of a problem, who contacted you?”.

“John from our delivery partner phoned me to say that they were only getting requests for deliveries after the due date and that it wasn’t their fault that parcels were being delivered late” the CFO retorts.

“I’ll get the team together to assess the likely cause” says the Service Provider a little sheepishly, though trying to sound bold and assertive.

“You have until 14:00” says the CFO.

Following the call, the Service Provider gets together the team of people he regards as responsible for the technical elements of the service. They each involve others they think should also be involved, and they each contact the vendors who deliver the software or hardware they are responsible for. They each take a look at their elements of the service.

At 14:00 the Service Provider has his follow-up call with the CFO and delivers this report:

“I have formed an investigative team to try and get to the root cause of this problem, I am still waiting for a representative from our local networks team, but I already have representatives from the internal SAP team, the Windows team, the SQL Server team, the firewall team, the wide area network team, the storage team, the backup team, the batch processing team, the CRM team, the Oracle team, the work-flow team, the identity management team, the desktop team, the AS400 team, the directory team, the email team and the UNIX team. I have also managed to free up one of the original architects to try and get his overall view of the issue. Furthermore a number of the vendors have offered their help.

They have each assessed their element of the service and can find no issues. Everything is working as they would expect it to work.”

I think you can imagine the CFO’s response.

This story is based on a caricature of a real situation I have personally been involved in, and I don’t believe it is at all overstated. The modern infrastructure and application mix is very complicated.

Current monitoring and management techniques don’t recognise the service in the same way as the person using it does; they recognise all of the elements, but don’t put them together as a whole.
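The story above reduces to something this simple (all figures invented): every element reports green, yet the service, measured end-to-end the way the user experiences it, is red.

```python
# Element-by-element checks: what today's monitoring tools see.
element_health = {"web01": True, "sqldb03": True, "firewall02": True}

# The service as the user sees it: one synthetic end-to-end transaction.
end_to_end_seconds = 9.5     # measured response time
target_seconds = 5.0         # what the business considers acceptable

elements_ok = all(element_health.values())
service_ok = elements_ok and end_to_end_seconds <= target_seconds
```

`elements_ok` comes out true and `service_ok` false, which is exactly the report the CFO was given: “everything is working as we would expect it to”, and the service is still broken.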

Is anyone working on an answer?