My changing workplace – part 5: Client-server 90's

As computers became steadily more connected across the 90’s, new forms of applications started to emerge. In the 80’s, applications ran on one computer with people accessing them from a terminal. This was fine for the simple text applications that we were used to running, but a new way of doing things was already becoming the norm – the graphical user interface. It became known as the GUI, which I personally pronounce something similar to ‘gooey’.

The emergence of the GUI and increasing compute power in the clients led to a realisation that it was perhaps better to do some work on the client device and other work on the server supporting it. This became known as client-server with each application having its own client and its own server.

My first experience of a graphical client-server application was, I think, Lotus Notes. We’d done some work with cc:Mail before it was purchased by Lotus, but it wasn’t really client-server because there wasn’t any cc:Mail running on the server; all of the work was done by the client. All that you needed for cc:Mail to work was a shared file area that each of the clients wrote to and read from. Lotus Notes was different because it had a server that the client talked to. I don’t think we did too much with it prior to version 3, and even then, it wasn’t the dominant email client in the organisation. We also did some playing around with Microsoft Mail and eventually early versions of Microsoft Exchange. The organisation I worked for, and later supported (after being outsourced), had a number of divisions, and each division had its own opinion about email so ran its own systems. These systems talked to each other via the X.400 protocol, and we exchanged directory information using X.500. At some point the division I was supporting chose to consolidate its systems into a divisional Lotus Notes system, as did some other divisions; the rest chose Microsoft Exchange, then at version 5.5 and still quite limited (you could only have one mailbox database per server, and it was practically limited to around 100GB).

At one point I did some pretty basic Lotus Notes database development. We had a paper-based ordering system for IT equipment and wanted to get away from all of the writing, so I was tasked with creating a database with the forms in it. We still printed the forms out and sent them off for approval and processing, but at least the creation of the order was done electronically. It was a great idea, and the database lived on long after I’d stopped supporting it (it was given to a more professional development team to look after). Like many organisations, we found that the deployment of Lotus Notes led to an explosion of databases for different tasks – everything from a lending database to numerous discussion databases, from document library stores to customer relationship lists.

Lotus Notes was only one of many applications built in this client-server architecture. We thought that this was the way applications would be written for the foreseeable future, but a new way of displaying information was already being used and a new way of finding information was about to be launched. I can’t be confident that I used Google in the 90’s, but certainly by the early 00’s ‘to Google’ had become second nature. Before Google, though, there had already been Yahoo and a number of other ways of searching the growing catalogue of information on the Internet. The Internet was now the place to find information (text and pictures), but little did we perceive that it would become the default way of providing any and every capability. Although you could argue that the browser is itself client-server in architecture, the difference comes from the browser being the universal client for all sorts of applications. It took until the late 90’s for Microsoft to realise this, and it wasn’t until the successive releases of Internet Explorer 5.0 (1999) and 6.0 (2001) that they built a (not too healthy, as it turned out) dominant position.

In the early 1990’s we were still in a world where every device was built to support the person using it, and that person could do whatever they liked on it. If we wanted to upgrade a client-server application we had to plan an upgrade to the server and all of the clients. There was very little commonality between them, and every device required local support. We knew how much it cost us to buy IT equipment, and it wasn’t cheap, but a new term was starting to gain momentum – total cost of ownership (TCO). TCO was popularised by Gartner, and it meant that it was no longer sufficient to think only about the procurement costs of IT; we needed to start talking about the other costs – operating costs, training costs, support costs, licensing costs, management costs, consumable costs. This change in thinking put personal computing under a bright spotlight as a place where organisations were spending huge amounts of money compared to the procurement costs of the equipment. PC support organisations were shown to be significantly larger per user than the equivalent mainframe support organisation (or so it seemed).
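The TCO argument is, at heart, simple arithmetic: procurement is a one-off, while the other cost categories recur for every year a device is in service. A minimal sketch (all figures and category names here are hypothetical, purely to illustrate how procurement can end up a minority of the total – these are not real Gartner numbers):

```python
# A hedged sketch of the TCO idea: one-off procurement cost plus
# recurring annual cost categories multiplied by years of service.

def total_cost_of_ownership(procurement, annual_costs, years):
    """Procurement is paid once; every other category recurs each year."""
    return procurement + sum(annual_costs.values()) * years

# Hypothetical per-PC annual figures, for illustration only
annual = {
    "operating": 300,
    "training": 150,
    "support": 500,
    "licensing": 200,
    "management": 250,
    "consumables": 100,
}

tco = total_cost_of_ownership(procurement=1500, annual_costs=annual, years=3)
print(tco)  # 1500 + (1500 * 3) = 6000
```

Even with these modest made-up numbers, the one-off procurement cost is only a quarter of the three-year total – which is exactly the spotlight the TCO conversation put on personal computing.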

In the mid-90’s we started looking at ways of driving down the costs of personal computing; techniques that made use of network technologies to make things easier and to reduce the amount of travel needed for support purposes. We figured that if things could be centrally managed then they ought to be cheaper to operate, and perhaps there were some benefits for the end user in doing this too. The organisation I now worked for had a Novell NetWare infrastructure (the dominant way of doing it at the time), but our client preferred us to look at Windows NT as the way of achieving this. They’d already done some work with the technology, some of it out of curiosity about its key developer, Dave Cutler, who had also had a lot of involvement in the development of Digital’s OpenVMS. People who already supported OpenVMS, like myself, saw a lot of commonality between the two.

The free-spirited pioneering days were coming under significant pressure to demonstrate their value, but that’s another journey for another day.

My changing workplace:

My changing workplace – part 3: The mid-to-late 90's

One thing I should say about this little series is that I’m writing the commentary as I remember it. In the previous posts (part 1 and part 2) my memory of the sequence of events was clearer, for some reason. I started this post thinking that I could cover everything in one go, but it was getting way too long, so there’s more to follow.

I’d moved to yet another office; this one was smaller and only had four of us in it. Each one of us had an L-shaped desk facing into the corner.

A number of tectonic technology shifts had occurred since the early 90’s. The first and most immediately visible shift was the emergence of operating systems and personal computers. I’m not just talking about PCs; there were all sorts of personal computing devices being used around the place.

There were UNIX workstations from Sun, Digital, Silicon Graphics and IBM. I remember being quite impressed by the Sun SPARCstations, particularly the ‘lunch-box’ sized ones that stacked together with other SCSI-connected peripherals. If I remember correctly there was one particular character in the office whose stacks of papers on his desk were always at least as high as his stacks of SPARCstation equipment. The IBM AIX workstations were used extensively in the design department for 3D CAD. There were a few Silicon Graphics devices, limited to some high-end graphics requirements that we had. There were also VAXstations and X terminals. These systems were all used for engineering purposes. Calculation has always been a huge part of engineering; those calculations were becoming computations, and the computations were being integrated into applications. The human barrier to calculation was being removed.

Over on the business systems side another set of personal computers and operating systems was being used. DOS was still in use, as were Windows 95 and Windows 98; we also used IBM OS/2 1.3, 2.0 and Warp 3. As business systems these devices were used for word processing, spreadsheets and the newly emerging activity of presentations. Because we had a strong IBM heritage, and also a history with Lotus software, we preferred the IBM OS/2 and Lotus SmartSuite approach. People would ask us all the time why we didn’t use WordPerfect; we thought they were wrong. When Lotus was purchased by IBM in 1995 we thought that there was a winning formula there; we were very wrong too. Neither WordPerfect nor Lotus SmartSuite would be the ultimate winner in the battle for dominance of office automation software.

On the hardware side of things, being an IBM shop, we preferred the PS/2. We thought that the MCA architecture was superior to the ISA architecture that all the clone manufacturers were pursuing. We stuck with IBM even after the PS/2 had been superseded by the IBM PC Series 300 and 700 (named in BMW model style). PCI was replacing both MCA and ISA. There were a number of clone devices around, primarily those from DEC and latterly Compaq, but there were also a number of Toshiba laptops about. The DEC devices were introduced by the teams supporting the engineering computing environment. IBM’s grip on the PC hardware and software market was well and truly slipping. At some point, I don’t quite remember when, we left IBM behind for desktop devices and moved over to HP Pavilions; we never went back.

The laptop was a luxury and only provided to those who were important enough to justify it. I remember the embarrassment of one particular manager who had left his laptop on the roof of his car, forgotten about it, and then reversed down a hill, eventually driving over the top of his much-loved Toshiba. It survived quite well, remaining in working order apart from a big crack down the middle of the screen. Most laptops were, however, IBM ThinkPads, even after we had switched over to HP for the desktops. There were a number of people who got massively excited about the Toshiba Libretto. The problem with laptops of the day was weight, and the diminutive Libretto promised a lot more mobility. They never really took off. The same was also true for the HP OmniBook 300 with its odd, inbuilt mouse contraption. I’ve had a number of ThinkPads down the years and they’ve always been reliable work-horses.

There were also a number of Apple Mac devices around, but they were seen as special and only used by the people in some of the graphics departments. In our little office of four, one of the team spent much of his time fulfilling the needs of this community, but Macs were never regarded as mainstream devices.

Most of these personal systems were built as stand-alone devices. Each one was built in its own unique way from floppy disk and CD; a few applications were becoming available on DVD, but only the newest devices could read them anyway. We knew some of them intimately because all of the support was done in person. Most of my time was spent tripping from one device to another, changing a configuration here, adding some software there. We only patched things when it was absolutely necessary.

Another class of devices was also starting to be used: the PDA. Some people had tried to use the original Psion organisers, but it was the release of the Psion Series 3 that moved these devices into the mainstream. When, in 1998, Psion got together with Nokia and Ericsson to form Symbian, everyone thought that this plucky British company was onto a winner; we were, again, wrong. Another set of devices was already starting to become popular: the PalmPilot. The Apple man in the office played around with the Newton for a period of time too. HP introduced the Jornada PDA running Windows CE in 1998; that never really took off either.

The mobile phone was starting to have an impact too. My first mobile was a Nokia 3100, and I later moved on to a Nokia 6130 which I still have today (it still works and, occasionally, when the kids have damaged their more modern mobiles, I’ve made them use it as a lesson; they affectionately know it as ‘bricky’). The cost of calls was high and we still did a lot of communication via pagers. The mobile phone was, after all, just a phone, although people were already starting to think of it as a more general communication device. At some point we started to use SMS for text messages, but that was, again, limited by the cost.

In a few short years computing had moved from 8-bit to 16-bit and on to 32-bit and 64-bit systems; it had also moved from the computer room onto our desks and into our pockets. Our expectations of what we could do had massively shifted too. Most documents were produced by their author, and the typing pool was becoming a thing of the past. The personal printer had also arrived, and we no longer took the long walk to the print room. We didn’t always print everything either; email was becoming the normal way of communicating.

There were other tectonic technology shifts changing my workplace. The network was starting to change the way that we thought about the whole computing landscape; things were becoming connected. Microsoft was building a position of dominance with Windows and Office. Applications were becoming client-server. I no longer worked for an engineering company; I had been moved into an IT company through the emerging business trend of outsourcing.

Those shifts will, however, have to wait for another day.


Creating Creativity

I’ve puzzled for some time about what makes something or someone creative.

Pablo Picasso apparently said:

"Every child is an artist. The problem is how to remain an artist once he grows up."

There have been times in my life when I’ve been massively more creative than I feel I am in this phase. But why should that be? What are the things that create a good recipe for creativity? What are the elements that stifle artistry?

In this talk, at Google, Tina Seelig, author of inGenius: Unleash Your Creativity to Transform Obstacles into Opportunities, outlines how it’s the interaction of attitude, knowledge and imagination with resources, culture and habitat that creates the engine for innovation:

Tina Seelig: “InGenius”, Authors at Google

(I’m writing this sat at a light grey desk, with a darker grey surround, looking at a grey and silver laptop connected to a silver monitor while typing on a very dark grey keyboard. I think it might be my habitat that I change first.)

via Inc.com

The Move to Mobile (in the UK)

Google has made a new set of data available at their Our Mobile Planet site covering research into the use of smartphones for 2012:


The site contains a set of reports on mobile usage by country, but also makes the entire dataset of the research available as both a file and also via an interactive charting tool.

Being from the UK I was particularly interested in the report for here which has the following observations in its Executive Summary:

  • Smartphones have become an indispensable part of our daily lives
  • Smartphones have transformed consumer behaviour
  • Smartphones help users navigate the world
  • Smartphones have changed the way consumers shop
  • Smartphones help advertisers connect with consumers

There are a number of really interesting statistics too:

  • Smartphone penetration has grown from 30% in 2011 to 51% in 2012
  • 78% of smartphone owners don’t leave home without their device
  • 72% of smartphone owners use them at work
  • 64% of smartphone owners access the Internet at least once a day and emailing is still the most popular usage
  • 21% would rather give up the TV than their smartphone
  • 80% of people use their smartphone while doing other things, 55% of them while watching the TV
  • and many more…

This rings true with what I am seeing out-and-about, and the observations are similar across the globe.

Here’s one of the charts for men of my age group and the importance of the smartphone, 74% won’t leave home without their phone and 20% would rather give up their computer than give up their smartphone:


"40 hours a week is just about right"

Productivity is the key to business success, not working hours.

For centuries we’ve known that productivity is heavily influenced by the number of hours we work. We know that we have to put in the hours if we are going to produce anything, but we also know that if we work too many hours our productivity decreases. Put simply – there’s a limit to how much you can produce in a week.

Inc. returned to this subject this week – Stop Working More Than 40 Hours a Week:

The workaholics (and their profoundly misguided management) may think they’re accomplishing more than the less fanatical worker, but in every case that I’ve personally observed, the long hours result in work that must be scrapped or redone.

This article was written on the back of the announcement that Sheryl Sandberg, the Chief Operating Officer at Facebook, leaves work at 17:30 every day to be with her family. That this is newsworthy is itself a testament to the state of the modern working environment.

The Inc. article is a good summary of the issue, but there’s one part that I’d quite like to comment on:

Proponents of long work weeks often point to the even longer average work weeks in countries like Thailand, Korea, and Pakistan–with the implication that the longer work weeks are creating a competitive advantage.

However, the facts don’t bear this out. In six of the top 10 most competitive countries in the world (Sweden, Finland, Germany, Netherlands, Denmark, and the United Kingdom), it’s illegal to demand more than a 48-hour work week. You simply don’t see the 50-, 60-, and 70-hour work weeks that have become de rigueur in some parts of the U.S. business world.

As a worker in the United Kingdom I can tell you that while these details are technically correct, they aren’t practically correct, at least not from my perspective. I know many people in Britain who regularly put in 50-, 60-, 70-hour working weeks and have done so for extended periods of time. For them these kinds of working hours have become de rigueur. We have traditionally had quite a lax implementation of the working time directive, so it’s not really appropriate to assume that people work less than 48 hours just because that’s what the law says.

It’s personally very interesting that the five countries (50%) which, I understand, implement the working time directive in a more stringent way are all ahead of the UK in the Global Competitiveness Report. So in that respect the article still makes a very valid point: we still have a lot of lessons to learn.

"Be with your friends who are here"

There are a number of situations where I would quite like to do this:

Releasing creativity through doodling

An interesting article in the Wall Street Journal entitled Doodling for Dollars says:

Put down that smartphone; pick up that crayon.

Employees at a range of businesses are being encouraged by their companies to doodle their ideas and draw diagrams to explain complicated concepts to colleagues.

While whiteboards long have been staples in conference rooms, companies such as Facebook Inc. are incorporating whiteboards, chalkboards and writable glass on all sorts of surfaces to spark creativity.

This is something I have noticed too. People are so distracted by technology these days that they need to be drawn into a meeting before they really engage. The most productive meetings I have are ones where there are a small number of people all contributing to a whiteboard. It’s not possible to be a part-time member of that type of meeting, you’re either in, or you are out.

The most popular posts on this site continue to be ones on Rich Pictures which is a form of doodling to communicate a concept. I regularly walk into meetings with sheets of A3 paper in order to draw out what I think I’m hearing, this often takes the form of a mind-map, but is just as likely to be a spider diagram linking together the conversations.

"Companies need to help employees unplug"

This is a quote from Ndubuisi Ekekwe in the Harvard Business Review talking in an article entitled Is Your Smartphone Making You Less Productive?:

Companies need to help employees unplug. (Of course, every business is unique, and must take its own processes into consideration. But for most companies, giving employees predictable time off will not hurt the bottom line.) In my own firm, when we noticed that always-on was not producing better results, we phased it out of our culture. A policy was instituted that encouraged everyone to respect time off, and discouraged people from sending unnecessary emails and making distracting calls after hours. It’s a system that works if all of the team members commit to it. Over time, we’ve seen a more motivated team that comes to work ready for business, and goes home to get rejuvenated. They work smarter, not blindly faster. And morale is higher.

Give it a try in your own company. As a trial, talk to your team and agree to shut down tonight. I’m confident that you’ll all feel the benefits in the morning.

How do you try to create shutdown times and unplug?

(May I apologise for my ramblings last week; there was way too much information in one post. I promise to get back to my normal approach of little and often.)

Conversation, Connection, Communication, Rudeness, Isolation, Etiquette and Technology

This is probably more than one post, but all of the thoughts came at the same time and they kind of fit together so here they are as a single stream:

I have a rule, if I’m in a conversation with someone and they start to look at their mobile device or laptop I stop talking. I used to just sit there until the person came back, but after a couple of occasions where I’ve sat for a few minutes waiting for the person to come back I’ve modified my behaviour and I now leave. I give them a little while to come back, but if they have clearly left the conversation I will leave too.

Previously I’ve written about being In the same room, but not together when observing the interactions in my own family. At this year’s TED Sherry Turkle gave a talk on Connected, but alone? She has some very interesting, and worrying, things to say about our relationship with our devices:

Our little devices are so psychologically powerful that they don’t only change what we do, they change who we are.

She makes a much better job than I did of explaining the worry that I was expressing in my post Post 1000: Thinking about thinking, the brain and information addiction.

She goes on to say when talking about the way that we flit between being present and being somewhere else:

Across the generations I see that people can’t get enough of each other if, and only if, they can have each other at a distance in amounts they can control. I call it the Goldilocks effect – not too close, not too far, just right.

In other words – we are desperate to connect but we want to do it on our own terms and in a way that provides immediate gratification.

Sherry Turkle: Connected, but alone?

If you watch the recent Project Glass video posted by Google you’ll notice many of these same characteristics in the interactions that they envisage. Notice how long it is before the person wearing the glasses interacts with a real person and how many opportunities he had to interact that were replaced by technology.

Project Glass: One day…

In a report from August 2011, Ofcom highlighted our changing attitude towards technology and, in particular, smartphones.

I wasn’t sure about the statistic on usage in the toilet until the other day when I went into a toilet and noticed the gentleman (teenager) at the latrine next to me had one hand dealing with normal latrine activity while texting/tweeting with the other.

In a recent InformationWeek article, Cindy Waxer describes 6 Ways To Beat IT Career Burnout, and here’s #6:

6. Take a week off. Seriously.

"By off, I mean off," says Russell. No smartphone, no email, no telephone calls.

It’s been interesting over the last couple of weeks talking to colleagues returning from an Easter holiday break. Some of them have said something along the lines of "it was great, I completely got away from it all" while others have said "I stayed on top of my email while I was away so the return was much easier". To the second set of individuals I’d like to ask the question – "what was the person you went on holiday with doing while you were staying on top?"

Most of my posts have a conclusion on them, but I’m struggling to work out what it should be on this post. We need to start to understand where we are letting the technology take us to, but what does that mean? We need to work out what our relationships are going to look like in the future, but how do we do that? We need to understand what the new etiquette is going to be, but how? I think, though, I’ll finish off with Sherry’s words "it’s time to talk".

"Let us make a special effort to stop communicating with each other, so we can have some conversation." Mark Twain

"Vulnerability is the birthplace of innovation, creativity and change"

In a follow-up to her very popular TED talk on vulnerability, Brene Brown talks about the impact of that first talk and the power of shame.

In talking about the impact of the initial talk, she mentions requests from the business community to go and speak – but not to speak about vulnerability, to talk instead about innovation, creativity and change. It’s then that she uses these words:

“Vulnerability is the birthplace of innovation, creativity and change”

How true those words are.

Brene Brown: Listening to Shame

Post 1000: Thinking about thinking, the brain and information addiction

Today is my birthday, it also happens to be the day on which I have reached 1000 posts, so it seems like a good time to reflect a bit on previous post themes.

Morecombe Bay SunsetWe are currently going through a revolution that is being fuelled by technology but is primarily a social and economic change.

I first posted about this back in 2006 when I started with a couple of posts:

Both of these posts put forward the view that the people we are going to need in the new economy are versatile generalists and creative people. In other words, we are going to move from a left-brain economy to a right-brain one, at least in the traditional developed economies. This, in turn, will make the brain ever more important.

I have a nagging fear and it’s this: the brain is ever more important, yet we make people work in ways, and subject them to technologies, for which we really have no idea of the impact. In other words, I worry that in years to come we will see employees suing their employers for the damage done to them by current technology, much like we have seen mine workers receiving compensation for the impact of their chosen trade.

I worry that the millions of people constantly being interrupted by Facebook and Twitter are doing themselves unseen and yet to be understood damage.

We are already starting to know about some of the impacts and they are concerning.

It’s already accepted wisdom that people’s attention spans are shorter than they used to be. In a post from 2010, Nicholas Carr stated that The Web Shatters Focus, Rewires Brains.

Impacts such as information addiction are starting to be documented, researched and understood, but we are only at the beginning of that journey. I know of a number of young people who rarely leave their bedrooms and think nothing of putting in 10 hours solid on a particular game. I know of people who can’t go for more than a few seconds without having to check in to one or other of the social media networks. Anyone else heard the phrase Facebook widower?

Then there are impacts such as the drive to multitask, even though we are awful at it and it causes us all sorts of problems. One of the more popular posts on this blog is entitled “Multitasking is dumbing us down and driving us crazy”. I wrote that post back in 2008, when Walter Kirn estimated that workers wasted 28 percent of their time "dealing with multitasking related transitions and interruptions". Multitasking has become a huge epidemic – everything from the woman who was driving behind me yesterday while on the phone (in her hand) and doing her lipstick, through to the conference calls which you know would only take 10 minutes if everyone just concentrated.

There is emerging evidence to show that the brains of digital natives are different to those of digital immigrants like myself, but do we know that’s a good thing?

There’s also the physical impact that I know a number of people are already experiencing; I explain my own experience in posts about Tension Headaches. There’s also the current conversation and research on the dangers of sitting for long periods of time.

It’s time to look after ourselves and especially to look after our brain.

(I was amazed how much I had written on this subject once I started looking into it, but I’ve kept the post short because I know how short an attention span you all have!)