"We're writing these things that we can no longer read." Kevin Slavin

A little while ago I wrote an article on algorithms, Living with the algorithms, in which I tried to convey some of the ways in which algorithms are already influencing our daily lives.

Kevin Slavin does a better job in his TED talk titled How algorithms shape our world.

You're buying a service now! You don't get to set the pace of change.

There’s a huge power shift taking place in corporate IT.

Previously Jane in Manufacturing would use the tools that were deployed to her by Frank in the IT department at a pace defined by Frank or Frank’s boss, Mary.

Vendors would provide updates to Frank annually and Frank would decide whether to deploy the updates or not. If Frank didn’t like the update, or didn’t have enough time to deploy the update because Mary had him busy on other things, Frank would skip an update and wait for the next one.

Before Frank could deploy anything, though, he would have to prove to Mary and the business management that the planned change wouldn’t impact the business too much. He’d do this by putting the updates into a number of test environments. There would be a ‘sand-pit’ testing area where he’d get to see what the update looked like. He’d then move on to the ‘pre-production’ environment where he’d show that the update didn’t impact other systems. He’d probably run a ‘pilot’ before eventually deploying the updates to the rest of the business. In each phase various people would be involved to make sure that the planned change did what was expected of it.

If Jane wanted something that was in the new update she just had to wait. Likewise, when Frank decided that an update was being deployed, Jane didn’t have much choice about whether to accept it or not; she normally didn’t even get a choice about when the update would happen.

In recent years the IT market has adopted as-a-Service as the way of delivering capability to people like Jane.

Previously Frank in IT decided when updates were going to occur; now the person who decides whether an update gets deployed isn’t Frank or Jane, it’s someone at the Service Provider. The rate of update is intrinsic to the Service being used.

The pace is no longer being set by Frank; the pace is being set by the Service Provider. Frank, and Mary, just need to keep up. Frank is still involved in this as-a-Service world because he is still providing support for the tools to the business, but he’s no longer in control of the rate of change.

To compound Frank’s problems, the Service Provider is no longer updating the Service on an annual basis; they are updating the Service every day, with significant changes coming at least every quarter. The Service Providers need to keep up with their competition and that means rapid change. Some of the time Jane is delighted by the new capabilities, at other times she’s dismayed that something has changed or been removed.

The previous testing process has lost most of its relevance because it’s the Service Provider doing that testing. There are still areas where the Service integrates with other Services that Frank would like to test, but there simply isn’t the time to keep up with the pace of change.

There are times when Jane comes in to work and needs to do something quickly, only to discover that everything looks different and she has no idea how to do what she needs to do. She phones Frank, but he has no idea either and it’s going to take him a little time to talk to the Service Provider and work it out for her. The work that Mary had planned for Frank will have to wait because operating the business is always more important than the IT department’s priorities. Jane asks if she can talk to the Service Provider directly, but the contract with the Service Provider only allows a set of named people to contact them.

Not only is Frank in IT having to get used to the pace of change, so is Jane in Manufacturing and so are all of her people. With a higher rate of change the impact of each change is lower, which is a good thing, but the overall volume of change is much higher.

The issue for Jane isn’t just about getting today’s job done, though; the other challenge is keeping ahead of the competition. The services that she uses are evolving rapidly and she can’t afford to be behind her competitors who are using the same services. The competition gets the new capabilities on the same day that she does, and her ability to exploit them has become a competitive differentiator.

Many services mitigate some of these issues by giving service users a period during which they can choose whether to adopt the update. In these schemes, though, the update eventually becomes mandatory and you no longer have a choice. Other schemes include pioneer approaches that allow businesses to give some people insight into the next set of changes before the majority of the service users get them. This approach would allow Frank to use the next iteration of the tools before Jane gets them so that he could be ready; it doesn’t help Jane keep ahead of the competition, though.

Rather than treating change as a constant risk it’s time to step aside from the old ways of doing things and adopt new ones that support change as a mechanism for growth.

“Change is inevitable. Growth is optional”

John C. Maxwell

Knowing the Real Story in a world of Headlines and Algorithms

I’ve been pondering the question of how we know that what we are being told is the real story. This was highlighted by a recent incident at an AFC Championship game.

At a recent game between the Denver Broncos and the New England Patriots there was a technical problem with the system that provides vital information to the sidelines. That system uses, as part of a marketing deal, a set of Microsoft Surface tablets.

Disclaimer: I’m British and know absolutely nothing about American Football, nor do I really want to; thankfully I’m not commenting on the game. I’m not even commenting on whether the Microsoft Surface is good at what it does. I’m commenting on how stories emerge and get transmitted.

The most visible part of this failure was a whole gang of people looking blankly and shaking their heads at a set of very visible bright blue Microsoft Surface devices.

All of the initial news headlines were around the failure of Microsoft’s Surface tablets:

These headlines later became a bit more nuanced:

The headlines call out the Microsoft Surface but the articles themselves state that the problem wasn’t with the Microsoft devices at all, but with the stadium network that they were connecting to. It’s worth noting that these are all headlines from professional news organisations.

Microsoft has had to launch a full media defence of their technology in an attempt to regain the marketing momentum:

“Microsoft Surfaces have not experienced a single failure in the two years they’ve been used on NFL sidelines. In the past two years, Surfaces have supported nearly 100,000 minutes of sideline action, and in that time, not a single issue has been reported that is related to the tablet itself.”

Microsoft Devices Blog

Their attempt to change the perception that their devices failed is admirable but probably ultimately futile; we live in a world of headlines and algorithms.

The search algorithms aren’t too bothered about presenting a balanced story; they present the popular story, and the popular story at the moment, in the headlines, is that the Microsoft Surface failed.

The natural thing to search for is surface fail, or nfl surface fail, both of which lead with the stories whose headlines include the words surface and fail; it’s only lower down the list that the more balanced headlines come out.

Search Twitter for surface fail and it’s a bit easier to see the progression of the story because the results are presented on a timeline where later developments appear at the top. The algorithms aren’t having as much of an impact, but even there the top story is this one:

As I said in my disclaimer, I’m not commenting on whether the Microsoft Surface is any good or not. What intrigued me was the progression of the story. The headline was one thing, the real story was another, the conclusions jumped to were incorrect, and yet the overarching commentary remains with the headlines, remembering that the headlines have been cleverly constructed to rank highly with the algorithms.

This challenge is nothing new; we’ve always had the story told to us by various agents. It used to be the newspapers:

You can never get all the facts from just one newspaper, and unless you have all the facts, you cannot make proper judgements about what is going on.

Harry S Truman

Now the story agents are on-line media, but we still have to remember that the story we are receiving is filtered and even manipulated. We need, therefore, to approach the on-line media with the same dose of suspicion with which we approached the newspapers.

Just because all of the other fish are swimming in one direction doesn’t mean that they are swimming in the right direction.

Disappointing Technology

Do you have disappointing technology?

We have a tumble dryer that is supposed to detect when something is dry and then stop. It has a number of settings ranging from Iron Dry to Bone Dry. Guess which one we always use? That’s right: Bone Dry. Even then it doesn’t always dry everything. The simplicity of placing a set of items in a dryer, setting the time and coming back later for dry clothing has been replaced by a lingering doubt that the washing will, indeed, get dried. It’s so disappointing.

On a similar note, I purchased a drying rack, one of those things that you erect to dry clothing without the need for a dryer. It looks great, but some of the clips that hold it in place aren’t up to the job and you get the frustration of placing wet clothes onto a part of the construction only for it to collapse while you are doing it. It could be so good, but it’s so disappointing.

I’ve owned many home network routers in my time. A home router is one of those things that you should just be able to connect to the network and forget about. Unfortunately each one has had an ability to stop working at the time when the technical support person is furthest away from being able to fix it. How disappointing.

I purchased a tiny iPhone charging cable that fitted onto my keyring so that I always had it with me. The first few times I used it were great, but it soon became worn and eventually one of the ends came off completely. Very disappointing.

Our Tivo box has decided that it doesn’t like playing YouTube videos. Another thing to investigate and hopefully fix. Disappointing.

My work-supplied Android phone has decided that it’s going to stop synchronising calendar entries into the calendar app that I like. It’ll synchronise them into the inferior built-in calendar app, but I don’t want to use that app, I want to use the one I like. Most disappointing.

As technology people we can sometimes get excited about this magical new thing or that incredible widget but sometimes I think it does us good to remember that technology can be disappointing.

Do we have a truth problem?

What is truth?

It’s a question that philosophers have debated for millennia. Such philosophical debates are well beyond the remit of what I would normally talk about on this blog and I’m not going to change that with this post.

I only raise the question because I think we are increasingly struggling with understanding what is true.

Recently someone told me a story as if it were fact and then proceeded to tell me that it had to be true because they checked on Google! Is Google a keeper of truth?

The BBC recently highlighted a set of false rumours that were circulating around the Internet regarding the Nepal earthquakes.

Dramatic footage and images have emerged from Nepal, showing the devastation caused by the most deadly earthquake in the country in 81 years. But amid the authentic pictures are fake footage and viral hoaxes.

One of the biggest: On Facebook and YouTube, various versions of a video were erroneously described as closed-circuit television footage from a Kathmandu hotel. They show an earthquake causing violent waves in a swimming pool. The video was picked up by international media – including one of the BBC’s main news bulletins – and has been viewed more than 5m times. However it’s not from Nepal – it appears to have been taken during an earthquake in Mexico, in April 2010.

Someone went through the effort of scrubbing the date stamp from the video to make it more believable! Even the BBC wasn’t sure about the truth.

I don’t think a month goes by without someone sending me an email, tweet or Facebook post about some scare story that I need to respond to. Not one of them has been true.

In a world where information is replicated, sent, favourited, retweeted and recreated by billions of tapping fingers and thousands of robots, how do we recognise the tellers of truth? In that same world how do we use the indexers of information to validate truth?

Google isn’t trying to be a truth teller – it’s just answering the questions you ask it from the index of information that it has.

How was a parent who was told about the Game of 72 to know that it was completely fake?

We’ve had systems of trust for generations that have relied upon personal relationships and a proven track record. Most people know someone who they can rely on to tell the truth; likewise most of us know someone whose words aren’t worth the breath that created them.

Once we started writing we began to place our trust in those doing the writing.  When it came to news, the journalist became our teller of truth.

Then came the radio and the television and the journalist retained their position.

The position of the journalist is under massive pressure though. The pressure to report ever more rapidly means that they have less time to validate a story. The ownership of news organisations creates problems when the owners want to portray a particular viewpoint. Revenue reduction for newspapers means that fewer journalists are covering more news.

We are becoming increasingly sceptical about the truth-telling of journalists.

That’s just one sphere of truth – the news.

How many times have you read about a new scientific study only to be told a month later that another one contradicts it?

The following diagram shows the diversity of outcomes from studies on foods and cancer:

How do you tell the truth from that? Should I drink tea or avoid it?

We are becoming increasingly sceptical about the truth-telling of scientists.

If we have a problem with journalists and scientists how do we decide who is trustworthy? Who are our tellers of truth?

I think we need a new set of skills to help us, or perhaps it’s just the same old skills but used in a new way. We need to learn how to do our own investigating. We need to learn to wait for stories to mature and for the truth to become clear. We need to become questioners.

"The Rise of Dynamic Teams" – Alan Lepofsky and Bryan Goode

Continuing my review of some of the sessions from Microsoft Ignite 2015, the title The Rise of Dynamic Teams caught my attention.

When I saw that the presenters were Alan Lepofsky and Bryan Goode it was definitely going to be one to watch.

This session has an overarching question raised by Alan:

Could you be more effective at work?

Well of course I can.

All I had to do was think back to the last time I was frustrated at work, and there, clearly presented, was an opportunity to be more effective.

Promised Productivity

Alan also highlights that we’ve been promised improved productivity for decades now but, in his opinion, it hasn’t really been delivered.

My personal opinion is that we have improved our productivity, but mostly by doing the same things quicker rather than by working in different ways. A good example of this is email, where we send far more messages far quicker, but definitely less effectively.

Framing the problem

Many of us can recognise the issue of information overload. We use many different systems and are fed information all the time.

Alan frames a different problem which I also recognise – input overload. This is the problem we experience when we think about creating something and can’t decide what it is we are creating or where we are putting it – Which tool should I use? Where did I post it?

The point is that we now have a multitude of choices of tools, so we don’t necessarily need more tools, but we do need the tools to be simpler and to work together.

Best of Breed v Integrated Suites

Alan reflects on two distinct approaches to collaborative tooling – one which focusses on the best of breed capabilities and one which takes a suite of collaborative capabilities.

These are illustrated by two slides in the session: Best of Breed Collaboration Tools and Suites Collaboration Tools.

The key to the suites approach is the content at the centre, combined with the ability to integrate third-party capability and have data portability.

I’m not sure I would put everything in the centre that Alan does, but I wholly agree with the principle. One of the significant challenges with a suite approach is that by choosing a suite you risk creating a lock-in situation. This lock-in isn’t necessarily data lock-in; what’s more likely is capability lock-in.

Intelligent Collaboration

Alan explains what he means by Intelligent Collaboration:

“This is poised to be the coolest shift we’ve had in collaboration tools in 20 years”

“The ability for us to start doing really cool things based on intelligence is really going to dramatically change the way we work”

In the Microsoft approach this intelligence will initially be focussed on the individual, but will then extend to teams and organisations.

The systems that we have today have a very limited view of context, and what view they do have they tend not to use with any intelligence. Take the simple example of email build-up during a holiday period. You can set up an out-of-office response, but wouldn’t it be great if something more intelligent happened?

If we take that simple example and add to it all of the sensors that will soon be reporting on our well-being and location, you can imagine getting a response from your boss’s intelligent assistant asking you to attend a meeting on her behalf because her flight back from holiday has been placed into quarantine due to an outbreak of a virus for which she is showing the initial symptoms.

Adding to the context will enable many more intelligent interactions.

Imagine a digital assistant system that made decisions based on location, time, time zone, emotional state, physical state and many more.
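Purely as an illustration of that idea, here is a minimal sketch of what a context-driven rule might look like. Everything in it (the context fields, the thresholds and the responses) is hypothetical and not taken from the session.

    from datetime import datetime

    def suggest_response(context: dict) -> str:
        """Pick a reply to a meeting request from a simple view of context."""
        # 'context' is a hypothetical bundle of signals: location, local time,
        # calendar state and well-being data from wearables.
        if context.get("in_quarantine"):
            return "Decline and ask a delegate to attend on my behalf."
        if context.get("travelling") and context.get("hours_to_destination", 0) > 2:
            return "Propose joining remotely after landing."
        local_hour = context.get("local_time", datetime.now()).hour
        if local_hour < 7 or local_hour > 20:
            return "Suggest an alternative slot during working hours."
        if context.get("stress_level", 0) > 8:  # hypothetical wearable score, 0-10
            return "Accept, but block recovery time afterwards."
        return "Accept."

    # Example: the boss's assistant responding while she is stuck in quarantine.
    print(suggest_response({"in_quarantine": True, "local_time": datetime(2015, 6, 1, 9)}))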

The Rise of the Dynamic Team

This is the point in the session where Bryan Goode adds the Microsoft perspective. He does this by focussing on:

Modern Collaboration

The perspective defined by Bryan is that teams will continue to utilise many different tools and will be increasingly mobile.

Microsoft are also investing heavily in meeting experiences, something that is in desperate need of improvement for all of us.

Intelligent Fabric

In order to enable modern collaboration Bryan talks through the Microsoft view of the need for an Intelligent Fabric.

Two examples of this fabric being built are Office 365 Groups and Office Graph.

Office 365 Groups provide a unified capability across the Office 365 tools for the creation of teams. A group created in one of the Office 365 tools will be visible in all of the other tools – Sites, OneDrive, Yammer, Exchange. Doing this makes a group a fabric entity rather than being locked into any particular tool.
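As a rough sketch of what being a fabric entity means in practice, the snippet below lists unified (Office 365) groups over REST. It assumes you already have an OAuth access token, and it uses the present-day Microsoft Graph endpoint, which is where this capability is now exposed; it isn’t taken from the session itself.

    import requests

    # Hypothetical token obtained from Azure AD; acquiring it is out of scope here.
    ACCESS_TOKEN = "eyJ..."

    resp = requests.get(
        "https://graph.microsoft.com/v1.0/groups",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        # 'Unified' is the groupTypes value that marks an Office 365 Group.
        params={"$filter": "groupTypes/any(c:c eq 'Unified')"},
    )
    resp.raise_for_status()

    for group in resp.json().get("value", []):
        # The same group object backs the Sites, OneDrive, Yammer and Exchange views.
        print(group["displayName"], group["mail"])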

Office Graph brings together all of the signalling information from the Office 365 tools and any other integrated tools. Its role is to aggregate the metadata from different interactions and activities.

Personalised Insight

An Intelligent Fabric is one thing, but creating value from it is the important part.

In the presentation Bryan demonstrates Office Delve which utilises the signalling from Office Graph to create personal insights.

The personal insights currently focus on the individual, but they are being extended to provide insights for groups and organisations.

“Teamwork is becoming a first-class entity across our products”

Bryan Goode

I’m not going to explain the demonstrations other than to say that they are worth watching, as is the rest of the presentation.

Conclusions

Productivity and collaboration are going to be defining features of future organisations, as can be seen from the posts that I wrote on the Productive Workplace.

Microsoft is in a position to generate a lot of innovation and disruption by building on top of the Office 365 ecosystem. Groups, Graph and Delve are just the start of that. Having released themselves from the shackles of delivery by the Enterprise IT organisation, they can potentially move at a pace that places them ahead of the pack.

More…

The presentation and video for this session is here.

The video is also embedded below:

https://channel9.msdn.com/Events/Ignite/2015/BRK1106/player

Exchange 2016 Architecture Update – a few highlights from Ignite 2015

I’m catching up on some of the sessions from Ignite 2015. I wasn’t able to go, but thankfully many of the sessions are now available as videos with downloads of the presentations.

For some time now the Exchange team have defined a Preferred Architecture; the one for Exchange 2013 is here. This Preferred Architecture defines a set of best-practices including the use of multi-role servers and commodity physical servers.

At Ignite, Microsoft, via Ross Smith IV, put some further detail behind the change in architecture for Exchange 2016.

These are some of my highlights:

A Preferred Architecture?

If you are buying an expensive item it’s generally a good idea to read the manufacturer’s manual; the Preferred Architecture is that manual for Exchange.

For Exchange 2013 the Preferred Architecture says this:

While Exchange 2013 offers a wide variety of architectural choices for on-premises deployments, the architecture discussed below is our most scrutinized one ever.

While there are other supported deployment architectures, other architectures are not recommended.

In other words – we strongly recommend that you do this.

There is good reasoning in the Exchange 2013 Preferred Architecture for the recommendations that are made there. As Microsoft shows more about Exchange 2016, further reasoning for following the recommendations is being revealed.

Multi-Role to Single Role

Exchange 2010 and Exchange 2013 had a number of server roles (client access server, mailbox server), which could be split across different servers. In Exchange 2013 the Preferred Architecture was to deploy multi-role servers – in other words put all the roles on one server. In Exchange 2016 there is only the mailbox server, the architecture alternative of splitting the client access server role and the mailbox server role no longer exists. I never really understood the benefit of splitting them anyway;  not having to discuss the alternative in the future will be most welcome.

Topology Requirements and Improvements

For co-existence you’ll need to be on Exchange 2010 SP3 RU11 or later. You’ll need Windows Server 2012 R2 to run Exchange 2016. You’ll also need your Active Directory to be at the Windows Server 2008 R2 forest and domain functional levels or later, running on at least Windows Server 2008 R2.

From a client perspective you’ll need at least Outlook 2010 SP2 with some specific patches; Outlook 2013 will also need to be at SP1 and have some patches. This particular recommendation is, of course, open to change prior to Exchange 2016 shipping.

There are improvements in indexing which will result in significant reductions in inter-site replication traffic, which is always welcome.

MAPI/HTTP is now the default protocol; MAPI/RPC is finally completely and utterly dead. The use of HTTP significantly improves the ability to deliver services from consolidated data centres and across slow networks, including the Internet.

These changes, and others, make for a more seamless co-existence with Exchange 2013 and an improved migration experience. If you’ve previously followed the Preferred Architecture you are potentially in a place to drop an Exchange 2016 server into the environment and start using it quite quickly.

Building Block Hardware

Microsoft’s preferred architecture, as it was with Exchange 2013, continues to use physical commodity building block servers.

If I could recover all the hours that I’ve spent debating this point I would have invested them in practising sketching and I would now be a master illustrator. As the laws of physics don’t currently allow me to retrieve that time, I’ll continue to convince people of the error of their ways when they want to add virtualisation, RAID, SANs, backup and all sorts of other resiliency technology.

Ross Smith’s point is this: Exchange has a full resiliency and recovery model, so use it. In order to make the best use of that Exchange recovery model these are the recommendations for the building block server:

  • Servers are deployed on commodity hardware
    • Dual-socket systems only (20-24 cores total, mid-range processors)
    • Up to 196GB of memory
    • All servers handle both client connectivity and mailbox data
  • JBOD storage
    • Large capacity 7.2k SAS disks
    • Battery-backed write cache controller (75% write / 25% read)
    • Multiple databases per volume
  • AutoReseed with hot spare
  • Data volumes are formatted with ReFS
  • Data volumes are encrypted with BitLocker

You are still limited to 100 databases per server and 16 servers in a DAG, but those limits are understandable, and I’ve never seen them become a significant constraint.

More…

These are just my highlights; for more architecture changes get the presentation deck here or watch the video below:

https://channel9.msdn.com/Events/Ignite/2015/BRK3197/player

"Your site has updated to WordPress 4.2.2‏"

This morning I received a number of emails from WordPress sites that I manage telling me that they had automatically updated to WordPress 4.2.2.

This is a security update that happened in the background without me having to do anything.

This is a positive sign of the growing maturity of the WordPress ecosystem. Security updates are inevitable; making those updates seamless to users and administrators is very welcome. It also speeds up the rate of deployment massively, thus reducing the window of exposure to a security problem.
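Out of curiosity about how a site learns that a new release exists at all, here is a minimal sketch that queries the public WordPress.org version-check API, the same service that WordPress core polls; the installed version is hard-coded purely for illustration.

    import requests

    # Public API that WordPress core polls to discover available releases.
    resp = requests.get("https://api.wordpress.org/core/version-check/1.7/", timeout=10)
    resp.raise_for_status()

    offers = resp.json().get("offers", [])
    latest = offers[0]["version"] if offers else "unknown"

    installed = "4.2.2"  # the version my sites reported after the automatic update
    print(f"Installed: {installed}, latest offered: {latest}")
    if latest != installed:
        print("A newer release is available.")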

I could, of course, choose to move away from self-hosting WordPress and to use the cloud delivered WordPress.com, but I quite like the fun.

Security Software – The Ivy Around the Tree

Most mornings I go for a walk trying to get my 10,000 steps in for the day. Many of these walks take me through a local wood. This wood isn’t heavily managed, it’s pretty much left to nature to decide what happens.

In the middle of the wood the trees are looking a bit ramshackle. They are reasonably tall adult trees, but a number of them have come down over recent years. Each one of them is covered in dense ivy foliage; the ivy is slowly killing its host.

I regard much of the security software that we carry around on our devices as being like the ivy. It wraps itself around everything, sapping its energy and providing little in return.

Eric Lawrence recently wrote an article about Browser Benchmarks in which he made this claim:

Every year for Microsoft’s annual AV summit, the IE Team puts together a chart of the impact of AV on browser performance, showing the variation across the top 20 AV products (the variation is huge). They don’t want to publish this data, but the impact ranges from “bad” to “absurdly unbelievably bad.” The best products impact performance by ~15%, the worst slow the browser by 400% or more. Several of the products crash the browser entirely and can’t be benchmarked fully. Conducting these benchmarks correctly is difficult—you need to account for every piece of software running on the machine and ensure that the test conditions are entirely fair (hardware, software, updates, etc); as a consequence, many of the “public” benchmarks are rather inaccurate.

I’m taking Eric at his word that this is really what happens at the AV Summit each year. Eric is a former member of the IE team after all and I have no reason to doubt him, but likewise I have no other evidence to corroborate it either.

Eric then goes on to talk through anecdotal evidence of his own which confirms the benchmarks. My own anecdotal evidence parallels the benchmark experience too: the home laptop with a simple security configuration renders browser pages much faster than my corporate machine with lots of security software, even though the corporate machine has significantly more power.
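To make those percentages concrete, here is a trivial sketch using a notional two-second page load; the baseline is my own illustrative number, not Eric’s.

    baseline = 2.0  # notional page-load time in seconds with no AV installed

    # "Impact performance by X%" read as adding X% to the load time.
    for label, impact in [("best AV (~15%)", 0.15), ("worst AV (400%+)", 4.00)]:
        print(f"{label}: {baseline * (1 + impact):.1f}s instead of {baseline:.1f}s")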

Google, Microsoft, Apple, Opera and Mozilla are investing thousands of hours in optimising the performance of their browsers. The IT press write thousands of lines of material commenting on those optimisations and their impact on benchmarks. Millions of people use devices every day that get nowhere near those benchmarks because of the ivy wrapped around their devices. Or to put it another way:

Mobile devices offer “Desktop Class” performance only because your desktop has been wrecked.

Eric concludes with a phrase that I’ve used a lot over the years:

Antivirus software is too often a cure that’s as bad as the disease. The business model of AV rewards noisy products, and the desire for “checkbox parity” leads to a race to shove its tentacles in all sorts of places they don’t belong (e.g. the internal data structures of the browser). Unfortunately, even beyond antitrust concerns, Microsoft is very limited in its ability to deal with horrible AV products due to court precedents that say that AV can pretty much get away with doing anything it wants in the name of “protecting the user.”

The concern for Microsoft has to be that while they try to grow their tree carrying the ivy, other trees in the wood don’t have that handicap and can grow more freely. Those other trees have been left mostly unscathed by the impact of ivy. There was a time when the ivy was required, but the Microsoft tree now has good enough protection of its own. Let’s face it, the protection that the ivy provided was never really that good anyway. It’s time to start chopping back the ivy and to stop feeding it.

Password (lack of) complexity and the impact of mobile

I need to confess right at the start that I’ve never been a fan of adding rules to create ‘complex’ passwords. I’m talking about the type of thing where you insist that someone uses a capital letter, some numbers and a special character.

Studies show that when you do this most people create a pattern that they can remember, something like this:

  • Start with a capital letter
  • Numbers at the end
  • Special character right at the end

In so doing we inadvertently create a set of passwords that are easier to crack, not harder.
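A back-of-the-envelope sketch of why that pattern helps an attacker follows; the dictionary size, digit count and special-character count are illustrative assumptions, not measurements.

    import math

    # Naive view: 8 fully random characters drawn from ~95 printable ASCII symbols.
    random_space = 95 ** 8

    # Patterned view: a capitalised dictionary word, two digits and one special
    # character tacked on the end - roughly how people satisfy the rules in practice.
    dictionary_words = 50_000                            # assumed cracker word list
    patterned_space = dictionary_words * (10 ** 2) * 12  # 12 easy-to-reach specials

    print(f"Truly random password space: about 2^{math.log2(random_space):.0f}")
    print(f"Patterned password space:    about 2^{math.log2(patterned_space):.0f}")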

When creating a password on a mobile device the pattern usage gets even more embedded. It’s hard enough to type a long string on a mobile device keyboard; constant switching between the various keyboard contexts makes it even more difficult.

Let me explain using the standard iOS keyboard:

When I want to type a password the first screen that I see is the letter view:

Character Keyboard

To enter the first character as a capital letter, I click on the up arrow and then type the first part of the password.

If the capital letter is anywhere other than at the beginning I need to select the up arrow part way through the sequence, which is a bit messy, so I’m not likely to do that.

The next thing I do is click on the number key to show the numeric keyboard:

Numeric Keyboard

I’ll then type in the numbers and I’m also likely to enter the special character at the end from the subset being shown ($!~&=#[]._-+@).

If I choose not to use a special character from the subset on the numeric keyboard I then have to click on the special character key to see the special character keyboard:

Special Character Keyboard

There’s no direct route from the letter keyboard to the special character keyboard, so I’m never going to choose a special character from this keyboard in the middle of the password.

Also, experience tells me that some of the characters on the numeric keyboard don’t always work as special characters in passwords, so I use an even smaller subset.

There’s another factor: as someone who uses multiple mobile devices, the range of special characters that I’m likely to use is further reduced by the standard Android keyboard (as an example). Its subset of special characters on the numeric keyboard is different, and there are only a few common to both.

If you then layer on top of that the placement of the special characters on a full-sized keyboard you further reduce the easily available special characters. Why would you choose ~ over #?

So rather than making the password more difficult to guess, the inclusion of complexity rules actually makes it easier.

For a slightly more scientific answer:

I can understand why it’s an issue, particularly when people stick to such common passwords: Top 500 Passwords: Is yours there? It’s just that I happen to think that we would be better off just extending the length of passwords, until we get rid of them, that is.
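To put a number on ‘just extend the length’, here is a small sketch comparing a short ‘complex’ password (even granting it uniformly random characters from the mobile-friendly set described above) with a longer lowercase-only passphrase; the alphabet sizes are assumptions for illustration.

    import math

    def bits(alphabet_size: int, length: int) -> float:
        """Entropy in bits of a password chosen uniformly from the alphabet."""
        return length * math.log2(alphabet_size)

    # 26 lower + 26 upper + 10 digits + ~12 specials reachable on the iOS numeric keyboard.
    mobile_complex_alphabet = 26 + 26 + 10 + 12

    print(f"8 characters, 'complex' mobile alphabet: {bits(mobile_complex_alphabet, 8):.0f} bits")
    print(f"16 characters, lowercase letters only:   {bits(26, 16):.0f} bits")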

Desktop Scatterer, Folder Fanatic and File Dropper

As I walked around the office this morning I was struck by a colleague’s desktop on their PC. It was absolutely full of file icons, completely covered. I’ve seen this phenomenon before but never to such an extreme. I found myself recoiling at what I saw as a complete and utter mess. You may have guessed by my tone that I’m not a desktop scatterer.

My desktop has 16 icons on it; all of them from applications that have decided that I need a desktop icon. Sometimes I delete them, but many will make their way back at a later date, normally after an update. All of my files are in folders in a hierarchical structure; I am a bit of a folder fanatic.

There are other people who can never find anything; they seem to have an approach of dropping files into all sorts of places in the hope that they can find them later. There are times when the disorganised side of my personality turns me into a file dropper too.

I’ve never really understood the desktop scatterer; I suspect that scatterer is a bit derogatory and that the desktop is highly optimised to the way that they work. I understand the file dropper a bit: sometimes you just want to get on with things without having to think about organising what you are doing. Occasionally my folder fanaticism gets out of control and I put files within folders, within folders, within folders, within folders and can’t find anything.

The joy of being a folder fanatic or a file dropper is that there are now so many places to create folders and drop files: local disks, USB drives, network drives, Dropbox, OneDrive, Google Drive, SharePoint, wikis, email, Box, ShareFile, etc.

We all think and work differently and there are (believe it or not) advantages and disadvantages to each of these approaches.

File structures and systems are going to be around for some time because they are so flexible and enable us to optimise how we work. Perhaps it’s time, though, that we started helping each other to be as productive as possible in their use. What works for you?

Talking Technology Talk

In each area of life we love to create language to describe what we are doing. As technology people we are masters at acronyms and words that make no sense in normal life.

The image below is of a card that I was given and it illustrates how impenetrable our language can be: