Your data in their hands | When was the last time you read a privacy policy?

Last week Evernote got themselves into a public relations storm by updating their terms and conditions relating to the privacy of data. They then had to hastily update the policy, stating that they would no longer be making the planned changes.

The other month I wrote about digital exhaust, but there’s a lot of data that we place into others’ hands deliberately. When you type an email, upload a file, or fill in an online form, do you think about who may have access to that data? I’m not sure we often give it the consideration it deserves.

We should assume that the data is going to live forever, so our actions have lasting consequences, and so do the actions of the people who have access to our data.

Each of us has signed up to many terms and conditions that have included privacy statements, but few of us have read any of them.

Those privacy policies were mostly written for a relatively static world, but we are entering a new era of data privacy concerns as more of our data is handed to artificial intelligence and machine learning systems to assess and extract value from. That was one aspect of the Evernote situation:

“Human beings don’t read notes without people’s permission. Full stop. We just don’t do that,” says O’Neill, noting that there’s an exception for court-mandated requests. “Where we were ham-fisted in communicating is this notion of taking advantage of machine learning and other technologies, which frankly are commonplace anywhere in the valley or anywhere you look in any tech company today.”

Evernote CEO Explains Why He Reversed Its New Privacy Policy: “We Screwed Up”

The reality is that Google, Microsoft, Facebook and Apple have all been using machine learning for a long time; that’s how they can tell us interesting things like pre-warning us about traffic problems on our journey home when we haven’t told them where home is.

Most of the time we don’t even give the privacy of our data a thought, and we should. Did you know:

  • Many sites reserve the right to change their terms without telling you.
  • Many services claim copyright over part, or all, of your data.
  • Some sites don’t let you delete your account.
  • Many sites track you on other sites.

It’s terms like these that enable adverts for an item I searched for just a few minutes ago to now be showing in my Facebook feed.

When was the last time you checked the PrivacyGrade of an app before you downloaded it? Or checked Terms of Service; Didn’t Read before agreeing to the terms on a site? I suspect most of my readers have never visited these sites.

Ultimately the only lever that we have over these services is the commercial one, and most of them aren’t going to do anything to jeopardise that, but that won’t stop them pushing up against the edges of what we regard as acceptable. What we regard as acceptable is greatly influenced by whether we feel we are getting something for free.

This constant pushing against the barriers will then influence what the next generation regards as acceptable. The Facebook privacy policy runs to 2,719 words and was last updated on 29th September 2016. Even if I had read the privacy policy when I started using the service, I couldn’t tell you how many iterations it had been through or what changes had been made.

We are trading our privacy for access and I’m not sure we really understand the cost.

You're buying a service now! You don't get to set the pace of change.

There’s a huge power shift taking place in corporate IT.

Previously Jane in Manufacturing would use the tools that were deployed to her by Frank in the IT department at a pace defined by Frank or Frank’s boss, Mary.

Vendors would provide updates to Frank annually and Frank would decide whether to deploy the updates or not. If Frank didn’t like the update, or didn’t have enough time to deploy the update because Mary had him busy on other things, Frank would skip an update and wait for the next one.

Before Frank could deploy anything, though, he would have to prove to Mary and the business management that the planned change wouldn’t impact the business too much. He’d do this by putting the updates into a number of test environments. There would be a ‘sand-pit’ testing area where he’d get to see what the update looked like. He’d then move on to the ‘pre-production’ environment where he’d show that the update didn’t impact other systems. He’d be likely to run a ‘pilot’ before eventually deploying the updates to the rest of the business. In each phase various people would be involved to make sure that the planned change did what was expected of it.

If Jane wanted something that was in the new update she just had to wait. Likewise, when Frank decided that an update was being deployed, Jane didn’t have much choice about whether to accept it; she normally didn’t even have a choice about when the update would happen.

In recent years the IT market has adopted as-a-Service as the way of delivering capability to people like Jane.

Previously Frank in IT decided when updates would occur; now the person who decides whether an update gets deployed isn’t Frank or Jane, it’s someone at the Service Provider. The rate of update is intrinsic to the Service being used.

The pace is no longer being set by Frank, the pace is being set by the Service Provider. Frank, and Mary, just need to keep up. Frank is still involved in this as-a-Service world because he is still providing support for the tools to the business but he’s no longer in control of the rate of change.

To compound Frank’s problems, the Service Provider is no longer updating the Service on an annual basis, they are updating the Service every day with significant changes coming, at least, every quarter. The Service Providers need to keep up with their competition and that means rapid change. Some of the time Jane is delighted by the new capabilities, at other times she’s dismayed that something has changed or been removed.

The previous testing process has lost most of its relevance because it’s the Service Provider doing that testing, but there are still areas where the Service integrates with other Services that Frank would like to test but there simply isn’t the time to keep up with the pace of change.

There are times when Jane comes in to work and needs to do something quickly, only to discover that everything looks different and she has no idea how to do what she needs to do. She phones Frank, but he has no idea either, and it’s going to take him a little time to talk to the Service Provider and work it out for her. The work that Mary had planned for Frank will have to wait because operating the business is always more important than the IT department’s priorities. Jane asks if she can talk to the Service Provider directly, but the contract with the Service Provider only allows a set of named people to contact them.

Not only is Frank in IT having to get used to the pace of change, so is Jane in Manufacturing and so are all of her people. With a higher rate of change the impact of each change is lower, which is a good thing, but the overall volume of change is much higher.

The issue for Jane isn’t just about getting today’s job done though, the other challenge is keeping ahead of the competition. The services that she uses are evolving rapidly and she can’t afford to be behind her competitors who are using the same services. The competition gets the new capabilities on the same day that she does and her ability to exploit them has become a competitive differentiator.

Many services mitigate some of these issues by giving service users a period during which they can choose whether to adopt an update. In these schemes, though, the update eventually becomes mandatory and you no longer have a choice. Other schemes include pioneer approaches that allow businesses to give some people insights into the next set of changes ahead of the majority of service users. This approach would allow Frank to use the next iteration of the tools before Jane gets them so that he could be ready, but it doesn’t help Jane keep ahead of the competition.

Rather than treating change as a constant risk it’s time to step aside from the old ways of doing things and adopt new ones that support change as a mechanism for growth.

“Change is inevitable. Growth is optional”

John C. Maxwell

Human Behaviour, a Printer and a Ream of Paper

Today I went to the large multi-function-printer in the corner of the office expecting to pick up some printing that I’d just sent to it.

(You might be wondering what I was doing printing, but that’s a question for another day.)

I was expecting to be greeted by a set of pages on the side of the printer, but instead I was greeted by a red light and a message on the screen.

The message told me in very clear terms that the printer was out of paper. This particular printer has four trays, three of which are dedicated to the type of A4 paper that I wanted to use; all three of these trays were empty.

Being a good office citizen I opened the cupboard next to the printer where the spare paper is stored. Having opened the cupboard I was accosted by a sight I’ve seen in every office I’ve ever worked in. Instead of the cupboard containing full reams of paper it was littered with ripped-open paper wrappings containing loose collections of paper. Some of these collections had barely 50 sheets in them, some 100 sheets, but all of them held less than half a ream of paper. There were so many bits of reams that I couldn’t see the full reams.

Most home printers only take a few sheets of paper, but for some years now, decades even, designers of office printers have understood something quite basic. These design geniuses have understood that the basic design requirement for a printer tray is that it takes a ream of paper. I don’t think I’ve seen a paper tray that takes part of a ream for a very, very long time. Yet, despite this being obvious to the designers of printer trays it’s clearly not obvious to the users of printer trays. What could be simpler:

  • Open paper tray
  • Remove ream of paper from cupboard
  • Remove wrapping from ream of paper
  • Put full ream of paper in paper tray
  • Close paper tray
  • Dispose of wrapping

Instead people prefer, for some reason, a different process:

  • Open paper tray
  • Remove ream of paper from cupboard
  • Open wrapping covering ream of paper
  • Remove a handful of paper from wrapping
  • Place this portion of paper into paper tray
  • Place partial ream of paper back into cupboard
  • Close paper tray

The only logical conclusions I can think of for this behaviour are as follows:

  • People haven’t understood, even after all this time, that the paper tray can take a full ream of paper.
  • Disposing of the paper wrapping around a ream of paper requires such special skills that this step is to be avoided. Possible, but I’ve not come across it.

I wonder what the designers of paper trays think about this situation. They’ve done the design work, they’ve created an optimised solution, and yet people prefer to work in a way that creates extra work.

This silly little example shows to me the difficulty of adjusting human behaviour. Even when there is an obviously simpler way of doing things we prefer to follow the tried and trusted path. We prefer to put too little paper in the printer because we are afraid that putting too much in it might break it. This is just a tiny example, but there is evidence of this type of behaviour everywhere you look. The challenge that many organisations face is that these tiny examples scale up into huge areas of inefficiency.

Chris Milk: How virtual reality can create the ultimate empathy machine

One of the methods that I use to keep up-to-date with technology is to listen to all sorts of podcasts and then to look into some of the people that they highlight.

Today I was listening to a TED Radio Hour episode where they highlighted the work of Chris Milk and his use of Virtual Reality as a way of deepening the connections between people.

The films are beautiful and moving, even without a VR headset:

Do we have a truth problem?

What is truth?

It’s a question that philosophers have debated over for millennia. Such philosophical debates are well beyond the remit of what I would normally talk about on this blog and I’m not going to change that with this post.

I only raise the question because I think we are increasingly struggling with understanding what is true.

Recently someone told me a story as if it were fact and then proceeded to tell me that it had to be true because they had checked on Google! Is Google a keeper of truth?

The BBC recently highlighted a set of false rumours that were circulating around the Internet regarding the Nepal earthquakes.

Dramatic footage and images have emerged from Nepal, showing the devastation caused by the most deadly earthquake in the country in 81 years. But amid the authentic pictures are fake footage and viral hoaxes.

One of the biggest: On Facebook and YouTube, various versions of a video were erroneously described as closed-circuit television footage from a Kathmandu hotel. They show an earthquake causing violent waves in a swimming pool. The video was picked up by international media – including one of the BBC’s main news bulletins – and has been viewed more than 5m times. However it’s not from Nepal – it appears to have been taken during an earthquake in Mexico, in April 2010.

Someone went through the effort of scrubbing the date stamp from the video to make it more believable! Even the BBC wasn’t sure about the truth.

I don’t think a month goes by without someone sending me an email, tweet or Facebook post about some scare story that I need to respond to. Not one of them has been true.

In a world where information is replicated, sent, favourited, retweeted and recreated by billions of tapping fingers and thousands of robots, how do we recognise the tellers of truth? In that same world, how do we use the indexers of information to validate truth?

Google isn’t trying to be a truth teller – it’s just answering the questions you ask it from the index of information that it has.

How was a parent who was told about the Game of 72 to know that it was completely fake?

We’ve had systems of trust for generations that have relied upon personal relationships and a proven track record. Most people know someone who they can rely on to tell the truth; likewise most of us know someone whose words aren’t worth the breath that created them.

Once we started writing we began to place our trust in those doing the writing.  When it came to news, the journalist became our teller of truth.

Then came the radio and the television and the journalist retained their position.

The position of the journalist is under massive pressure though. The pressure to report ever more rapidly means that they have less time to validate a story. The ownership of news organisations creates problems when the owners want to portray a particular viewpoint. Revenue reduction for newspapers means that fewer journalists are covering more news.

We are becoming increasingly sceptical about the truth-telling of journalists.

That’s just one sphere of truth – the news.

How many times have you read about a new scientific study only to be told a month later that another one contradicts it?

The following diagram shows the diversity of outcomes from studies on foods and cancer:

How do you tell the truth from that? Should I drink tea or avoid it?

We are becoming increasingly sceptical about the truth-telling of scientists.

If we have a problem with journalists and scientists how do we decide who is trustworthy? Who are our tellers of truth?

I think we need a new set of skills to help us, or perhaps it’s just the same old skills but used in a new way. We need to learn how to do our own investigating. We need to learn to wait for stories to mature and for the truth to become clear. We need to become questioners.

"The Rise of Dynamic Teams" – Alan Lepofsky and Bryan Goode

Continuing my review of some of the sessions from Microsoft Ignite 2015, the title The Rise of Dynamic Teams caught my attention.

When I saw that the presenters were Alan Lepofsky and Bryan Goode it was definitely going to be one to watch.

This session has an overarching question raised by Alan:

Could you be more effective at work?

Well of course I can.

All I had to do was think back to the last time I was frustrated at work and there, clearly presented, was an opportunity to be more effective.

Promised Productivity

Alan also highlights that we’ve been promised improved productivity for decades now but, in his opinion, it hasn’t really been delivered.

My personal opinion is that we have improved our productivity, but mostly by doing the same things more quickly rather than by working in a different way. A good example of this is email, where we send far more messages far quicker, but definitely less effectively.

Framing the problem

Many of us can recognise the issue of information overload. We use many different systems and are fed information all the time.

Alan frames a different problem which I also recognise – input overload. This is the problem we experience when we think about creating something and can’t decide what it is we are creating or where we are putting it – Which tool should I use? Where did I post it?

The point is that we now have a multitude of tools to choose from, so we don’t necessarily need more tools, but we do need the tools to be simpler and to work together.

Best of Breed v Integrated Suites

Alan reflects on two distinct approaches to collaborative tooling – one which focusses on the best of breed capabilities and one which takes a suite of collaborative capabilities.

These are illustrated below:

Best of Breed Collaboration Tools

Suites Collaboration Tools

The key to the suites approach is what sits at the centre, combined with the ability to integrate third-party capability and to have data portability.

I’m not sure I would put everything in the centre that Alan does, but I wholly agree with the principle. One of the significant challenges with a suite approach is that by choosing a suite you risk creating a lock-in situation. This lock-in isn’t necessarily data lock-in; what’s more likely is capability lock-in.

Intelligent Collaboration

Alan explains what he means by Intelligent Collaboration:

“This is poised to be the coolest shift we’ve had in collaboration tools in 20 years”

“The ability for us to start doing really cool things based on intelligence is really going to dramatically change the way we work”

In the Microsoft approach this intelligence will initially be focussed on the individual, but will then extend to teams and organisations.

The systems that we have today have a very limited view of context, and what view they do have they tend not to use with any intelligence. Take the simple example of email build-up during a holiday period. You can set up an out-of-office response, but wouldn’t it be great if something more intelligent happened?

If we take that simple example and add to it all of the sensors that will soon be reporting on our well-being and location, you can then imagine getting a response from your boss’s intelligent assistant asking you to attend a meeting on her behalf because her flight back from holiday has been placed into quarantine due to an outbreak of a virus for which she is showing the initial symptoms.

Adding to the context will enable many more intelligent interactions.

Imagine a digital assistant system that made decisions based on location, time, time-zone, emotional state, physical state and many more factors; a minimal sketch of what that might look like follows below.
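As a purely illustrative sketch (nothing here comes from the session; the signal names, thresholds and rules are invented for the example), a context-aware assistant might combine a handful of those signals into a simple decision:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    """Hypothetical context signals an assistant might have access to."""
    location: str          # e.g. "office", "home", "in transit"
    local_time: datetime
    timezone_offset: int   # hours away from the team's usual working timezone
    stress_level: float    # 0.0 (calm) to 1.0 (overloaded), perhaps from a wearable
    travelling: bool

def respond_to_meeting_request(ctx: Context) -> str:
    """Decide how the assistant should answer a meeting invitation."""
    if ctx.travelling:
        return "decline and suggest a delegate"
    if abs(ctx.timezone_offset) >= 6 and not 9 <= ctx.local_time.hour < 17:
        return "propose an alternative time within working hours"
    if ctx.stress_level > 0.8:
        return "accept, but block out preparation time beforehand"
    return "accept"

# Example usage
ctx = Context("office", datetime(2015, 6, 1, 10, 30), 0, 0.4, False)
print(respond_to_meeting_request(ctx))  # -> "accept"
```

The interesting part isn’t the rules themselves but that the decision is driven by context the user never had to type in.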

The Rise of the Dynamic Team

This is the point in the session where Bryan Goode adds the Microsoft perspective. He does this by focussing on:

Modern Collaboration

The perspective defined by Bryan is that teams will continue to utilise many different tools and will be increasingly mobile.

Microsoft are also investing heavily in meeting experiences, something that is in desperate need of improvement for all of us.

Intelligent Fabric

In order to enable modern collaboration Bryan talks through the Microsoft view of the need for an Intelligent Fabric.

Two examples of this fabric being built are Office 365 Groups and Office Graph.

Office 365 Groups provide a unified capability across the Office 365 tools for the creation of teams. A group created in one of the Office 365 tools will be visible in all of the other tools – Sites, OneDrive, Yammer, Exchange. Doing this makes a group a fabric entity rather than something locked into any particular tool.

Office Graph brings together all of the signalling information from the Office 365 tools and any other integrated tools. Its role is to bring together the meta-data from different interactions and activities.
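As a rough sketch of what “a group as a fabric entity” can look like from code – this uses the later, public Microsoft Graph REST API rather than anything shown in the session, and the access token is assumed to have been obtained elsewhere via the usual OAuth 2.0 flow – listing the groups that exist across the suite is a single call:

```python
import requests

ACCESS_TOKEN = "<token obtained from Azure AD>"  # hypothetical placeholder

# One call returns the groups themselves, independent of which tool created them.
response = requests.get(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()

for group in response.json()["value"]:
    # Each group is a single fabric entity shared by mail, files, sites and so on.
    print(group["displayName"], group.get("mail"))
```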

Personalised Insight

An Intelligent Fabric is one thing, but creating value from it is the important part.

In the presentation Bryan demonstrates Office Delve which utilises the signalling from Office Graph to create personal insights.

The personal insights currently focus on the individual, but they are being extended to provide insights for groups and organisations.

“Teamwork is becoming a first-class entity across our products”

Bryan Goode

I’m not going to explain the demonstrations other than to say that they are worth watching, as is the rest of the presentation.

Conclusions

Productivity and collaboration are going to be defining features of future organisations, as can be seen from the posts that I wrote on the Productive Workplace.

Microsoft is in a position to generate a lot of innovation and disruption by building on top of the Office 365 ecosystem. Groups, Graph and Delve are just the start of that. Having released themselves from the shackles of delivery by the Enterprise IT organisation, they can potentially move at a pace that places them ahead of the pack.

More…

The presentation and video for this session are available here.

The video is also embedded below:

https://channel9.msdn.com/Events/Ignite/2015/BRK1106/player

Exchange 2016 Architecture Update – a few highlights from Ignite 2015

I’m catching up on some of the sessions from Ignite 2015. I wasn’t able to go, but thankfully many of the sessions are now available as videos with downloads of the presentations.

For some time now the Exchange team have defined a Preferred Architecture; the one for Exchange 2013 is here. This Preferred Architecture defines a set of best practices, including the use of multi-role servers and commodity physical servers.

At Ignite, Microsoft, via Ross Smith IV, put some further detail behind the changes in architecture for Exchange 2016.

These are some of my highlights:

A Preferred Architecture?

If you are buying an expensive item it’s generally a good idea to read the manufacturer’s manual; the Preferred Architecture is that manual for Exchange.

For Exchange 2013 the Preferred Architecture says this:

While Exchange 2013 offers a wide variety of architectural choices for on-premises deployments, the architecture discussed below is our most scrutinized one ever.

While there are other supported deployment architectures, other architectures are not recommended.

In other words – we strongly recommend that you do this.

There is good reasoning in the Exchange 2013 Preferred Architecture for the recommendations that are made there. As Microsoft reveals more about Exchange 2016, further reasoning for following those recommendations is emerging.

Multi-Role to Single Role

Exchange 2010 and Exchange 2013 had a number of server roles (client access server, mailbox server), which could be split across different servers. In Exchange 2013 the Preferred Architecture was to deploy multi-role servers – in other words put all the roles on one server. In Exchange 2016 there is only the mailbox server, the architecture alternative of splitting the client access server role and the mailbox server role no longer exists. I never really understood the benefit of splitting them anyway;  not having to discuss the alternative in the future will be most welcome.

Topology Requirements and Improvements

For co-existence you’ll need to be on Exchange 2010 SP3 RU11 or later. You’ll need Windows Server 2012 R2 to run Exchange 2016. You’ll also need your Active Directory to be at Windows Server 2008 R2 forest and domain functional levels or later, running on at least Windows Server 2008 R2.

From a client perspective you’ll need at least Outlook 2010 SP2, along with some specific patches; Outlook 2013 will also need to be at SP1 and have some patches. This particular recommendation is, of course, open to change prior to Exchange 2016 shipping.

There are improvements in indexing which will result in significant reductions in inter-site replication traffic, which is always welcome.

MAPI/HTTP is now the default protocol; MAPI/RPC is finally completely and utterly dead. The use of HTTP significantly improves the ability to deliver services from consolidated data centres and across slow networks, including the Internet.

These changes, and others, make for a more seamless co-existence with Exchange 2013 and an improved migration experience. If you’ve previously followed the Preferred Architecture you are potentially in a place to drop an Exchange 2016 server into the environment and start using it quite quickly.

Building Block Hardware

Microsoft’s preferred architecture, as it was with Exchange 2013, continues to use physical commodity building block servers.

If I could recover all the hours that I’ve spent debating this point I would have invested them in practising sketching and I would now be a master illustrator. As the laws of physics don’t currently allow me to retrieve that time, I’ll continue to convince people of the error of their ways when they want to add virtualisation, RAID, SANs, backup and all sorts of other resiliency technology.

Ross Smith’s point is this: Exchange has a full resiliency and recovery model, so use it. In order to make the best use of that Exchange recovery model, these are the recommendations for the building block server:

  • Servers are deployed on commodity hardware
    • Dual-socket systems only (20-24 cores total, mid-range processors)
    • Up to 196GB of memory
    • All servers handle both client connectivity and mailbox data
  • JBOD storage
    • Large capacity 7.2k SAS disks
    • Battery-backed cache controller (75/25)
    • Multiple databases/volume
  • AutoReseed with hot spare
  • Data volumes are formatted with ReFS
  • Data volumes are encrypted with BitLocker

You are still limited to 100 databases per server and 16 servers in a DAG, but these limits are understandable, and I’ve never seen them become a significant constraint.

More…

These are just my highlights; for more architecture changes get the presentation deck here or watch the video below:

https://channel9.msdn.com/Events/Ignite/2015/BRK3197/player

Password (lack of) complexity and the impact of mobile

I need to confess right at the start that I’ve never been a fan of adding rules to create ‘complex’ passwords. I’m talking about the type of thing where you insist that someone uses a capital letter, some numbers and a special character.

Studies show that when you do this most people create a pattern that they can remember, something like this:

  • Start with a capital letter
  • Numbers at the end
  • Special character right at the end

In so doing we inadvertently create a set of passwords that are easier to crack, not harder.
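As a rough back-of-the-envelope sketch (the character pool sizes and the assumption that an attacker knows the pattern are mine, not taken from any particular study), you can compare the search space of a patterned ‘complex’ password with that of a slightly longer, rule-free one:

```python
import math

def entropy_bits(pool_sizes):
    """Bits of entropy for a password where each position is drawn
    independently from a pool of the given size."""
    return sum(math.log2(size) for size in pool_sizes)

# A patterned 8-character 'complex' password: capital first, five lowercase letters,
# a digit, then a symbol at the end. If an attacker knows the pattern, each position
# only offers its own small pool. The 12 easily reachable symbols is an assumption.
patterned = [26] * 6 + [10, 12]
print(f"Patterned 8-character password: {entropy_bits(patterned):.0f} bits")   # ~35 bits

# A 12-character password of random lowercase letters, with no rules at all.
lowercase_only = [26] * 12
print(f"Random 12-character lowercase:  {entropy_bits(lowercase_only):.0f} bits")  # ~56 bits
```

The longer, rule-free password wins comfortably, which is the point: length buys more than forced character classes once people fall into predictable patterns.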

When creating a password on a mobile device the pattern usage gets even more embedded. It’s hard enough to type a long string on a mobile device keyboard; constant switching between the various keyboard contexts makes it even more difficult.

Let me explain using the standard iOS keyboard:

When I want to type a password the first screen that I see is the letter view:

Character Keyboard

To enter the first character as a capital letter I click on the up arrow and then type the first part of the password.

If the capital letter is anywhere other than at the beginning I need to select the up arrow part way through the sequence, which is a bit messy, so I’m not likely to do that.

The next thing I do is click on the number key to show the numeric keyboard:

Numeric Keyboard

I’ll then type in the numbers and I’m also likely to enter the special character at the end from the subset being shown ($!~&=#[]._-+@).

If I choose not to use a special character from the subset on the numeric keyboard I then have to click on the special character key to see the special character keyboard:

Special Character Keyboard

There’s no direct route from the letter keyboard to the special character keyboard, so I’m never going to choose a special character from this keyboard in the middle of the password.

Also, experience tells me that some of the characters on the numeric keyboard don’t always work as special characters in passwords, so I use an even smaller subset.

There’s another factor: as someone who uses multiple mobile devices, the special characters that I’m likely to use are further reduced by the standard Android keyboard (as an example). Its subset of special characters on the numeric keyboard is different, and only a few are common to both, as the small sketch below illustrates.
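A tiny illustration of how quickly the usable pool shrinks – the iOS subset is the one listed above, while the Android subset here is an assumption made purely for the example:

```python
# Symbols offered on the iOS numeric keyboard (taken from the list above).
ios_symbols = set("$!~&=#[]._-+@")

# An assumed symbol row for a stock Android numeric keyboard - illustrative only.
android_symbols = set("@#$%&-+()*\"':;!?")

common = ios_symbols & android_symbols
print(sorted(common))  # the only symbols a cross-device user is likely to reach for
print(f"{len(common)} of {len(ios_symbols | android_symbols)} available symbols are common to both")
```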

If you then layer on top of that the placement of the special characters on a full-sized keyboard you further reduce the easily available special characters. Why would you choose ~ over #?

So rather than making the password more difficult to guess, the inclusion of complexity rules actually makes it easier.

For a slightly more scientific answer:

I can understand why it’s an issue, particularly when people stick to such common passwords: Top 500 Passwords: Is yours there? It’s just that I happen to think we would be better off simply extending the length of passwords, until we get rid of them altogether, that is.

"Once again, these features are available now and you can start using them today."

Reading through an AWS Official blog post today I was struck by the power of the closing statement:

Once again, these features are available now and you can start using them today.

We have become so used to continuous change that we forget how profound a statement it is.

For much of my working life I’ve lived through the era of packaged application deployment. Hundreds and thousands of devices, running hundreds of applications with each needing to be updated individually. These updates required terabytes of storage and gigabits of network bandwidth.

Changing a large application required weeks of planning and protracted project timelines. Only then would devices join the network and receive the required updates; even so, success rates were variable at best.

These changes were so massive that organisations would only do a few a year. The organisational impact of moving any faster was just too high; you would want to finish one change before you started the next.

The move to Software-as-a-Service and Utility Services enables a world of continuous change. It’s no longer valid to talk about version x.y of something when it’s different every day.

Organisations can stop worrying about the impact of change and focus on the value of change.

The post itself is about an impressive set of enhancements to Amazon’s WorkSpaces offering, but the real power is in the ability to deliver the benefits without friction and without protracted deployment projects.

“start using them today” are very powerful words for organisations seeking to keep up with the competition.

Thought Experiment: Glasses Tracker

Yesterday I was doing a job which required me to go up into a loft. Before I could get into the loft I needed to get to the cupboard where the loft hatch was, which meant opening up a number of locked doors. Once inside the cupboard I needed to move a number of tables out, then open the loft hatch and secure the ladder. It was only then that I could go up into the loft space and get on with my work.

Having completed my work I did the same set of things in reverse: descend ladder, replace loft hatch, replace tables and lock doors.

A short while later I was sat at a desk having finished off the rest of the job. It was then that I picked up my keys and looked for my glasses. I expected the glasses to be on the desk, but they weren’t. Where were they? Then it occurred to me – “I wonder if I’ve left them in the loft”. Sure enough, after going through the whole process again, the loft was exactly where my eye-wear was.

Some years ago I left a set of glasses at Manchester airport on my way out on a business trip. On my return I visited the lost-property office to see if some kind person had handed them in. The friendly man behind the counter asked me the date on which I’d left my glasses, then took out a drawer from a cabinet which was at least two metres by one metre. The drawer was full of hundreds of pairs of glasses and represented only a few days of misplaced eye-wear, some of which were very bizarre. My spectacles weren’t there.

This got me thinking: in this world of shrinking electronics and the Internet of Things, why don’t we have GPS-traceable glasses? There are clearly some styles of glasses with very little room for anything, but some of the designs have probably got ample space to store the required gadgetry. Perhaps it’s enough to have them Bluetooth-traceable, but GPS tracking would be better. Bluetooth might have resolved my loft problem, but I think it would have been less likely to have resolved my airport problem. Wouldn’t that be a great differentiator for the glasses manufacturers?

Some people have already thought about something similar:

  • Glasses TrackR – This seems to do a lot of what I want but it’s still a bit big. I like the 2-way ringer function too, which enables you to find your phone from your glasses. The limitation of 100 feet is going to be a common problem though.
  • LOOK – This is a Bluetooth variant that is more stylish, but it’s still an extra something attached to your glasses. Using Bluetooth gives it a 50-foot range which would be OK, but it’s still not GPS.

Both of these are currently concepts looking for funding; perhaps I should invest?

The challenge, as always, is going to be power. You can pretty much guarantee that the time when you need this function will be the time when the batteries have died. It’s also power that limits the range of the device; anyone who has GPS enabled on their phone knows what a power drain it can be.

So we’ve still got a way to go before this can become a reality, but it’s tantalizingly close.

Concept video for the LOOK:

Millennials are just like everyone else! No surprises there then.

Millennials (also known as the Millennial Generation or Generation Y) are the demographic cohort following Generation X. There are no precise dates when the generation starts and ends. Researchers and commentators use birth years ranging from the early 1980s to the early 2000s.

Wikipedia

Millennials are everywhere, both literally and figuratively:

They get characterised in all sorts of ways; the Pew Research Center allows you to take a survey to assess How Millennial Are You? This survey includes the following questions:

  • Do you have a tattoo?
  • Do you have a piercing in a place other than an earlobe?

(I’m not very Millennial, but that’s not surprising as I was born in the 60’s which are nowhere near the 80’s and I’m lacking any bodily adornment)

Time Magazine characterised them as the Me Me Me Generation.

Recently IBM undertook some research to see whether all of the characterisations were true. You can perhaps imagine some of the findings from the title Myths, exaggerations and uncomfortable truths – The real story behind Millennials in the workplace:

In a multigenerational, global study of employees from organizations large and small we compared the preferences and behavioral patterns of Millennials with those of Gen X and Baby Boomers. We discovered that Millennials want many of the same things their older colleagues do. While there are some distinctions among the generations, Millennials’ attitudes are not poles apart from other employees’.

Our research debunks five common myths about Millennials and exposes three “uncomfortable truths” that apply to employees of all ages. Learn how a multigenerational workforce can thrive in today’s volatile work environment.

(Emphasis mine)

What were the myths:

  • Myth 1: Millennials’ career goals and expectations are different from those of older generations.
  • Myth 2: Millennials want constant acclaim and think everyone on the team should get a trophy.
  • Myth 3: Millennials are digital addicts who want to do – and share – everything online, without regard for personal or professional boundaries.
  • Myth 4: Millennials, unlike their older colleagues, can’t make a decision without first inviting everyone to weigh in.
  • Myth 5: Millennials are more likely to jump ship if a job doesn’t fulfill their passions.

Remember, they are called myths because they aren’t true. In the main the research discovered that the Millennial generation is just like the Baby Boomer and Gen X generations in all of these traits. There are some situations where it’s the other generations that are different – “Gen X employees use their personal social media accounts for work purposes more frequently than other employees” – but there are no polar differences between the generations.

So why is so much being written about the differences that Millennials will bring? Some of it is also research-based, but I’m sure that there is a good deal of confirmation bias to it too (though perhaps I like the IBM research because it confirms my bias).

Office for Mac 2016 Preview

In my earlier post – The Return of Microsoft Office – Appearing Everywhere – one of the pieces I thought was still a bit weak was the story around Mac.

Macs are increasing in popularity in both business and consumer markets, and the current version of Office for Mac dates back to 2011. Some elements have been added to this story in the interim – like OneNote for Mac – but the major components have changed little.

Last week Microsoft announced a preview of Office for Mac 2016. Although named 2016, it looks like the timeline is really mid-2015. It’s interesting to note that this is ahead of the Office 2016 variant for Windows.

Microsoft has always struggled a bit with the look-and-feel of the Mac version of Office – the desire to deliver a standard Office experience hasn’t always aligned with the desire to give a Mac-consistent experience. This, again, seems to be one of the major focusses of this release, using the taglines “Unmistakably Office” and “Designed for Mac”.

Office for Mac 2016

The third major tagline is “Cloud connected”, which won’t be a surprise to anyone and links back to the strategic play I outlined in my last post.