George Bernard Shaw famously wrote in his play Man and Superman: “He who can, does; he who cannot, teaches”.
I’m a ‘doer’ – no doubt about it. I’m motivated by getting things ‘done’. Give me a ‘to do’ list or ask me to do something and I’m your man (well, woman). I’m not good at delegating (‘if you want something doing well, do it yourself’). I can’t bear to see people stuck for ideas or being indecisive, so I’ll jump right on in there with solutions (often too quickly) – every time. Ask me about my plans for the future and I can’t tell you – don’t you know I have this big to do list taking up all my time? I’ll think about the future tomorrow.
However, my doing tendencies are completely at odds with the role of Manager I find myself in. Like many managers, I’ve ended up in this role because of experience, years of service and the assumption/belief from superiors that because I perform well in a role, I’m qualified to manage others doing that role.
I know I’m not alone – thousands if not millions of managers out there are ‘doers’ – given the choice they will focus on the tasks they need to do and spend little time focusing on the people they’re managing. Perhaps the saying should be ‘Those that can do, can’t teach’.
Teaching (or coaching) is going to require me to learn a lot of new skills. I’m going to need to learn to take time to plan ahead, focus on the future and involve others in my plans. I’m going to need to work on how I provide positive and negative feedback to my team (as opposed to praise or criticism). I’m going to need to fight my instincts and delegate and catch myself before jumping in with solutions or answers. I have to give them space to fail (safely) and be there to pick them up when they do. Most importantly I’m going to need to take the time to take an active interest in my team, listen to what they have to say and find motivation in seeing others get things done well.
Now I know what I need to do, how do I go about it? Maybe if I can create a ‘to do’ list…
Organisations are now thinking ahead to what their customer experience will look and feel like in years to come. These experiences will be personal, fluid and meaningful in ways we can only dream about today. They will be innately social. What friends are doing will drive the experience as much as individual behaviour. Don’t even count on the experience being screen based. Wearable devices, proximity systems combined with voice recognition and motion sensing will force brands to embrace the physical. All of this with no compromise in security, privacy or performance.
Things used to be simple. One website. One device. A couple of browsers. A single user journey. It was possible to take a blank sheet and design your site from scratch. Hand over the design to the developers and get it built in relatively short order.
Then things got complicated. You needed experts to build the content management systems. Designers needed to be digital natives. Get the right experts together, put together a plan and execute. Sure, there were more devices, more user journeys and often a legacy system or two to deal with. The methods got more structured, and more governance was required to cope. Projects took longer. However, the requirements were still relatively static over the lifetime of the delivery. They were also easy to understand, at least by the experts involved.
The type of customer experience we are imagining will change things. The speed at which customers’ expectations are changing is constantly increasing. Therefore projects must deliver in ever shorter timescales or risk delivering an experience that no longer meets the customers’ needs. The ground is always shifting.
Even capturing what it is the customer wants is getting harder. Once customers were delighted just to have a nice, easy-to-use experience. Then the requirements were clear. I want to book a flight. I want to find a skirt for the weekend. I need a new vacuum cleaner. However, these new customer experiences are about being emotionally engaged. Making something meaningful isn’t so easy.
A tacit, intuitive connection is required between those creating the experience and the complex world of your customers. Often you will be co-creating an experience with your customers. In order to anticipate the future, your brand will need to help bring the dream into reality, by doing, by taking risks, by pushing the boundaries.
There are elements of your business that are easy to predict. You sell flights, vacuums or fashion now; you will continue to sell them in the future. Your core business systems can continue being managed as they are now. Top-down design, expert guidance and detailed project plans work. Your existing service-oriented architectures will do the job. You just need to keep these systems as independent as possible from the constantly changing world of your customer experience.
So what methods will organisations use to allow a more meaningful customer experience to emerge from what exists today? The teams working on the new experiences will need a very close relationship with your customers. They need an innate understanding of their lives. They will understand tacitly what a good experience feels like. That will allow them to make good decisions on a day-to-day basis.
Feedback will be critical to success. Testing ideas earlier. Failing fast. Performing well-designed experiments whose sole purpose is to increase a team’s knowledge of which kind of experience the customer will value over another. When success is an emotional response, even figuring out what to measure is going to be hard and ever changing.
The goal here is to remain adaptive. It is the speed at which we are learning about what makes a great customer experience that is key. Any idea that has potential to further this goal should be tested. It is the speed at which these tests can be done that will become the new measure of efficiency.
So the future is unknowable. Organisations will need to accept that the way they run their internal-facing systems won’t work so well at the boundary with the customers. Not if those organisations want to create meaningful and emotional experiences with their customers. That will take a new kind of team, one ready to anticipate their customers’ changing needs by learning faster than the competition.
Here at True Clarity we like to think of ourselves as ‘agile’ and we like to keep things simple. When asked ‘What’s your process?’ we have been known to say ‘we don’t have one, we tailor it to you’.
Whilst our existing customers understand the way we work, for potential customers this approach can cause them to perceive we’re a bit ad-hoc, fly-by-the-seat-of-our-pants, make-it-up-as-we-go-along, cowboy sort of an outfit.
Of course we have a process but it’s so familiar and natural to us that it’s like breathing – we don’t think about it and therefore we’re not very good at articulating it.
So this blog post is the first of what we hope will become a series of posts which talk about our processes and the tools and techniques we use. We won’t be inventing any new processes here – these are things we do or tools we use already, every day – naturally.
“The aspects of things that are most important to us are hidden because of their simplicity and familiarity.” – Ludwig Wittgenstein
There, done. That was quick. Hold on – now I’ve made that statement, I’d better prove my point by attempting to share some knowledge!
First question. What is knowledge? Damn, that is a tricky one. Personally I like splitting it into implicit and explicit knowledge. Explicit knowledge is stuff that can be transferred by writing it down. Historical facts and figures are explicit. Assembly instructions. Implicit knowledge is learned by doing stuff. Riding a bike is implicit. Going to a concert. You’ve got to do it before you learn about it.
So blog posts are clearly going to be great at sharing explicit knowledge. Write it down, publish it, people read it. Knowledge transferred. Theory and logic prevail. Nice and simple.
Hold on. What about the other type of knowledge? How are you going to share implicit knowledge in a blog post? Implicit knowledge comes from experiencing and observing. Talking about what it feels like to ride a bike. What steps you took to learn to ride. Emotion, feelings, description. Storytelling.
All good, but I’m describing the how of sharing knowledge. This blog post is about the why.
Sharing explicit knowledge is useful. However, the only real benefit of the blogging format here is simplicity – it’s just as easy to write it up in a document or an email. This kind of knowledge is unlikely to be changed by the process of sharing it. People either find it useful or not.
I believe the real benefit of writing a blog post is to be found in capturing the implicit stuff. This is where the blog format really comes into its own. Sharing an experience you have had. Opening it up to a public audience. Seeing your experiences resonate with others. Then hearing their own observations on the subject matter. This is where the knowledge sharing process changes the knowledge itself, enriching it. It is certainly what motivates me to blog on a regular basis.
Perhaps you’d like to share your thoughts on “Why Blog?” below?
You have a large ecommerce website. You want to make small incremental improvements to the performance of the website. You can measure the impact via an increase in profits. Everything sounds pretty simple. Just run small experiments on everything from the user experience to pricing to pay-per-click ads. When you see something working, do more of it. If things aren’t working, then try something else.
This is age-old marketing know-how. I’ve seen this approach being used in direct-marketing since the start of my career. This is the beauty of digital. We can measure everything. Not like stodgy old media. But are these assumptions true?
Let’s consider a simple model. The experiment could be anything from a new online ad campaign, an A/B test around button positioning or a good old-fashioned bit of discounting. For the purpose of this discussion it doesn’t matter. We have a large customer base. We measure success based on influencing the customer’s behaviour. We can expect a very low conversion rate. We also have a low total cost for the experiment.
We begin with a big cohort of customers. We then split these into those we were able to positively influence and those we didn’t influence (they were never going to buy, or were going to buy anyway). In each group we then consider the accuracy of our measurements: are the results we measure true or false?
This gets confusing really quick. So please stay with me.
When calculating our ROI we need a count of all the positives. This count is made up of two types: the true positives (i.e. people we correctly measure as being influenced by our actions) and the false positives (i.e. people who weren’t influenced but who, because of inaccuracy in the measurement methods, we think were).
Let’s assume we have a cohort of 100,000 customers and a 1% measurement error rate (the chance that any customer is misclassified). Let’s also assume the true influence rate is 5%.
True Positives = 100,000 × 5% × 99% = 4,950
False Negatives = 100,000 × 5% × 1% = 50
True Negatives = 100,000 × 95% × 99% = 94,050
False Positives = 100,000 × 95% × 1% = 950
So our test results give the following:
Positives = 5,900 – an 18% overstatement of the true figure of 5,000
Negatives = 94,100 – roughly a 1% understatement (as expected)
This is pretty worrying. We could easily be making a decision based on an apparent ROI of 20% when, with a measurement error rate of just 1%, the results are actually break-even.
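The arithmetic above can be checked with a short script (Python here, purely to reproduce the worked example):

```python
# Reproduce the worked example: 100,000 customers, a true influence
# rate of 5%, and a symmetric 1% misclassification rate.
cohort = 100_000
true_rate = 0.05    # customers genuinely influenced
error_rate = 0.01   # chance a customer is misclassified

influenced = cohort * true_rate            # 5,000
not_influenced = cohort * (1 - true_rate)  # 95,000

true_positives = influenced * (1 - error_rate)      # 4,950
false_negatives = influenced * error_rate           # 50
true_negatives = not_influenced * (1 - error_rate)  # 94,050
false_positives = not_influenced * error_rate       # 950

# The measurement lumps true and false positives together.
measured_positives = true_positives + false_positives  # 5,900
overstatement = measured_positives / influenced - 1    # 18%

print(f"Measured positives: {measured_positives:.0f}")
print(f"Overstatement vs. true influence: {overstatement:.0%}")
```

The key point falls out of the last two lines: the 950 false positives swamp the small true signal, so the measured figure overstates the real influence by 18%.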
Let’s consider some real world examples and some possible strategies for avoiding this effect (known as the False-Positive Paradox).
The first is a pay-per-click campaign. Here we just pay for clicks. Tracking purchases is pretty straightforward – most analytics tools give you revenue figures. However, it is going to be pretty hard to measure definite cause and effect unless we adopt a more scientific approach. Ideally we would have a pre-defined cohort of users to whom we show the advert; we could then measure real influence by comparing users who find the site organically against those clicking on the ad. Given most reporting tools don’t do this, I’d argue the error rate here is much higher than our illustration of 1%. Ideally use cohorts; if you can’t, ensure your ROI barrier is raised high enough to lift you out of danger.
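The cohort comparison can be sketched as follows – all the cohort sizes and conversion counts here are invented for illustration, not real campaign data:

```python
# Compare conversion in a cohort shown the ad against a held-out
# control cohort that could only find the site organically.
# All figures below are invented for illustration.
ad_cohort, ad_conversions = 50_000, 600            # saw the ad
control_cohort, control_conversions = 50_000, 450  # organic only

ad_rate = ad_conversions / ad_cohort                  # 1.2%
control_rate = control_conversions / control_cohort   # 0.9%

# Real influence is the uplift over the control group,
# not the raw conversion figure the ad report shows.
influence_rate = ad_rate - control_rate
influenced_customers = influence_rate * ad_cohort

print(f"Apparent conversion rate: {ad_rate:.2%}")
print(f"Estimated real influence: {influence_rate:.2%} "
      f"(~{influenced_customers:.0f} customers)")
```

Note how much smaller the estimated real influence (0.3%) is than the apparent conversion rate (1.2%) – attributing all 600 conversions to the ad would be exactly the kind of overstatement the paradox describes.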
The second example is implementing personalisation logic. In this case we are segmenting our customer base and showing a certain group some different content. Again, if carried out using A/B logic the results are more scientific. However, analysis of these kinds of rules will generally only show the sales figures for the segmented group of users and any uplift seen in this group against the norm. If the rate of ‘influence’ is low then we can expect errors. To avoid this, personalisation rules should again lead to much higher influence rates. In a word, keep it simple: creating multiple highly targeted rules based on non-cohort-based analysis may be unwise.
The idea that an Enterprise can be modelled on the human mind isn’t new. The term digital nervous system was originally used as far back as 1987 but it was made famous by Bill Gates in his 1999 book, Business @ the Speed of Thought. However even back then The Register pointed out the flaw in this approach.
“In trying to apply the concept to computing, you come unstuck very quickly because you can’t validly compare a system where the outcome is determined by logic and the information content, with a system where the outcome is determined by evolution.”
It is precisely this argument that has piqued my interest in the idea.
First, some context. My interest here is in how the customer-facing elements of an online enterprise can respond to the individual needs of each customer, now and well into the future. In particular, how we can avoid the need for a constant cycle of too-big-to-fail re-platforming projects and instead have a platform that can evolve. The holy grail is a platform that not only bounces back every time there is a need to change, but bounces back stronger and fitter than it was before, having learnt more about the type of changes it can expect in the future.
So how do we create a learning customer experience platform? I’m going to begin with the thesis that if we are to create a platform that can learn, we should model that platform on the way we think. The psychologist Daniel Kahneman (Nobel laureate in Economics) describes two systems that we use for thinking: “System 1” is fast, instinctive and emotional; “System 2” is slower, more deliberative and more logical.
System 1: fast, automatic, frequent, emotional, stereotypic, subconscious
System 2: slow, effortful, infrequent, logical, calculating, conscious
So what does it mean to take this model and apply these ideas to a customer experience platform?
System 1 is based on rules. Heuristics based on experience of working with real customers. These may not be the most optimal solution, but they will be fast, able to work with very limited information and critically not require the system to know the individual customer history. The response of System 1 is based on the behaviour of the customer in the here and now. Rules will evolve quickly using a basic measure of “fitness”. For an e-commerce site rules that result in larger sales will thrive, rules that have a negative impact will be culled.
System 2 is based on data. This is where analysis of the data held on the customer happens. Analysis is based on historic data. Trends and patterns emerge and are used to refine the experience. Users can be targeted on an individual basis. This is typical of the approach taken by Amazon, e.g. recommendations and related items. Classic big data.
If all your customers log in before any other interaction then you can possibly get by with just System 2. However, for most enterprises the trick is knowing when to use each system. There can also be a flow of rules being created in System 1 based on the analysis taking place in System 2. The other factor is that both systems will require human involvement to help with rule creation and data analysis. However, both systems are evolutionary in nature and, while the framework can be “designed” for holding the data and running the rules, the resulting customer experience will emerge via feedback from real customers.
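The System 1 idea – rules whose fitness is nudged by live feedback, with poor performers culled – can be sketched in miniature. The Rule class, the uplift figures and the culling threshold below are all invented for illustration:

```python
from dataclasses import dataclass

# Toy sketch of "System 1": each rule carries a running fitness score
# (e.g. sales uplift) updated from live feedback; rules with a
# sustained negative impact are culled. All names and numbers here
# are hypothetical.

@dataclass
class Rule:
    name: str
    fitness: float = 0.0  # running measure of sales impact

def record_outcome(rule: Rule, uplift: float, learning_rate: float = 0.1):
    # Exponentially weighted update, so recent behaviour dominates.
    rule.fitness += learning_rate * (uplift - rule.fitness)

def cull(rules: list[Rule], threshold: float = -0.05) -> list[Rule]:
    # Keep only rules whose fitness has stayed above the threshold.
    return [r for r in rules if r.fitness >= threshold]

rules = [Rule("show-free-delivery-banner"),
         Rule("discount-returning-visitors")]
for _ in range(100):
    record_outcome(rules[0], uplift=0.02)   # small positive impact
    record_outcome(rules[1], uplift=-0.10)  # consistently negative
rules = cull(rules)
print([r.name for r in rules])  # only the positive rule survives
```

The point of the sketch is the shape, not the numbers: the framework (the update and the cull) is designed up front, but which rules survive emerges entirely from feedback.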
Unless you’re managing a project with zero risk (unlikely), before you can commit to budget or timelines you’re going to need to do some contingency planning.
How do you know what the right amount of contingency on a project is? The short answer is you don’t.
If you’ve worked with the client, team and technology before then you can use your previous experience to give you a good idea of costs and the level of contingency you’ll need.
If it’s a new project for a new customer, or you’re using new technology, or you’re working with a new team or external suppliers/dependencies, then the project comes with a higher risk factor and any calculations or knowledge you’ve previously gained are unlikely to apply.
Many PMs start off by adding a blanket contingency, e.g. 20%, to the costs and time of any project. However, it is hard to articulate the idea/cost of contingency to a business stakeholder (especially in a pitch) using this approach.
You can make a more calculated estimate as to how much contingency you might need by applying a bit of thinking early on. Contingency is linked to risk so a good method to use is Risk Analysis.
Some typical risks that you may want to mitigate by adding contingency on a project might include:
The scope of the project changing as more becomes known
The initial estimates of the work may have been inaccurate
You’re working with unknown technologies
You’re working with third party dependencies e.g. release management, hosting, APIs
There are of course ways to mitigate risks other than adding cost/contingency to the project, which we’ll cover in another post.
A simple way to calculate a contingency would be to multiply the % probability by the cost of impact. For example a risk probability of 30% multiplied by a cost impact of £10K would result in a contingency of £3K.
You are then able to very clearly show business stakeholders your recommended contingency to mitigate any risk (far better than a blanket percentage). They are able to make a more informed decision based on their appetite for risk as to whether they want to include any contingency or they may decide to accept the risk and understand that the project may cost more.
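The probability × impact calculation scales naturally to a small risk register. The risks, probabilities and impacts below are invented purely for illustration:

```python
# Contingency = probability of the risk occurring x cost of its impact,
# summed across a (hypothetical) risk register.
risks = [
    {"risk": "Scope grows as more becomes known", "probability": 0.30, "impact": 10_000},
    {"risk": "Initial estimates prove inaccurate", "probability": 0.20, "impact": 15_000},
    {"risk": "Third-party API dependency slips",   "probability": 0.10, "impact": 8_000},
]

for r in risks:
    r["contingency"] = r["probability"] * r["impact"]
    print(f'{r["risk"]}: £{r["contingency"]:,.0f}')

total = sum(r["contingency"] for r in risks)
print(f"Recommended contingency: £{total:,.0f}")
```

Presented this way, each line of contingency traces back to a named risk, which is exactly what makes the conversation with stakeholders easier than a blanket percentage.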
Many project managers like to include a second contingency amount, often known as the ‘programme contingency’. This can be used for risks that were not identified at the start of the project and emerge later. The larger and more complex the project the more likely it will be that you will uncover more risks later on.
There’s no sure fire measure or magic formula to help you establish ‘how much contingency is enough’. As you get more experienced you’ll be able to make more educated guesses.
The most important thing you can do is to make the contingency visible and get sign-off from your stakeholders (internal or external).
If your contingency is agreed, you should keep it as a separate budget from that of your main project, and the aim should not be to use it up or count it as profit. Having contingency budget left at the end of a project will put you in the stakeholders’ good books.