Sitecore xDB Personalisation at the client

Nick Hills headed over to SUGCON last week to share his infinite wisdom with the rest of the Sitecore community. If you missed it, or just want a recap, you can listen to his talk below:

This talk shows off techniques that let you combine two great tools: the personalisation capabilities and power of xDB along with the benefits of CDN edge-caching. Editors can configure and design personalisation rules as normal yet still harness all the power and gains that CDNs offer.

He’s also handily put together a whitepaper weighing up the pros and cons of the various approaches – http://blog.boro2g.co.uk/personalization-at-scale-whitepaper/

Get in touch if you want to find out more on +44 (0)117 932 7700 or info@trueclarity.co.uk

Going digital to save Taylor & Francis 150 hours a week

“Our e-copyright system was a relatively simple idea that required some fairly complicated problem-solving to make a reality. We worked side-by-side with True Clarity throughout the process to make sure the product we ended up with didn’t limit our flexibility. We appreciated the willingness of True Clarity to have people doing the development work available to us throughout the process.”

Edward A. Cilurso, Vice President – Production

Taylor & Francis Group are one of the world’s leading publishers, releasing over 2,100 journals and 4,000 books annually, with a backlist in excess of 60,000 specialist titles.

What was needed

Taylor and Francis (T&F) have to handle around 250,000 copyright agreements annually in order to publish articles on their site. Getting these agreed involved a lengthy process of emailing out forms, toing and froing through numerous amends, and relying on scanning, faxing and emails to log authors’ approvals electronically. The business case for moving this process online was compelling: eliminate the pain for authors and staff, save time and keep pace with the competition’s online offering.

How we tackled it

Prototypes and early delivery

We kicked off with a discovery phase, involving a few workshops and sessions to map out the process completely, then put together some flow charts to illustrate how we would automate the copyright process. At this stage we built some prototypes to validate the approaches, flesh out the unknowns and start delivering to T&F as early in the project as possible. Beyond the on-site author approval, we also needed to map out the integration points with internal legacy systems, such as the production and content management systems, and design some complex workflows to keep the site as self-sufficient as possible.

Designed for Flexibility

The site was designed to present the author with a set of questions which dynamically change based on their responses. The outcome is then used to generate a personalised version of the copyright agreement for the author to agree and submit. Key to this project was ensuring that migrating the process online didn’t take away the flexibility of the manual process, so we worked with T&F to create a sophisticated back end allowing T&F to define the routing of questions and answers used to generate the copyright agreements. We also created a number of customisable email templates sent out for different statuses, including reminders for authors who haven’t completed an agreement after a set amount of time.
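
To make the configurable routing idea concrete, here is a purely illustrative sketch (the question names, answers and templates are all invented, not taken from the actual T&F system) of how answer-driven question routing can be modelled as data that editors maintain, rather than code:

```python
# Hypothetical sketch: each answer names the next question; leaves name the
# agreement template to generate. Editors maintain this data, not code.
QUESTIONS = {
    "funding": {
        "text": "Was this research government funded?",
        "answers": {"yes": "gov_body", "no": "standard_licence"},
    },
    "gov_body": {
        "text": "Which body funded the research?",
        "answers": {"uk_government": "crown_copyright", "other": "standard_licence"},
    },
}
TEMPLATES = {"crown_copyright", "standard_licence"}  # agreement templates

def next_step(question_id: str, answer: str) -> str:
    """Follow the configured routing one step: returns a question id or a template."""
    return QUESTIONS[question_id]["answers"][answer]

# e.g. next_step("funding", "yes") -> "gov_body"
```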

The Value

Copyright agreements turned around much faster

T&F sent out 193 author publishing agreements (APAs) upon launch. 118 were approved within a week, with the vast majority of authors clicking standard options, requiring no further intervention from T&F and generating minimal enquiries. Under the old process, approving those 118 licenses would have taken considerably longer, with each form needing individual inspection.

Saving staff time on menial tasks

T&F are able to compile detailed weekly reports from the author approval system based on the current status of manuscripts, licenses assigned per country and APAs completed by day. From the initial launch stats, T&F have calculated that the new system will save them up to 150 man-hours per week – the equivalent of four full-time staff members.

Winning the Digital Election with Conservatives.com

“True Clarity’s impressive team helped us – again – to deliver a website of the highest possible standard. Thanks to their input, Conservatives.com played a key part in helping undecided voters understand exactly how our plan would secure a brighter future for Britain while also empowering supporters to help the campaign in a number of ways. True Clarity’s commitment to responsive design meant that the site worked seamlessly across all devices, and their hard work in the months leading up to polling day meant the site was both stable and secure in those crucial final days.”

Craig Elder, Digital Director, The Conservative Party

What was needed

Five years is a long time in politics. In 2010, we helped the Conservatives build what was widely recognised as the best of the UK political parties’ websites, but for 2015 we needed to adapt to a changing digital landscape.

In particular, the huge shift towards consumption via mobile devices meant we’d have to approach the election with a “mobile first” mentality, and work to deliver simple user journeys that worked as well on a phone as they did on a desktop.

We also had to focus on delivering a tightly-focused website that gave two key audiences what they needed: helping undecided voters understand exactly what the Conservatives’ long-term economic plan meant for them, while also giving Conservative supporters the tools they needed to get more involved in the campaign.

And of course, we had to achieve all of this while ensuring the site could cope under the huge amount of traffic we could expect during the busy election period (and on polling day itself).

How we tackled it

A responsive site for a mobile-first audience

Working with the Conservatives’ in-house design team, we created site templates that worked responsively across all devices – be it phone, tablet, laptop or desktop. This had to be achieved without compromising site editors’ ability to quickly add, edit and remove copy, content and widgets as required.

Giving undecided voters the information they need

Just as it was in 2010, we knew that Conservatives.com would be one of the first places undecided voters would go to find out about the Party’s policies in the days leading up to the election.

Therefore it was imperative that the site focused on helping people understand not just what those policies were – but what they meant for them, their family and their area. We worked with the Conservatives to produce a series of interactive pages throughout the site that allowed users – upon answering questions on location, salary and so on – to find out exactly how the Conservatives’ plan would help them.

We took this approach as far as making the 404 page on the site an interactive “find out what our plan means for you” page to ensure even users who couldn’t find what they were looking for could learn what the Conservatives’ long-term economic plan meant for them and their family.

Making it easy for supporters to support

Of course, one of the other key audiences for a political party’s website is its supporters, and it was important that Conservatives.com gave them the tools they’d need to make the campaign a success. We worked closely with the Conservatives’ team to optimise three key user journeys – membership, donations and volunteering – to ensure supporters could complete these actions as quickly and easily as possible. This proved a great success, with the donation page in particular seeing a marked leap in conversion rate, leading to the Party raising more money in small online donations than at any previous election. In addition, the new Volunteer page played a major part in assembling the ‘Team2015’ volunteer army that proved key to winning the election.

Finally, we worked with Dynamic Signal to put gamification at the heart of a new ‘Share the Facts’ section of the website, which rewarded Conservative supporters every time they shared campaign images and videos on their own social networks – significantly increasing participation and reach for each piece of content.

The Value

Conservatives.com played an important part in the Party’s overall election efforts, which saw them win 331 seats and an overall majority – confounding the predictions of pollsters and commentators alike.

The Party raised more money in small online donations than ever before, and also assembled a 100,000+ strong ‘Team2015’ volunteer army thanks to people signing up via the website.

‘Share the Facts’ was hugely successful, helping Conservatives supporters reach an additional 3 million people every week – over and above the direct reach of the Conservatives’ existing digital channels – by empowering people to quickly and easily share campaign content.

And perhaps most importantly, the site performed seamlessly on polling day (just as it had in 2010), meaning voters in key constituencies all around the country were able to get the information they needed.

http://www.conservatives.com

ASOS – Sitecore Award Best Use of Mobile

We collected our 6th Sitecore award in 3 years last night and this time it was for the amazing results from moving ASOS’s mobile site into the Sitecore Experience Platform.


Since the Sitecore solution went live, unique visitors to mobile have gone from 7.8 million to 8.5 million per month, an increase of 10%. Conversion of visitors who read articles has now increased by 10%.

Read all about it – http://www.sitecore.net/customers/experience-awards/2014-finalists-uk.aspx


Using Cloudflare as a CDN – a review

Recently one of our clients was experiencing an increase in site downtime. During our investigation of the outage incidents we discovered that the site was increasingly becoming a victim of DoS (denial-of-service) attacks.

From the data we looked at, it appeared the ‘hacker’ would trawl the site, homing in on the pages with the longest response times and then repeatedly hitting those pages with requests, using up resources on the site and eventually maxing out the CPU on the database server and taking the site down.

Our client hosts with Rackspace, who offer a security solution, so we asked them for pricing. They suggested that their managed service would be rather expensive for our needs and recommended we take a look at Cloudflare.

Cloudflare offers a low-cost (entry-level plans are free) Content Delivery Network which enables you to save bandwidth and reduce requests to your server by caching some content. In addition (and this was the feature we were most interested in), Cloudflare offers built-in security protection to guard against DoS attacks.

Both the caching and security settings are highly configurable through an easy-to-use interface, the help documentation is clear and well written, and support is good (support tickets are prioritised according to the plan you’re on – clients on paid plans get priority over those on free plans, which seems fair).

Cloudflare is amazingly simple and low risk to implement. The simplest way is to delegate the DNS for your top-level domain, e.g. example.com, to Cloudflare, who take over the management of your zone file. You can then choose which of your zone file entries you want to send through Cloudflare and which you don’t. You can also set Cloudflare up ready to go with all services in ‘pause’ mode, which means that when your DNS initially points to them they do nothing other than relay requests.

If you (or your IT department) aren’t happy to delegate the entire DNS for your domain (maybe you have internal systems running on that domain), it is possible to get a CNAME record set up by Cloudflare for a subdomain, e.g. www.example.com. This is the route we needed to go down for our client, and this option does require you to be on a paid-for plan (we went for Business at $200 per website per month).

The steps we took to implement Cloudflare were as follows:

1) Set up a Cloudflare account and added card details for the paid-for plan
2) Requested a CNAME record from Cloudflare support (we got this within 24 hours)
3) Were given a TXT record by Cloudflare to add to the DNS for our example.com domain, to allow them to take control
4) Once that was done, Cloudflare gave us the CNAME record for the DNS entry
5) The client reduced the TTL on the domain
6) We set up all the configuration for the www.example.com domain in Cloudflare but left it ‘paused’
7) The client added the CNAME record to the DNS and, once we’d waited for the TTL to expire, we ran a tracert to confirm we were actually pointing at Cloudflare (a quick scripted check along the same lines is sketched below)
8) We then did the cool bit: pressing the ‘unpause’ button and sending users through the CDN
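
As a rough illustration of the check in step 7, a short script along these lines (using the third-party dnspython package; the hostname is a placeholder) can confirm the subdomain’s CNAME now resolves to a Cloudflare host before you unpause:

```python
# Sketch only: verify the CNAME target looks like a Cloudflare host.
import dns.resolver  # pip install dnspython

def points_at_cloudflare(hostname: str) -> bool:
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    return any("cloudflare" in str(rr.target).lower() for rr in answers)

print(points_at_cloudflare("www.example.com"))
```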

We gave the site a smoke test and everything seemed to be working as expected. During the day we then proceeded to ‘tune’ Cloudflare by gradually turning on the various options that allow you to cache static content (Cloudflare provides a handy list of the file extensions it treats as ‘static’, and you can use page rules to bypass these or to cache additional file types).
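
We made these changes through the dashboard, but page rules can also be driven programmatically. Here is a hedged sketch against Cloudflare’s v4 API – the zone ID, token and URL pattern are placeholders, and the exact payload shape should be checked against Cloudflare’s current API docs:

```python
# Sketch: create a page rule that caches everything under /assets/.
import requests

resp = requests.post(
    "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/pagerules",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={
        "targets": [{
            "target": "url",
            "constraint": {"operator": "matches", "value": "www.example.com/assets/*"},
        }],
        "actions": [{"id": "cache_level", "value": "cache_everything"}],
        "status": "active",
    },
)
resp.raise_for_status()
print(resp.json()["success"])
```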

Each time we made a change we checked the site and made sure everything looked OK before making the next change. We also checked that real traffic wasn’t being blocked by looking at Google Analytics to ensure there wasn’t a sudden drop in activity and asking Rackspace to ensure that all Cloudflare IP addresses (again there’s a useful list) were whitelisted.

At the end of the first couple of days of using Cloudflare we had enough data to see that it was making a difference. It had saved a lot of requests (almost 50% of all requests were being served from Cloudflare’s cache) and had blocked over 100 threats (with the security setting on ‘low’).


The website ‘felt’ much, much faster from a user perspective, although our external monitoring wasn’t reflecting this, which somewhat confused us. This must be something Cloudflare get asked about on a regular basis, as they give a very clear response at http://blog.cloudflare.com/ttfb-time-to-first-byte-considered-meaningles/.

So the site was faster, we were blocking some hacking attempts and we were saving bandwidth – all looked good. However, when we looked at the IIS logs we could see that we were still getting some bad HTTP requests (PROPFIND, COOK and OPTIONS requests for non-existent URLs) and attempts at XSS and SQL injection. Our site/code was rejecting these requests – our IIS filters and security settings meant the hackers weren’t getting anywhere – but ideally we didn’t want these requests hitting our server at all and wanted Cloudflare to catch and block them.
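
For anyone wanting to do the same kind of log triage, here is a rough sketch of the idea; the field position is an assumption, so match it to the #Fields directive in your own W3C extended logs:

```python
# Sketch: scan a W3C extended IIS log for requests using unexpected HTTP methods.
EXPECTED_METHODS = {"GET", "POST", "HEAD"}
METHOD_INDEX = 3  # position of cs-method in our log layout (an assumption)

def suspicious_requests(log_path: str):
    with open(log_path) as log:
        for line in log:
            if line.startswith("#"):  # skip W3C directive lines
                continue
            fields = line.split()
            if len(fields) > METHOD_INDEX and fields[METHOD_INDEX] not in EXPECTED_METHODS:
                yield line.strip()

for entry in suspicious_requests("u_ex140101.log"):
    print(entry)
```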

We then took advantage of the Cloudflare WAF (Web Application Firewall), and this is now blocking most of the dodgy-looking requests we’d seen in our IIS logs. We’ve raised a support ticket with Cloudflare about the few remaining dodgy requests, and they’ve responded very promptly to say they will add a WAF rule to block those. If they come through on that promise we’ll be very happy.


All in all, Cloudflare appears to deliver on its promises, is incredibly easy to set up and configure, and support seems good. There are lots of options we’ve not explored yet, such as using their API to automatically clear the cache on a publish from Sitecore, which would enable us to cache more than static content. For a relatively low cost it certainly seems to offer a good alternative to Akamai.
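
As a sketch of that publish-hook idea (not something we’ve built yet): a Sitecore publish:end handler could call Cloudflare’s v4 purge endpoint along these lines. The zone ID, token and URLs are placeholders, and the Bearer-token auth shown is Cloudflare’s current scheme rather than anything from the original setup:

```python
# Sketch: ask Cloudflare to drop specific URLs from its edge cache on publish.
import requests

CLOUDFLARE_API = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder

def purge_urls(urls):
    response = requests.post(
        f"{CLOUDFLARE_API}/zones/{ZONE_ID}/purge_cache",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"files": urls},
    )
    response.raise_for_status()
    return response.json()["success"]

purge_urls(["http://www.example.com/news/updated-article"])
```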

A Tale of Two Architects

Ever got caught up in a debate about up front architecture versus evolutionary architecture? I know I have, but which is right? I believe it all depends on the business context and the information you have available.

In my mind both approaches are valid, just generally not for the same set of problems. Deciding which architectural approach to take should be considered seriously, and if you’re not having a long, hard think about which approach to take, you probably should be.

This post hopes to equip you with some basic ideas for having a meaningful conversation about how you should approach your project from an architectural point of view: whether it’s the design-it-before-you-code-it approach, or quickly starting on the implementation and using what you learn to feed back into the direction you should be heading.

I’m going to split the approaches to architecture into the following two broad types:

Architecture Approach A

The first approach to architecture involves a lot of planning and discussion and a general effort to deeply understand the problem and map out the system up front. Typically there will be a person or delegation of people responsible for the architecture as their primary role. A separate team, responsible for software implementation, will build against the architectural vision. It will generally involve a lot of meetings and documentation.

Architecture Approach B

The second approach is to transfer the responsibility of architecture to the implementation team and allow them to grow the architecture organically based on the continued learning and insight gained during the build and feedback from stakeholders as they see the software evolve.

The Winning Architectural Approach

I believe both approaches will get there in the end with a determined team (so simply delivering a project using an approach doesn’t prove the choice was right, only that you have a good team). However, I think there are clear cases where one approach is better than the other.

The Architect Who Faces Certainty

An architect that faces certainty usually works in a business whose landscape is unlikely to change and with users and customers that have clear expectations and have requirements that are highly predictable. This is a clear choice for Architectural Approach A – the Certain Approach.

The Architect Who Faces Uncertainty

An architect that faces uncertainty will work in a business environment with a volatile market and whimsical users/customers whose needs are unpredictable. This is a clear choice for Architectural Approach B – the Uncertain Approach.

The Future of Architecture

Choosing the right architectural approach really depends on how much information you have about the future. If you have certainty, making all your decisions up front will get your project where it needs to be efficiently, as everything can be mapped out. If you do not have certainty, you need an evolutionary approach to architecture, based on gathering the information you need for the next set of decisions. Typically that information is gathered by implementing part of your architecture and extracting insight from real results.

If the time taken to plan is spiralling out of control, conversations involve a lot of “what if this happened in the future”, and you are hedging your design against uncertainty, you are probably trying to plan up front against uncertainty – a sure sign you need to rethink your approach.

Another way of looking at the two approaches is from a scientific point of view. When trying to model something we understand, it’s easy to come up with a sound model. When modelling something we don’t understand, we must experiment in order to piece the model together.

As computing power continues to increase, and software tools become more powerful, it’s becoming harder and harder to model the systems that our tools are capable of building and predict how users/customers will use them. Companies should be comfortable with both architectural approaches to ensure the best chance of success developing software.

Designing software at the start of a project in a certain world is well understood, and I don’t think there is much more to teach anyone about how to do it well. Evolutionary architecture, by contrast, is a seldom-discussed topic in software literature, and there are similar-sounding labels for other ideas. Feeling your way through a project, implementing the right parts at the right time to get the right feedback, is tough – ultimately an art form with much scope for new thought, but one already used with great success. I believe it is vital to invest time in exploring the benefits that evolutionary architecture can bring to delivering large-scale, customer-facing web platforms that inherently have many unknowns, without being paralysed by the fear of implementing without certainty.

Whatever you decide, uncertainty will be your guide 🙂

How Effective Are Your Demos?

For those who have read about my recent experience of delivering a demo, you’ll know that sometimes you feel like they could’ve gone better. This is my simple yet effective guide to help you avoid some of the usual pitfalls.

Feedback is important. Especially so when it comes to software development. How does the new functionality meet the expectations of the stakeholders? Does it ‘feel’ right? Does it positively impact on the customer experience? We need to know the answers to these questions to determine the value of any newly developed function.

There are many ways to gather feedback, but chief amongst our arsenal is the ‘demo’. A simple yet effective tool when done right: you get a group of stakeholders together (either on site or remotely) and present the new functionality to them. They can then freely discuss what you have shown them, allowing you to gather their thoughts and feelings on whether any further changes are required.

Demos can be extremely powerful: they allow us, as the facilitators, to guide stakeholders’ thoughts onto particular areas of a system or site, or to consider certain scenarios or UI enhancements. But do we use them effectively?

We’ve all attended poor demos in the past. You can picture it; the stuffy room, filled with people from all areas of the business, some of whom have only passing knowledge of the subject matter. The monotone voice, droning on. The demo bounces around, showing different pages and functions so quickly that you have no idea what is going on. Or perhaps the facilitator is focusing on something that you know in detail already, and you’re being taught how to suck eggs. You lose interest, allowing your mind to wander. Your eyes glaze over…


Sounds awful doesn’t it? It’s a waste of your time, the facilitator’s time and the business ultimately sees no value.

Now consider this: how many demos have you given that played out as described above? Are you sure? How do you know?

If you want to be sure, then follow this guide for delivering great demos and getting that all-important, quality feedback:

Step One – Know who you’re delivering to

Who is coming to the demo? What is their background? Do they understand the system in-depth, or have they hardly seen the system before?

The content of your demo needs to take into account the knowledge held by the audience. There is no point going into technical detail for an audience that has never seen the product before. Similarly, don’t waste time demoing basic functions to people who know the product inside out. It’s not always possible, but try to keep the audience limited to those with a similar level of knowledge. You can then tailor the content to fit. This leads us on to point two.

Step Two – Limit the audience

When inviting your audience, consider what you are trying to achieve. Do you really need to invite someone from marketing to a demo of back-end systems? Does QA need to be involved in a demo focused on user experience? Try to keep the audience limited to those who will have something to offer in terms of feedback – those whose opinions matter. Don’t invite loads of people; the more people there are, the less likely it is that your audience will feel confident enough to ask questions or offer opinions.

Step Three – Keep to a predetermined scope

It’s obvious really, but know what you want to demo and what you want to get out of it from the stakeholders. Don’t let your audience get side-tracked onto a completely unrelated conversation. If need be, you can always arrange a separate meeting or demo to discuss any unrelated issues. Keep your audience focused on the task at hand.

Step Four – Have a run-through beforehand

Again, a simple yet effective step. Practice what you intend to demo. If possible, use the same equipment and/or software that you intend to use on the day. Deliver the demo to a colleague; can they understand the information? Was your delivery clear and concise, or were you mumbling? Preparation is key.

If you are doing the demo on site, make sure the room you are delivering in is big enough, has the right number of chairs and has the equipment you need. There is nothing worse than wasting the first ten minutes of a meeting trying to get the projector working.

Make sure you consider different delivery mechanisms. Does it have to be done using a PowerPoint presentation or could you use something more creative or unusual? If it helps engage your audience, then give it a go!


Step Five – Gather feedback on the demo

Probably the step that most often gets overlooked. Just because you delivered the demo successfully and got some feedback, it doesn’t mean that it couldn’t have gone better. Try to get some feedback on the demo itself. You can do this in any number of ways, from chats around the coffee machine to asking the audience at the end of the demo. Personally, I have used online questionnaires with great success (Survey Monkey is a free, simple, yet effective tool). Keep in mind that some people don’t feel confident enough to voice their opinions in front of others (particularly if there are loud, brash or opinionated members in the audience), so sometimes you will only get honest feedback by approaching these people on their own or allowing them to answer anonymously.

All of these steps are simple enough, but they will help achieve that ultimate goal of getting useful, valuable feedback.


A Learning Customer Experience Platform

The idea that an enterprise can be modelled on the human mind isn’t new. The term “digital nervous system” was used as far back as 1987, but it was made famous by Bill Gates in his 1999 book, Business @ the Speed of Thought. However, even back then, The Register pointed out the flaw in this approach:

In trying to apply the concept to computing, you come unstuck very quickly because you can’t validly compare a system where the outcome is determined by logic and the information content, with a system where the outcome is determined by evolution.

It is precisely this argument that has piqued my interest in the idea.

First, some context. My interest here is in how the customer-facing elements of an online enterprise can respond to the individual needs of each customer, now and well into the future – in particular, how we can avoid the need for a constant cycle of too-big-to-fail re-platforming projects and instead have a platform that can evolve. The holy grail here is a platform that not only bounces back every time there is a need to change, but bounces back stronger and fitter than it was before, having learnt more about the type of changes it can expect in the future.

Rabbit and tortoise brain. Illustration by David Plunkert.

So how do we create a learning customer experience platform? I’m going to begin with the thesis that if we are to create a platform that can learn, we should model that platform on the way we think. The psychologist Daniel Kahneman (a Nobel laureate in Economics) describes two systems that we use for thinking: “System 1” is fast, instinctive and emotional; “System 2” is slower, more deliberative and more logical.

System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
System 2: Slow, effortful, infrequent, logical, calculating, conscious

So what does it mean to take this model and apply these ideas to a customer experience platform?

System 1 is based on rules: heuristics based on experience of working with real customers. These may not produce the most optimal solution, but they will be fast, able to work with very limited information and, critically, will not require the system to know the individual customer’s history. The response of System 1 is based on the behaviour of the customer in the here and now. Rules will evolve quickly using a basic measure of “fitness”: for an e-commerce site, rules that result in larger sales will thrive, while rules that have a negative impact will be culled.
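
Here is a toy sketch of that System 1 idea (entirely illustrative – the names, fitness measure and culling threshold are all invented): heuristic rules that look only at the current session, scored by the sales they drive, with negative performers removed from the pool.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # looks only at the current session
    action: str                        # e.g. which offer or banner to show
    fitness: float = 0.0               # running sales attributed to the rule

def choose_action(rules: List[Rule], session: dict) -> Optional[str]:
    """Fast path: pick the fittest rule whose condition matches right now."""
    matching = [r for r in rules if r.condition(session)]
    return max(matching, key=lambda r: r.fitness).action if matching else None

def record_sale(rule: Rule, revenue: float) -> None:
    rule.fitness += revenue            # rules that result in larger sales thrive

def cull(rules: List[Rule]) -> List[Rule]:
    return [r for r in rules if r.fitness >= 0]  # negative-impact rules are culled
```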

System 2 is based on data. This is where analysis of the data held on the customer happens. Analysis is based on historic data: trends and patterns emerge and are used to refine the experience, and users can be targeted on an individual basis. This is typical of the approach taken by Amazon, e.g. recommendations and related items. Classic big data.

If all your customers log in before any other interaction, you can possibly get by with just System 2. For most enterprises, however, the trick is knowing when to use each system. There can also be a flow of rules being created in System 1 based on the analysis taking place in System 2. The other factor is that both systems will require human interaction to help with rule creation and data analysis. However, both systems are evolutionary in nature, and while the framework for holding the data and running the rules can be “designed”, the resulting customer experience will emerge via feedback from real customers.

Welsh Water Sitecore Award – Best Customer/User Experience Site & ROI Award

Another successful awards night – this time a double win for Welsh Water: the Best Customer/User Experience Site award and the ROI Award.


An overhaul of their Sitecore site saw some massive improvements:

Welsh Water has seen visits increase by 43% year-on-year, well above the target of 30%. Web payments are up 22% year-on-year (comparing August 2012 with August 2013), and income has also increased by 21%. 65% of direct debit sign-ups are now made online, compared with 40% a year ago, shifting the burden away from the call centre.

Read all about it – http://www.sitecore.net/customers/experience-awards/2013-finalists-uk.aspx

Work, Rework and Maintenance in Software

I thought it would be worth clarifying the terminology used to express the different types of work, rework and maintenance in software development.

Refactoring

Refactoring is the art of making your code clean and refining the design in small incremental steps towards the SOLID principles of software development. Refactoring should be performed many times a day to ensure code stays clean.
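
A tiny, invented example of the scale of step meant here – giving a duplicated expression and a magic number names, without changing behaviour:

```python
# Before: intent hidden behind a magic number and a repeated expression
def price(qty, unit_cost):
    if qty > 10:
        return qty * unit_cost * 0.9
    return qty * unit_cost

# After: the bulk-discount rule now has a name and a single home
BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.9

def price(qty, unit_cost):
    subtotal = qty * unit_cost
    return subtotal * BULK_DISCOUNT if qty > BULK_THRESHOLD else subtotal
```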

Prototyping

In order to learn enough about a problem, a prototype is often built that will be thrown away. If the prototype is not thrown away, it is not a prototype. Prototyping should be used to gain knowledge and understanding of a problem. A prototype will generally have a substandard design because the problem is not well understood until the prototype is finished. That’s why you throw a prototype away and redesign what you have just done with increased knowledge. Don’t continue working on a prototype when there is no more learning taking place. Do not mix user interface prototyping with software design prototyping. Always keep the two separate.

Tracer Bullet

A tracer bullet is an implementation of part of a piece of software, used to learn enough about the larger piece of software – usually to inform estimates and tackle technical risks early. Tracer bullets are risk management for developers. A tracer bullet is only finished if the code is production-ready, tested and has been kept clean by constant refactoring during development; otherwise it is a prototype and should be thrown away.

Redesign

Redesign is usually undertaken when existing code needs to be drastically remodeled. Redesign is an indication of one or more of the following problems:

  • the code is not clean and does not follow the SOLID principles
  • there was inadequate knowledge during development
  • the project was rushed
  • technical risks were not properly flushed out

Redesign is caused by a lack of refactoring, prototyping and tracer bullets.

Conclusion

Every project should have a healthy mix of prototyping to inform design, tracer bullets to inform estimates and reduce risks, and refactoring done regularly throughout the day. Redesign implies that you’ve got an unhealthy mix or are ignoring important steps in the software development cycle.

Also, changes to requirements may precipitate one or more of refactoring, prototyping, tracer bullets, or even redesign – the latter suggesting quite a major change.