What lies beneath the path of user needs? Well, a whole bunch of tech, and maybe AI…

ian roddis
7 min read · Apr 26, 2019


I recently attended the GDS training course on Artificial Intelligence (AI), and it re-ignited some thoughts I'd been having about 'what lies beneath'.

I spend a lot of time in the design and content worlds, which inevitably and rightly focus on user needs, on things like content strategy and information architecture, on details like bevels and buttons, and of course on accessibility.

At the moment my work doesn't require me to think too much about underlying technologies (apart from the CMS and the workflows that ensure correct mark-up), but in the past I've integrated things like CRM systems, chatbots, authentication, identity, registration and payment systems. We also did a bunch of work on linked data at the OU, so I understand a bit about mashing data together.

In health it's a whole other ballgame, and the data, interop and 'integrating systems' of the future are a massive challenge. Recently there's been some great thinking and blogging about the tech aspects of health, and it often mentions user needs in the same sentence, but I've been wondering whether the user needs aspect gets equal weight with the tech aspects.

So when previously I wrote about 'The path of user needs, avoiding beautiful nonsense, and the shelves of wisdom' you might be forgiven for thinking I only care about the stuff 'above the line' that you can see. I don't. I care just as much about the stuff that lies beneath, because that's what makes services meaningful to users: drawing on data, writing back to systems, using common standards so data can be shared across systems, and so on.

Discussions about AI make this even more interesting when designing and delivering end-to-end services.

Anyway, about the GDS introductory session on AI

You can read the formal description of the GDS AI course and also a 'behind the scenes' blog. If you're eligible and able to go on the session, I'd highly recommend it. It's a really accessible way into the topic of AI, and as ever it was great to hear examples from across government and to share conversations with colleagues in central government, local government and health.

Below are my sketch notes to help remind me of the things I need to do more reading on: the evolution of AI (narrow, general, super), categories of machine learning (supervised, unsupervised, reinforcement, deep), robotic process automation, and some definitions that help me understand the boundaries between logical data interactions and what might count as AI (systems which sense, comprehend, act and learn, with the potential to suggest solutions 'we'd never think of').

My sketch notes from the GDS AI session
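To make the supervised/unsupervised distinction from those notes concrete, here's a minimal sketch. This is entirely my own illustration rather than course material; scikit-learn and its bundled iris dataset are just convenient toy examples:

```python
# Illustration only: a toy contrast between supervised and unsupervised
# learning, using scikit-learn's bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: we have labelled examples (X, y) and learn to predict the label.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels at all; the algorithm looks for structure
# (here, three clusters) in the raw data by itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster sizes:", [int((km.labels_ == i).sum()) for i in range(3)])
```

Reinforcement learning is different again: instead of labelled examples or raw structure, the algorithm learns from trial-and-error feedback on its actions, which is roughly where the 'suggest solutions we'd never think of' idea comes from.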

I loved Terence popping up in a video showing how a robot can complete a Rubik's cube, and the quote along the lines of 'robots don't need to outperform the highest-performing human, they need to reliably outperform the average human'. This stimulated quite a discussion: an excellent doctor (or other professional) may be the 'gold standard', but where the professional is less than excellent, an automated or AI-driven service may, for some activities, be better than a sub-optimally performing human.

Terence Eden talking about robots and automation

What the AI course added to my ‘what lies beneath’ meme

  • We need a holistic view of Standards. People often talk about Content, Design, Accessibility and Tech Standards, and we know solutions need all of these. We need to equip our organisations and products with the right tools to think about the whole spectrum of Standards. I guess I'm saying 'let's not be seduced by the tech, let's keep our eyes on everything', starting with user needs.
  • As we make headway in AI, particularly in health, issues of trust, safeguarding and risk appreciation/management become even more important.
  • We may need to re-think what 'user needs' means in the context of AI (where one of the hypotheses of AI is that it can be intelligent and suggest solutions we might never consider, or discover).
  • We may need to think about how we test services that use AI: if a service is truly self-learning, the test scenarios could be endless.
  • We need some exemplar products that marry user needs, content, accessibility and design standards with leading edge tech lying beneath — that bring to life policy statements and demonstrate strategic intent.
  • And of course, we need a Service Standard against which we can assess products, ranging from user needs to tech approach. The current default is the Gov standard, and the clinical issues around AI may well stretch that too far, which might lead people (like me) to argue for a Health Service Standard.

And what it added to my perspective as a Product Owner

  • As a Product Owner I believe my job is to have an understanding of <everything>, from business needs to user needs, to accessibility matters, content & design standards, tech dependencies and people and team issues. We need to be able to assimilate information very quickly. Considering how you deliver a product that may have an AI element is a fascinating challenge — particularly as part of an end-to-end service. Great personal development opportunity ahead.
  • At the moment, incorporating AI into a service probably augments what we do now. It may be a game-changer in the future, but I'm seeing it as the next step on a 'structured/mashed/linked data' journey. The exciting new bit is where algorithms learn <stuff>. This will present some tremendous opportunities, but it also brings a renewed focus on things like risk, trust, transparency, and safeguarding users of AI-based services.
  • I think it also poses challenges for how we test a service. Even if there are a million possible outcomes from a more 'logical' system, you can create test structures. How do you test self-learning systems with a potentially infinite range of outcomes? Particularly in health, where the risks are so high: would the acceptance threshold be 100%, or would we accept 99%? (There's a rough sketch of one possible approach just after this list.)
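On that last question, one plausible pattern (my own sketch, not something from the course) is statistical acceptance testing: rather than trying to enumerate every outcome, you sample test cases, score the system's outputs, and accept only if a confidence bound on its true success rate clears a pre-agreed threshold. A minimal Python sketch, where the 99% threshold and all the numbers are hypothetical:

```python
import math

def acceptance_test(outcomes, threshold=0.99, z=1.645):
    """Accept only if a one-sided ~95% lower confidence bound on the
    true success rate clears the threshold. Uses a simple normal
    approximation; illustrative only, not a clinical-grade method."""
    n = len(outcomes)
    p_hat = sum(outcomes) / n          # observed success rate
    stderr = math.sqrt(p_hat * (1 - p_hat) / n)
    lower_bound = p_hat - z * stderr   # pessimistic estimate of the true rate
    return lower_bound >= threshold, p_hat, lower_bound

# Hypothetical run: 10,000 sampled cases, 9,960 judged correct by reviewers.
ok, p_hat, lb = acceptance_test([1] * 9960 + [0] * 40)
print(f"observed {p_hat:.4f}, lower bound {lb:.4f}, accept: {ok}")
```

The awkward bit is exactly the one in the bullet above: for a self-learning system the true rate can drift after deployment, so a one-off test like this would need to become continuous monitoring, and in health the threshold itself is an ethical decision as much as a statistical one.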

If you’re interested in some great reading

  • Ethics is our competitive advantage: how the NHS can lead the world in AI-based healthtech by Indra Joshi, Digital Health and AI Clinical Lead, NHS England & Jess Morley, Tech Adviser and AI lead at DHSC “You see, in this future, even though you may not all be treated by a doctor, we will have cut through the hype of AI and worked out how we can deploy it to make sure it delivers the outcomes that the healthcare system, and all the people of the UK who trust and rely on it, want. We will have done this while ensuring that the values of the NHS are maintained, patients are treated with respect, and, above all, kept safe.”
  • 29 Thoughts on the Future of Digital Healthcare by Jess Morley — particularly as our work is referenced — “The work that NHS Digital and NHS England have been doing recently about making sure the language used in NHS online content matches the language that people would use in real life, is a fantastic illustration of the fact that sometimes problems created by the use of technology, e.g. chatbots, do not always need very ‘technical’ solutions. This is the value of having multi-disciplinary teams designing healthtech solutions as content designers and social scientists can point out things that developers might not, and vice versa.”
  • Reflections on the World Health Organisation's (WHO) “Recommendations on Digital Interventions for Health System Strengthening” by Jess Morley (not AI per se, but a nice systems-level view)
  • Principle 7 of the Code of conduct for data-driven health and care technology by Indra Joshi (and many others) “Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision”
  • The Gov Data Ethics framework “Data ethics is an emerging branch of applied ethics which describes the value judgements and approaches we make when generating, analysing and disseminating data. This includes a sound knowledge of data protection law and other relevant legislation, and the appropriate use of new technologies. It requires a holistic approach incorporating good practice in computing techniques, ethics and information assurance.”
  • The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems “As machine learning systems advance in capability and increase in use, we must examine the positive and negative implications of these technologies. We acknowledge the potential for these technologies to be used for good and to promote human rights but also the potential to intentionally or inadvertently discriminate against individuals or groups of people. We must keep our focus on how these technologies will affect individual human beings and human rights. In a world of machine learning systems, who will bear accountability for harming human rights?”
  • The EU Ethics guidelines for trustworthy AI, including 7 principles: Human agency and oversight; Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; Accountability. Full report available as PDF, apologies for the PDF (on behalf of the EU…)


Written by ian roddis

by nature a product manager, working in digital and health
