Artificial intelligence (AI) undoubtedly offers tremendous benefits across many industries, including medicine, agriculture, and transportation. However, there is growing evidence of potential for harm from AI as biased and inaccurate algorithms routinely make decisions on job applicants, healthcare, social services, parole and criminal justice, policing, and access to credit and insurance.
AI researcher Noel Sharkey argues that “algorithms are so ‘infected with biases’ that their decision-making processes could not be fair or trusted” and that we should be “testing AI decision-making machines in the same way as new pharmaceutical drugs are vigorously checked before they are allowed on to the market”.
We have produced a report on the Responsible use of AI (RAI) to provide guidelines for organisations engaged in the development and deployment of AI-based applications, with particular reference to applications that involve algorithmic decision-making (ADM). The report develops a model of AI development and couches this in the context of a set of responsible AI principles (e.g., fairness, well-being, safety, contestability).
The AI development and deployment (AIDD) model above highlights key points in the use of AI for algorithmic decision-making, from data acquisition, to modelling, to decision-making (is there a human in the loop?), to the sharing of results.
The BCS AI2020 conference held a workshop on the 8th December 2020 exploring the relationship between Operational Research (OR) and AI. There were six speakers, covering a range of topics. I focused on how OR can contribute to the development and deployment of AI and how OR practitioners and data scientists might work together. The talk concluded with three themes:
1. OR brings a wealth of knowledge and experience of traditional OR applications in which AI development is embedded
2. OR provides methods to guide the development of AI applications that create business value
3. OR constitutes an ethical framework and ethical practices for those working in the development, management, and governance of AI applications
The presentation is on SlideShare and available as a PDF here.
The textbook, ‘Business Analytics: a management approach’ by Vidgen, Kirshner and Tan has been published.
The book offers an accessible, business-focused overview of the key theoretical concepts underpinning modern data analytics. It provides engaging and practical advice on using the key software tools, including SAS Visual Analytics, R and DataRobot, that are used in organisations to help make effective data-driven decisions. Combining theory with hands-on practical examples, this essential text includes cutting-edge coverage of new areas of interest, including social media analytics, design thinking and the ethical implications of using big data. A wealth of learning features, including exercises, cases, online resources and data sets, helps students to develop analytic problem-solving skills.
The publisher’s companion site contains materials for instructors (e.g., lecture slides) and the support site maintained by the authors has resources for everyone (e.g., datasets used in the book, a chapter on Python, and a blog for latest news).
Business analytics is playing a greater and greater role in our daily lives, impacting on job applications, medical treatment, parole eligibility, and loans and financial services. There are undoubted benefits to algorithmic decision-making in general, and artificial intelligence (AI) in particular. The potential for harm – intended or unintended – arising from algorithmic decision-making indicates that an ethical dimension is needed when we engage in analytics projects.
Using these five ethical dimensions, we created a business ethics canvas (a PowerPoint template is here). The canvas elements, which are addressed in clockwise order around the canvas, are:
A proposed analytics solution that addresses the needs of specific customers (we recommend that these are generated in an opportunity canvas)
Identification of stakeholders that can affect or are affected by the proposed analytics solution
An assessment of stakeholder utility of the analytics solution
An assessment of the rights of stakeholders
An assessment of the fairness (justice) of the solution
Implications for the common good
Reflections on the virtue of the proposed analytics solution
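The canvas elements above can also be captured as structured data so that successive iterations of the canvas can be compared and gaps spotted. The sketch below is a hypothetical illustration: the field names, stakeholders, and entries are invented, not taken from the paper.

```python
# Hypothetical sketch: a business ethics canvas recorded as plain data,
# following the clockwise element order. All entries are invented examples.

canvas = {
    "proposed_solution": "Targeted 'days out' offers driven by customer analytics",
    "stakeholders": ["customers", "marketing team", "data protection officer"],
    "utility": {"customers": "relevant offers", "marketing team": "higher conversion"},
    "rights": {"customers": "informed consent for profiling"},
    "fairness": ["avoid excluding low-income segments from offers"],
    "common_good": ["local tourism benefits from increased visits"],
    "virtue": ["would we be comfortable explaining the targeting publicly?"],
}

def flag_gaps(canvas):
    """Return canvas elements that have no entries yet (areas of concern)."""
    return [key for key, value in canvas.items() if not value]

# Stakeholders listed but with no assessed utility are a warning sign.
unassessed = [s for s in canvas["stakeholders"] if s not in canvas["utility"]]
print(unassessed)
print(flag_gaps(canvas))
```

A structure like this plays the same role as the post-it notes: it makes unanswered elements visible at each iteration of the canvas.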
In the paper we describe how the business ethics canvas was developed through application in an online travel organization that is considering using analytics to target its customers with ‘days out’ offers. Post-it notes allow the canvas to be iterated and refined, while colour highlights areas of concern and areas of opportunity.
Our research shows that ethical analysis should not be seen as a constraint or overhead to analytics development – exploring the ethical dimension and including multiple stakeholders provides a richer insight into business value creation, as well as providing greater confidence about emerging ethical implications and project risk. Our research proposes that we should position the business ethics canvas (BEC) and the opportunity canvas (OC) as counterparts where each shapes and informs the other in a creative tension.
See Ian Randolph at the Operational Research Society’s 2018 Annual Analytics Summit presenting the Business Ethics Canvas.
As part of my MBA course in business analytics we run a NewsHound forum where students post relevant stories about analytics that emerge as the course is running. The big story of 2017/8 has been artificial intelligence (AI). Some useful resources for managers have been identified during the course:
McKinsey have published An Executive’s Guide to AI, which provides history, definitions, and use cases for AI. The guide distinguishes between machine learning and deep learning and provides use cases that are insightful illustrations of how AI can be used. The guide is also packaged in an accessible interactive format.
Davenport and Ronanki’s (2018) article, Artificial Intelligence for the Real World, in HBR is an excellent guide to the management challenges and opportunities of AI. The focus on business benefits and implementation strategies makes it highly relevant to managers.
A longer read on AI is provided in McKinsey Global Institute’s (2017) report Artificial Intelligence: the next digital frontier. A good place to start with this report is with the five case studies, which cover retail, utilities, manufacturing, healthcare, and education.
It’s also worth a look at the April 2018 report “AI in the UK: ready, willing and able?” produced by the House of Lords Select Committee on Artificial Intelligence – this provides an in-depth look at the implications of AI for industry and society.
As part of our MBA course on business analytics we ran a weekend workshop investigating the use of analytics for GoGet, a car-sharing company. MBA students were organized into four teams of five or six members. The method draws on systems thinking, business model mapping, and design thinking. Outputs from the workshop were hand-drawn using a mix of flip-chart paper, Sharpies, and post-it notes, as in the rich picture below.
GoGet car-sharing rich picture
The workshop was structured around the following steps:
background investigation (e.g., Web site, news, social media) of the situation, which is represented using a rich picture
business model mapping using the business model canvas
analytics leverage analysis – analytics opportunities are identified and rated according to difficulty and value creation potential
each team then selected a single analytics opportunity to develop and created an opportunity canvas to elaborate on the benefits of the proposed development
the teams then formulated their opportunity as a design challenge and ideated possible solutions
personas relevant to their analytics application were developed (for customers and internal users, as appropriate)
a storyboard was created to show how the analytics application is blended with business processes to create value for the case company
a prototype of the user interface might be incorporated into the storyboard or added as a further deliverable.
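The leverage-analysis step above can be made concrete with a simple scoring exercise: rate each candidate opportunity on difficulty and value-creation potential, then rank by value per unit of effort. The opportunities and scores below are invented for illustration, not outputs from the GoGet workshop.

```python
# Minimal sketch of the analytics leverage analysis step: rate each candidate
# opportunity on value and difficulty (1-5 scales here), then rank by a
# simple value-to-difficulty ratio. All names and scores are illustrative.

opportunities = [
    {"name": "predict vehicle demand by location", "value": 5, "difficulty": 3},
    {"name": "detect risky driving from telematics", "value": 4, "difficulty": 4},
    {"name": "optimise fleet maintenance schedule", "value": 3, "difficulty": 2},
]

def leverage(opp):
    """Higher score = more value-creation potential per unit of difficulty."""
    return opp["value"] / opp["difficulty"]

ranked = sorted(opportunities, key=leverage, reverse=True)
for opp in ranked:
    print(f'{opp["name"]}: leverage {leverage(opp):.2f}')
```

In the workshop the rating is done with post-it notes on a difficulty/value grid; the point of a numeric version is only that it forces each team to make its trade-offs explicit before selecting a single opportunity to develop.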
In a day and a half each group generated a comprehensive analysis targeting different business analytics applications. The key to this approach is to focus on problems/opportunities and how business value is to be created. The same model (e.g., to predict which customers might have risky driving behaviours) can be blended with different business initiatives (e.g., increase collision excess, vary rates, implement telematics, request that a customer undergoes training) to create a business analytics value innovation project.
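The "one model, many initiatives" idea can be sketched as a single predicted risk score feeding several distinct business responses. The thresholds and actions below are invented placeholders, not rules from the case.

```python
# Sketch: one hypothetical driving-risk model output (a score in [0, 1])
# blended with several candidate business initiatives. Thresholds and
# actions are illustrative assumptions only.

def initiatives_for(risk_score):
    """Map a predicted driving-risk score to candidate business initiatives."""
    actions = []
    if risk_score > 0.8:
        actions.append("request driver training")
    if risk_score > 0.6:
        actions.append("increase collision excess")
    if risk_score > 0.4:
        actions.append("offer telematics-based rates")
    return actions

print(initiatives_for(0.85))  # a high-risk customer triggers all three
print(initiatives_for(0.30))  # a low-risk customer triggers none
```

The design point is that the analytics model and the business initiative are separate decisions: the same score can support pricing, risk management, or customer development, and each pairing is a different value innovation project.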
Drawing on a diverse and rich set of outputs, each team made a five-minute pitch to GoGet in which they were able to tell a compelling business value creation story. To do this they focused on the opportunity canvas, the personas, and the storyboard. The video shows the abundance, richness, and depth of thinking produced in the workshop.
Ever wondered how well your organization is doing at business analytics across the board? The business analytics capability assessment (BACA) instrument was developed as part of the research into the management challenges of business analytics (Vidgen et al., 2017).
BACA is available in Qualtrics, an online survey package. You can view and complete the survey here. On completion of the survey you will see a report of all the responses received to date. No identifying data is collected and thus both the organizations surveyed and the respondents are fully anonymous.
We use the survey to identify areas for managers to focus their attention on in developing a data and evidence-based culture.
The survey results can be summarized in a radar chart:
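Mean scores per capability dimension map naturally onto the spokes of a radar chart. The sketch below shows one way to draw such a chart with matplotlib; the dimension names and scores are invented placeholders, not the actual BACA dimensions or survey results.

```python
# Sketch: plotting capability-survey scores as a radar chart with matplotlib.
# Dimension names and scores are invented placeholders for illustration.
import math
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt

dimensions = ["People", "Process", "Technology", "Data", "Governance"]
scores = [3.8, 3.1, 4.2, 2.9, 3.5]  # e.g., mean responses on a 1-5 scale

# One spoke per dimension; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles.append(angles[0])
closed_scores = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, closed_scores)
ax.fill(angles, closed_scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 5)
fig.savefig("baca_radar.png")
```

A chart like this makes it easy to see at a glance which dimensions lag the others and therefore where managers should focus attention.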
The computational literature review package (clR) is an open-source offering, developed in the statistical programming language R. The code has been redeveloped and is now easier to use and much more efficient.
The CLR automates analysis of Scopus research articles with analyses of impact (citations), structure (co-authorship networks) and content (topic modeling of abstracts). The CLR software can be used to support three use cases: (1) analysis of the literature for a research area, (2) analysis and ranking of journals, and (3) analysis and ranking of individual scholars and research teams. We are working on adding Web of Science data.
To install the clR package go to GitHub and download vignette.R where instructions on installation and execution of clR are provided.
The article shows how big data and AI/machine learning can be used not just to track sentiment on social media but also to shape it, as may have happened in the Trump and Brexit campaigns. I’ve taken three quotes from what is a long article (I hope I haven’t done it too much violence) to show big data analytics at work. The steps are the same as usual – there’s just more data, smarter AI, and more at stake:
First, get lots of data
“On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters – its USP is to use this data to understand people’s deepest emotions and then target them accordingly.”
Second, do some predictive modelling
“These Facebook profiles – especially people’s “likes” – could be correlated across millions of others to produce uncannily accurate results. Michal Kosinski, the centre’s lead scientist, found that with knowledge of 150 likes, their model could predict someone’s personality better than their spouse. With 300, it understood you better than yourself. “Computers see us in a more robust way than we see ourselves,” says Kosinski.”
Third, get actionable insight
“He suspects that Mercer is bringing the brilliant computational skills he brought to finance to bear on another very different sphere. “We make mathematical models of the financial markets which are probability models, and from those we try and make predictions. What I suspect Cambridge Analytica do is that they build probability models of how people vote. And then they look at what they can do to influence that.””
A literature review is a central part of any research project, allowing the existing research to be mapped and new research questions to be asked. However, due to the limitations of human data processing, the literature review can suffer from an inability to handle large volumes of research articles. The computational literature review (CLR) automates the analysis of research articles with analyses of:
impact (citation analysis, e.g., H-index)
structure (co-authorship social network analysis)
content (topic modeling of article abstracts)
The CLR software can be used to support three use cases: (1) analysis of the literature for a research area, (2) analysis and ranking of journals, and (3) analysis and ranking of individual scholars and research teams.
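The impact strand of the analysis rests on citation metrics such as the H-index: the largest h such that at least h articles each have at least h citations. The minimal sketch below (plain Python, not the clR code itself) shows the computation.

```python
# Minimal sketch of the H-index computation used in citation analysis:
# the largest h such that h articles each have at least h citations.

def h_index(citations):
    """Return the H-index for a list of per-article citation counts."""
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        if count >= rank:
            h = rank  # the top `rank` articles each have >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

The same sorted-counts representation underpins related metrics (e.g., the g-index), which is why the CLR's impact analysis starts from per-article citation counts rather than aggregate totals.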
The CLR is explained and illustrated using a set of 3,386 articles related to the technology acceptance model (TAM) in:
The CLR is an open-source offering, developed in the statistical programming language R, and made freely available to researchers to use and develop further. The code for the CLR is available from GitHub.