Ever wondered how well your organization is doing at business analytics across the board? The business analytics capability assessment (BACA) instrument was developed as part of the research into the management challenges of business analytics (Vidgen et al., 2017).
BACA is available in Qualtrics, an online survey platform. You can view and complete the survey here. On completion of the survey you will see a report of all the responses received to date. No identifying data is collected, so both the organizations surveyed and the respondents remain fully anonymous.
We use the survey to identify areas for managers to focus their attention on in developing a data and evidence-based culture.
The survey results can be summarized in a radar chart:
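As a flavour of how such a chart can be produced, the sketch below plots a set of capability scores as a radar (star) plot using base R. The dimension names and scores are invented for illustration; they are not the actual BACA dimensions or survey results.

```r
# Hypothetical capability scores on a 0-5 scale (illustrative data only)
scores <- c(Data = 3.2, Technology = 4.1, People = 2.8,
            Process = 3.5, Governance = 2.4, Culture = 3.0)

# Base-R star plot: one radial axis per capability dimension,
# scores rescaled to [0, 1] so the star fits the unit radius
stars(matrix(scores / 5, nrow = 1,
             dimnames = list("Organization", names(scores))),
      scale = FALSE,
      main = "Business analytics capability profile (illustrative)")
```

A dedicated radar-chart package (e.g., fmsb) gives prettier output, but `stars()` needs no extra dependencies.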
The computational literature review (CLR) package is an open source offering, developed in the statistical programming language R. The code has been redeveloped and is now easier to use and much more efficient.
The CLR automates analysis of Scopus research articles with analyses of impact (citations), structure (co-authorship networks) and content (topic modeling of abstracts). The CLR software can be used to support three use cases: (1) analysis of the literature for a research area, (2) analysis and ranking of journals, and (3) analysis and ranking of individual scholars and research teams. We are working on adding Web of Science data.
To install the CLR package, go to GitHub and download vignette.R, which provides instructions on installing and running the CLR.
For further details of the CLR approach see:
Mortenson, M., and Vidgen, R. (2016). A computational literature review of the technology acceptance model. International Journal of Information Management, 36: 1248–1259.
This article was published in the Observer 26 February 2017:
Robert Mercer: the big data billionaire waging war on mainstream media
The article shows how big data and AI/machine learning can be used not just to track sentiment on social media but also how it might have been used to shape sentiment in the Trump and Brexit campaigns. I’ve taken three quotes from what is a long article (I hope I haven’t done it too much violence) to show big data analytics at work. The steps are the same as usual – there’s just more data, smarter AI, and more at stake:
First, get lots of data
“On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters – its USP is to use this data to understand people’s deepest emotions and then target them accordingly.”
Second, do some predictive modelling
“These Facebook profiles – especially people’s “likes” – could be correlated across millions of others to produce uncannily accurate results. Michal Kosinski, the centre’s lead scientist, found that with knowledge of 150 likes, their model could predict someone’s personality better than their spouse. With 300, it understood you better than yourself. “Computers see us in a more robust way than we see ourselves,” says Kosinski.”
Third, get actionable insight
“He suspects that Mercer is bringing the brilliant computational skills he brought to finance to bear on another very different sphere. “We make mathematical models of the financial markets which are probability models, and from those we try and make predictions. What I suspect Cambridge Analytica do is that they build probability models of how people vote. And then they look at what they can do to influence that.””
A literature review is a central part of any research project, allowing the existing research to be mapped and new research questions to be asked. However, due to the limitations of human data processing, the literature review can suffer from an inability to handle large volumes of research articles. The computational literature review (CLR) automates the analysis of research articles with analyses of:
- impact (citation analysis, e.g., H-index)
- structure (co-authorship social network analysis)
- content (topic modeling of article abstracts)
The CLR software can be used to support three use cases: (1) analysis of the literature for a research area, (2) analysis and ranking of journals, and (3) analysis and ranking of individual scholars and research teams.
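To give a flavour of the impact analysis, the H-index can be computed directly from a vector of citation counts. The sketch below is a minimal base-R illustration with made-up citation counts, not the CLR package code itself:

```r
# H-index: the largest h such that h articles each have at least h citations
h_index <- function(citations) {
  sorted <- sort(citations, decreasing = TRUE)
  # Comparisons are TRUE for a prefix of the sorted vector, so the
  # count of TRUEs is exactly h
  sum(sorted >= seq_along(sorted))
}

# Illustrative citation counts for nine articles (invented data)
citations <- c(120, 45, 33, 12, 9, 7, 4, 2, 0)
h_index(citations)  # 6: six articles each have at least 6 citations
```

The same vector-comparison idea extends to journal- or team-level rankings by grouping citation counts before applying the function.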
The CLR is explained and illustrated using a set of 3,386 articles related to the technology acceptance model (TAM) in:
Mortenson, M., and Vidgen, R. (2016). A computational literature review of the technology acceptance model. International Journal of Information Management, 36: 1248–1259.
The CLR is an open source offering, developed in the statistical programming language R, and made freely available to researchers to use and develop further. The code for the CLR is available from GitHub.
All organizations have limited resources and have to be mindful of where their time, money, people, and attention are focused. Without a clear business analytics strategy – which must be aligned with the organization’s business strategy and business model – it is unlikely that the potential of business analytics will be achieved (indeed, much time and money are likely to be wasted).
We have been working on AnVIM, an approach to developing a business analytics strategy that is aligned with the business strategy and business model, i.e., the creation of a portfolio of analytics developments that will add value to an organization or focal business unit.
AnVIM uses a combination of the business model canvas (BMC), developed by Osterwalder and Pigneur, and systems thinking to provide context and depth to the business model. This analysis is followed by a mapping from the business model to analytics opportunities:
AnVIM has been presented and workshopped at Operational Research conferences over the last two years and we are looking for collaborators who would like to experiment with the approach and work with us to develop it further.
Find out more about AnVIM in our white paper.
On Tuesday 21 June 2016 the Operational Research Society’s Annual Analytics Summit takes place with morning presentations from Marks and Spencer, Movement Strategies, the Department for Education, and the Trussell Trust. The plenary talk is by Megan Lucero, Data Journalism Editor at The Times & Sunday Times. In the afternoon we are holding workshops to go deeper into the technology solutions reported in the morning sessions. We will be presenting the geospatial app built for the Trussell Trust.
We worked with the Trussell Trust to build a prototype tool for visualising and analysing food bank usage in the UK. A short report was produced for The Conversation from the full version of the report available on the Trust’s Web site.
The work has also been reported in the Guardian’s blog on 9 May 2016: “How data science is helping charities to fight hunger in the UK”.