How do people who work creatively – writers, artists, musicians, playwrights – make a living? The question is as old as civilisation and the answer has usually involved some mix of patronage by wealthy individuals or institutions, and getting people to pay for something that is scarce – a live performance or a unique painting for example. The technologies developed over the last five hundred years, from printing to the CD, have steadily increased the opportunities for creative work to be distributed, but have also tied that work to a physical artefact. This has allowed the development of copyright and royalties – the creator of the work gets a slice every time one of these artefacts is sold. It worked for a while in a rough and ready sort of way but now we are in a new era. Pretty much anything can be digitised and if it can be digitised, it can be copied at negligible cost.
Industries are scrambling to catch up with this and publishing, in particular, is still dominated by licensing and copyright agreements which seek to restrict access to writing in various ways. However, I (and many others) do wonder if all this is simply missing the point of what it means to be a writer in a digital era. I recently came across a quote from science fiction writer and activist Cory Doctorow which expressed the point well:
“For almost every writer, the number of sales they lose because people never hear of their book is far larger than the sales they’d lose because people can get it for free online. The biggest threat we face isn’t piracy, it’s obscurity.”
Doctorow walks the talk, making much of his work (which I highly recommend) available for free download from his website in multiple formats, on the basis that it will thus be read more widely and can be monetised in other ways. Besides, he wants people to read his work. There are ways for writers, musicians and artists to make a living today, but trying to do this by selling physical objects which contain copies of your work will only get harder and harder. And writing wasn’t exactly a good way of making money to start with. For every JK Rowling or EL James there are thousands of very talented writers just about making a living, and for every one of them there are thousands of others who write and earn little or nothing from it. Clinging on to your copyright and restricting access to your work in the hope of making some money from your writing one day is a mug’s game. To make money, you are much better off buying a lottery ticket – it’s a lot easier and the odds are better.
That’s the bad news for all of us who like writing, but there is good news too. It has never been easier to be a writer, and to actually find an audience for your writing. No one bothers to count the number of blogs out there any more, and then there are countless social media platforms, fiction sites, fan forums and all kinds of other things. Contributing to these requires nothing more than an internet connection and a bit of time. If you like the idea of a book, it will cost all of £149 to publish your work as a paperback, less as an ebook. Sure, there is a lot of noise out there and it can be hard to be heard, but it is perfectly possible with care and persistence. This humble blog has been going for nearly four years and has clocked over 9,000 views. That’s a long way off the big league, but it means that thousands of people have seen my work who would not have done if I had kept it to myself, and hopefully some of them have found it interesting or useful.
Needless to say, I do not write this blog to make money and I explained a while ago why I decided to license my blog under Creative Commons. If anyone wants to use my work in any way, I am only too pleased to get a wider audience and would just ask that they acknowledge where it came from. I do not want to hoard my ideas and keep them to myself. I have always trusted that the more ideas I can write about, the more ideas I will have and the easier I will find it to write about them. So far, it has worked for me. F. Scott Fitzgerald famously put the point this way:
“You don’t write because you want to say something, you write because you have something to say.”
So, he might have added, the more people who hear it the better. And as so often these days, I notice that all this is something my teenage kids, who have grown up in a connected world, grasp instinctively. My daughter enjoys, and sometimes contributes to, “fan fiction” sites, where people write stories in homage to their favourite writers and characters. My son builds new levels for computer games and sometimes new games on sites which give you the tools to do so. The stories and games can be copied as often as anyone wants and they do it just for fun. They know that creativity is to be shared and celebrated, not hoarded. I’m proud of them.
I have recently finished reading Dancing at the Edge (O’Hara & Leicester, 2012) and it had many useful insights, some of which will no doubt reappear in other blog posts. However, for this post I want to focus on one point they raise – the current crisis around our ideas about knowledge.
The authors argue, and I tend to agree, that our key ideas about knowledge derive from the Enlightenment – the extraordinarily creative period in the late eighteenth century in Europe and North America. The general approach of Enlightenment thinkers was to reduce things to their constituent parts as far as possible, then analyse them to see where they could be improved. Thus we have, for example, Adam Smith’s famous pin factory, made more efficient by specialised processes and workers, and the introduction of analytical approaches to medicine and engineering.
There can be no doubt that these approaches led to enormous benefits for humanity. New industries arose, economies flourished, the human race was (largely) lifted out of poverty and we now enjoy comforts and conveniences which were unimaginable 250 years ago. But the signs are multiplying that this way of thinking may have run its course and that its benefits are diminishing. To take one example:
Two of the top three causes of sickness absence in the UK, according to the Office for National Statistics (2014), are back pain and mental health issues – conditions which modern medicine struggles to treat. The approaches to medicine we have developed simply do not work very well for the diseases that now afflict us.
To put this another way, I have previously blogged about Dave Cormier’s use of Dave Snowden’s Cynefin framework to discuss learning in conditions of uncertainty. The analytical approaches of Enlightenment thinking work very well for problems that are simple or complicated. But more and more of our problems are now complex or chaotic. Cause and effect are unclear, and the problem needs to be approached more holistically, and with more humility.
If we need a new enlightenment, what might it look like? O’Hara & Leicester think that part of the answer is to recognise that knowledge is not an abstract “thing”. Modern thinking in multiple disciplines, from behavioural economics to neuroscience, is finding that knowledge cannot be separated from experience and that emotion as well as reason plays a part in what we “know”. As the authors put it:
“…we come to recognize that all knowledge is local, colored and framed by culture and context.”
This might sound like pure relativism, but I do not think that is what they intend. Of course there are “facts” which are comprehensively proved by the evidence, or come pretty close. But there may be fewer of them than we think, and in the sphere of human interaction (which includes disciplines like management), there may be barely any.
If we are going to rethink knowledge, we will of course need to rethink education too. One invention of the Enlightenment is not widely known but had profound implications. In his book Technopoly, Neil Postman mentions that the system of assigning marks to student papers was invented in 1792 by an obscure Cambridge academic called William Farish. The idea of assigning a grade or number to a piece of academic work is now utterly fundamental to the whole way we think about education: whenever I have a conversation with my children’s schoolteachers, it is all about levels and grades. However, it is worth reflecting that formal education existed in many forms for two thousand years or so without this idea. And when you think about it, the idea of assigning a number to the output of someone’s thought processes is a bit odd – and that is before you consider all the problems it raises, such as teaching to the test, grade inflation and cheating.
An education system that recognises emotions, appreciates that knowledge may differ according to context, and does away with – or at least downplays – the idea of grading? It may seem an impossible vision, but in fact there are many signs that we are making a start. I will aim to explore these in future blog posts.
Office for National Statistics (2014) Full Report: Sickness Absence in the Labour Market, February 2014 [online]. Available at http://www.ons.gov.uk/ons/dcp171776_353899.pdf
O’Hara, M. & Leicester, G. (2012) Dancing at the Edge: Competence, Culture & Organisation in the 21st Century, Triarchy Press, Axminster
Postman, N. (1993) Technopoly: The Surrender of Culture to Technology, Vintage Books, New York
I recently taught a module on my university’s internal teaching qualification, which was a great experience. One of our most interesting debates was about how to shift the mindset of students (and sometimes instructors) about what education actually is. All of us had taught students who simply wanted to know what “the answer” was to certain questions or problems and saw education as a process of accumulating these answers. The fact is that, while such an approach works for exams up to a point, in the real world it will not get you very far.
Our context is that we are “the University of the Professions”, and most of our students aspire to be professionals of some sort. This requires a particular relationship with knowledge. In order to be, say, a lawyer, you must have a body of knowledge, but this is the starting point, not the end. The really successful, valuable lawyer is not someone who knows the detail of the law – that can be looked up, after all – but someone who can operate in situations of ambiguity, come up with novel solutions and remain unfazed by situations they have not seen before. Too much of our education system does not encourage this sort of thinking.
One of our key study sources for this area was a video talk by the Canadian academic Dave Cormier called “Embracing Uncertainty”, which I came across a couple of years ago and found inspirational. I highly recommend watching it.
The crux of Cormier’s argument is summarised in a slide (which he has adapted in turn from Dave Snowden) about the different types of knowledge which are applicable in different situations.
I will not repeat the whole argument here, but the key point is that good or best practice is of limited use when you are dealing with situations where cause and effect are unpredictable or unclear. Cormier illustrates this with examples from medicine, but they could be drawn from any area of work. And it is increasingly clear that complex and chaotic situations are becoming much more common, and simple and complicated ones less so. Any finance professional operating in a world after the financial crisis should understand that, and in any case, if something is rules-based there is a good chance that computers can do it better than people.
This has particular relevance for the discipline I specialise in, which is management. I (and many others) would argue that, beyond a certain very basic level, there is very little good or best practice in management. There are bodies of knowledge, theories and ideas which can be studied and may or may not be helpful to a manager, depending on their context and needs. This is, perhaps, why good managers are so rare – they need not just extensive knowledge but the skill to judge how and when to apply it. They must also be able to evaluate what is or is not working, and adapt their practice accordingly. And they will not be able to achieve any of this without cultivating the practices of reflection, self-awareness and self-criticism.
As an aside, I think this is what infuriates me so much about most MOOCs. They started off as a way of exploring ideas, and Cormier himself was influential in their beginnings. However, they have developed into a pre-packaged, dumbed-down experience. I have just tried my third Coursera MOOC and given up because it consists of little more than a series of recordings of an instructor. If I want to know someone’s views I will read their writings or watch them on YouTube. If these are, as Coursera likes to claim, “the world’s best courses”, then heaven help us all.
This, then, was the key take-away from our discussion. As educators, we must constantly challenge our students to think beyond “the answer”, to accept nuance, unpredictability and uncertainty. This is a challenge to our own professional identity too – we like to be seen as the experts and giving answers bolsters that perception. We too need humility, reflective skills and an ability to recognise that what works in one context may not work in another. But this is how we can be most effective as educators and do our best for our students.
“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”
T.S. Eliot, from Little Gidding
Eliot’s words are, deservedly, often quoted and I don’t suppose any of us know what he had in mind when he wrote them. But they have had resonance for me recently as, unexpectedly, I find my current professional and academic path bringing me back to some old questions.
As I have mentioned elsewhere in this blog, my undergraduate studies were in Theology, followed up with an MPhil in Hebrew Studies. For reasons that are not worth going into here, I started my career in a very different area. I don’t like to admit it these days, but my first graduate job was in banking, leading to a career that moved through accountancy, financial management, general management and HR before landing up in education in 2007. For the past five years or so, I have increasingly focused on the impact of technology on education, as reflected in most of the content of this blog. One of the fascinating things about educational technology is that you cannot consider it in isolation from the impact of technology on society generally. So I have ended up reading the works of those who have reflected on these big questions.
Which brings me to my recent reading of Technopoly, an extraordinary book by the late American academic Neil Postman. It generated lots of thoughts, some recorded in this blog, but here I want to focus on one particular sentence that gave me a jolt. Postman is here drawing an important distinction about our use of the word “science”. On the one hand, you have physical sciences, which deal with processes, subject to laws of cause and effect which can be established, tested and falsified. On the other hand, you have social sciences, which deal with practices, resulting from human decisions and actions, and all but impossible to test or falsify. He illustrates this point by the difference between a “blink” and a “wink”:
“A blink can be classified as a process; it has physiological causes which can be understood and explained within the context of established postulates and theories. But a wink must be classified as a practice, filled with personal and to some extent unknowable meanings and, in any case, quite impossible to explain or predict in terms of causal relations.” (my italics)
This jolted me because Postman is here presenting as established fact something which I think is arguable. It could be (and, as we shall see below, often is) argued that the only reason human behaviour is “impossible to explain or predict in terms of causal relations” is that we do not yet understand the complex relationships involved.
This, I think, is the perspective of a group we may call “naturalist atheists”, of whom the most prominent member is the British scientist Richard Dawkins. I have read his works with interest and, if I understand him correctly, he would argue that natural laws account for everything that happens in the universe. That includes the human brain, but the human brain is very, very complex and we are nowhere near understanding how it works. However, genetics and neuroscience are young disciplines. They will, or at least hypothetically could, advance to the point where all human behaviour can be explained in terms of cause and effect, albeit very complex ones. The only mysterious thing about consciousness is its complexity.
This argument has important implications. If the brain can be described as a very complex set of interactions, then it becomes possible to imagine that machines will one day replicate the workings of the human brain. In fact, given how quickly technology is advancing, at some point the machines will “think” much better than we do. This leads to the idea of “The Singularity”, popularised by the American futurist Ray Kurzweil, who speculates about what a world might look like when machines can do everything that humans can, only better, including advancing their own intelligence. According to a recent article in the New Yorker, “virtually everyone in the A.I. field” shares the general belief that machines will overtake humans. Speculation about “uploading” consciousness, and similar ideas, also presupposes this worldview. If consciousness is something qualitatively different to the workings of a computer, then all this is nonsense.
I feel confident in saying that Postman would emphatically reject this view, and yet I’m not sure he could disprove it. This means he (along with those on the other side of the argument) is actually doing something which is familiar to me from my earlier studies – he is making a commitment of faith. He is choosing to believe that, ultimately, human behaviour cannot be explained or predicted, and then living according to that belief. The struggle between the view of the human brain as a machine, albeit a very, very complex one, and the view of it as something qualitatively different to a machine is one we see played out in all kinds of settings (including education). Neither side can prove their argument, and yet it is an issue of great importance. We all choose, explicitly or implicitly, which side of the debate we are on, because the consequences will inform our worldview. I can recognise this type of struggle – it is, at least in a sense, theological.
The struggle is made more explicit by another writer on the relationship between technology and society. Jaron Lanier is a technology entrepreneur, musician and now a sort of philosopher. In his book You Are Not A Gadget, Lanier argues passionately that human consciousness is something that will never be replicated by non-human objects, no matter how complex they become. He insists that there is “mystery” at the heart of human consciousness, while being careful to say that you should not necessarily extrapolate from this “mystery” particular beliefs about God, the soul, the afterlife, or any of those areas. Nonetheless, the mystery is important, and it feeds into his argument, which echoes Postman’s, that technology must be subordinated to the workings of a just and flourishing society, not the other way round. He also adds, mischievously, that great music makes this case much better than he ever could.
To put the issue another way, do we believe in the mystery that is human choice, or to use the theological term, free will? Holocaust survivor and psychiatrist Viktor Frankl wrote in Man’s Search for Meaning,
“Everything can be taken from a man but one thing: the last of the human freedoms—to choose one’s attitude in any given set of circumstances, to choose one’s own way.”
If the “naturalists” are right, then he is dead wrong. We have no choices; we just think we do, while actually experiencing a set of complex interactions in our brains. But if we believe in the “mystery”, then he is right.
This question cannot be answered with the tools that we have and we must live with it. But, if we are to have a consistent worldview, we need to choose the answer we will live by, our working hypothesis. One way or another, we need to have faith. I am genuinely surprised, and quite gratified, to find that the old idea still has such importance.
When I was a theology undergraduate, I read a book by Julius Wellhausen called Prolegomena to the History of Ancient Israel. Wellhausen was a German biblical scholar who published this, his greatest work, in 1878, setting out the hypothesis that the first five books of the Bible are fundamentally made up of four separate sources, which he analysed. This “documentary hypothesis” had huge influence and is still foundational today. It is quite an achievement, in any field of knowledge, to come up with a hypothesis that survives for so long.
However, what really stuck in my mind was not his main argument but a phrase he used when setting out some more personal beliefs about the religion of the Bible, the Church and the nation. Wellhausen was very much in the individualist Protestant tradition, and highly critical of the Church, but he saw it as having a role. He wrote:
“…if the Church has still a task, it is that of preparing an inner unity of practical conviction, and awakening a sentiment, first in small circles, that we belong to each other.”
We belong to each other. That was the phrase that stuck with me and which I have never forgotten. Five small words that say so much about what it means to be human. As has been widely observed by those who study these things, when the ancestors of homo sapiens first appeared, an impartial observer might not have given much for their chances of survival. Humanity had little physical strength, was slower than many predators, had no fur to keep warm and teeth so puny they could not even eat many things without elaborate preparation. And yet they not only survived but flourished, spreading out across the globe, and changing the planet irrevocably.
How did they achieve this? It does not take much reflection to see that the key was, and is, the capacity humankind has for co-operation. As soon as there were people, the evidence suggests, they were organising themselves into groups, dividing up labour and managing shared endeavours. Then came cave painting, religious expression and making artefacts, all requiring elaborate social systems and co-operation. Later on we had building projects like Stonehenge and the pyramids and so on, until today we have developed webs of co-operation that are staggering in their complexity and sophistication. Those of us living in affluent countries use countless tools that we would not have a clue how to build – toasters and kettles, never mind smartphones and laptops. We have access to them because numerous people we will never meet have invented, designed, manufactured, marketed and sold them.
So the really striking feature of humanity, for me, is our interdependence. Alone, we are pretty much helpless. Together, we can do extraordinary things. I am not making a discovery here. This has been observed and noted by countless poets, writers and thinkers down the centuries, from the writers of Genesis (“it is not good that man should be alone”) to John Donne (“no man is an island”), E.M. Forster (“only connect”) and many others. For sheer elegance in expressing this perspective it is hard to match T.S. Eliot:
“We die with the dying:
See, they depart, and we go with them.
We are born with the dead:
See, they return, and bring us with them.” (from Little Gidding)
It is a truth recognised and promoted by the traditional religions, at least when they are at their best. We are all interdependent and need to look after others, whether they are our sort of people or not. This is from the New Testament:
“Do not neglect to show hospitality to strangers, for by doing that some have entertained angels without knowing it.” (Hebrews 13:2)
And this from the Qur’an:
“Do good to parents, relatives, orphans, those in need, neighbours who are near and neighbours who are strangers, the friend by your side as well as the traveller, and what your right hands possess. Allah does not love the arrogant and proud ones.” (4:36)
My personal favourite literary expression of this is Charles Dickens’ masterpiece A Christmas Carol. I have more to say about this novel later, but for the moment I want to observe that the really critical aspect of Scrooge’s transformation is that he comes to understand he is connected to the rest of humanity and needs to cherish and build that connection, not try to isolate himself.
So we all get this, right? The critical fact about humanity is that we belong to each other. We are interdependent and when we work together we achieve so much more than when we work apart. In fact, there is a strong trend in human history that we co-operate in bigger and bigger groups to achieve more and more. In the process, we get better and better at working with, not excluding, those who are different. Surely the trend is clear.
Well, no. Seen in this light, the bizarre thing is how eager so many people are to divide themselves from others. The key drivers here seem to be fear and a sense of superiority. They make us divide ourselves in one way or another from “others”, people who are not like us. Maybe they are from a different country, speak a different language, follow a different religion, have different values or a different skin colour. Even more subtle, and more feared, are the differences that are less obvious and less well understood – perhaps the other person is Jewish, or has a different sexual orientation.
For it is ironic and tragic that, as we are reaching new levels of co-operation, we are also seeing a resurgence of xenophobia in many countries around the world. I use the word deliberately. Xenophobia is often used to describe antipathy to foreigners, but its roots are slightly more complex than that. Phobia is, of course, derived from the Greek word for fear and the Greek word “xenos” is usually translated as “stranger”. That word has lots of varying overtones in both English and Greek. A stranger can mean someone we don’t know, but it can also mean someone who is different, possibly because they come from a different country, but maybe for other reasons.
There are many causes, which I will not analyse here, for what seems to be a general increase in xenophobia – fear of people who are different. History has shown that it is a highly destructive force that can easily result in war, persecution and terrible suffering. But it is also a luxury we simply can’t afford any more. Just as we have taken co-operation to new levels, in fact partly because we have taken co-operation to new levels and built a powerful economic machine, we face a challenge that is going to test us to the limit. It is now all but certain that humanity is causing the Earth’s climate to change, with consequences that may well be catastrophic for our civilisation. Humanity has created the problem and needs to solve it. As a species, we are extraordinarily good at solving problems co-operatively. Previous large-scale challenges such as fighting a war or putting a man on the moon have been faced, and often met, at the level of the nation. But no nation can solve climate change on its own. For the first time, we face a severe challenge that can only be resolved by the whole of humanity working together. It is our toughest test yet and the stakes are very, very high.
We belong to each other. This has always been true at the level of poetry and morality. Now it is true at a very practical and literal level as well.
Maybe it is the stage I have reached in my thinking, or maybe a general trend, but I find myself more and more coming across the work of a group I might call “techno-sceptics”. These are people who, while highly proficient in their own use of technology, are sceptical and concerned about its impact on society in varying ways. Such concerns have a long history, of course, going back through – to pick a few – George Orwell, Aldous Huxley, H.G. Wells and Mary Shelley, and much further if you care to trace it. The modern writers I am reading in this camp include figures like Jaron Lanier, Audrey Watters and Tara Brabazon. I find their work fascinating and provocative, and commend it to anyone with an interest in how technology is affecting society.
One of the writers this group often cites as a critical influence is the late American academic Neil Postman, so I decided to get to grips with what seems to be considered his masterpiece, Technopoly: The Surrender of Culture to Technology. I was not disappointed – it is an accessible, thoughtful, powerful analysis and I can see it inspiring several blog posts. But I will start with one idea in particular that makes modern trends so much more comprehensible.
We hear all the time about “information overload”, at least in Western society. The phenomenon is not completely new – for centuries there has been more information around than the average person can navigate or assimilate. However, the volume of information has increased exponentially throughout history. The invention of writing ratcheted up the amount of information available to people, the printing press gave it another huge boost, and then it accelerated through the nineteenth and twentieth centuries as we acquired the telegraph, telephone, radio, television and, of course, most dramatically, the world wide web. A widely circulated study in 2013, one of many of its type, claimed that 90% of the data in the world had been generated in the previous two years. So much is widely observed and understood.
What is less well understood, though fairly obvious once pointed out, is how this affects the role of institutions. Postman, who is here developing the analysis of James Beniger, describes institutions as, at least in part, mechanisms for controlling information. This can be seen very obviously in a court of law, where there are strict rules about what information is permissible in settling a case. This is because there may be any amount of information relevant to a case, and to make any sense of the situation, the “allowable information” must be filtered according to agreed criteria.

In fact, this role of information filter is true of social institutions generally. Our traditional institutions – government, schools, universities, political parties, the family, even the nation – to a greater or lesser extent control the flow of information to and between their members. Sometimes this control is exercised physically, as when churches and governments have banned books, but more often it is exercised in “soft” ways. An institution delivers messages about what information is important and should be received, and what should be ignored. Take the institution I work in – a university. A university teaches some subjects and not others, thereby conveying which subjects are, in its view, worthy of study. A curriculum will include some writers, ideas and texts and not others. A student is therefore having their information filtered (and, hopefully, being taught to develop their own filters, but that is another story). Families allow their children access to certain information, but not all. Political parties maintain world views that privilege certain information sources over others.
Postman further points out that the flood of information means that all these institutions are under attack, and being systematically weakened. This is surely much more obvious now than when he was writing in 1992. The populist backlash across Europe in the elections held in May of this year demonstrated very clearly the weakness of mainstream political parties as well as the European Union itself. Universities and schools find themselves subject to increasing criticism and external direction. Organised religion is declining fast, even in the US, by far the most religious Western nation. The traditional family is morphing, and certainly becoming less effective at controlling information flows (as a father of two teenagers, I know this for a fact).
Does this matter? Isn’t it a good thing that these institutions cannot control what we read, think or say any more? Maybe. Many years ago I left organised religion because I found its restrictions offensive and incomprehensible, so I appreciate the upside here. However, Postman does make a striking point. The traditional institutions have something in common – they are driven by a sense of moral purpose of some sort. This is very clear for religion, but education has traditionally been driven by a desire to make people into better people, political parties to achieve certain moral ends, and so on. In losing these institutions, we lose this sense, and quite possibly the whole idea of moral purpose itself.
Because, of course, we still need information filters. If we lose the traditional institutions, we need alternatives that will help us judge what information to expose ourselves to, and what to ignore. Postman died in 2003, before “Web 2.0” really came to fruition, but events have probably unfolded pretty much as he would have predicted. We have not abandoned information filters at all; in fact, it would be impossible for us to do so. We have just replaced the old filters with new ones. The nineteenth and twentieth centuries saw the rise of “mass media” in various forms, which have been our key filters for a while, but the newspapers, magazines, radio stations and television channels are now suffering in turn. At least in the West, the two most dominant institutions now managing the information we receive seem to me to be Google and Facebook. They are qualitatively different from the traditional institutions in many ways. They do use the language of moral purpose at times, and I am sure there is some idealism among their workforces, but the fact is that they are publicly traded companies, legally answerable to shareholders who are primarily looking for a financial return. And, although they differ in many ways, Google and Facebook share essentially the same business model: they gather personal information about you through your online behaviour and then sell advertisers targeted access to you. In other words, their ultimate purpose is to sell you stuff. And our ultimate purpose, in their world, is to buy stuff. We don’t have any other function.
The idea has my attention now. This is where, for all the undoubted and wonderful benefits of technology, I start to worry about where we have got to, and where we are going.