Creative Innovation 2017 (Ci2017)

Ci2017: A Post-Conference Policy Directions and Reflections Paper for Australia’s Future

Wednesday, 18 April 2018

Prepared by Terry Barnes
Policy consultant and media commentator

For three days in November 2017, people from around the world gathered in Melbourne for the latest in the Creative Innovation conference series, Ci2017.

Over 600 delegates and more than 40 speakers joined together at the Sofitel Melbourne On Collins. They came from business, government, academia, not-for-profit organisations, the media and the arts. Over 15 nationalities were represented, and all were treated to a challenge to the mind, to the senses and to the world in which we live.

The theme of Ci2017 was Human Intelligence 2.0: Thriving in the Age of Acceleration. And from the start it was clear to everyone that the future is accelerating at a startling rate.
Moore’s Law of computing says that computing power doubles every two years. In 1982, Buckminster Fuller outlined his knowledge doubling curve: until the 20th century, human knowledge doubled every century; by 1945 it doubled every 25 years; and by 1982 every 12 months. Now, IBM predicts that, because of the “Internet of Things”, human knowledge will double every 12 hours.
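A decade-scale comparison makes the difference between those doubling intervals concrete. The figures below are simple arithmetic derived from the doubling periods quoted above, not data from the conference:

```python
def doublings(years: float, period_years: float) -> int:
    """Number of doubling periods that fit within `years`."""
    return round(years / period_years)

# Growth factor over a single decade for each doubling interval:
moore   = 2 ** doublings(10, 2)          # every 2 years  -> 2**5  = 32x
fuller  = 2 ** doublings(10, 1)          # every year     -> 2**10 = 1,024x
iot_era = 2 ** doublings(10, 0.5 / 365)  # every 12 hours -> 2**7300

print(moore, fuller)      # 32 1024
print(len(str(iot_era)))  # 2198 -- a growth factor with ~2,200 digits
```

Even at the yearly rate, a decade compounds a thousandfold; at twelve-hour doubling the factor is astronomically large, which is the sense in which the “acceleration” of the conference theme should be read.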

Back in 2006, IBM said in a paper:

Some observers have likened what is happening to the Industrial Revolution, when economies made the first move away from individual craftsmanship and towards the production line, with its potential for quantum increases in output. Except now it is not pots and pans or cars that are being produced in their thousands, but data bits in their millions, billions and trillions[1].

That’s a heck of a lot of knowledge.

Events like Ci2017 therefore are very important.  They bring leading experts and advocates together to share their own knowledge, and debate diverse points of view about the future and how we become part of it.  They challenge and stimulate the minds of leaders in government, business and academia. They inspire the media and the wider public to think about what’s possible and how they can benefit.

Because above all, understanding conquers fear.

INSIGHTS AND LESSONS FROM Ci2017

The speakers at Ci2017 were diverse, ranging from a real cyborg to an academic ethicist; from entrepreneurs to managers and leaders of large organisations; from theoretical scientists to practical implementers.

The conference sessions and master classes were equally diverse, addressing fundamental questions about human and artificial intelligence, and how they will relate to each other, from a myriad of different angles.

Together, the speakers and programme gave the several hundred Ci2017 delegates a rich vein of ideas and knowledge to contemplate and reflect on the future of human intelligence, and how it will relate to the next technological and AI revolution.

For me, some key reflections emerged from three days of engagement, discussion, and thinking at Ci2017.

The future can’t be stopped

Speakers at Ci2017 made one thing very clear: a revolutionary wave of Artificial Intelligence (AI) and automation is coming.  The world of work and of daily living that we know now will be obsolete, perhaps even unrecognisable, by the middle of this century.

With the exponential expansion of human knowledge, and Moore’s Law, the genie is out of the bottle. There is no way of putting him back.

As Neil Harbisson, perhaps the world’s first genuine human cyborg, showed the conference, the changes around the corner even include the integration of human and technology into a combined biological entity.

The future cannot be stopped: it is already here.  We can’t raise our hands and stop the tide of AI and automation.

Our challenge is to ride the waves of sweeping technological and institutional change, and do what we can to ensure our society and economy – humanity – benefits from what the ingenuity of human minds is creating now, and what is soon to come.  We must also do what we can to mitigate the fallout from such rapid and profound change, accepting that not everyone can or will benefit.

From robotics to Blockchain, speaker after speaker at Ci2017 highlighted where change is coming, how it can happen and what the consequences would be.  The message: what blows the imagination now will be commonplace in a few decades’, even a few years’, time.

The future should be both embraced and feared

Chair of Digital Biology at Singularity University in the United States, Raymond McCauley, said at Ci2017 that where he once was totally optimistic about the future, he now has both hopes and fears about what is coming.

There’s no doubt that the explosion of AI and next-wave automation will benefit mankind in a mind-boggling number of ways.

At a macro level, whole economies will be transformed as governments and businesses embrace new ideas, new technologies and explore entirely new industries and wealth-generating fields of endeavour.

At a micro level, how we live our daily lives, and how we relate to each other, will change.  There will be spinoffs for human health and safety, such as a vastly reduced road toll, because autonomous vehicles will not be subject to the human error that is a factor in at least 90 per cent[2] of road crashes.

As psychologist Patrycja Slawuta reminded both Ci2016 and Ci2017, fear of difference and change is a natural human response to the world around us.  Sometimes that fear is well-founded, and sometimes not, but it is real fear nevertheless.  And there is reason to fear a technological tsunami disrupting, even destroying, much of what we now take as given.  Will more jobs disappear than are created?  What future do people displaced by technology have?  Will technology usurp humanity altogether?

These are all legitimate questions. They must be asked and debated.  There are always going to be winners and losers from disruptive change, and change on this scale will affect millions, even billions of people all over the world.

Creators, disruptors, and regulators all have a higher duty to use all their energies to ensure that AI, automation and massive technological change bring the greatest possible good to the greatest possible number of people, and to do all they can to give hope and optimism to humanity, not give rise to fear and despair.

What events like Ci2017 achieve is to help shape the debate about the future, and to help predict the winners and losers, the gains and losses, from what is just around the corner for mankind.

We face moral and ethical questions that no previous generation ever faced

It is not just the possibility of mass-adoption cyborg technology – for which Neil Harbisson was the charismatic face at Ci2017 – that poses ethical and moral questions we have never faced before.

In relation to the technology itself, what rights do people have over machines?  Should AI have control over human lives? Can machines be allowed to have wills of their own?  Is there a point where humans and artificial intelligence merge to create an entirely new entity?  Should AI be given the power and freedom to improve and replicate itself, taking over the creative function of human intelligence?

Should AI be able to defend itself from being “attacked” by its human creators?  If AI means robots can acquire human-like emotions and feelings, as suggested by presenter Aleksandra Przegalinska of the Massachusetts Institute of Technology, will machines be entitled to the same rights as humans themselves?

When so much of our identity already is in the hands of the digital world, a huge issue for the future is what rights we have over our own data – our own digital DNA – and what are the consequences when we give those rights up voluntarily, or they are taken away involuntarily.  Some of the discussion at Ci2017 went to the heart of the shocking discovery that highly-sensitive personal data held by Facebook was exploited for personal and political gain by the data mining firm Cambridge Analytica.  A number of speakers highlighted the potential for such a mass digital invasion of privacy, not knowing that 300 million people’s details already had been harvested from the world’s biggest social network without their knowledge or consent.

The moral and ethical questions keep coming.  Can and should we put parts of our economy and society out-of-bounds to AI and next-wave automation?  While Universal Basic Income to support people displaced by change has become part of the conversation in Australia and overseas, should we go further to protect existing jobs by restricting how far they can be automated or replaced?  Indeed, do the winners from disruptive change have a social and moral obligation to look after the losers?

And what does such change mean for our economy?  Will competitive capitalism thrive or fail if some have greater access to, and control over, not just the means of production but the means of disruption?  Do our traditional education and training systems prepare our children either for the workplace of the future, or to make the most of the greater leisure and family time that many will have?

Such profound social, moral and ethical questions can’t be ignored.  In the coming brave new world, we need ethicists like Ci2017 speaker and UTAS Vice-Chancellor Rufus Black as well as the scientists, inventors and entrepreneurs who are leading humanity’s charge into our new technological future.

We need to anticipate and manage the many risks of what is coming.  But if we keep a positive mindset – seeing the future as an overwhelmingly positive destination – there is every likelihood it will be.

Our politicians and policy-makers are living in the past, not facing the future

In the year between Ci2016 and Ci2017, a major political debate in Australia was not about the future, nor about the benefits and risks of AI and the next wave of automation.

It was about Sunday penalty rates.

The Australian Fair Work Commission wound back Sunday penalty rates for some hospitality and other workers, balancing their interests against the interests of keeping the small businesses that employed them open and viable.  There was outrage on one hand, particularly from the union movement, and support from employer interests on the other.  In the middle, the affected workers became political footballs between the Coalition government and the Labor opposition.

That many, perhaps all, of the jobs at the centre of this debate may not exist in a few years’ time, due to the impacts of AI and automation, didn’t rate a mention.

The Sunday penalty rates controversy highlighted, very starkly, a big truth about politicians’ attitudes to disruptive change in the workplace and economy: ignore it and pretend it’s not happening.  For them, profound workplace change is a bad place to go.

There are no votes in profound change, particularly if it will affect the availability of jobs and the capacity of people to be fully-functioning members of the future world of work.  Politicians instead fight over the crumbs of a twentieth-century industrial economy, an economy that is already obsolete now, let alone in years to come.

There are some honourable exceptions to the wilful ignorance of politicians.  Labor’s Ed Husic and the Liberal Party’s Josh Frydenberg and Angus Taylor come to mind.  But for all the talk of “innovation nations” and embracing the future, the political reality is that it’s far easier to keep living in the past, and to fight the political battles of the past.  That’s where the votes are.

After all, Donald Trump won the 2016 American presidential election by appealing to millions of losers in the current post-industrial economy in the “flyover” states, by playing on their disadvantage and resentment, and exploiting their fears of the further risks to lower-level white and blue-collar jobs from yet more automation, and even from free international trade.

Donald Trump’s success, and indeed Hillary Clinton’s failure, highlights how the politics of fear too often trumps the politics of change.  It’s up to our politicians, and the bureaucrats and others who advise them, to embrace the future, to help educate the public about its opportunities and risks, and to develop policies and programmes that understand the future and ensure everyone has a place in it.

That means that our leaders must confront the very range of social, economic and ethical questions raised at Ci2017.

If politicians don’t do that, society won’t ride the tiger of the technological future, but be eaten by it.  It’s up to our political class to accept their responsibility to lead, not be intimidated by what is coming, and certainly not exploit it to gain power by creating resentment and fear where understanding and support is what’s needed.

Whether our politicians and political institutions are ready for that difficult but vital challenge is, unfortunately, a question to which, on current performance, we cannot expect a positive answer from those who have or want political power.

CONCLUSION: Can human intelligence keep up with its own creations?

A few years ago, IBM CEO Ginni Rometty said:

In a world where value is shifting rapidly from things to knowledge, knowledge workers are the new means of production.  And it follows that the social network is the new production line[3].

But the question Rometty should have asked, which came up repeatedly at Ci2017, is this: if we are knowledge workers, how many of us have the education, training and aptitude to be effective knowledge producers?

And as AI progresses at lightning speed, will human intelligence and creativity be swamped by the intelligence and creativity of the technology people invent?

As the massive Facebook data breach has shown, humans can be great inventors and innovators, but we don’t always understand, or even anticipate, the consequences of our own inventiveness. It’s clear Facebook creator Mark Zuckerberg never imagined the dark side of his own creation, even as it revolutionised how people relate to each other as members of an online as well as a real-world social network.

As noted earlier, IBM also says that, due to the Internet of Things, the sum of human knowledge doubles not every 100 years, but every 12 hours. That’s a quantitative measure: it says nothing about the quality of the knowledge accumulating at such an amazing rate.  But if we, men, women and children, are overwhelmed by the vast quantity of information, or simply distracted by it (funny cat videos on YouTube are knowledge of a sort, but hardly contribute to productive human activity), we will too easily get lost in the technology tsunami.

That’s why the issues raised here are so important. If we don’t seek to understand the upheaval that’s imminent, if we don’t face up to its opportunities and risks openly and honestly, human intelligence won’t thrive in the Age of Acceleration.

But if we do anticipate, embrace and debate the future, and seek to understand the good and bad of what is coming very quickly down the track, we can plan now to ensure that the Age of Acceleration is humanity’s servant, not its master.

[1] http://www-935.ibm.com/services/no/cio/leverage/levinfo_wp_gts_thetoxic.pdf
[2] http://cyberlaw.stanford.edu/blog/2013/12/human-error-cause-vehicle-crashes
[3] http://www.duperrin.com/english/2014/03/10/quote-social-business-new-production-line-ginni-rometty-ibm/
