Summer reading: 5 great articles on AI & technology and why they matter

(3 min. read)

The EU ‘gave’ us free roaming! So, despite the books I had with me during my summer holiday, I couldn’t resist spending some time on the internet and social media. And, lucky me, there was a ton of new stuff that blew my mind.

Below are 5 of the articles I think you should definitely read yourself. They provided me with great new insights and will most certainly trigger your thoughts as well if you’re involved in data, data analysis, machine learning, data science or technology.


Mapping the human brain

Up until now, machine learning has not been able to perform general tasks. Most of its use is focused on completing very specific assignments, and it’s really hard to turn those algorithms into more versatile ones.

The solution: digging up every single thing there is to know about the inner workings of our own brains (Research abstract). But that’s easier said than done.

One reason is the vastness of the human brain. A single cubic millimetre of brain tissue contains about 100,000 neurons and up to 15 million synapses, the connections between those neurons. If we could arrange those in a neat way, they would span the width of Manhattan. Pretty hard to deal with that amount of complexity.

The article below describes the endeavor of several teams to work their way through this enormous challenge: where they are, what they have already achieved and what issues they face. https://www.quantamagazine.org/mapping-the-brain-to-build-better-machines-20160406


Feedback in machine learning

For humans, receiving feedback is one of the most important elements for learning (Importance of feedback in learning). So, it comes as no surprise that developments in machine learning focus on incorporating feedback in future structures.

Convolutional neural networks (CNNs) are one of the most prominent deep learning techniques in use today. They do not rely on feedback structures, but on a feed-forward approach instead, where each layer uses the output of the previous one until the final output is reached.
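To make that feed-forward idea concrete, here’s a toy two-layer network in NumPy. The layer sizes and random weights are made up, so it computes nothing meaningful; a real CNN stacks convolution layers on the same principle of each layer feeding the next.

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights (hypothetical sizes)
W2 = rng.standard_normal((2, 4))   # layer 2 weights

x = np.array([0.5, -1.0, 2.0])     # input vector
h = relu(W1 @ x)                   # layer 1 output...
y = W2 @ h                         # ...feeds straight into layer 2, no feedback loop
print(y.shape)                     # (2,)
```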

A relatively new kid on the block is so-called reinforcement learning. This technique does use feedback to learn. It is a prominent part of driverless vehicles and the foundation of the AlphaGo algorithm that surprised everybody by beating the world’s best player of a very complicated board game called Go earlier this year (New York Times article).
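For a feel of what “learning from feedback” means, here is a minimal tabular Q-learning sketch — my own toy example, not AlphaGo’s algorithm, and all the numbers are made-up hyperparameters. An agent in a tiny corridor gets no labelled examples at all, only a reward when it reaches the goal, and still figures out which way to walk.

```python
import random

# Tiny corridor of 4 states; the goal is the rightmost one.
n_states, goal = 4, 3
actions = [-1, 1]                        # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

random.seed(0)
for _ in range(1000):                    # episodes
    s = 0
    for _ in range(100):                 # cap episode length
        if s == goal:
            break
        # epsilon-greedy: mostly exploit what we know, sometimes explore
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == goal else 0.0   # the reward: the only feedback signal
        # Nudge the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# After training, the agent prefers stepping right in every non-goal state.
```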

The article below provides a very accessible description of this technique, how it works and what it’s used for:

https://www.technologyreview.com/s/603501/10-breakthrough-technologies-2017-reinforcement-learning/


Overview of Python libraries

It’s decided: I’ll stick with Python and leave my R skills as they are (well, ‘skills’ is maybe a bit of an overstatement).

I worked on a great little private ‘summer’ project, and using the right Python libraries was key. This article really helped me out!

Besides the obvious pandas and NumPy, there are 13 others, like Gensim, Keras and Theano.
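In case you haven’t tried them yet, here’s a quick taste of what pandas and NumPy give you, using a tiny made-up dataset of my own (the city names and temperatures are invented for illustration):

```python
import numpy as np
import pandas as pd

# A tiny invented dataset: temperature readings per city
df = pd.DataFrame({
    "city": ["Amsterdam", "Utrecht", "Amsterdam", "Utrecht"],
    "temp_c": [21.5, 20.1, 23.0, 19.4],
})

# One-liner aggregation: mean temperature per city
means = df.groupby("city")["temp_c"].mean()
print(means)

# NumPy does the vectorised math underneath: deviation from the overall mean
anomaly = df["temp_c"] - df["temp_c"].mean()
print(np.round(anomaly.to_numpy(), 2))
```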

http://www.kdnuggets.com/2017/06/top-15-python-libraries-data-science.html


Beware! Facebook couldn’t understand its AI bots and shut them down

The fact that Facebook AI Research shut down a pair of smart bots (backed by neural networks) made headlines this summer.

For some (most?) this was yet another example of how real the dangers of smart machines are. Suddenly, the apocalyptic stories of thinking machines had turned into reality.

The truth, as always, was not that black and white. The researchers simply pulled the plug because they wanted to understand how those bots learned. With the bots creating their own language, they couldn’t achieve one of their main research goals. A simple case of a research experiment not working out.

Besides, bots talking in their own language is nothing new. For some, it even means we embark on new, endless possibilities when they do so (OpenAI built bots that learn their own language).

Nevertheless, to me, this news shows one of the aspects of AI we have to monitor really carefully. The fact that these bots at Facebook developed their own language was a result of the goal they were given: negotiate in an effective way. Creating a new language was part of that, but not expected… and in this case an undesirable outcome.

And that might be the worrying part of this story: how many times will we have to deal with unintended consequences of self-learning algorithms? And what will those consequences be?

http://www.independent.co.uk/voices/facebook-shuts-down-robots-ai-artificial-intelligence-develop-own-language-common-a7871341.html


The big, big problem of fake news

Ever since Donald Trump’s election, fake news and internet bubbles have been on a lot of people’s minds.

For me, this is not just a one-time event, but highlights a fundamental flaw in the internet’s economic system. This system is based on clicks, so everything we get to see when searching on Google, browsing Facebook or sifting through all those tweets is designed to make us click on it (Blaming fake news not the answer). Each click generates value, either directly when it’s an advertisement, or indirectly through insights into our preferences. This focus even stimulates the emergence of juicy, clickable fake news in our feeds and search results (Google algorithm favours fake news).

The response thus far has been remarkable: censorship. Suddenly, it’s an accepted idea that tech and social media companies remove posts with content they consider fake. But the underlying problem hasn’t been dealt with. How big is the problem of fake news really? Have a look at this article and you’ll realize: don’t trust anything you read…

https://www.technologyreview.com/s/608561/first-evidence-that-social-bots-play-a-major-role-in-spreading-fake-news/
