In our March newsletter, we discussed Richard Sutton’s essay in which the founder of reinforcement learning argued that sheer computational power is king, and that the only methods worth pursuing in the long run are those that scale with ever-increasing compute. Bold arguments, however, rarely go unchallenged. It should come as no surprise, then, that another prominent machine learning scientist, Max Welling, has written a response to Sutton’s post, questioning the implied dominance of computationally intensive methods over specialized, model-driven ones and offering a number of examples where Sutton’s claim doesn’t necessarily hold.
According to Welling, models that rely predominantly on vast computational power tend to dominate only in strictly limited environments, as was the case when Deep Blue and AlphaGo beat humans at chess and Go. In those settings there is no strong reliance on external data, so “dumb” power indeed has more potential than models built on human experience and knowledge, simply because machine intelligence can discover better strategies than any humankind has found so far. However, in areas where data availability is limited (as with driverless cars), and/or the data can’t easily be simulated from a fixed set of rules (as it can in chess and Go), human knowledge remains essential to set up the rules and fill in the blank spots in the dataset.
We now live in a world where the amount of data being generated and exchanged continues to grow exponentially. Even so, we often have to rely on model-driven methods in areas where the datasets are simply not yet large enough to yield the best returns; as soon as the threshold amount of information becomes available for a specific problem area, though, methods that rely on sheer computational power typically take the lead.
Also, given the incredible success of the latest Avengers movie, we thought we’d share this curious piece covering the deep learning tech used to generate image captions for Endgame. Be advised that the article goes deep into the nitty-gritty details of the technology, so if you don’t like the idea of reciting the differences between RNN, CNN, and VGG architectures in your sleep afterward, maybe sit this one out; but if this is your kind of thing, dive right in!
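For readers who want a feel for how such caption generators are typically wired together, here is a deliberately tiny sketch of the common encoder-decoder pattern: a CNN (such as VGG) encodes the image into a feature vector, which initializes an RNN that then emits the caption word by word. All dimensions, weights, and the toy vocabulary below are made up for illustration; a real system would use a pretrained VGG network and trained decoder weights.

```python
import numpy as np

# Hypothetical sizes; a real captioner would use pretrained VGG features
# (e.g. a 4096-dim fc7 vector) and a trained RNN language model.
FEATURE_DIM = 4096   # size of the image feature vector from the CNN encoder
HIDDEN_DIM = 256     # RNN hidden state size
VOCAB_SIZE = 10      # toy vocabulary of word indices

rng = np.random.default_rng(0)

# Random matrices standing in for trained weights.
W_feat = rng.standard_normal((HIDDEN_DIM, FEATURE_DIM)) * 0.01
W_hh = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.01
W_xh = rng.standard_normal((HIDDEN_DIM, VOCAB_SIZE)) * 0.01
W_out = rng.standard_normal((VOCAB_SIZE, HIDDEN_DIM)) * 0.01

def caption_step(h, word_onehot):
    """One RNN decoder step: update the hidden state, emit word logits."""
    h = np.tanh(W_hh @ h + W_xh @ word_onehot)
    return h, W_out @ h

# In a real system, this vector would come from the CNN encoder (e.g. VGG).
image_features = rng.standard_normal(FEATURE_DIM)
h = np.tanh(W_feat @ image_features)  # initialize the decoder from the image

word = np.zeros(VOCAB_SIZE)
word[0] = 1.0  # index 0 plays the role of a <start> token
caption = []
for _ in range(5):  # greedy decoding, fixed length for this sketch
    h, logits = caption_step(h, word)
    idx = int(np.argmax(logits))  # pick the most likely next word
    caption.append(idx)
    word = np.zeros(VOCAB_SIZE)
    word[idx] = 1.0

print(caption)  # a list of 5 indices into the toy vocabulary
```

With random weights the output is meaningless, of course; the point is only the data flow: image, then CNN features, then RNN state, then one word per step.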
Somewhat unsurprisingly, the discussion of the ethical use of AI in April continued to be dominated by controversy. First, it is becoming more and more obvious that, left unchecked, facial recognition can be a dangerous instrument in the wrong hands, and as a recent experiment by the New York Times team demonstrated, one can build a fairly powerful system today with very little investment. However, even when the will for regulation and oversight is there, the mechanics of that process can prove almost as thorny and controversial as the application of the technology itself. In the Western world, the idea of giving too much decision-making authority to a group of arbitrarily chosen individuals doesn’t sit well with a lot of folks, as Google’s recent troubles clearly demonstrated, while in China it is the actions of the government itself that are now being deemed dangerous, which in turn means that state-led regulation isn’t always the right answer either.
Google cancels AI ethics board in response to an outcry. Link
AI researchers tell Amazon to stop selling ‘flawed’ facial recognition to the police. Link
Facebook’s ad algorithm is a race and gender stereotyping machine, a study suggests. Link
We built an ‘unbelievable’ (but legal) facial recognition machine. Link
Microsoft’s AI research with Chinese military university fuels concerns. Link
One month, 500,000 face scans: how China is using A.I. to profile a minority. Link
Despite some truly impressive progress in certain areas, it can still be argued that overall the efforts to leverage AI to solve real-world problems fall short of the broader public’s expectations. Many have come to believe that in the very near future AI will allow us to predict crimes before they happen, adjudicate court claims, or drive cars without any human supervision or involvement; yet progress in all of those areas has so far been slower than expected, and it will likely take quite some time before any of those opportunities are fully realized.
The interesting thing is that the challenges that arise aren’t always tied to the state of today’s technology; as mentioned above, ethical dilemmas, for instance, can be a critical blocker in some cases. More broadly, though, the key issue is that wherever the concept of fair judgment leaves a lot of room for interpretation, building AI systems that are both fair and highly capable is a difficult undertaking.
At the same time, however, AI can do a great deal to help us tackle some of the really important issues in areas like climate change, weather prediction or the search for new materials and chemicals — in other words, areas that have less to do with human constructs of fairness and justice, and more with the hard facts — and it is already delivering amazing value in some of those areas.
Machine Learning in the judicial system is mostly just hype. Link
Deep learning takes Saturn by storm. Link
How AI researchers used Bing search results to reveal disease knowledge gaps in Africa. Link
Microsoft wants to unleash its AI expertise on climate change. Link
Scientists use Artificial Intelligence to turn brain signals into speech. Link
‘Deep medicine’ will help everyone, especially people like me with a rare disease. Link
How AI and data-crunching can reduce preterm births. Link
Harnessing machine learning to discover new materials. Link
Even though the amount of investment in AI-related projects is still increasing, and we expect this trend to continue in the near future, we are also starting to see some investors become a bit more cautious about attempts to apply AI to real-world problems. The shutdown of Anki after it failed to raise a new round of financing (having previously raised $200 million) now serves as a cautionary tale for anyone who still believes that a pitch containing the word “AI” is enough to get investors’ buy-in. Of course, consumer robotics in general is a particularly challenging area, as it is proving hard to meet customers’ expectations without pricing too many potential buyers out of the market; another well-funded robotics startup, Jibo, shut down in early March. But then, maybe it’s time to focus on other segments of the market and leave consumer robotics be, at least for a while?
On a separate note, the industry’s commitment to eventually bringing self-driving technology to market seems unwavering, which is encouraging, although the amount of capital required to sustain those efforts appears to be pushing more and more players to seek partners to split the costs with, as witnessed by the latest investment rounds.
Robotics startup Anki is shutting down after raising $200M. Link
Onfido, which verifies IDs using AI, nabs $50M from SoftBank, Salesforce, Microsoft and more. Link
Daimler acquires a majority stake in Torc Robotics to accelerate autonomous truck development. Link
Uber’s self-driving car unit raises $1B from Toyota, Denso and Vision Fund ahead of spin-out. Link
‘Alexa, find me a doctor’: Amazon Alexa adds new medical skills. Link
Microsoft rolls out Azure custom vision AI developer tools. Link
Introducing TensorFlow privacy: learning with differential privacy for training data. Link
This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI. Link
Google launches an end-to-end AI platform. Link
Microsoft releases Windows Vision Skills preview to streamline computer vision development. Link
Turing-winning AI researcher warns against secretive research and fake ‘self-regulation’. Link
Ben Evans: notes on AI bias. Link
Microsoft Azure CTO Russinovich sees an AI world that sounds a bit like Visual Basic. Link
“The Power of Self-Learning Systems” lecture from DeepMind’s co-founder Demis Hassabis. Link
Other interesting reads:
Toward emotionally intelligent Artificial Intelligence. Link (#technology)
The AI hardware startups are coming. Intel plans to be ready. Link (#startups)
Which Deep Learning framework is growing fastest? Link (#technology)
A survey of the European Union’s artificial intelligence ecosystem. Link (#policy)
Human side of Tesla autopilot. Link (#futureofAI)
Will Artificial Intelligence enhance or hack humanity? Link (#futureofAI)
Amazon’s empire rests on its low-key approach to AI. Link (#companies)
Microsoft: AI’s next frontier is experts teaching machines. Link (#companies)
One of Google’s top A.I. people has joined Apple. Link (#people)
Microsoft’s CEO meets with top execs every week to review AI projects. Link (#companies)
Google reveals HOList, an Environment for Machine Learning of Higher-Order Theorem Proving. Link (#technology)