This is the first issue of Evolution One’s newsletter. In it, we’ll cover some of the key AI-related news and announcements that came out over the first two months of 2019.
Going forward, you can expect one post/email per month from us, highlighting a few select areas where we saw the most significant developments, plus expert opinions and other news we found interesting since our last post.
Machines vs. humans
In the past few years, AI has proven quite useful at tackling a wide range of tasks, from speech and image recognition to data mining and text translation. That being said, some areas have remained too complex for AI to handle successfully, especially when it comes to competing against human operators.
That’s what makes some of the recent advancements exciting: machines are becoming capable of performing on par with, or better than, humans, including in areas involving complex environments and imperfect information that previously remained the domain of humans. DeepMind’s AlphaStar, which won a series of StarCraft II matches against two professional players, is perhaps the most impressive example of the past months, especially given that StarCraft had previously proven particularly challenging for AI.
Still, some areas, at least for now, remain too difficult for AI to crack, no matter what approach is used, as we’ve witnessed with IBM losing to a human debater in February.
DeepMind’s AlphaStar masters StarCraft II and handily beats two professional players in a series of matches played in an unrestricted game setting. Link
IBM AI takes a swing at a top human debater for the second time and loses once again. Link
Ethics: deep fakes & biases
The discussion around the ethical and responsible use of AI has been heating up lately, which is unsurprising given AI’s growing capabilities, and with them the growing potential for abuse.
In particular, biased facial recognition models, which are becoming more widespread and are being adopted by police, remained in the spotlight, as did the deep-fakes controversy (which now extends to text as well as video; see the article about OpenAI’s text generator below). These are hard challenges that won’t be solved quickly, so an ongoing discussion of how to deal with them is critical.
On a more positive note, organizations around the world are increasingly investing in research on the implications of embracing AI in our lives.
OpenAI’s new model capable of generating text on any chosen topic proves to be so good that the company decides against releasing it, citing safety and security concerns. Link
AI-powered website “This Person Does Not Exist” generating hyper-realistic portraits of completely fake people using GANs goes viral. Link
Supposedly ‘fair’ algorithms can instead reinforce and perpetuate discrimination. Link
Facebook and the Technical University of Munich announce new independent TUM Institute for Ethics in Artificial Intelligence. Link
Amazon facial-identification software used by police falls short on tests for accuracy and bias after being trained using sets of images that skew heavily toward white men. Link
First completely AI-generated artwork — meaning work created without human-curated input — heads to auction. Link
While many worry about AI taking over the world and potentially even eliminating humanity (not that we seriously believe this is a legitimate concern at this point), it’s hard to ignore the benefits the technology already brings to people’s lives.
In particular, closing the gap between people with disabilities and the rest of the workforce has long been a focus area for some of the major players in the industry. Google’s recent announcement of Live Transcribe is another step toward eliminating this gap, assisting people with hearing impairments by providing them with a real-time transcription service.
In contrast, using AI to automate functions previously performed by humans remains more controversial: while it could improve the quality of service, it can also be viewed as AI taking jobs from human workers, which could provoke pushback from the workforce.
The general availability (GA) of Microsoft’s Healthcare Bot is one notable example here. With the quality of algorithms consistently improving and the computational resources needed to analyze data getting cheaper, we expect companies to shift more and more toward automated solutions for managing and executing calls and chats. We find Microsoft’s Healthcare Bot to be an interesting example of a specialized solution that goes beyond regular customer-service scenarios into the highly sensitive area of health-related customer experience.
Google announces real-time continuous transcription service Live Transcribe. Link
How people with disabilities use AI to improve their lives. Link
Microsoft Healthcare Bot service helps healthcare organizations improve customer service. Link
In general, when it comes to coding, AI-powered tools could significantly improve and optimize the code we write, suggesting more efficient structures or better-optimized functions, or flagging potential errors. The benefits, however, could go much further than that.
More specifically, in machine learning, the need for programming skills to leverage the benefits of neural networks complicates the process and limits the field to those who know how to code. Yet while writing and optimizing code is an important component of any efficient ML algorithm, it is one area that could potentially be completely automated. That would turn model building into a plug-and-play experience, freeing time to work on the architecture and physics of the network rather than learning Python and debugging code.
Mozilla and Ubisoft partner to develop an AI coding assistant. Link
Uber’s AI toolkit “Ludwig” built on top of TensorFlow is the next step to codeless AI. Link
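Ludwig illustrates the codeless direction well: instead of writing model code, a user describes the task declaratively and the toolkit handles network construction, training, and evaluation. A minimal sketch of what such a model definition could look like (the column names and task are hypothetical, chosen to illustrate the declarative idea rather than a verified production configuration):

```yaml
# Hypothetical Ludwig-style model definition: train a text classifier
# with no model code. Column names refer to a training CSV and are
# illustrative only.
input_features:
  - name: review_text   # free-text column used as model input
    type: text
output_features:
  - name: sentiment     # category column the model learns to predict
    type: category
```

Training would then amount to a single command pointing the tool at the definition file and a dataset, which is exactly the shift away from hand-written Python that the article describes.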
While AI has been steadily infiltrating more and more areas of our lives, it’s important to recognize that its capabilities are not limitless. There are still plenty of tasks that AI today can’t successfully tackle, or where it has delivered less than originally expected, with IBM Watson and AI-powered ETFs being just a couple of recent disappointments.
IBM CEO Ginni Rometty claims that IBM has never overpromised on Watson. Link
AI-Powered ETF failed miserably at beating the market in 2018 — an analysis of the reasons behind this failure. Link
An essay on whether men can aspire to ever build a mind. Link
Many argue that AI will destroy the world; others say it will save it; both sides agree that AI is critical to the future. We believe that a diversity of opinions is critical for creating an environment that allows new ideas to flourish, so we wanted to share a couple of interesting reads featuring expert opinions. The first article discusses Trump’s executive order on AI, which we covered in a recent post, and provides a few points of view on the announcement itself. The second presents experts’ opinions on who is leading the AI race, the U.S. or China.
Experts’ opinions on Trump’s executive order concerning AI. Link
Two experts debating which country is winning the AI race — the U.S. or China. Link