AI Newsletter: Issue 3

In our March newsletter, we discussed Richard Sutton's essay in which the founder of reinforcement learning argued that sheer computational power is king, and that the only methods worth pursuing in the long run are those that can generalize to make use of ever-increasing computational power. As often happens, though, bold arguments rarely go unchallenged. It should come as no surprise, then, that another prominent machine learning scientist, Max Welling, has written a response to Sutton's post, challenging the implied dominance of computationally intensive methods over specialized model-driven ones and providing a number of examples where that claim doesn't necessarily hold true.

According to Welling, models that rely predominantly on vast computational power tend to dominate only in strictly limited environments, as was the case with Deep Blue and AlphaGo beating humans at chess and Go. In those settings there is no strong reliance on external data, so "dumb" compute has more potential than models that draw on human experience and knowledge, simply because machine intelligence can then come up with better strategies than the ones humankind has found so far. However, in areas where data availability is limited (as with driverless cars), or where the data can't easily be simulated from a set of rules (unlike chess and Go), human knowledge is still essential to set up the rules and fill in the blank spots in the dataset.

We now live in a world where the amount of data being generated and exchanged continues to grow exponentially. Even so, we often have to rely on model-driven methods in areas where the datasets are simply not yet large enough to yield the best returns. As soon as a threshold amount of information becomes available for a specific problem area, though, methods that rely on sheer computational power typically start to lead the game.


Also, given the incredible success of the latest Avengers movie, we thought we'd share this curious piece covering the deep learning technology used to generate image captions for Avengers: Endgame. Be advised that the article goes deep into the nitty-gritty details, so if you don't like the idea of reciting the differences between RNN, CNN and VGG architectures in your sleep afterward, maybe sit this one out; but if this stuff makes you tick, dive right in!



Somewhat unsurprisingly, the discussion of the ethical use of AI in April continued to be dominated by controversy. First, it is becoming more and more obvious that, left unchecked, facial recognition can be a dangerous instrument in the wrong hands; as a recent experiment by the New York Times team demonstrated, one can build a fairly powerful system today with very little investment. As it turns out, though, even when the will for regulation and oversight is there, the actual mechanics of that process can be almost as thorny and controversial as the application of the technology itself. In the Western world, the idea of giving too much decision-making authority to a group of arbitrarily chosen individuals doesn't sit well with a lot of folks, as Google's recent troubles clearly demonstrated, while in China it is the actions of the government itself that are being deemed dangerous, which means state-led regulation isn't always the right answer either.


  • Google cancels AI ethics board in response to an outcry. Link

  • AI researchers tell Amazon to stop selling ‘flawed’ facial recognition to the police. Link

  • Facebook’s ad algorithm is a race and gender stereotyping machine, a study suggests. Link

  • We built an ‘unbelievable’ (but legal) facial recognition machine. Link

  • Microsoft’s AI research with Chinese military university fuels concerns. Link

  • One month, 500,000 face scans: how China is using A.I. to profile a minority. Link

AI applications

Despite some truly impressive progress in certain areas, it can still be argued that, overall, the results of efforts to apply AI to real-world problems fall short of the broader public's expectations. Many have come to believe that in the very near future AI will allow us to predict crimes before they happen, adjudicate court claims, or drive cars without any human supervision or involvement. Yet progress in all of those areas has so far been slower than expected, and it will likely take quite some time before any of those opportunities are fully realized.

The interesting thing is that the challenges that arise aren't always tied to the state of today's technology: as mentioned above, ethical dilemmas, for instance, can be a critical blocker in some cases. More broadly, though, the key issue is that in problems where the concept of fair judgment leaves a lot of room for interpretation, building AI systems that are both fair and highly capable can be a difficult undertaking.

At the same time, AI can do a great deal to help us tackle some of the really important issues in areas like climate change, weather prediction, and the search for new materials and chemicals: areas that have less to do with human constructs of fairness and justice and more with hard facts. It is already delivering amazing value in some of them.


  • Machine learning in the judicial system is mostly just hype. Link

  • Deep learning takes Saturn by storm. Link

  • How AI researchers used Bing search results to reveal disease knowledge gaps in Africa. Link

  • Microsoft wants to unleash its AI expertise on climate change. Link

  • Scientists use Artificial Intelligence to turn brain signals into speech. Link

  • ‘Deep medicine’ will help everyone, especially people like me with a rare disease. Link

  • How AI and data-crunching can reduce preterm births. Link

  • Harnessing machine learning to discover new materials. Link


Even though the amount of investment in AI-related projects is still increasing, and we expect this trend to continue in the near future, we are also starting to see some investors become a bit more cautious about attempts to apply AI to real-world problems. The shutdown of Anki after it failed to raise a new round of financing (having previously raised $200 million) should serve as a cautionary tale for anyone who still believes that a pitch containing the word "AI" is enough to get investors' buy-in. Of course, consumer robotics in general is a particularly challenging area, as it is proving hard to meet customers' expectations without pricing too many potential customers out of the market; another well-funded robotics startup, Jibo, shut down in early March. But then, maybe it's time to focus on other segments of the market and leave consumer robotics be, at least for a while?

On a separate note, the industry's commitment to eventually bringing self-driving technology home seems unwavering, which is encouraging, although the amount of capital required to sustain those efforts is pushing more and more players to seek partners to split the costs with, as witnessed by the latest investment rounds.


  • Robotics startup Anki is shutting down after raising $200M. Link

  • Onfido, which verifies IDs using AI, nabs $50M from SoftBank, Salesforce, Microsoft and more. Link

  • Daimler acquires a majority stake in Torc Robotics to accelerate autonomous truck development. Link

  • Uber’s self-driving car unit raises $1B from Toyota, Denso and Vision Fund ahead of spin-out. Link



  • ‘Alexa, find me a doctor’: Amazon Alexa adds new medical skills. Link

  • Microsoft rolls out Azure custom vision AI developer tools. Link

  • Introducing TensorFlow privacy: learning with differential privacy for training data. Link

  • This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI. Link

  • Google launches an end-to-end AI platform. Link

  • Microsoft releases Windows Vision Skills preview to streamline computer vision development. Link


  • Turing-winning AI researcher warns against secretive research and fake ‘self-regulation’. Link

  • Ben Evans: notes on AI bias. Link

  • Microsoft Azure CTO Russinovich sees an AI world that sounds a bit like Visual Basic. Link

  • “The Power of Self-Learning Systems” lecture from DeepMind’s co-founder Demis Hassabis. Link

Other interesting reads:

  • Toward emotionally intelligent Artificial Intelligence. Link (#technology)

  • The AI hardware startups are coming. Intel plans to be ready. Link (#startups)

  • Which Deep Learning framework is growing fastest? Link (#technology)

  • A survey of the European Union’s artificial intelligence ecosystem. Link (#policy)

  • Human side of Tesla autopilot. Link (#futureofAI)

  • Will Artificial Intelligence enhance or hack humanity? Link (#futureofAI)

  • Amazon’s empire rests on its low-key approach to AI. Link (#companies)

  • Microsoft: AI’s next frontier is experts teaching machines. Link (#companies)

  • One of Google’s top A.I. people has joined Apple. Link (#people)

  • Microsoft’s CEO meets with top execs every week to review AI projects. Link (#companies)

  • Google reveals HOList, an Environment for Machine Learning of Higher-Order Theorem Proving. Link (#technology)

AI Newsletter: Issue 2

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”

Richard S. Sutton

When it comes to AI research, humans have often (and unsurprisingly) sought to leverage human knowledge in a particular domain to drive advancements. This approach, while appearing natural and sensible on the surface, can be ineffective and even detrimental to progress over the long term, as has been demonstrated time and again over the last decades. Instead, according to Richard S. Sutton, considered one of the founding fathers of modern computational reinforcement learning, the only thing that truly makes a difference in the long run is building general-purpose methods that can scale as the amount of available computational power increases (which it always does).

Read more on this topic in Sutton’s new essay “The Bitter Lesson”.



Ethics arguably remains one of the hottest topics in any discussion about artificial intelligence. Large companies are increasingly implementing new boards and processes to ensure AI is employed ethically, and non-profits such as OpenAI are expanding their efforts so much that the initial funding of $1 billion has become insufficient to deliver on the mission of "stopping AI from ruining the world". In general, though, while ethics is obviously a fundamental and essential topic to guide AI investments, and referring back to Sutton's essay, it is questionable how humans, whose whole life experiences make them naturally biased, can create something truly unbiased. Even more importantly, it remains unclear how biased human judges could determine whether any such system is indeed biased.


  • Google creates an external advisory board to monitor it for unethical AI use. Link

  • Microsoft will be adding AI ethics to its standard checklist for product release. Link

  • Stanford University launches the Institute for Human-Centered Artificial Intelligence. Link

  • Tech companies must anticipate the looming risks as AI gets creative. Link

  • 2019 applied ethical and governance challenges in AI. Link

  • Is ethical A.I. even possible? Link

AI applications

AI is already spreading across industries, from retail to manufacturing to energy generation. We are now also seeing it start to successfully tackle such hyper-sensitive areas as human health, with bots helping patients navigate the steps needed to address their condition, and AI systems suggesting appropriate medications or helping to pick the correct formula for a new drug. While the expectation is often to match, or at least approach, human performance in order to save on labor, in some cases the technology is already at a stage where machines deliver better performance, and the key challenge now is making those technologies affordable.


  • Fast and accurate medication identification. Link

  • Will machines be able to tell when patients are about to die? Link

  • Machine learning can boost the value of wind energy. Link

  • How AI will invade every corner of Wall Street. Link

  • How Artificial Intelligence is changing science. Link

  • AI to the Rescue: How phones are turning into plant doctors for thousands of farmers. Link


Our February newsletter mentioned speech-related product launches from both Microsoft and Google focused on helping people with hearing impairments. In March, this trend continued, with both companies releasing AI-powered products and features for the visually impaired. While AI's role in making the world more accessible for everyone is fast becoming undeniable, it somehow still attracts less attention than things like deep fakes, which seems a bit unfortunate (although, interestingly enough, even this topic can occasionally spark some controversy).


  • Google releases Lookout app that identifies objects for the visually impaired. Link

  • Google rolls out an all-neural on-device speech recognizer. Link

  • Microsoft announces new features in Seeing AI. Link


Playing the AI game can prove a costly affair, and in March we saw a few notable examples confirming this. In particular, OpenAI chose to abandon its purely non-profit status and restructure as a "capped-profit" company, citing the need to raise substantial capital to serve its mission. Alphabet's Waymo also announced that it would seek outside investors to raise capital and gain external validation for its self-driving efforts. On another note, acquisitions in the AI space have been picking up lately, with McDonald's acquisition of Dynamic Yield for $300 million being the most notable March example.


  • OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital. Link

  • Alphabet’s Waymo seeks outside investors. Link

  • McDonald's acquires machine learning startup Dynamic Yield for $300 million. Link



  • Driver behaviours in a world of autonomous mobility. Link

  • China to overtake US in AI research. Link

  • DeepMind and Google: the battle to control artificial intelligence. Link

Other things to read:

  • Three pioneers in Artificial Intelligence win Turing Award. Link

  • Nvidia AI turns sketches into photorealistic landscapes in seconds. Link

  • Google Duplex rolls out to Pixel phones in 43 states. Link

  • Microsoft: business executives adopting AI also want to invest in motivating employees. Link

  • Facial recognition overkill: how deputies cracked a $12 shoplifting case. Link

  • Facebook’s AI couldn’t spot mass murder. Link

  • Inmates in Finland are training AI as part of prison labor. Link

AI Newsletter: Issue 1

This is the first issue of Evolution One’s newsletter. In it, we’ll cover some of the key AI-related news and announcements that came out over the first two months of 2019.

Going forward, you can expect one post/email per month from us highlighting a few select areas where we saw the most significant developments, plus expert opinions and other news we found interesting since our last post.


Machines vs. humans

In the past few years, AI has proven quite useful at tackling a wide range of tasks, from speech and image recognition to data mining to text translation. That said, some areas have remained too complex for AI to tackle successfully, especially when it comes to competing against human operators.

That's what makes some of the recent advancements exciting: machines are becoming increasingly capable of performing on par with, or better than, humans, including in areas that require dealing with complex environments and imperfect information, which until recently remained the domain of humans. DeepMind's AlphaStar, which won a series of StarCraft II matches against two professional players, is perhaps the most impressive example of the past months (especially given that StarCraft had previously proven particularly challenging for AI).

Still, some areas, at least for now, remain too difficult for AI to crack, no matter what approach is used, as we’ve witnessed with IBM losing to a human debater in February.


  • DeepMind’s AlphaStar manages to master StarCraft II and handily beat 2 professional players in a series of matches played in an unrestricted game setting. Link

  • IBM AI takes a swing at a top human debater for the second time and loses once again. Link

Ethics - deep fakes & biases

The discussion around ethical & responsible usage of AI has been heating up lately, which is unsurprising, really, given the growing capabilities — and thus potential dangers of abuse — of AI.

In particular, the issue of biased facial recognition models becoming more widespread and being adopted by police remained in the spotlight, as did the deep fakes controversy (which has now extended beyond videos to text communications as well; see the article about OpenAI's text generator). These are hard challenges that won't be addressed quickly, so an ongoing discussion on how to deal with them is critical.

On a more positive side, organizations around the world increasingly invest in research around the implications of embracing AI in our lives.


  • OpenAI’s new model capable of generating text on any chosen topic proves to be so good that the company decides against releasing it, citing safety and security concerns. Link

  • AI-powered website “This Person Does Not Exist” generating hyper-realistic portraits of completely fake people using GANs goes viral. Link

  • Supposedly ‘fair’ algorithms can instead reinforce and perpetuate discrimination. Link

  • Facebook and the Technical University of Munich announce new independent TUM Institute for Ethics in Artificial Intelligence. Link

  • Amazon facial-identification software used by police falls short on tests for accuracy and bias after being trained using sets of images that skew heavily toward white men. Link

  • First completely AI-generated artwork — meaning work created without human-curated input — heads to auction. Link

AI empowerment

While many worry about AI taking over the world and potentially even eliminating humanity (not that we seriously believe this is a legitimate concern at this point), it's hard to ignore the benefits the technology already brings to people's lives.

In particular, closing the gap between people with disabilities and the regular workforce has been a focus area for some of the major players in the industry for a while now. Google's recent announcement of Live Transcribe is another step toward eliminating this gap, assisting people with hearing problems by providing a real-time transcription service.

In contrast, using AI to automate functions previously performed by humans remains more controversial: while it could improve the quality of service, it can be viewed as AI taking jobs from human workers, which may justify pushback from the workforce.

The announcement of the GA version of Microsoft's Healthcare Bot is one notable example here. With the quality of algorithms consistently improving, and the computational resources needed to analyze data getting cheaper, we expect companies to shift more and more toward automated solutions for managing and executing calls and chats. We find Microsoft's Healthcare Bot an interesting example of a specialized solution that goes beyond regular customer-service scenarios and touches the highly sensitive area of health-related customer experience.


  • Google announces real-time continuous transcription service Live Transcribe. Link

  • How people with disabilities use AI to improve their lives. Link

  • Microsoft Healthcare Bot service helps healthcare organizations to improve customer services. Link

Codeless AI

In general, when it comes to coding, various empowerment tools can significantly improve and optimize the code we write, suggesting a more efficient structure or better-optimized functions to use, or searching for potential errors in the code. The benefits, however, could go much further than that.

More specifically, in machine learning the need for programming skills to reap the benefits of neural networks complicates the process and limits the field to those who know how to code. While writing and optimizing code is an important component of any efficient ML workflow, it is one area that could potentially be completely automated, turning model building into a plug-and-play experience and freeing up time to work on the architecture and physics of the network rather than learning Python and debugging code.
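To make the "declarative model definition" idea concrete, here is a minimal toy sketch of the approach tools like Ludwig take: the user writes a config describing inputs and features, and the framework turns it into a working model. All names and the config schema here are hypothetical illustrations, not Ludwig's actual API.

```python
# Hypothetical sketch: a declarative config is turned into a prediction
# function by a generic "framework" routine, so the user writes no model code.

def build_model(config):
    """Assemble a (toy) linear scoring function from a declarative spec."""
    weights = {name: spec.get("weight", 1.0)
               for name, spec in config["input_features"].items()}
    bias = config.get("bias", 0.0)

    def predict(example):
        # Score = bias + sum of weighted input features.
        return bias + sum(weights[name] * example[name] for name in weights)

    return predict

# The user's entire "program" is this config.
config = {
    "input_features": {
        "sqft":  {"type": "numerical", "weight": 0.5},
        "rooms": {"type": "numerical", "weight": 2.0},
    },
    "bias": 10.0,
}

model = build_model(config)
print(model({"sqft": 100.0, "rooms": 3}))  # 10 + 0.5*100 + 2*3 = 66.0
```

In a real codeless framework the config would also declare training data, feature encoders and output heads, and the weights would be learned rather than specified, but the division of labor is the same: the human describes the model, the framework writes the code.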


  • Mozilla and Ubisoft partner to develop an AI coding assistant. Link

  • Uber’s AI toolkit “Ludwig” built on top of TensorFlow is the next step to codeless AI. Link

AI limitations

While AI has been steadily infiltrating more and more areas of our lives, it's important to recognize that its capabilities are not limitless: there are still plenty of tasks that AI can't successfully tackle today, or where it has delivered less than originally expected, with IBM Watson and AI-powered ETFs being just a couple of recent disappointments.


  • IBM CEO Ginni Rometty claims that IBM has never overpromised on Watson. Link

  • AI-Powered ETF failed miserably at beating the market in 2018 — an analysis of the reasons behind this failure. Link

  • An essay on whether men can aspire to ever build a mind. Link



Many argue that AI will destroy the world; others say it will save it; both sides agree that AI is critical to the future. We believe a diversity of opinions is essential to an environment where new ideas can flourish, so we wanted to share a couple of interesting reads featuring expert opinions. The first article discusses Trump's executive order on AI, which we covered in a recent article, and provides a few points of view on the announcement itself. The second presents experts' opinions on who is leading the AI race, the U.S. or China.


  • Experts’ opinions on Trump’s executive order concerning AI. Link

  • Two experts debating which country is winning the AI race — the U.S. or China. Link

