AI Newsletter: Issue 2

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”

Richard S. Sutton

When it comes to AI research, humans have often (and unsurprisingly) sought to leverage domain-specific human knowledge to drive advancements. This approach, while natural and sensible on the surface, can be ineffective and even detrimental to progress over the long term, as has been demonstrated time and again over the last decades. Instead, according to Richard S. Sutton, considered one of the founding fathers of modern computational reinforcement learning, the only thing that truly makes a difference in the long run is building general-purpose methods that scale as the amount of available computational power increases (which it always does).

Read more on this topic in Sutton’s new essay “The Bitter Lesson”.



Ethics arguably remains one of the hottest topics in any discussion of artificial intelligence. Large companies are increasingly setting up boards and processes to ensure AI is employed ethically, and non-profits such as OpenAI are expanding their efforts to the point where an initial $1 billion in funding is no longer sufficient to deliver on the mission of “stopping AI from ruining the world”. Still, while ethics is obviously a fundamental and essential consideration for guiding AI investments, referring back to Sutton’s essay, it is questionable how humans, whose whole life experiences make them naturally biased, can create something that is truly unbiased. Even more importantly, it remains unclear how biased human judges could determine whether any such system is indeed biased.


  • Google creates an external advisory board to monitor its use of AI for ethical issues. Link

  • Microsoft will be adding AI ethics to its standard checklist for product release. Link

  • Stanford University launches the Institute for Human-Centered Artificial Intelligence. Link

  • Tech companies must anticipate the looming risks as AI gets creative. Link

  • 2019 applied ethical and governance challenges in AI. Link

  • Is ethical A.I. even possible? Link

AI applications

AI is already spreading across industries, from retail to manufacturing to energy generation. It is also now starting to successfully tackle hyper-sensitive areas such as human health, with bots helping patients navigate the steps needed to address their conditions, and AI systems suggesting appropriate medications or helping to pick the right formula for a new drug. While the expectation is often to match, or at least approach, human-level performance in order to save on labor, in some cases the technology is already at the stage where machines deliver better performance, and the key challenge now is making it affordable.


  • Fast and accurate medication identification. Link

  • Will machines be able to tell when patients are about to die? Link

  • Machine learning can boost the value of wind energy. Link

  • How AI will invade every corner of Wall Street. Link

  • How Artificial Intelligence is changing science. Link

  • AI to the Rescue: How phones are turning into plant doctors for thousands of farmers. Link


Our February newsletter mentioned speech-related product launches from both Microsoft and Google aimed at helping people with hearing impairments. March continued this trend, with both companies releasing AI-powered products and features for the visually impaired. While AI’s role in making the world more accessible for everyone is fast becoming undeniable, it still somehow attracts less attention than things like deepfakes, which seems a bit unfortunate (although, interestingly enough, even this topic can occasionally spark controversy).


  • Google releases Lookout app that identifies objects for the visually impaired. Link

  • Google rolls out an all-neural on-device speech recognizer. Link

  • Microsoft announces new features in Seeing AI. Link


Playing the AI game can prove to be a costly affair, and March brought a few notable examples confirming this notion. In particular, OpenAI chose to abandon its purely non-profit status and restructure as a “capped-profit” company, citing the need to raise substantial capital to serve its mission. Alphabet’s Waymo, meanwhile, announced that it would seek outside investors, both to raise capital and to gain external validation for its self-driving car efforts. On another note, acquisitions in the AI space have been picking up lately, with McDonald’s purchase of Dynamic Yield for $300 million the most notable example in March.


  • OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital. Link

  • Alphabet’s Waymo seeks outside investors. Link

  • McDonald's acquires machine learning startup Dynamic Yield for $300 million. Link



  • Driver behaviours in a world of autonomous mobility. Link

  • China to overtake US in AI research. Link

  • DeepMind and Google: the battle to control artificial intelligence. Link

Other things to read:

  • Three pioneers in Artificial Intelligence win Turing Award. Link

  • Nvidia AI turns sketches into photorealistic landscapes in seconds. Link

  • Google Duplex rolls out to Pixel phones in 43 states. Link

  • Microsoft: business executives adopting AI also want to invest in motivating employees. Link

  • Facial recognition overkill: how deputies cracked a $12 shoplifting case. Link

  • Facebook’s AI couldn’t spot mass murder. Link

  • Inmates in Finland are training AI as part of prison labor. Link