Articles

AI: Arms Race 2.0

On February 11, President Trump signed an executive order outlining the American AI Initiative. Among other things, the order discussed the need for the U.S. to maintain its current leadership in AI. This was followed on February 12 by an announcement from the Department of Defense, which released a summary of its strategy on artificial intelligence.

And yet, one can argue that continued U.S. leadership is far from certain: in particular, as we’ve discussed in “Reflections on the State of AI: 2018”, China has already surpassed the U.S. in investments in AI startups, with close to 50% of AI investment dollars going to Chinese startups (in terms of the number of deals, the U.S. is still in the lead, although the share of AI startups hailing from the U.S. has been steadily declining over the last few years).

China is now also challenging the U.S. in both the number of patents and the number of publications in the field. True, the quality of some of those publications might still lag behind the U.S., but China has been catching up, and its rate of advancement in the field over the last few years has been simply staggering.

The desire to dominate the AI space is quite understandable — after all, the idea that AI would one day enable a whole new world of possibilities has been around for decades. Until recently, however, it was largely relegated to the realm of science fiction and the works of a select few researchers and futurists. That started changing in the early 2010s, when the technology and, perhaps equally important, the computational resources finally caught up, and we got AI (or rather, machine learning) capable of solving real-world problems for the first time.

As usually happens with game-changing advancements, however, different countries have found themselves facing the new opportunities and challenges offered by AI in vastly different circumstances.

For the rich Western democracies, the emergence of machine intelligence offers opportunities to explore new frontiers, build a new generation of successful companies, and further improve their societies. However, it also means having to face the dangers that AI could pose to their citizens if applied recklessly. In the last few years, that has meant increasingly prioritizing a “no harm” approach when devising AI policy — the West, with its emphasis on individualism and its strong human rights record, simply has more to lose and less to gain from AI compared to other places. And while the West, and more specifically the U.S., might still lead the way in AI research, implementation is going to be harder and more challenging, given the higher expectations the West faces around ethics and privacy.

In contrast, China faces a different set of challenges altogether: given its historical context and stage of economic development, the opportunities potentially stemming from AI often outweigh the dangers of its abuse, which in turn has led it to embrace AI and execute an aggressive investment and deployment strategy.

It’s also worth noting that in broad AI deployments, China and the West might be optimizing for different results. In China, a deployment would often be optimized to deliver the best results for society as a whole, even if it inadvertently harms minorities in the process. The West, by contrast, focuses on human rights and the fair treatment of every person, including any outliers, which in turn creates unique challenges for AI adoption.

As for the rest of the world, most countries today fall somewhere in between the extremes represented by the West and China.

***

Now, let’s dig a bit deeper into the key factors that will determine the leader in the currently unfolding global AI arms race.

Building on what we’ve discussed above, we propose segmenting the world into three major groups: the West, China, and the rest of the world. Obviously, such segmentation is quite subjective, but we believe it frames the conversation around AI policy in a useful way.

Now, when thinking of any problem that could be tackled using machine learning, there are three building blocks to consider: data, people, and money.

Source: Evolution One

Note: Quantities of each resource here are subjective and serve illustrative purposes only; we elaborate on how we arrived at them in each section below.

Data.

The last couple of decades have brought us tremendous growth in the amount of data generated, and there is no sign of it slowing down — if anything, it has been accelerating in the last few years, driven by our ability to generate ever-increasing amounts of information, as well as by the explosion in the number of sources of new data, on both the hardware and software sides.

According to IDC, more than 5 billion consumers already interact with data every day, a number that will increase to 6 billion by 2025. And while in the early 2010s smartphones were responsible for the bulk of the growth in the amount of data, going forward the growth will be driven more and more by IoT devices, which are now expected to generate over 90 zettabytes of data per year by 2025 — over 50% of all data forecasted.

One thing worth underscoring here is that the relationship between the number of devices and the amount of data they generate has never been linear, and nowadays this is especially true. In the late 2000s and early 2010s, it was the growing penetration of smartphones, coupled with the declining costs of transferring and storing data, that drove the amounts of data produced, and there were obvious upper limits on the number of smartphones that could be in use at any given time. Today, however, at 3 billion smartphones in the world, that growth is slowing down, yet the amount of data is growing as fast as ever.

There are two key factors at play here.

First, while smartphone growth is slowing down globally, IoT represents a different story. As of 2018, there were at least 7 billion IoT devices (with other estimates putting this number significantly higher), poised to grow to 21.5 billion by 2025, surpassing all the other categories combined. Perhaps more important than the specific number of devices is the fact that there is no natural limit to how many IoT devices can be put out there: it’s quite possible to imagine a world with dozens or even hundreds of devices for every living person, measuring everything from the traffic on the roads to the temperature in our apartments (and this is before even accounting for the IoT devices used by enterprises).

Second, the amount of existing data is to a significant extent defined by our willingness and ability to collect, share, and store it (be it temporarily or permanently). And here, the choices we make around which types of data we are willing to collect and retain are becoming crucial — any data that’s not captured today is by definition lost, and this effect compounds over time.

Imposing restrictions on data collection out of concern for people’s privacy and to prevent potential abuses might be a reasonable thing to do, but in the narrow context of machine learning, those choices limit the amount of data available to train models on. This, in turn, means that countries less concerned about privacy (with China being a prime example — see, for instance, its experiments with AI-powered security cameras to catch criminals) will likely gain an edge when it comes to data.

That being said, it’s also important to recognize that privacy concerns don’t apply to every single problem, and there are some fields (such as driverless cars or machine translation — see some interesting expert opinions here) where the West actually has better datasets.

People.

People represent the second crucial building block, as it is people who define the approach used to tackle any problem that machine learning could address.

Here, the situation is somewhat the opposite of what we saw with data — the West, and the U.S. in particular, has a natural advantage, stemming from the fact that it remains one of the most desirable places to work and live, and thus has an easier time attracting people from all over the world. The West also tends to be more tolerant of unorthodox ideas, which makes for a more creative environment and helps find and nurture innovative ideas.

In fundamental research, the U.S. has also historically had an advantage, thanks to its established system of research universities, not to mention its ability to attract top talent from all over the world. Still, in recent years China has built a system of top-tier research universities of its own and continues to invest in it aggressively. According to the Economist, China already confers more doctoral degrees in natural sciences and engineering, and publishes more articles in peer-reviewed journals, than the U.S. Moreover, in AI-specific research, the U.S. lead is even less certain, as mentioned before (see the CB Insights report for details).

Finally, when it comes to practitioners focused on implementation (rather than pure research), both the U.S. and China have some unique strengths; two possible proxies for evaluating them are the number of startups founded in each country and the number of professionals joining the field.

The U.S. has the highest number of startups, as well as an established ecosystem of big tech companies such as Google, Microsoft, and Facebook investing in the field. Still, China is #2 here (#3, if Europe is counted as a whole); moreover, it attracts an unprecedentedly high amount of investment (more on that in the section below) and is also home to a select few companies that can rival the biggest players in the U.S. (namely Alibaba, Tencent & Baidu).

In terms of the workforce, however, China has a clear lead — today it produces three times more college graduates with STEM degrees than the U.S., which faces a chronic shortage of qualified personnel. Unlike in research, where a select few often matter the most, with practitioners numbers do matter, and producing enough engineering and science majors becomes crucial to establishing and maintaining leadership in the field.

Investments.

According to CB Insights, investments in Chinese startups contributed 50% of the dollars invested in AI startups globally in 2017, up from just 11.6% in 2016. It comes as no surprise, then, that the two best-funded companies of 2018 — SenseTime and Face++ — were both from China. We briefly discussed the 2018 AI investment landscape in our recent article and concluded that China already leads the race when it comes to early-stage investments.

Still, now that President Trump has announced his American AI Initiative, we feel it might be a good time to go back and consider how this announcement affects the balance of power.

Before we do that, however, let’s pause for a second and think through a funnel that can help analyze the efficiency of an investment strategy and determine its ultimate success or failure.

Source: Evolution One

The following three steps could help to frame the discussion:

  • First, consider the overall size of the proposed investment, and whether it would be enough to make a meaningful difference given the stated goals

  • Second, consider how efficient and developed the ecosystem that is supposed to absorb the funds already is

  • Finally, determine how focused the proposed strategy is and whether it targets the areas with the potential to yield the best possible returns (the areas themselves differ based on the overall goal — e.g., supporting an already established and well-developed ecosystem might require a different strategy than building basic institutions from scratch).

Now, applying this framework to President Trump’s AI strategy, one can safely conclude that it doesn’t really change anything, given how vague and generic it is. That is not to say that the U.S. is falling behind China when it comes to investments — rather, it becomes clear that both countries are equally well positioned in terms of the amount of funding available, the robustness of their ecosystems, and the availability of multiple focus areas that pose significant opportunities for advancement.

Conclusion

While many today view AI as a new arms race, in which countries are poised to compete fiercely against each other (and the tone of President Trump’s announcement doesn’t help the matter), we believe that collaboration in AI leads to consistently better outcomes for all.

Interestingly enough, the West is particularly likely to benefit from promoting global collaboration (more than its counterpart, which is better positioned to thrive in a siloed world), as it was the freedom to think and create that historically made places like the U.S. attractive to talent from around the world.

The route to sustainable leadership in AI for the West would likely rely on:

  • Focusing on fostering global collaboration, including with researchers and companies from places like China

  • Investing in the development of frameworks for the ethical use of AI, while taking care not to put undue restrictions on the initiative of private businesses

The role of Western governments should thus be to help frame and guide the discussion, rather than to impose unnecessary restrictions that stifle innovation.


Reflections on the State of AI: 2018

Every day, multiple news items discussing various things “related to AI” pile up in our mailboxes, and dozens, if not hundreds, of articles, opinions, and overviews with the term “AI” in their titles are published every week. However, not everything claimed to be related to artificial intelligence is actually meaningful or relevant; often, it has nothing to do with the field at all.

Aiming to democratize knowledge of machine learning, neural networks, and other segments of AI, we’ve decided to launch our efforts with a set of focused articles that cover the latest advancements in the field, take a look at the key players, and provide insights into the most promising technology, as well as both the opportunities and dilemmas the industry is facing today.

In this first article, we provide a concise overview of the key developments we saw in 2018, segmented by key contributors, applications and challenges.

***

Today, with hundreds of companies deeply engaged in the AI space, and even more working to figure out their strategy for the field, it might be a bit hard to pinpoint the specific players best positioned to lead the way in the future. Still, if we look at any of the many lists outlining the key players (see here and here, for example), five companies — Google, Facebook, Amazon, Microsoft & IBM — inevitably end up on all of them (other companies mentioned almost as often are Apple, Tencent & Baidu). In this article, we’ll focus on Google & Microsoft, as those two appeared most often in AI news in 2018; still, the rest of the tech giants were by no means less prolific, and we plan to cover some of them in more detail in our next article, which will focus on the latest advancements in technology.

Google

Google Pixel: Night Sight capabilities demo from Google blog

It’s been a fruitful year for Google’s efforts in the AI space, as witnessed by the number of new products the company introduced, as well as some critical improvements made to the existing services.

The largest number of announcements came out of Google I/O, the company’s annual developer conference held in May. Among other things, Google introduced Smart Compose for Gmail, made some really impressive updates to Google Maps and, perhaps most importantly, announced its new artificial-intelligence-powered calling technology, dubbed Google Duplex (see a good summary of all the new products and features introduced at Google I/O 2018 here).

In the company’s own words:

“[Google Duplex is] a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed toward completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine.”


The recording of a phone call to a hair salon made by Duplex was so impressive that it even led some to question whether Google Duplex had passed the Turing test (hint: it hasn’t; at the very least, the person making the judgment has to be aware that she might be talking to a machine). It also sparked a heated conversation about whether it’s appropriate to use such technology without making people on the receiving end aware that they are not interacting with an actual human being, but rather are talking to a bot. While it might be hard to answer such questions definitively, we’ll probably see more of this discussion soon enough, since Google started rolling out Duplex to some of its smartphones in December.

Another interesting development arrived with the latest additions to Google’s Pixel line of smartphones (Pixel 3 & 3 XL), which came with some really impressive new camera capabilities enabled by AI (we’ll touch on this again later in this post, in the section dedicated to advancements in computational photography).

Finally, DeepMind Technologies, fully owned by Alphabet Inc., managed to achieve a major milestone with the latest iteration of its AlphaZero program.

We’d already seen the impressive achievements of AlphaGo and AlphaGo Zero in the game of Go in 2015 & 2016, when they handily won most games against two of the strongest Go champions; in 2018, however, DeepMind’s team managed to achieve something even more interesting — the newest AlphaZero engine demonstrated clear superiority over all of the strongest existing engines in chess, shogi, and Go.

What is particularly interesting about AlphaZero is that it managed to achieve this feat without studying any logs of games played by humans; instead, the program taught itself how to play all three games, provided with only the basic rules to start with. As it turns out, operating without the limitations that come from learning from previously played games resulted in AlphaZero adopting “a ground-breaking, highly dynamic and ‘unconventional’ style of play” that differed from anything seen before. That, in turn, makes the engine more useful to the community, which can learn new tactics by observing machine-developed strategies. It also creates the promise of real-world applications for this technology in the future, given AlphaZero’s ability to successfully learn from scratch and tackle perfect-information problems.
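The full AlphaZero system pairs a deep neural network with Monte Carlo tree search and is far too involved to reproduce here, but the core idea described above (a program improving purely by playing against itself, starting from nothing but the rules) can be sketched in miniature. Below is a toy illustration using tic-tac-toe and a simple tabular value function, a deliberate stand-in for AlphaZero’s network and search; all constants are arbitrary choices:

```python
import random
from collections import defaultdict

# All eight winning lines on a 3x3 board, indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' for a win, 'draw' for a full board, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

values = defaultdict(float)   # state -> estimated value from X's perspective
EPSILON, ALPHA = 0.2, 0.1     # exploration and learning rates (arbitrary)

def choose_move(board, player):
    """Mostly-greedy move selection with a little exploration."""
    moves = [i for i, s in enumerate(board) if s == "."]
    if random.random() < EPSILON:
        return random.choice(moves)
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        return values[nxt] if player == "X" else -values[nxt]
    return max(moves, key=score)

def self_play_episode():
    """Play one game against ourselves; pull visited states toward the result."""
    board, player, history = "." * 9, "X", []
    while winner(board) is None:
        m = choose_move(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for state in history:
        values[state] += ALPHA * (result - values[state])

for _ in range(20_000):   # no human game logs anywhere; only the rules above
    self_play_episode()

print("learned value of X opening in the center:", round(values["....X...."], 2))
```

Even this crude loop discovers sensible play on its own; AlphaZero replaces the lookup table with a deep network and the greedy move choice with tree search, which is what lets the same self-play recipe scale to Go, chess, and shogi.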

DeepMind is also working toward systems that can deal with imperfect-information problems, as demonstrated by its recent success with AlphaStar, which beat a few professional players in StarCraft II (in the past, AI had struggled to play StarCraft successfully due to the game’s complexity).

Microsoft

Julia White, Corporate Vice President, Microsoft Azure Marketing, speaks at the Conversations on AI event in San Francisco. Photo by John Brecher for Microsoft

Like Google, Microsoft played the AI game at full throttle in 2018, launching new AI-related products and services and improving some of the underlying technologies. A significant part of this work was community-centric, focusing on providing better tools and functionality for developers to build AI-powered solutions on Microsoft’s cloud platform.

Interestingly enough, Microsoft’s key developer conference, Build, also happens in May, just as Google’s does. In 2018, it was a packed event for Microsoft, with the company making a significant number of new announcements and, in particular, announcing Project Brainwave’s integration with Azure Machine Learning.

Project Brainwave (initially dubbed Catapult) was the result of several years of research that started at Bing back in 2010. Brainwave was announced to the community in August 2017 at Hot Chips, one of the top semiconductor conferences. In short, Brainwave is a hardware platform built on FPGA chips and designed to accelerate real-time AI calculations, a critical capability for services like search engines (which also explains why the project grew out of Bing). Now, with Brainwave integrated into Azure Machine Learning, Microsoft claims Azure is the most efficient cloud platform for AI.

At Ignite, another big conference, held in Orlando last September, Microsoft released the Cortana Skills Kit for Enterprise, an exciting attempt to bring AI-based assistants into the office space — think of programming a bot to schedule a cleaning service for the office, or to automatically submit a ticket to the help desk from a brief voice command.

A few days later, Microsoft also announced the integration of a real-time translation feature into SwiftKey, an Android keyboard app Microsoft acquired back in 2016. Finally, at the end of September, following Google Duplex’s lead, Microsoft released its Speech Services tool, introducing improved text-to-speech synthesis capabilities.

Later in November came another series of interesting announcements, such as Cognitive Services Containers. Cognitive Services allow developers to leverage AI in their apps without having to be experts in data science or possess extensive AI-related knowledge. The container story, in turn, is focused on edge computing — the idea that data doesn’t need to be sent to the cloud for processing but can instead be handled locally, reducing latency and, in many cases, optimizing costs. With Cognitive Services in containers, Microsoft’s customers can now build applications that run AI at edge locations.
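To make the edge idea concrete, here is a minimal sketch of what querying a locally hosted Cognitive Services container might look like. The port, URL path, and payload shape below are illustrative assumptions rather than the exact contract of any particular container image:

```python
import requests

# A Cognitive Services container started via Docker listens on a local port,
# so inference requests never have to leave the premises.
LOCAL_ENDPOINT = "http://localhost:5000/text/analytics/v2.1/sentiment"  # assumed

payload = {"documents": [
    {"id": "1", "language": "en",
     "text": "Running AI at the edge keeps the data local and the latency low."},
]}

# Same request shape as the corresponding cloud API, but no round trip to
# Azure: the model runs wherever this container runs.
response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=5)
response.raise_for_status()
print(response.json())  # e.g. sentiment scores computed entirely on-premises
```

The appeal of this design is that the application code stays identical whether the endpoint points at Azure or at a container running in a factory, a hospital, or anywhere else without reliable connectivity.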

Investments

Top 100 startups in AI, from CB Insights

Investments in the AI space have been booming lately, although, as Crunchbase reasonably called out, it can be hard to estimate by exactly how much. CB Insights has built a good infographic of the AI space, slicing and dicing the top startups by category in this article. We see two major takeaways here. First, the largest rounds in the AI industry in 2018 were raised by Chinese companies such as SenseTime and Face++ ($1.6 billion and $0.6 billion, respectively). Second, of the 11 unicorns existing today, which together are estimated to be worth over $20 billion, 5 are from China, contributing up to 50% of the total valuation, with SenseTime leading the group at a stunning $4.5 billion valuation. This underscores a critical point: China seems to be moving at a faster pace than other countries, and with its increasingly large footprint, it is now emerging as a powerhouse in the AI field. (For additional details, check out this summary outlining the various national AI strategies countries around the world are pursuing today.)

Ethics, regulation & education

Deep fakes controversy

AI-generated fake clips of President Obama’s speeches, from The Verge

In December 2017, Motherboard published a story about a Reddit user going by the name ‘deepfakes’ who had been posting hardcore porn videos featuring the faces of celebrities mapped onto the bodies of porn stars. While not perfect, the videos were quite believable, especially considering they were made by a single person. Although Reddit soon banned the videos, the discussion about the legality and potential misuses of this technology has only been heating up ever since.

The tech behind creating fake videos by swapping actors’ faces has been around for a while, yet the ease of creation and the quality definitely reached a new level in 2018 (see another example of what a single tech-savvy user can achieve here). Making fake porn videos, while disconcerting, might still be relatively harmless; but as we recently saw, the same technology can be used to generate fake news or create false propaganda materials (or make President Barack Obama say things he’d never say, at least not in public), which could have serious repercussions for us as a society. Can anything be done about it? That remains to be seen, but the fact is, deep fakes are here to stay and are likely only to get more difficult to distinguish from the real thing.

AI biases

Photo from Microsoft’s blog post “Facial recognition: It’s time for action”

In the last few years, both supervised learning and approaches that don’t rely on labeled data have been producing some exciting results (DeepMind’s AlphaZero, which learned entirely through self-play, is one example of the latter). Still, a large number of real-world applications require training models on labeled data (which, incidentally, is one of the key issues often holding back further progress).

However, having a large dataset of labeled data to train the model on isn’t quite the same as having a good dataset. Neural networks relying on supervised learning are only as good as the data they are trained on, so if the underlying dataset has any flaws (such as overrepresenting one characteristic at the expense of others), chances are the network will pick up those biases and further amplify them. This might not sound too bad in theory, but only until we consider the possible issues stemming from it in real-world applications — and as we’ve seen in 2018, those can be quite profound.
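To make the mechanism concrete before turning to the real-world cases below, here is a self-contained toy sketch (with entirely made-up groups and numbers) of how a model trained on data dominated by one group can end up with a much higher error rate on the underrepresented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data; the true class boundary sits at a different
    place for each group, mimicking group-dependent feature statistics."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 95% of examples come from group A, only 5% from group B.
Xa, ya = make_group(9500, shift=0.0)
Xb, yb = make_group(500, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate = {1 - model.score(X_test, y_test):.1%}")
```

The classifier fits the majority group well while failing far more often on the minority one; not out of any malicious intent, but simply because the training data told it far more about one group than the other.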

For instance, a study by Joy Buolamwini, a researcher at the M.I.T. Media Lab, demonstrated that the leading face recognition systems from Microsoft, IBM, and Megvii misclassified the gender of only 1% of white males, but made mistakes on up to 35% of darker-skinned females. The reason? Those models were trained on a biased dataset that contained a larger proportion of photos of white males, and thus got progressively better at correctly recognizing their gender. Considering that face recognition tech is now increasingly used by law enforcement, and that African Americans are the most likely to be singled out because they are disproportionately represented in mug-shot databases, such discrepancies in performance can have a very significant negative impact.

Another famous example of the same issue made public in 2018 involved Amazon’s internal AI-powered recruiting tool. Amazon intended to leverage machine learning to make its recruiting processes more efficient and, potentially, to automate some of the steps altogether. Unfortunately, as it turned out, the tool was trained on the resumes of people who had previously applied to the company, the majority of whom were male. As a result, the model picked up this bias and taught itself to downgrade female candidates, favoring applications that used “masculine language” instead. Amazon eventually scrapped the tool, but there are plenty of other companies trying to leverage AI in their recruiting processes whose models might have similar flaws.

Today, an increasing number of people (and companies) are calling on the authorities to devise regulatory frameworks to govern the use of face recognition. Will it happen anytime soon? That remains to be seen, but chances are that at least some level of oversight is coming.

Uber’s self-driving car kills pedestrian in Arizona

Uber Volvo XC90 autonomous vehicle, image from MIT Technology Review article

Even the greatest technology is unfortunately bound to occasionally make mistakes when operating in complex, non-deterministic environments. And thus, on March 18, 2018, the thing that was eventually bound to happen happened: an autonomous vehicle belonging to Uber hit and killed a pedestrian in Tempe, Arizona. The accident forced the company to suspend all tests of its driverless cars and re-examine both its processes and its tech; it also sparked a heated discussion about the current state of the technology behind self-driving cars, as well as the ethical and regulatory challenges that need to be addressed if autonomous vehicles are to gain wider acceptance from the general public anytime soon.

Nine months later, Uber was allowed to resume tests of its autonomous cars in Pittsburgh, followed by San Francisco and Toronto in December, although in those cases, Uber’s self-driving vehicles remained restricted to “manual mode” (meaning the company would focus on exposing the cars’ software to new circumstances rather than running active tests). To get back into the good graces of the authorities, Uber had to agree, among other things, to additional restrictions on the types of roads and conditions where it was allowed to operate its autonomous vehicles. Moreover, Uber had to introduce more rigorous training for its drivers (a critical piece, as the investigation of the fatal March accident demonstrated that the driver was distracted and wasn’t paying attention to the road, as he was supposed to), who are now called “mission specialists”. Finally, the company introduced a third-party driver monitoring system and made additional improvements to its tech.

Still, it seems very unlikely that we’ve seen the end of the discussion about public safety and the necessary regulations for autonomous vehicles; rather, Uber’s unfortunate accident has only fueled the ongoing debate. We’ll see what 2019 brings; one thing we can be certain of, however, is that the next 2-3 years will likely prove critical in shaping public opinion on self-driving cars.

For those who are curious about the history and the current state of the autonomous vehicles, we suggest checking out this in-depth guide to self-driving cars from Wired.

MIT invests $1 billion in new AI college

Photo: MIT Dome, by Christopher Harting

On October 15, 2018, MIT announced the creation of a new college, the MIT Stephen A. Schwarzman College of Computing, named after the co-founder and CEO of Blackstone, who made the foundational gift of $350 million. The new college will focus on addressing the global opportunities and challenges presented by the rise of artificial intelligence.

MIT already boasts a very strong reputation in the field (not to mention that its efforts in the AI space can be traced back to the very beginnings of the field in the late 1950s). Still, it’s hard to overestimate the importance of this latest development — for instance, Schwarzman’s gift will allow for the creation of an additional 50 faculty positions dedicated to AI research, effectively doubling the number of researchers focused on computing & AI at MIT.

The emphasis on cross-disciplinary collaboration, as well as on research into relevant policy and ethics to ensure the responsible implementation of AI, is also noteworthy here — while we’ve seen a number of think tanks and research initiatives focused on these topics created in the last few years, it’s great to see MIT’s commitment, as there’s still much more work to do on the subject.

AI applications: computational photography

Image generated by Prisma app, from Business Insider

Computational photography, in the broadest sense, is the area where AI has delivered perhaps the most noticeable advancements in the last few years, at least from the consumer’s perspective. And while there was plenty of progress in this field in previous years (such as Google Photos learning how to automatically tag and categorize photos in 2015, or the iPhone 7 gaining the capability to automatically blur the background of photos taken in portrait mode in 2016), in 2018 we saw a number of particularly impressive technological feats make it into mass products.

Features like Google Pixel’s Night Sight mode, or the Smart HDR capabilities available on the iPhone XS and XS Max, are just a few examples of what has been made possible through the use of machine intelligence. What’s perhaps even more interesting is that these new capabilities clearly demonstrate the ability of AI to enable improvements that extend beyond the physical limitations of the cameras, setting the entire field on an exciting new path. As a result, computational photography has already proved its value both to those familiar with other advancements in the AI space and to users far removed from the field.

Another application of computational photography uses a neural network to completely rework an image, adjusting the output to look like the artwork of famous artists such as Van Gogh or Monet (see, for example, the Prisma app). Similar concepts are used across machine vision and benefit applications such as driverless cars.
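For the curious, here is a hedged sketch of the classic optimization-based style transfer recipe (in the spirit of Gatys et al.) that popularized this effect. It is an illustrative outline rather than Prisma’s actual implementation; the image paths, layer choices, and loss weight are all assumptions, and the weights API requires a recent torchvision:

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tfm = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tfm(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("content.jpg")  # the photo to repaint (placeholder path)
style = load_image("style.jpg")      # e.g. a Van Gogh painting (placeholder path)

# A pretrained VGG-19 serves as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Layer indices chosen for illustration: early/mid layers for style, one deeper
# layer for content.
LAYERS = {1: "style", 6: "style", 11: "style", 20: "style", 22: "content"}

def features(x):
    feats = {"style": [], "content": []}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats[LAYERS[i]].append(x)
        if i == max(LAYERS):
            break
    return feats

def gram(f):
    # The Gram matrix captures correlations between feature channels:
    # a texture-like summary that serves as the "style" signature.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_content = [f.detach() for f in features(content)["content"]]
target_style = [gram(f).detach() for f in features(style)["style"]]

# Optimize the pixels of a copy of the content image directly.
image = content.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    feats = features(image)
    c_loss = sum(F.mse_loss(f, t) for f, t in zip(feats["content"], target_content))
    s_loss = sum(F.mse_loss(gram(f), t) for f, t in zip(feats["style"], target_style))
    loss = c_loss + 1e4 * s_loss   # style weight is a hand-tuned assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Apps like Prisma don’t run this slow per-image optimization in production; fast feed-forward networks are typically trained to approximate the same effect. But the content-plus-style loss above is the core idea that made “repaint my photo as a Monet” possible.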

We will cover more of the specific technologies that have recently seen significant advancements, such as large-scale GANs and video-to-video synthesis, in Evolution One’s next article, “Key recent developments in machine intelligence”, a focused deep dive into some of today’s hottest areas of artificial intelligence, like natural language processing and computer vision.

 

Evolution One: Beginnings

Over the last few years, the emergence of machine intelligence in all spheres of human life has become impossible to ignore. Today, artificial intelligence powers the thermostats and voice assistants in our homes and phones, suggests the best routes while we drive, and makes the pictures we take look better. Moreover, we ourselves increasingly leverage various AI capabilities to augment our work and daily lives and become more productive. Many things we take for granted, like receiving suggestions for similar items while shopping, translating texts, or simply searching the web, would not be possible without powerful machine learning algorithms running in the backend.

Still, while there have been a lot of exciting advancements in the AI space, or maybe exactly because of how quickly it’s been evolving, it remains hard to familiarize oneself with, or stay up to date on, all the latest developments. And while there is already a multitude of AI-related resources on the web, a comprehensive taxonomy of the industry, sliced by key products, people, institutions, and technologies, has yet to be developed. We believe that creating such a taxonomy would go a long way toward making it easier for industry professionals, technology evangelists, or simply anyone interested in learning more about machine intelligence to navigate the field.

At Evolution One, we are inspired by this idea of building a comprehensive guide that would aggregate and structure the body of knowledge related to the industry and serve the AI community. In this work, we focus on three areas: developing a clear-cut taxonomy of the field, constantly monitoring recent developments, and conducting focused deep dives into specific topics.

Coming next:

Monthly newsletter, featuring editorial opinions on the top highlights, structured by category (Products, People, Institutions, Technologies); first issue coming at the end of February 2019

Articles:

  • “Reflections on the State of AI: 2018”; coming February 9th, 2019

Taxonomy:

  • Key People & Technologies: we start building our taxonomy by mapping key contributors to the key technologies in computer vision (CV) and natural language processing (NLP)

  • Products & Institutions: coming later