Artificial intelligence continues to advance at a rapid pace. Even in 2020, a year that did not lack compelling news, AI advances commanded mainstream attention on multiple occasions. OpenAI’s GPT-3, in particular, showed new and surprising ways we may soon be seeing AI penetrate daily life. Such rapid progress makes prediction about the future of AI somewhat difficult, but some areas do seem ripe for breakthroughs. Here are a few areas in AI that we feel particularly optimistic about in 2021.


Transformers

Two of 2020’s biggest AI achievements quietly shared the same underlying AI structure. Both OpenAI’s GPT-3 and DeepMind’s AlphaFold are based on a sequence processing model called the Transformer. Although Transformer structures have been around since 2017, GPT-3 and AlphaFold demonstrated the Transformer’s remarkable ability to learn more deeply and quickly than the previous generation of sequence models, and to perform well on problems outside of natural language processing.


Unlike prior sequence modelling structures such as recurrent neural networks and LSTMs, Transformers depart from the paradigm of processing data sequentially. They process the whole input sequence at once, using a mechanism called attention to learn what parts of the input are relevant in relation to other parts. This allows Transformers to easily relate distant parts of the input sequence, a task that recurrent models have famously struggled with. It also allows significant parts of the training to be done in parallel, better leveraging the massively parallel hardware that has become available in recent years and greatly reducing training time. Researchers will undoubtedly be looking for new places to apply this promising structure in 2021, and there’s good reason to expect positive results. In fact, in 2021 OpenAI has already modified GPT-3 to generate images from text descriptions. The Transformer looks ready to dominate 2021.
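The attention mechanism at the core of the Transformer can be sketched in a few lines. This is a minimal illustration of scaled dot-product self-attention using NumPy, not any particular library's implementation; the toy dimensions are arbitrary:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every
    other position at once, rather than stepping through the sequence."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance, shape (seq, seq)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of value vectors

# Toy sequence: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = attention(X, X, X)  # self-attention: queries, keys, values all from X
print(out.shape)  # (4, 8)
```

Note that the `scores` matrix directly relates every token to every other token, including distant pairs, and the matrix products parallelize trivially; these two properties are exactly the advantages over recurrent models described above.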


Graph neural networks

Many domains have data that naturally lend themselves to graph structures: computer networks, social networks, molecules/proteins, and transportation routes are just a few examples. Graph neural networks (GNNs) enable the application of deep learning to graph-structured data, and we expect GNNs to become an increasingly important AI method in the future. More specifically, in 2021, we expect that methodological advances in a few key areas will drive broader adoption of GNNs.


Dynamic graphs are the first area of importance. While most GNN research to date has assumed a static, unchanging graph, the scenarios above necessarily involve changes over time: For example, in social networks, members join (new nodes) and friendships change (different edges). In 2020, we saw some efforts to model time-evolving graphs as a series of snapshots, but 2021 will extend this nascent research direction with a focus on approaches that model a dynamic graph as a continuous time series. Such continuous modeling should enable GNNs to discover and learn from temporal structure in graphs in addition to the usual topological structure.
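The contrast between snapshot-based and continuous-time modeling comes down to data representation. A minimal sketch (node names and timestamps are illustrative): a dynamic graph stored as a stream of timestamped edge events supports queries at any instant, where a snapshot series would only support its fixed sample times:

```python
# A dynamic graph as a continuous stream of timestamped edge events,
# sorted by time, rather than a series of discrete snapshots.
events = [
    (0.5, "alice", "bob"),    # friendship formed at t = 0.5
    (1.2, "bob",   "carol"),
    (2.7, "alice", "carol"),
]

def neighbors_at(events, node, t):
    """Reconstruct a node's neighborhood at an arbitrary query time t."""
    nbrs = set()
    for ts, u, v in events:
        if ts > t:
            break  # later events haven't happened yet
        if u == node:
            nbrs.add(v)
        elif v == node:
            nbrs.add(u)
    return nbrs

print(neighbors_at(events, "alice", 1.0))  # {'bob'}
print(neighbors_at(events, "alice", 3.0))  # {'bob', 'carol'}
```

A continuous-time GNN consumes this event stream directly, so the exact ordering and spacing of events (the temporal structure) is available to the model instead of being averaged away inside each snapshot window.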


Improvements to the message-passing paradigm will be another enabling advance. A common method of implementing graph neural networks, message passing is a means of aggregating information about nodes by “passing” information along the edges that connect neighbors. Although intuitive, message passing struggles to capture effects that require information to propagate across long distances on a graph. In 2021, we expect breakthroughs to move beyond this paradigm, such as by iteratively learning which information propagation pathways are the most relevant or even learning an entirely novel causal graph on a relational dataset.
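One round of message passing, and the long-distance limitation it implies, can be shown concretely. This is a minimal NumPy sketch of mean-aggregation message passing (no learned weights, which real GNN layers would add) on a path graph, where a signal at one end takes one full round per hop to reach the other end:

```python
import numpy as np

def message_passing_step(adj, features):
    """One round of message passing: each node averages its neighbors'
    feature vectors together with its own (mean aggregation)."""
    A = adj + np.eye(adj.shape[0])        # self-loops keep each node's own info
    deg = A.sum(axis=1, keepdims=True)
    return (A @ features) / deg

# Path graph 0-1-2-3: information must hop node by node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = np.array([[1.0], [0.0], [0.0], [0.0]])  # a signal at node 0 only

h1 = message_passing_step(adj, h)
h2 = message_passing_step(adj, h1)
# After one round, node 2 still knows nothing about node 0; after two
# rounds it does, but node 3 still doesn't. Reaching distance k costs
# k rounds, which is the scaling problem described above.
print(h1.ravel())
print(h2.ravel())
```

Learning which propagation pathways matter, as the paragraph above anticipates, aims to deliver distant information without paying one aggregation round per hop.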


Applications

Many of last year’s top stories highlighted nascent advances in practical applications of AI, and 2021 looks poised to capitalize on these advances. Applications that depend on natural language understanding, in particular, are likely to see advances as access to the GPT-3 API becomes more available. The API allows users to access GPT-3’s abilities without requiring them to train their own AI, an otherwise expensive endeavor. With Microsoft’s purchase of the GPT-3 license, we may also see the technology appear in Microsoft products as well.


Other application areas also appear likely to benefit substantially from AI technology in 2021. AI and machine learning (ML) have already made significant inroads into the cybersecurity space, and 2021 could steepen that trajectory. As highlighted by the SolarWinds breach, companies are coming to terms with impending threats from cybercriminals and nation-state actors, and with the constantly evolving configurations of malware and ransomware. In 2021, we expect an aggressive push of advanced behavioral analytics AI for augmenting network defense systems. AI and behavioral analytics are critical to help identify new threats, including variants of earlier threats.


We also expect an uptick in applications defaulting to running machine learning models on edge devices in 2021. Devices like Google’s Coral, which features an onboard tensor processing unit (TPU), are bound to become more widespread with advancements in processing power and quantization technologies. Edge AI eliminates the need to send data to the cloud for inference, saving bandwidth and reducing execution time, both of which are critical in fields such as health care. Edge computing may also open new applications in other areas that require privacy, security, low latency, and in regions of the world that lack access to high-speed internet.
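Quantization, one of the technologies mentioned above, is what makes many models small and fast enough for edge hardware. A minimal sketch of symmetric post-training quantization with NumPy (the layer shape and scale are arbitrary; production toolchains such as TensorFlow Lite handle this automatically):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map float32 weights to int8
    with a single scale covering the observed range [-max|w|, max|w|]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)  # a toy weight matrix
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, and the rounding error per
# weight is bounded by half the quantization step.
err = np.abs(dequantize(q, scale) - w).max()
print(q.dtype, err <= scale / 2 + 1e-8)
```

The 4x size reduction and integer arithmetic are what let accelerators like the TPU on Google's Coral run inference on-device, with the data never leaving it.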


The bottom line

AI technology continues to proliferate in practical domains, and advances in Transformer structures and GNNs are likely to spur advances in domains that haven’t yet readily lent themselves to existing AI techniques and algorithms. We’ve highlighted here several areas that seem ready for advancement this year, but there will undoubtedly be surprises as the year unfolds. Predictions are hard, especially about the future, as the saying goes, but right or wrong, 2021 looks to be an exciting year for the field of AI.


Ben Wiener is a data scientist at Vectra AI and has a PhD in physics and a variety of skills in related topics including computer modeling, optimization, machine learning, and robotics.

Daniel Hannah is a data scientist and researcher with more than 8 years of experience turning messy data into actionable insights. At Vectra AI, he works at the interface of artificial intelligence and network security. Previously, he applied machine learning approaches to anomaly detection as a fellow at Insight Data Science.

Allan Ogwang is a data scientist at Vectra AI with a strong math background and experience in econometrics, statistical modeling, and machine learning.

Christopher Thissen is a data scientist at Vectra AI, where he uses machine learning to detect malicious cyber behaviors. Before joining Vectra, Chris led several DARPA-funded machine learning research projects at Boston Fusion Corporation.

Source: https://venturebeat.com/2021/01/31/heres-where-ai-will-advance-in-2021/
