Best AI Papers of 2020 Broach GPT-3 Large Language Model Concerns

By AI Trends Staff  

The Best AI Papers of 2020 were compiled in a list posted on GitHub, where the author provides, for each paper, a link to a video explanation, a link to a more in-depth article, and code.

Louis-Francois Bouchard, AI research scientist

“In the field of AI, many important aspects were highlighted this year, like the ethical aspects and important biases,” stated Louis-Francois Bouchard of Quebec, Canada, a self-described master's student, AI research scientist, and speaker, in the list posted on GitHub. “Artificial intelligence and our understanding of the human brain and its link to AI is constantly evolving, showing promising applications,” he states.

Here is a video summary of the best AI papers of 2020, and here are selected highlights:

YOLOv4: Optimal Speed and Accuracy of Object Detection 

The main goal of Alexey Bochkovsky and his coauthors in the paper  “YOLOv4: Optimal Speed and Accuracy of Object Detection” is to make a super-fast object detector with high quality and accuracy.  

Many features are said to improve Convolutional Neural Network (CNN) accuracy, but they require testing on large datasets. Some features work only for certain models, certain problems, or small-scale datasets, while others, such as batch normalization and residual connections, are applicable to the majority of models, tasks, and datasets. Results included a real-time speed of roughly 65 frames per second (FPS) on a Tesla V100.
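For readers who want to try the released detector, here is a minimal sketch of running a pretrained YOLOv4 model through OpenCV's DNN module (OpenCV 4.4 or later). It assumes the official yolov4.cfg and yolov4.weights files have been downloaded from the darknet repository and that a local test image exists; it is an illustration, not the authors' training code.

```python
import cv2

# Load the network from the official config/weights (assumed downloaded locally).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
# YOLOv4 expects square inputs (e.g. 416x416) scaled to [0, 1], in RGB order.
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("street.jpg")  # any local test image (hypothetical filename)
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.5)

for class_id, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = map(int, box)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"class {int(class_id)}: {float(score):.2f}")

cv2.imwrite("street_detections.jpg", image)
```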

The authors also introduced two new methods of data augmentation, Mosaic and Self-Adversarial Training (SAT).
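As an illustration of the Mosaic idea, the sketch below stitches four training images into a single sample around a randomly placed center point, so each sample exposes the detector to objects at varied scales and contexts. A real detection pipeline would also remap and clip the bounding-box labels (omitted here); the function name and default output size are arbitrary.

```python
import random
import numpy as np

def mosaic(images, out_size=640):
    """Combine four HxWx3 uint8 images into one out_size x out_size mosaic."""
    assert len(images) == 4
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)  # gray padding
    cx = random.randint(out_size // 4, 3 * out_size // 4)  # random mosaic center
    cy = random.randint(out_size // 4, 3 * out_size // 4)
    regions = [  # (x0, y0, x1, y1) corners of the four quadrants around the center
        (0, 0, cx, cy), (cx, 0, out_size, cy),
        (0, cy, cx, out_size), (cx, cy, out_size, out_size),
    ]
    for img, (x0, y0, x1, y1) in zip(images, regions):
        h, w = y1 - y0, x1 - x0
        # Nearest-neighbor resize via index arrays; a real pipeline would use cv2.resize.
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        canvas[y0:y1, x0:x1] = img[ys][:, xs]
    return canvas
```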

The authors are Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao.

 

DeepFaceDrawing: Deep Generation of Face Images from Sketches  

Researchers at the Institute of Computing Technology, Chinese Academy of Sciences, did a study on generating photo-realistic face images from rough sketches, with zero drawing skills required.

“Our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch,” the authors state. “Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches,” they add.  
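To make the "soft constraint" idea more concrete, here is a conceptual, heavily simplified sketch: a rough sketch embedding is pulled toward the manifold of plausible faces by blending its nearest neighbors from a bank of embeddings computed on sketches of real faces. The actual method works per facial component and feeds the refined features to an image-synthesis network; the names encode, feature_bank, and decode_to_image below are hypothetical placeholders.

```python
import numpy as np

def project_to_manifold(query, feature_bank, k=5):
    """Replace a rough feature vector with a distance-weighted blend of its k nearest neighbors."""
    dists = np.linalg.norm(feature_bank - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)   # closer neighbors weigh more
    weights /= weights.sum()
    return (weights[:, None] * feature_bank[nearest]).sum(axis=0)

# Usage sketch (all names hypothetical):
# rough = encode(input_sketch)                        # embed the user's rough sketch
# refined = project_to_manifold(rough, feature_bank)  # pull it onto the plausible-face manifold
# face = decode_to_image(refined)                     # synthesize a photo-realistic face
```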

Here is a video demonstration of the deep face drawing technology.   

The authors are Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu.

GPT-3: Language Models are Few-Shot Learners  

Current state-of-the-art natural language processing (NLP) systems struggle to generalize across different tasks: they need to be fine-tuned on datasets of thousands of examples, while humans need to see only a few examples to perform a new language task. This was the goal behind GPT-3: to improve the task-agnostic characteristics of language models.
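As a concrete illustration of few-shot prompting, the snippet below shows the kind of input GPT-3 receives: a short task description plus a handful of demonstrations, with the model asked to complete the final line. No gradient updates are involved; the generate call is a hypothetical stand-in for whatever language-model API is used.

```python
# A few-shot prompt: the task and a few demonstrations are given in-context.
few_shot_prompt = """Translate English to French.

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""

# completion = generate(few_shot_prompt)   # hypothetical API; expected completion: " fromage"
```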

[...]

Source - Continue Reading: https://www.aitrends.com/ai-research/best-ai-papers-of-2020-broach-gpt-3-large-language-model-concerns/
