Popular Articles

How to Get Started with Deep Learning? A Guide for Beginners

AI科技大本营 2018-05-18 12:00
A guidance article written for newcomers to the field.

Stanford Course: Theories of Deep Learning (with link to watch online)

AI科技大本营 2018-05-16 16:45
Why has deep learning made such rapid progress in recent years? Beyond empirical results, we should also try to understand its development at the theoretical level. To that end, Stanford University launched a course called Theories of Deep Learning last fall.

What Mathematical Prerequisites Should You Prepare for Machine Learning?

AI科技大本营 2018-05-18 15:31
If you want to work in machine learning but have not used math in years, the mathematical sections of algorithm books can be hard to follow...

These Three Ordinary Programmers Successfully Switched to AI in a Few Months. Their Experience Is...

AI科技大本营 2018-04-03 14:44
With fresh graduates commanding annual salaries of 500,000 RMB and senior AI talent returning from overseas starting at 1,000,000 RMB, should ordinary programmers switch to AI engineering, and how should they go about it? AI科技大本营 has selected three especially typical cases of ordinary programmers who successfully made the switch, all highly upvoted on Zhihu. The first is an ordinary programmer's firsthand account of going from first contact with machine learning to real proficiency in six months. The second is an ordinary programmer who knew only ACM-competition algorithms, stumbled into data mining, and then began to systematically...

DJI RoboMaster Technical Director: How I Became a Robotics Engineer

AI科技大本营 2018-05-07 11:34
From freshman year through graduate school, the author summarizes what to study at each stage.

How to Get Started with Machine Learning in 3 Months with No Background?

AI科技大本营 2018-05-14 11:17
Highlights: 1. points out common self-study pitfalls; 2. does not pile on resource recommendations; 3. provides a realistic, workable study schedule; 4. offers advice for further study.

Entering the AI Field: How to Choose a Down-to-Earth Role?

TinyMind 2018-05-15 11:50
What kind of work actually counts as being in AI?

Fast.ai Practical Deep Learning Course, Lesson 0 [Chinese Subtitles] [Free to Watch]

AI科技大本营 2018-04-08 18:16
This lesson shows how to build state-of-the-art deep learning models without delving into advanced mathematics.

How to Become a Top-Tier Data Analyst?

AI科技大本营 2018-04-08 14:24
This article looks at data science and discusses how to become one of the legendary 10x practitioners.

[Competition Experience] A Beginner's Guide to Deep Learning: TinyMind Chinese Character Calligraphy Recognition from Scratch, by Link

Link 2018-04-20 14:13
Mostly code: how to complete a first submission to the TinyMind Chinese character calligraphy recognition competition from scratch.
Popular Papers
dwSun 2018-02-06 17:25

Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation

Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
Published: 2018-01-16

In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation. We evaluate the trade-offs between accuracy and the number of operations measured by multiply-adds (MAdd), as well as the number of parameters.
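
The inverted residual block described above (a 1x1 expansion with ReLU6, a 3x3 depthwise convolution with ReLU6, then a linear 1x1 projection back to a thin bottleneck, with a shortcut when stride is 1 and the widths match) translates naturally into code. Below is a minimal PyTorch sketch of such a block for illustration only; the class and parameter names are placeholders, not the authors' reference implementation.

import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    # Sketch of a MobileNetV2-style inverted residual block with a linear bottleneck.
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # The residual connection is only used when the block keeps resolution and width.
        self.use_residual = (stride == 1 and in_ch == out_ch)
        self.block = nn.Sequential(
            # 1x1 expansion to a wide intermediate representation.
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution filters features in the expanded space.
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # Linear 1x1 projection back to a thin bottleneck; no non-linearity here,
            # matching the paper's point about narrow layers.
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

# Example: a stride-1 block that keeps 32 channels, so the shortcut is active.
x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])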

dwSun 2018-01-15 20:43

Residual Attention Network for Image Classification

Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang
Published: 2017-04-23

In this work, we propose "Residual Attention Network", a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.
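
The attention residual learning mentioned in the abstract combines a trunk branch F(x) with a soft mask M(x) in [0, 1] as H(x) = (1 + M(x)) * F(x), so the mask modulates features without erasing the trunk signal. Below is a minimal PyTorch sketch of that combination rule; the trunk and bottom-up top-down mask branches here are drastically simplified placeholders of my own, not the paper's full Attention Module.

import torch
import torch.nn as nn

class AttentionModuleSketch(nn.Module):
    # Minimal sketch of attention residual learning: H(x) = (1 + M(x)) * F(x).
    def __init__(self, channels):
        super().__init__()
        # Trunk branch F(x): ordinary feature processing, reduced here to one conv.
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Mask branch M(x): bottom-up (downsample) then top-down (upsample),
        # ending in a sigmoid so the mask values lie in [0, 1].
        self.mask = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.trunk(x)
        m = self.mask(x)
        # The identity term keeps F(x) intact, which is what lets very deep
        # stacks of these modules remain trainable.
        return (1 + m) * f

# Example: input and output shapes match, as required for stacking modules.
x = torch.randn(1, 64, 32, 32)
print(AttentionModuleSketch(64)(x).shape)  # torch.Size([1, 64, 32, 32])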

Popular Algorithm Implementations