One weird trick for parallelizing convolutional neural networks

2018-01-10 10:26

I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
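The method builds on plain data parallelism: each GPU holds a full copy of the model, processes a shard of the minibatch, and the per-shard gradients are averaged (all-reduced) before every weight update. Below is a minimal sketch of that baseline step, using NumPy arrays to stand in for GPUs and a linear least-squares model in place of a convolutional network; `worker_gradient` and `data_parallel_step` are illustrative names, not APIs from the paper.

```python
import numpy as np

def worker_gradient(w, x, y):
    # Each "GPU" computes the mean squared-error gradient on its own shard.
    pred = x @ w
    return 2 * x.T @ (pred - y) / len(x)

def data_parallel_step(w, x, y, n_workers, lr=0.1):
    # Data parallelism: split the minibatch across workers, compute local
    # gradients, then average them (an all-reduce) before the update.
    xs = np.array_split(x, n_workers)
    ys = np.array_split(y, n_workers)
    grads = [worker_gradient(w, xi, yi) for xi, yi in zip(xs, ys)]
    return w - lr * np.mean(grads, axis=0)
```

With equal-sized shards, the averaged gradient is identical to the full-batch gradient, so the parallel step reproduces single-GPU SGD exactly; the paper's contribution is in how this is combined with model parallelism for the fully-connected layers.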
