
Distributed deep learning

  Time: November 13, 2018, 14:30-16:00

  Venue: Conference Room 446

  Speaker: Dr. Wei Zhang, IBM T.J. Watson Research Center

  Host: Yungang Bao (包云崗)

  Abstract:

  Deep learning is a powerful machine learning tool that achieves promising results in image classification, natural language processing, speech recognition, and many other application domains. Deep learning is particularly useful when the training data are abundant and the model has many parameters, and it therefore demands massive computing resources (e.g., HPC clusters). How to efficiently use computation hardware at a large scale to solve the deep learning optimization problem is a fundamental research topic. In this talk, I will first present the fundamentals of distributed deep learning algorithms, and then several lessons we learned over the past three years of research into building large-scale deep learning systems. The covered topics include (i) our work on the tradeoff between model accuracy and runtime performance, (ii) how to build scale-up multi-GPU systems in a training-as-a-service scenario on the cloud, and (iii) how to build scale-out systems that run at the scale of hundreds of GPUs on HPC machines. The resulting systems typically shorten training time from weeks to hours while maintaining or improving baseline model accuracy. The lessons and experiences are drawn from several real-world systems: IBM's Natural Language Classifier (NLC), one of IBM's most widely used cognitive services; IBM's Speech to Text (STT) service, the key speech recognition technology behind IBM's Jeopardy and the recent Debater projects; and our experience running these systems on CORAL machines (i.e., the precursor of IBM's Summit supercomputer, the fastest HPC machine in the world).
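
  The synchronous data-parallel scheme that underlies most large-scale training systems of this kind can be sketched briefly. The example below is not material from the talk: it is a minimal illustration using PyTorch's DistributedDataParallel (the speaker's systems are not necessarily PyTorch-based), with a placeholder linear model, random data, and arbitrary hyperparameters. Each worker holds a model replica, computes gradients on its own data shard, the gradients are averaged across workers with an all-reduce, and every replica applies the same update.

  # Minimal sketch of synchronous data-parallel SGD (illustrative only; not the
  # speaker's implementation). One process drives one GPU; NCCL performs the
  # all-reduce that averages gradients across replicas during backward().
  import os
  import torch
  import torch.distributed as dist
  from torch.nn.parallel import DistributedDataParallel as DDP

  def train(rank: int, world_size: int):
      dist.init_process_group("nccl", rank=rank, world_size=world_size)
      torch.cuda.set_device(rank)  # assumes a single node, one GPU per rank

      model = torch.nn.Linear(1024, 10).cuda(rank)   # placeholder model
      ddp_model = DDP(model, device_ids=[rank])      # hooks gradient all-reduce
      optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
      loss_fn = torch.nn.CrossEntropyLoss()

      for step in range(100):
          # Random stand-in data; a real job would give each rank a disjoint
          # shard (e.g., via a DistributedSampler).
          x = torch.randn(32, 1024, device=rank)
          y = torch.randint(0, 10, (32,), device=rank)

          optimizer.zero_grad()
          loss = loss_fn(ddp_model(x), y)
          loss.backward()    # gradients are averaged across all ranks here
          optimizer.step()   # every replica applies the identical update

      dist.destroy_process_group()

  if __name__ == "__main__":
      # Typically launched with `torchrun --nproc_per_node=<num_gpus> script.py`,
      # which sets RANK and WORLD_SIZE (and rendezvous variables) in the environment.
      train(int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"]))

  Because every replica sees the averaged gradient, this is mathematically equivalent to large-batch SGD with a global batch of (per-GPU batch × number of GPUs), which is the source of the accuracy-versus-runtime tradeoff the abstract refers to.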

  Speaker Bio:

  Dr. Wei Zhang (B.Eng.'05, Beijing University of Technology; M.Sc.'08, Technical University of Denmark; Ph.D.'13, University of Wisconsin-Madison) is a research staff member at IBM T.J. Watson Research Center, where he currently works in the machine learning acceleration department. His research interests include systems and large-scale optimization. His recent work on distributed deep learning has been published at ICDM (2016, 2017), IJCAI (2016, 2017), MASCOTS (2017), DAC (2017), AAAI (2018), NIPS (2017, 2018), and ICML (2018). His work was the ICDM'16 best paper award runner-up and a MASCOTS'17 best paper nominee. His NIPS'17 and ICML'18 papers were both invited for 20-minute oral presentations at their respective conferences. Prior to joining IBM, he studied under Prof. Shan Lu at UW-Madison, focusing on the reliability of concurrent software systems. While at Wisconsin, he published papers at ASPLOS (2010, 2011, 2013), PLDI (2011), OSDI (2012), and OOPSLA (2013). His PLDI'11 paper won the SIGPLAN Research Highlights Award.
