Invited Speakers


(13) Artificial Neural Networks Theories and Applications: Jinling Liang, Southeast University, China (Chair)


13.1. Speaker: Prof. Ping Guo, Beijing Normal University, China



Talk Title: Toward AutoML with the Pseudoinverse Learning Algorithm


Abstract: Considering neural network structure and learning algorithms together, we propose a non-gradient-descent learning scheme for deep feedforward neural networks (DNNs). As is well known, autoencoders can serve as the building blocks of the multi-layer perceptron (MLP) deep neural network, so the MLP is taken as an example to illustrate the proposed pseudoinverse learning algorithm for autoencoders (PILAE). It is worth noting that only a few network-structure hyperparameters need to be tuned; hence the proposed algorithm can be regarded as a quasi-automatic training algorithm that can be utilized in the automated machine learning research field.
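To make the closed-form idea concrete, here is a minimal, hypothetical sketch of one pseudoinverse-trained autoencoder layer; the random encoder initialization and tanh activation are simplifying assumptions, not details from the talk. The key point is that the decoder is obtained by a single least-squares solve rather than by gradient descent.

```python
import numpy as np

def pilae_layer(X, hidden_dim, rng=None):
    """One pseudoinverse-learned autoencoder layer (illustrative sketch).

    Encoder weights are random here; PILAE proper derives them from a
    (truncated) pseudoinverse of X. The decoder is solved in closed form
    with the Moore-Penrose pseudoinverse -- no gradient descent.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W_enc = rng.standard_normal((d, hidden_dim)) / np.sqrt(d)
    H = np.tanh(X @ W_enc)            # hidden representation
    W_dec = np.linalg.pinv(H) @ X     # least-squares optimal decoder
    recon_err = np.linalg.norm(H @ W_dec - X) / np.linalg.norm(X)
    return H, W_dec, recon_err
```

Stacking such layers, with each `H` fed as the input of the next layer, gives a deep network trained layer by layer without backpropagation, which is what makes the scheme quasi-automatic.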


Biography: Prof. Ping Guo is an IEEE Senior Member and CCF Senior Member with the School of Systems Science, Beijing Normal University, and a Ph.D. supervisor in computer software and theory at the Beijing Institute of Technology. He chairs the Key Laboratory of Graphics, Image and Pattern Recognition at Beijing Normal University and chaired the IEEE CIS Beijing Chapter (2015-2016). His research interests include computational intelligence theory and its applications in pattern recognition, image processing, software reliability engineering, and astronomical data processing. He has published more than 360 papers, holds 6 patents, and is the author of two books, "Computational Intelligence in Software Reliability Engineering" and "Image Semantic Analysis". He received the 2012 Beijing Municipal Government Award of Science and Technology (third rank) for work entitled "Regularization Method and its Application". Professor Guo received his master's degree in optics from the Department of Physics, Peking University, and his Ph.D. degree from the Department of Computer Science and Engineering, The Chinese University of Hong Kong. His personal home page: http://sss.bnu.edu.cn/~pguo.


13.2. Speaker: Prof. Deshuang Huang, Tongji University, China

Talk Title: Motif Prediction and Analyses in DNA Sequences by Deep Neural Networks


Abstract: Recent biological studies have shown that binding-site motif mining plays a crucial role in the transcription phase of gene expression, so the study of motifs helps to understand complex biomolecular systems and to explain disease pathogenesis. How to carry out in-depth research on motifs through computational methods has long been one of the core issues in modeling gene regulation in living systems. In this talk, I will first present the fundamental problem of motif prediction in DNA sequences, and then systematically treat it in combination with the popular emerging technology of deep neural networks. Firstly, several classical deep neural network models and the research status of DNA sequence motif prediction will be briefly introduced. Secondly, the existing shortcomings of deep-learning-based motif prediction will be discussed, and correspondingly a variety of improved motif prediction methods will be presented, including a high-order convolutional neural network architecture, a weakly-supervised convolutional neural network architecture, a deep-learning-based sequence-plus-shape framework, and a bidirectional recurrent neural network for DNA motif prediction. Finally, some new research problems in this area will be pointed out and reviewed.


Biography: Prof. De-Shuang Huang is a Chaired Professor in the Department of Computer Science and Director of the Institute of Machine Learning and Systems Biology at Tongji University, China. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a member of the Board of Governors of the International Neural Network Society (INNS). His main research interests include neural networks, pattern recognition, and bioinformatics.


13.3. Speaker: Prof. Zhanshan Wang, Northeastern University, China

Talk Title: Stability Analysis of Neural Dynamical Networks with Time-Delays


Abstract: Neural dynamical networks are a kind of optimization model that can be used to solve a class of optimization problems. In general, a stable equilibrium point of the model corresponds to an optimal solution of the problem concerned; therefore, establishing sufficient conditions for the existence and stability of equilibrium points of neural dynamical networks is a fundamental problem. Neural networks with time delay bring new insights for such optimization applications, so establishing stability conditions and reducing the conservativeness of existing results has become an active research branch. In this talk, how to use the information on the time delay both in the construction of a Lyapunov-Krasovskii functional (LKF) and in the calculation of its derivative will be introduced. In particular, the delay decomposition method, the flexible terminal method, and multiple-integral-based LKF methods for neural networks with time delays will be presented. Then the relationship between stability and synchronization/consensus will be discussed for the collective dynamics of interconnected neural networks, through which one can trace how the stability analysis of a single dynamical system evolves.
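As generic background (standard textbook forms, not the talk's specific constructions), a delayed neural network model and the simplest LKF candidate can be written as:

```latex
% Hopfield-type neural network with a constant delay \tau
\dot{x}(t) = -C x(t) + A f(x(t)) + B f(x(t-\tau))

% A basic Lyapunov--Krasovskii functional candidate
V(x_t) = x^{\top}(t) P x(t)
       + \int_{t-\tau}^{t} x^{\top}(s)\, Q\, x(s)\,\mathrm{d}s,
\qquad P \succ 0,\; Q \succ 0
```

Requiring \(\dot{V}(x_t) < 0\) along trajectories yields delay-dependent linear matrix inequality (LMI) conditions; the methods discussed in the talk refine how the delay information enters \(V\) and its derivative to reduce conservativeness.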


Biography: Prof. Zhanshan Wang is currently with the College of Information Science and Engineering, Northeastern University, Shenyang, China. He received the Ph.D. degree in control theory and control engineering from Northeastern University in 2006, and has been a Professor there since 2010. He has authored or coauthored more than 150 journal and conference papers and 6 monographs, and holds ten Chinese patents. His research interests include stability theory, neural network theory, learning control, fault diagnosis, fault-tolerant control, nonlinear control theory, and their applications in smart grids. He received the Excellent Doctoral Dissertation Tutor Award of the Chinese Association of Automation in 2018, a nomination for the 100 Excellent Doctoral Dissertations in China in 2009, and the Excellent Doctoral Dissertation award of Liaoning Province in 2008. He was selected for the Ministry of Education's Supporting Plan for Excellent Talents in the New Century in 2010 and named an Excellent Postdoctoral Student of Liaoning Province in 2010. He was an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems and is currently a member of the editorial board of Acta Automatica Sinica.


13.4. Speaker: Prof. Jian Wang, China University of Petroleum (East China), China

Talk Title: Group Sparse Neural Networks: Structure Optimization and Fault Tolerant Learning


Abstract: Large-scale, high-dimensional problems have increased sharply, especially in the computational intelligence research area. Deep neural network learning models are widely employed to tackle tasks such as image processing, commercial prediction, and the interpretation of industrial data, and newly built high-performance supercomputers can handle these big-data problems to some extent. Nevertheless, success still depends strongly on an effective network structure and an efficient learning system; in addition, both the application and the theoretical analysis of fault-tolerant learning are very important for neural networks. Group Lasso regularization plays an essential role in reaching a parsimonious network and performing feature selection simultaneously. By selecting a suitable penalization coefficient, its group-sparsity property has been demonstrated with competitive performance. To handle the non-differentiability of the Group Lasso penalty, a specific smoothing technique is adopted during training, which also supports a comprehensive convergence analysis of the proposed algorithms.
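To make the smoothing idea concrete, here is a minimal sketch (not the speaker's implementation; the particular surrogate is an assumption) of a smoothed Group Lasso penalty, where the non-differentiable group norm ||w_g|| is replaced by the everywhere-differentiable surrogate sqrt(||w_g||^2 + eps^2):

```python
import numpy as np

def smoothed_group_lasso(w, groups, lam=0.01, eps=1e-3):
    """Smoothed Group Lasso penalty: lam * sum_g sqrt(||w_g||^2 + eps^2).

    Unlike the plain group L2 norm, this is differentiable at w_g = 0,
    which is what enables a standard convergence analysis of gradient
    training. `groups` is a list of index arrays, one per weight group.
    """
    return lam * sum(np.sqrt(np.dot(w[g], w[g]) + eps**2) for g in groups)

def smoothed_group_lasso_grad(w, groups, lam=0.01, eps=1e-3):
    """Gradient of the smoothed penalty; finite even at w_g = 0."""
    grad = np.zeros_like(w)
    for g in groups:
        grad[g] = lam * w[g] / np.sqrt(np.dot(w[g], w[g]) + eps**2)
    return grad
```

During training, this gradient is simply added to the data-fitting gradient; as a whole group's norm shrinks toward zero, the group (e.g., all weights of one hidden unit) can be pruned, yielding the parsimonious structure mentioned above.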


Biography: Prof. Jian Wang is with the China University of Petroleum (East China), Qingdao, China. He received the Ph.D. degree in computational mathematics from the Dalian University of Technology, Dalian, China, in 2012. From September 2010 to September 2011, he was a Visiting Scholar at the Department of Electrical and Computer Engineering, University of Louisville, United States. He is an Associate Editor of the Journal of Applied Computer Science Methods and of the IEEE Transactions on Neural Networks and Learning Systems. He served as Publication Chair of the 24th International Conference on Neural Information Processing and as Program Committee Chair of the 2016 and 2018 International Symposia on New Trends in Computational Intelligence. He is currently Director of the Cross-Media Big Data Joint Laboratory at the China University of Petroleum (East China). His current research interests include machine learning, regularization theory, and neural networks.


13.5. Speaker: Prof. Zenglin Xu, University of Electronic Science and Technology of China, China

Talk Title: Compressing Neural Networks with Tensor Networks


Abstract: Tensors are an important data structure for representing multiway data arising in, e.g., recommendation systems, face recognition, and sensor networks. Building blocks of tensors form tensor networks, while building blocks of tensor networks form quantum states. This talk will discuss the connections between tensor networks and deep neural networks. Finally, we will present our recent work on compressing neural networks with tensor network structures, such as block-term Tucker decomposition and tensor ring decomposition.
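The talk covers block-term Tucker and tensor ring decompositions; as a much simpler stand-in, a rank-r matrix factorization (the most basic tensor network) already shows where the parameter saving in such compression comes from. The layer sizes and rank below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical dense-layer weight matrix, 512 x 512.
m, n, r = 512, 512, 16
W = np.random.default_rng(0).standard_normal((m, n))

# Compress via truncated SVD: keep the top-r singular directions,
# giving the best rank-r approximation W ~ U @ V.
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]   # shape (m, r), singular values folded in
V = Vt[:r, :]               # shape (r, n)

dense_params = m * n        # parameters of the original layer
factored_params = r * (m + n)  # parameters after compression
```

Tucker and tensor ring formats generalize this by first reshaping W into a higher-order tensor and factoring it into several small cores, which typically gives far higher compression ratios at comparable accuracy than a plain matrix factorization.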


Biography: Prof. Zenglin Xu is currently with the School of Computer Science and Engineering at the University of Electronic Science and Technology of China (UESTC), where he is the founder and director of the Statistical Machine Intelligence and LEarning (SMILE) Lab. He obtained his Ph.D. in Computer Science and Engineering from the Chinese University of Hong Kong. His research interests include machine learning and its applications in social network analysis, health informatics, and cyber-security analytics. He has published over 80 papers in prestigious journals and conferences such as NeurIPS, ICML, IJCAI, AAAI, IEEE TPAMI, and IEEE TNNLS. He is a recipient of the APNNS Young Researcher Award and of a best student paper honorable mention at AAAI 2015. Dr. Xu has been a PC member or reviewer for a number of top conferences such as NeurIPS, ICML, AAAI, and IJCAI, and regularly serves as a reviewer for IEEE TPAMI, JMLR, PR, IEEE TNNLS, etc. He also serves as an associate editor of several journals, including Neural Networks and Neurocomputing.


13.6. Speaker: Prof. Nianyin Zeng, Xiamen University, China

Talk Title: A Novel Deep-Belief-Network-Based Particle Filter (DBN-PF) for Quantitative Analysis of Gold Immunochromatographic Strips


Abstract: In this talk, a novel statistical pattern recognition method is proposed for accurately segmenting test and control lines from gold immunochromatographic strip (GICS) images for the purpose of quantitative analysis. A new dynamic state-space model is established, with which the segmentation of test and control lines is cast as a state estimation problem. In particular, the transition equation describes the relationship between contour points on the upper and lower boundaries of the test and control lines, and a new observation equation is developed by combining the between-class-variance contrast with a uniformity measure. Then an innovative particle filter (PF) with a hybrid proposal distribution, the deep-belief-network-based particle filter (DBN-PF), is put forward: the deep belief network (DBN) provides an initial recognition result in the hybrid proposal distribution, and a particle swarm optimization algorithm moves particles toward regions of high likelihood. The performance of the proposed DBN-PF method is comprehensively evaluated on both an artificial dataset and real GICS images in terms of several indices, compared against plain PF and DBN methods. Experimental results demonstrate that the proposed approach is effective for the quantitative analysis of GICS.
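For readers unfamiliar with particle filtering, here is a minimal bootstrap particle filter for a toy 1-D random-walk state, tracking a scalar from noisy observations. This is generic textbook material, not the DBN-PF itself: the talk's method replaces the prior proposal used below with a hybrid proposal built from a deep belief network plus particle swarm optimization, and its state is a line-contour model rather than a scalar.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    process_std=0.1, obs_std=0.3):
    """Bootstrap particle filter: predict / weight / resample.

    State model:       x_t = x_{t-1} + N(0, process_std^2)
    Observation model: y_t = x_t     + N(0, obs_std^2)
    Returns the weighted posterior-mean estimate at every step.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the transition model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight: Gaussian likelihood of the current observation.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample (multinomial) to combat weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

The motivation for a smarter proposal, as in the DBN-PF, is visible even here: with a prior proposal, many particles land in low-likelihood regions and are wasted, whereas a learned proposal concentrates them where the observation is informative.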


Biography: Prof. Nianyin Zeng is currently with the Department of Instrumental and Electrical Engineering of Xiamen University. From September 2017 to August 2018, he was an ISEF Fellow funded by the Korea Foundation for Advanced Studies and a Visiting Professor at the Korea Advanced Institute of Science and Technology (KAIST). His current research interests include intelligent data analysis, artificial/computational intelligence, and time-series modeling and applications. He has published over 50 SCI-indexed papers, including 8 ESI Highly Cited Papers according to the most recent Clarivate Analytics ESI report, and won the second prize of the Provincial Natural Science Award.

Dr. Zeng currently serves as an Associate Editor for Neurocomputing, as an Editorial Board member for Computers in Biology and Medicine and Biomedical Engineering Online, and as a Guest Editor for Frontiers in Neuroscience. He was recognized as a Key Talent of Xiamen City.