Machine learning, which aims to build intelligent systems by learning models or knowledge from data, has made great progress over the past 30 years. However, a huge gap in learning ability still exists between machine learning and human learning. For example, a five-year-old child can identify objects and understand speech and language by learning from a small number of instances or from daily communication, whereas machines can hardly match this ability even by learning from big data. In recent years, some researchers have attempted to develop machine learning methods that simulate human learning behavior. Such methods, called “Human-like Learning”, share several features: learning from small amounts of supervised data, interactivity, all-time incremental (lifelong) learning, and the exploitation of contexts and of correlations between different data sources and tasks. Some existing learning methods, such as incremental learning, active learning, transfer learning, domain adaptation, learning with use, multi-task learning, and zero-shot/one-shot learning, can be viewed as special or simplified forms of human-like learning. The future trend is to make learning methods more flexible and active, requiring less supervision and exploiting all kinds of data more adequately.
The topics of interest include, but are not limited to:
Submission deadline: January 15th, 2016