Differences between Transfer Learning and Meta Learning

What is the difference between meta learning and transfer learning?

I have read two articles on Quora and TowardDataScience.

Meta learning is a part of machine learning theory in which algorithms are applied to metadata about the case to improve a machine learning process. The metadata includes properties of the algorithm used, the learning task itself, etc. Using this metadata, one can make a better choice of learning algorithm(s) to solve the problem more efficiently.
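In this "learning to learn from metadata" sense, the idea can be sketched as a rule that maps dataset properties to an algorithm choice. The sketch below is entirely hypothetical (the function name, properties, and algorithm labels are made up for illustration); in practice the rule table would itself be learned from metadata about past experiments.

```python
# A minimal, hypothetical sketch of metadata-driven algorithm selection:
# pick a learner from simple dataset properties. A real meta-learner would
# learn this mapping from records of past experiments instead of hard rules.
def choose_algorithm(n_samples: int, n_features: int, is_sparse: bool) -> str:
    """Toy rule table standing in for a learned meta-model."""
    if is_sparse:
        return "linear_svm"           # sparse, high-dimensional data
    if n_samples < 1_000:
        return "k_nearest_neighbours" # small data: simple local model
    if n_features > n_samples:
        return "ridge_regression"     # more features than samples
    return "gradient_boosting"        # large tabular data

print(choose_algorithm(500, 20, False))
```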

Transfer learning aims at improving the process of learning new tasks using the experience gained by solving predecessor problems that are somewhat similar. In practice, most of the time, machine learning models are designed to accomplish a single task. However, as humans, we make use of our past experience not only to repeat the same task in the future but also to learn completely new tasks. That is, if the new problem we try to solve is similar to a few of our past experiences, it becomes easier for us. To bring the same learning approach to machine learning, transfer learning comprises methods that transfer past experience from one or more source tasks and use it to boost learning in a related target task.
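The source-task-to-target-task transfer described above can be sketched with plain NumPy (all names and task definitions here are hypothetical, chosen to keep the example self-contained): pre-train a linear model on a large source task, then reuse its weights as the starting point for a few fine-tuning steps on a small, related target task.

```python
# A minimal sketch of transfer learning: pre-train on a large "source"
# task, then fine-tune the resulting weights on a small "target" task.
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, w, lr=0.1, steps=100):
    """Plain gradient descent on mean-squared error, starting from w."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Large source task: y = 3x + noise.
X_src = rng.normal(size=(1000, 1))
y_src = 3 * X_src[:, 0] + 0.1 * rng.normal(size=1000)

# Small, related target task: y = 3.2x + noise.
X_tgt = rng.normal(size=(10, 1))
y_tgt = 3.2 * X_tgt[:, 0] + 0.1 * rng.normal(size=10)

w_pre = fit(X_src, y_src, w=np.zeros(1))               # pre-training
w_ft = fit(X_tgt, y_tgt, w=w_pre, steps=5)             # few-step fine-tune
w_scratch = fit(X_tgt, y_tgt, w=np.zeros(1), steps=5)  # no transfer

err = lambda w: np.mean((X_tgt @ w - y_tgt) ** 2)
print(err(w_ft), err(w_scratch))
```

Because the source and target tasks are similar, five fine-tuning steps from the pre-trained weights get much closer to the target than five steps from scratch; that gap is the "boost" transfer learning is after.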

These comparisons still leave me confused, because the two seem to have a lot in common when it comes to reusability. Meta learning is said to be "model agnostic", yet it uses metadata (hyperparameters or weights) from previously learned tasks. In that respect it is like transfer learning, which can partially reuse a trained network to solve a related task. I know there is much more to discuss, but broadly speaking, I do not see a big difference between the two.

People also use terms like meta-transfer learning, which makes me think the two kinds of learning are closely connected.

I think the main difference is that transfer learning expects the tasks to be mostly similar to one another, whereas meta learning does not.

In transfer learning, any parameters may be carried over to the next task, but meta learning is more selective: the transferred parameters should encode how to learn, rather than how to solve the previous task.

In transfer learning, we pre-train the model parameters on a large dataset and then use them as the initialization when fine-tuning on other tasks with smaller datasets. This classic pre-training approach gives no guarantee that the learned initialization is actually good for fine-tuning. In meta learning, we instead learn a set of initial parameters that can be fine-tuned easily on another similar task with only a few gradient steps. It directly optimizes performance with respect to this initialization by differentiating through the fine-tuning process.
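The "differentiating through the fine-tuning process" idea (as in MAML) can be shown mechanically on a deliberately tiny toy problem. In the sketch below, every name and the task family are made up for illustration: each task is the 1-D loss L_a(w) = (w - a)^2 with a task-specific target a, the inner loop takes one fine-tuning step, and the outer loop applies the chain rule through that step to update the initialization itself. The point is the mechanics of the two nested loops, not a case where meta learning beats pre-training.

```python
# A toy MAML-style sketch: the outer loop optimizes the initialization w0
# by differentiating through one inner fine-tuning step on a sampled task.
import numpy as np

rng = np.random.default_rng(1)
alpha, meta_lr = 0.1, 0.05   # inner and outer step sizes
w0 = 5.0                     # the meta-learned initialization

def grad(w, a):
    """dL_a/dw for the toy task loss L_a(w) = (w - a)^2."""
    return 2 * (w - a)

for _ in range(500):
    a = rng.uniform(-1, 1)                # sample a task from the family
    w_adapted = w0 - alpha * grad(w0, a)  # inner loop: one fine-tuning step
    # Chain rule through the inner step: d(w_adapted)/d(w0) = 1 - 2*alpha,
    # so the meta-gradient is the adapted-task gradient times that factor.
    meta_grad = grad(w_adapted, a) * (1 - 2 * alpha)
    w0 -= meta_lr * meta_grad             # outer loop: move the initialization

print(w0)
```

After meta-training, w0 sits near the centre of the task distribution (0 here), so a single inner gradient step adapts it well to any newly sampled task; contrast this with transfer learning, where the initialization is whatever pre-training on one big task happened to produce.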