Preserving Locality in Vision Transformers for Class Incremental Learning (PDF download)
Posted by an anonymous user on 2025-05-20 09:57:56

Content excerpt:


Deep models are good at capturing the features of images needed for various tasks. In the standard classification task, deep models refine features layer by layer to obtain a compact representation of each image for the classifier to distinguish. However, in real-world situations, new concepts appear over time, and machine learning systems must adapt to new knowledge while retaining previously learned knowledge. Class Incremental Learning (CIL) is a scenario in which new concepts emerge incrementally as new classes. When applied to CIL, current deep models consistently suffer from catastrophic forgetting [1]. Researchers therefore aim to balance the model between stability (the ability to resist change) and plasticity (the ability to adapt). Many models and training routines have been designed to approach this goal, most of them focused on convolutional architectures [2]-[4]. Recently, Vision Transformers (ViT) [5] have attracted researchers' attention due to their superior performance in image classification. Works introducing ViT into CIL mostly focus on block design [6] and model expansion [7].
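
To make the CIL scenario described above concrete, the following is a minimal sketch of the class-incremental protocol: the label space is split into disjoint groups of new classes, the model is trained on one group at a time, and after each task it is evaluated on all classes seen so far (which is where catastrophic forgetting shows up). The function names (make_task_splits, train_on_task, evaluate) and the toy dataset are illustrative placeholders, not the paper's code or any library API.

```python
# Minimal sketch of the Class Incremental Learning (CIL) protocol.
# All helpers here are hypothetical placeholders for illustration only.

import random
from typing import Dict, List, Tuple

def make_task_splits(num_classes: int, classes_per_task: int) -> List[List[int]]:
    """Partition the label space into disjoint groups of new classes, one per task."""
    classes = list(range(num_classes))
    return [classes[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]

def train_on_task(model_state: Dict, data: List[Tuple[list, int]]) -> Dict:
    """Placeholder: update the model using only the current task's data.
    A real CIL method would add an anti-forgetting term here
    (e.g. knowledge distillation or exemplar rehearsal)."""
    model_state["seen_examples"] = model_state.get("seen_examples", 0) + len(data)
    return model_state

def evaluate(model_state: Dict, data: List[Tuple[list, int]]) -> float:
    """Placeholder: accuracy over ALL classes seen so far; without an
    anti-forgetting mechanism this is where performance collapses."""
    return 0.0  # dummy value for the sketch

if __name__ == "__main__":
    # Toy dataset: 100 classes with a few random feature vectors per class.
    dataset = [([random.random() for _ in range(8)], label)
               for label in range(100) for _ in range(5)]

    model_state: Dict = {}
    seen_classes: List[int] = []
    for task_id, new_classes in enumerate(make_task_splits(100, 10)):
        seen_classes += new_classes
        task_data = [(x, y) for x, y in dataset if y in new_classes]   # new classes only
        eval_data = [(x, y) for x, y in dataset if y in seen_classes]  # all classes so far
        model_state = train_on_task(model_state, task_data)
        acc = evaluate(model_state, eval_data)
        print(f"task {task_id}: {len(new_classes)} new classes, "
              f"evaluated on {len(seen_classes)} seen classes, acc={acc:.2f}")
```

The key point of the protocol is that training only ever sees the new classes of the current task, while evaluation always covers the union of old and new classes; the stability-plasticity trade-off discussed in the excerpt is about how train_on_task is designed.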