Scaling Up Semi-supervised Learning: An Efficient and Effective LLGC Variant
Conference paper, 2007


Abstract

Domains like text classification can easily supply large amounts of unlabeled data, but labeling itself is expensive. Semi-supervised learning tries to exploit this abundance of unlabeled training data to improve classification. Unfortunately, most of the theoretically well-founded algorithms described in recent years are cubic or worse in the total number of labeled and unlabeled training examples. In this paper we apply modifications to the standard LLGC algorithm that improve its efficiency to the point where it can handle datasets with hundreds of thousands of training examples. The modifications are priming of the unlabeled data and, most importantly, sparsification of the similarity matrix. We report promising results on large text classification problems.
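The abstract does not give the authors' exact formulation, but the standard LLGC update (Zhou et al., 2004) combined with a sparsified, k-nearest-neighbour similarity graph can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name llgc_sparse and the parameters k, alpha, and n_iter are assumptions, the connectivity weights stand in for whatever similarity the paper uses, and the priming step is omitted since its details are not specified here.

```python
# Minimal sketch of LLGC-style label propagation with a sparsified
# (k-NN) similarity matrix -- an illustration, not the authors' code.
import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import kneighbors_graph

def llgc_sparse(X, y, n_classes, k=10, alpha=0.99, n_iter=30):
    """Propagate labels over a sparse k-NN graph.

    y holds class indices for labeled points and -1 for unlabeled ones.
    Because only the k nearest neighbors contribute to the similarity
    matrix, each iteration costs O(n*k) rather than O(n^2), which is
    what makes hundreds of thousands of examples tractable.
    """
    n = X.shape[0]
    # Sparse affinity matrix from a k-NN graph (simple connectivity
    # weights here; the paper may weight edges differently).
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity',
                         include_self=False)
    W = 0.5 * (W + W.T)                       # symmetrize
    d = np.asarray(W.sum(axis=1)).ravel()
    d[d == 0] = 1.0                           # guard isolated points
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt           # normalized similarity

    # One-hot label matrix; unlabeled rows stay all-zero.
    Y = np.zeros((n, n_classes))
    labeled = y >= 0
    Y[np.where(labeled)[0], y[labeled]] = 1.0

    # Iterative LLGC update: F <- alpha * S @ F + (1 - alpha) * Y
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

The closed-form LLGC solution F = (1 - alpha)(I - alpha * S)^{-1} Y requires inverting an n-by-n matrix, which is the cubic cost the abstract refers to; the sparse iterative update above avoids it.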

Dates and versions

hal-01511824, version 1 (21-04-2017)


Cite

Bernhard Pfahringer, Claire Leschi, Peter Reutemann. Scaling Up Semi-supervised Learning: An Efficient and Effective LLGC Variant. 11th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2007), May 2007, Nanjing, China. pp. 236-247. ⟨10.1007/978-3-540-71701-0_25⟩. ⟨hal-01511824⟩