<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2018-01-15T18:35:06Z</responseDate>
  <request identifier="oai:HAL:hal-00825414v1" verb="GetRecord" metadataPrefix="oai_dc">http://api.archives-ouvertes.fr/oai/hal/</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:HAL:hal-00825414v1</identifier>
        <datestamp>2018-01-11</datestamp>
        <setSpec>type:REPORT</setSpec>
        <setSpec>subject:info</setSpec>
        <setSpec>collection:CNRS</setSpec>
        <setSpec>collection:I3S</setSpec>
        <setSpec>collection:UNIV-AG</setSpec>
        <setSpec>collection:BNRMI</setSpec>
        <setSpec>collection:UNICE</setSpec>
        <setSpec>collection:CEREGMIA</setSpec>
        <setSpec>collection:LARA</setSpec>
        <setSpec>collection:UCA-TEST</setSpec>
        <setSpec>collection:UNIV-COTEDAZUR</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/"
                   xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:publisher>HAL CCSD</dc:publisher>
          <dc:title xml:lang="en">Minimizing Calibrated Loss using Stochastic Low-Rank Newton Descent for large scale image classification</dc:title>
          <dc:creator>Bel Haj Ali, Wafa</dc:creator>
          <dc:creator>Barlaud, Michel</dc:creator>
          <dc:creator>Nock, Richard</dc:creator>
          <dc:contributor>Laboratoire d'Informatique, Signaux, et Systèmes de Sophia-Antipolis (I3S) / Equipe IMAGES-CREATIVE ; Signal, Images et Systèmes (SIS) ; Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS)</dc:contributor>
          <dc:contributor>Centre de Recherche en Economie, Gestion, Modélisation et Informatique Appliquée (CEREGMIA) ; Université des Antilles et de la Guyane (UAG)</dc:contributor>
          <dc:identifier>hal-00825414</dc:identifier>
          <dc:identifier>https://hal.archives-ouvertes.fr/hal-00825414</dc:identifier>
          <dc:identifier>https://hal.archives-ouvertes.fr/hal-00825414/document</dc:identifier>
          <dc:identifier>https://hal.archives-ouvertes.fr/hal-00825414/file/TechnicalReport.pdf</dc:identifier>
          <dc:source>https://hal.archives-ouvertes.fr/hal-00825414</dc:source>
          <dc:source>2013</dc:source>
          <dc:language>en</dc:language>
          <dc:subject xml:lang="en">Machine Learning</dc:subject>
          <dc:subject xml:lang="en">Calibrated Loss</dc:subject>
          <dc:subject xml:lang="en">Newton Descent</dc:subject>
          <dc:subject xml:lang="en">Large Scale Image Classification</dc:subject>
          <dc:subject>[INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG]</dc:subject>
          <dc:subject>[INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]</dc:subject>
          <dc:type>info:eu-repo/semantics/report</dc:type>
          <dc:type>Reports</dc:type>
          <dc:description xml:lang="en">A standard approach for large scale image classification involves high dimensional features and the Stochastic Gradient Descent (SGD) algorithm for the minimization of the classical Hinge Loss in the primal space. Although the complexity of Stochastic Gradient Descent is linear in the number of samples, this method suffers from slow convergence. In order to cope with this issue, we propose here Stochastic Low-Rank Newton Descent (SLND) for the minimization of any calibrated loss in the primal space.
SLND approximates the inverse Hessian by its best low-rank approximation according to the squared Frobenius norm. We provide core optimizations for fast convergence. On the theoretical side, we show explicit convergence rates of the algorithm for these calibrated losses, which in addition provide working sets of parameters for the experiments. Experiments are reported on the SUN, Caltech256 and ImageNet databases, with simple, uniform and efficient ways to tune the remaining SLND parameters. On each of these databases, SLND challenges the accuracy of SGD while converging faster by an order of magnitude.</dc:description>
          <dc:date>2013-04-18</dc:date>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
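
The abstract above describes the key step of SLND: replacing the inverse Hessian with its best low-rank approximation under the squared Frobenius norm. By the Eckart-Young theorem, the best rank-k Frobenius approximation of a symmetric matrix keeps the eigenpairs of largest magnitude, which for H^{-1} correspond to the k smallest eigenvalues of H. The Python sketch below illustrates what such an update can look like; it is not the report's implementation, and the function names, the rank k, and the step size eta are illustrative assumptions.

# Illustrative sketch only, not the paper's SLND implementation.
# Assumes a symmetric positive-definite Hessian estimate H and a
# gradient g; k and eta are hypothetical tuning parameters.
import numpy as np

def low_rank_inverse_hessian(H, k):
    # Best rank-k Frobenius-norm approximation of H^{-1} (Eckart-Young):
    # for H = V diag(lam) V^T we have H^{-1} = V diag(1/lam) V^T, so we
    # keep the k largest values of 1/lam, i.e. the k smallest
    # eigenvalues of H.
    lam, V = np.linalg.eigh(H)        # eigenvalues in ascending order
    Vk, lam_k = V[:, :k], lam[:k]
    return (Vk / lam_k) @ Vk.T        # V_k diag(1/lam_k) V_k^T

def slnd_like_step(w, g, H, k=10, eta=1.0):
    # One Newton-style update w <- w - eta * H_k^{-1} g, where g would
    # be a minibatch gradient in the stochastic setting of the abstract.
    return w - eta * low_rank_inverse_hessian(H, k) @ g

# Tiny usage example on a random well-conditioned SPD matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5.0 * np.eye(5)
g = rng.standard_normal(5)
w = slnd_like_step(np.zeros(5), g, H, k=2)

This sketch fixes one design choice, the Frobenius-optimal approximation of H^{-1}; which curvature directions to retain, and how to estimate H and g from minibatches, are exactly the questions the report addresses.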