Solving Association Problems with Convex Co-embedding





Keywords: Joint Embedding, Relation Learning, Link Prediction, Structured Output Prediction, Multilabel Classification, Knowledge Graph Completion, Convex Optimization, Co-embedding, Association Learning, Association Problems, Embedding Inference, Constrained Co-embedding, Convex Co-embedding, Semantic Embedding

Mirzazadeh, Farzaneh

Supervisor and department: Greiner, Russell (Computing Science); Schuurmans, Dale (Computing Science)

Examining committee member and department: Bowling, Michael (Computing Science); Zemel, Richard (Computer Science, University of Toronto); Szepesvari, Csaba (Computing Science); Sander, Joerg (Computing Science)

Department: Department of Computing Science


Date accepted: 2017-04-05T14:54:33Z

Graduation date: 2017-06 (Spring 2017)

Degree: Doctor of Philosophy

Degree level: Doctoral

Abstract: Co-embedding is the process of mapping elements from multiple sets into a common latent space, which can be exploited to infer element-wise associations by considering the geometric proximity of their embeddings. Such an approach underlies the state of the art for link prediction, relation learning, multi-label tagging, relevance retrieval, and ranking. This dissertation provides contributions to the study of co-embedding for solving association problems. First, a unifying view for solving association problems with co-embedding is presented, which covers both alignment-based and distance-based models. Although current approaches rely on local training methods applied to non-convex formulations, I demonstrate how general convex formulations can be achieved for co-embedding. I then empirically compare convex versus non-convex formulations of the training problem under an alignment model. Surprisingly, the empirical results reveal that, in most cases, the two are equivalent. Second, the connection between metric learning and co-embedding is investigated. I show that heterogeneous metric learning can be cast as distance-based co-embedding, and propose a scalable algorithm for solving the training problem globally. The co-embedding framework allows metric learning to be applied to a wide range of association problems, including link prediction, relation learning, multi-label tagging, and ranking. I investigate the relation between the standard non-convex training formulation and the proposed convex reformulation of heterogeneous metric learning, both empirically and analytically. Again, it is discovered that under certain conditions, the objective values achieved by the two approaches are identical. I develop a formal characterization of the conditions under which this equality holds. Finally, a constrained form of co-embedding is proposed for structured output prediction.
A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming. Rather than using approximate inference or tailoring a specialized inference method to a particular structure, I instead pre-compile prediction constraints directly into the learned representation. By eliminating the need for explicit inference, a more scalable approach to structured output prediction can be achieved, particularly at test time. I demonstrate the idea for hierarchical multi-label prediction under subsumption and mutual exclusion constraints, where a relationship to maximum margin structured output prediction can be established. Experiments demonstrate that the benefits of structured output training can still be realized even after inference has been eliminated.
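The core idea the abstract describes can be illustrated with a minimal sketch of distance-based co-embedding. This is illustrative only and does not reproduce the thesis's formulations, losses, or solvers: the embeddings below are hypothetical hand-set values standing in for learned ones, and the task (tagging items with labels) is one of the association problems mentioned above. Items from two sets are mapped into a shared latent space, and an association is scored by geometric proximity.

```python
import numpy as np

# Hypothetical learned embeddings: 4 items and 3 labels in a 2-D latent space.
# In a real co-embedding model these vectors would be the output of training.
item_emb = np.array([[0.0, 0.0],
                     [0.1, 0.0],
                     [3.0, 3.0],
                     [3.1, 2.9]])
label_emb = np.array([[0.05, 0.05],   # label 0, near items 0 and 1
                      [3.0, 3.0],     # label 1, near items 2 and 3
                      [10.0, 10.0]])  # label 2, near nothing

def score(u, v):
    """Distance-based association score: higher means more associated."""
    return -np.sum((u - v) ** 2)

# Infer the best label for each item by nearest label embedding.
pred = [max(range(len(label_emb)),
            key=lambda j: score(item_emb[i], label_emb[j]))
        for i in range(len(item_emb))]
print(pred)  # [0, 0, 1, 1]
```

An alignment-based model differs only in the scoring function, using an inner product `u @ v` in place of the negative squared distance; the unifying view in the thesis covers both families.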

Language: English

DOI: doi:10.7939/R3XG9FP57

Rights: This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.






Source: https://era.library.ualberta.ca/


Introduction



Solving Association Problems with Convex Co-embedding

by Farzaneh Mirzazadeh

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Department of Computing Science
University of Alberta

© Farzaneh Mirzazadeh, 2017




