Published on Sat May 19 2018

Conditional Network Embeddings

Bo Kang, Jefrey Lijffijt, Tijl De Bie

Network Embeddings (NEs) map the nodes of a given network into Euclidean space. CNEs maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm.

Abstract

Network Embeddings (NEs) map the nodes of a given network into d-dimensional Euclidean space. Ideally, this mapping is such that 'similar' nodes are mapped onto nearby points, so that the NE can be used for purposes such as link prediction (if 'similar' means 'more likely to be connected') or classification (if 'similar' means 'more likely to have the same label'). In recent years various methods for NE have been introduced, all following a similar strategy: defining a notion of similarity between nodes (typically some distance measure within the network), a distance measure in the embedding space, and a loss function that penalizes large distances for similar nodes and small distances for dissimilar nodes. A difficulty faced by existing methods is that certain networks are fundamentally hard to embed due to their structural properties: (approximate) multipartiteness, certain degree distributions, assortativity, etc. To overcome this, we introduce a conceptual innovation to the NE literature and propose to create Conditional Network Embeddings (CNEs): embeddings that maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm for fitting it efficiently. We demonstrate that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, without adding significant mathematical or computational complexity. Finally, we illustrate the potential of CNE for network visualization.
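As a rough sketch of the generic NE strategy the abstract describes (a distance measure in the embedding space plus a loss that pulls connected nodes together and pushes unconnected ones apart), the toy below fits a 2-D embedding of a small graph by gradient descent. The logistic link model, the graph, and all hyperparameters are illustrative assumptions, not CNE's actual Bayesian conditional model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small toy network: two triangles joined by one edge.
# This graph is an assumption chosen only for illustration.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
n, d = A.shape[0], 2
X = rng.normal(scale=0.1, size=(n, d))  # random initial embedding

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, bias = 0.05, 1.0  # illustrative hyperparameters
for _ in range(1000):
    grad = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = X[i] - X[j]
            # Model P(edge i~j) = sigmoid(bias - ||x_i - x_j||^2):
            # nearby pairs are modeled as more likely to be connected.
            p = sigmoid(bias - diff @ diff)
            # Gradient of the negative log-likelihood w.r.t. X[i]:
            # edges pull endpoints together, non-edges push them apart.
            grad[i] += 2.0 * (A[i, j] - p) * diff
    X -= lr * grad
```

After fitting, adjacent nodes end up closer together on average than non-adjacent ones, which is exactly the property that makes such embeddings usable for link prediction.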

Sat Nov 11 2017
Machine Learning
Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective
Network embedding can be learned in a unified framework integrating network structure and node content as well as label information. We demonstrate the efficacy of the proposed model with the tasks of semi-supervised node classification and link prediction on a variety of real-world benchmark network datasets.
Tue Nov 26 2019
Machine Learning
Network Embedding: An Overview
Network embedding encompasses various methods for unsupervised, and sometimes supervised, learning of feature representations of nodes and links in a network. We review significant contributions to network embedding in the last decade. We describe each method and list its advantages and shortcomings.
Wed Dec 11 2019
Machine Learning
Beyond Node Embedding: A Direct Unsupervised Edge Representation Framework for Homogeneous Networks
Network representation learning has traditionally been used to find lower dimensional vector representations of the nodes in a network. For applications such as link prediction in homogeneous networks, vector representation of an edge is derived just by using simple aggregations of the embeddings of the end vertices of the edge.
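The "simple aggregations" mentioned above are typically element-wise binary operators applied to the embeddings of an edge's two endpoints. A minimal illustration follows; the endpoint vectors are made up, and the operator set reflects common practice in node2vec-style pipelines rather than this paper's specific method.

```python
import numpy as np

# Hypothetical embeddings of the two endpoints of an edge (u, v).
x_u = np.array([0.2, -0.5, 1.0, 0.3])
x_v = np.array([0.1, 0.4, 0.8, -0.2])

# Common binary operators for deriving an edge representation
# from its endpoint embeddings:
average  = (x_u + x_v) / 2.0   # symmetric mean
hadamard = x_u * x_v           # element-wise product
l1       = np.abs(x_u - x_v)   # absolute difference
l2       = (x_u - x_v) ** 2    # squared difference
```

Note that symmetric operators like these discard edge direction, which is one motivation for frameworks that learn edge representations directly.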
Tue May 16 2017
Machine Learning
Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a new method for embedding graphs while preserving directed edge information. We evaluate our method on a variety of link-prediction tasks including social networks, collaboration networks, and protein interactions. We show that the representations learned by our method are quite space-efficient.
Fri Jul 10 2020
Machine Learning
Next Waves in Veridical Network Embedding
Embedding nodes of a large network into a metric (e.g., Euclidean) space has become an area of active research in statistical machine learning. Network embedding algorithms have been proposed in multiple disciplines, often with domain-specific notations and details.
Wed Mar 04 2020
Machine Learning
EPINE: Enhanced Proximity Information Network Embedding
Unsupervised homogeneous network embedding (NE) represents each vertex of a network as a low-dimensional vector. Adjacency matrices retain most of the network information and characterize the first-order proximity.