Published May 10, 2025 | Version v1
Publication

Importance weighted directed graph variational auto-encoder for block modelling of complex networks


Description

This paper addresses the fundamental challenge of jointly performing node clustering and representation learning in directed, valued graphs, a task that requires capturing both global and local network structure. Although these two tasks are highly interdependent, existing works often treat them separately. We propose the deep zero-inflated latent position block model (Deep-ZLPBM) for directed, valued networks characterized by non-symmetric adjacency matrices with non-negative integer entries. Our approach leverages a variational autoencoder (VAE) framework, combining a directed graph neural network (DirGNN) encoder designed to handle directed edges with a zero-inflated Poisson (ZIP) block-modelling decoder that captures sparse, integer-weighted interactions. Recognizing the limitations of the standard evidence lower bound (ELBO) in VAEs, we optimize the importance weighted ELBO (iw-ELBO), a tighter bound on the marginal log-likelihood, via gradient ascent to enhance inference. Extensive experiments on synthetic datasets demonstrate that iw-ELBO optimization yields significant performance gains. Moreover, our results validate that Deep-ZLPBM effectively models complex network structures, providing interpretable partial memberships and insightful visualizations for directed, valued graphs.
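Two ingredients of the approach described above can be sketched numerically: the zero-inflated Poisson likelihood used by the decoder for sparse, integer-weighted edges, and the Monte Carlo importance weighted ELBO used as the training objective. The following minimal Python sketch is illustrative only, not the authors' implementation; the function names and standalone setting are assumptions.

```python
import math
import numpy as np

def zip_log_pmf(y, lam, pi):
    """Log-pmf of a zero-inflated Poisson with rate lam and
    zero-inflation probability pi (illustrative helper):
      P(Y=0) = pi + (1 - pi) * exp(-lam)
      P(Y=y) = (1 - pi) * Poisson(y; lam)   for y >= 1
    """
    if y == 0:
        return math.log(pi + (1.0 - pi) * math.exp(-lam))
    return (math.log(1.0 - pi)
            + y * math.log(lam) - lam - math.lgamma(y + 1))

def iw_elbo(log_joint, log_q):
    """Monte Carlo importance weighted ELBO from K posterior samples.

    log_joint[k] = log p(x, z_k) and log_q[k] = log q(z_k | x)
    for samples z_k ~ q(. | x). Returns a stable evaluation of
    log (1/K) * sum_k p(x, z_k) / q(z_k | x); for K = 1 this reduces
    to the standard single-sample ELBO term.
    """
    log_w = np.asarray(log_joint) - np.asarray(log_q)
    m = np.max(log_w)  # log-sum-exp stabilization
    return float(m + np.log(np.mean(np.exp(log_w - m))))
```

With pi = 0 the ZIP pmf falls back to a plain Poisson, and increasing the number of samples K in `iw_elbo` yields a provably tighter lower bound on the marginal log-likelihood, which is the motivation for the iw-ELBO objective.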


Preprint. Under review.

Additional details

Identifiers

URL
https://hal.science/hal-05077099
URN
urn:oai:HAL:hal-05077099v1

Origin repository

UNICA