Publications
You can also find my articles on my Google Scholar profile.
Working Papers
George Dasoulas et al.
Conference Proceedings
Alexis Chevalier, Soumya Ghosh, Urvi Awasthi, James Watkins, Julia Bieniewska, Nichita Mitrea, Olga Kotova, Kirill Shkura, Andrew Noble, Michael J. Steinbaugh, Vijay Sadashivaiah, George Dasoulas, Julien Delile, Christoph Meier, Leonid Zhukov, Iya Khalil, Srayanta Mukherjee, Judith Mueller
Claudio Battiloro, Ege Karaismailoglu, Mauricio Tec, George Dasoulas, Michelle Audirac, Francesca Dominici
Tina Vartziotis, Ippolyti Dellatolas, George Dasoulas, Maximilian Schmidt, Florian Schneider, Tim Hoffmann, Sotirios Kotsopoulos, Michael Keckeisen
Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik
Michalis Chatzianastasis, Johannes Lutzeyer, George Dasoulas, Michalis Vazirgiannis
Based on connections with the Partial Information Decomposition framework, we introduce a novel GNN layer, the Graph Ordering Attention (GOAT) layer, which imposes neighborhood orderings according to the attention coefficients. Read more
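A minimal sketch of the ordering idea (not the paper's implementation — the actual GOAT layer uses an order-sensitive recurrent aggregator; the function name, the scoring vector `a`, and the positional weights `W_pos` here are all hypothetical):

```python
import numpy as np

def goat_like_aggregate(h_center, h_neighbors, a, W_pos):
    """Hypothetical sketch: score neighbors GAT-style, order them by
    attention coefficient, then aggregate with position-dependent weights
    (a stand-in for the order-sensitive aggregator used in the paper)."""
    # GAT-style score of each neighbor against the central node
    scores = np.array([a @ np.concatenate([h_center, h_n]) for h_n in h_neighbors])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # attention coefficients
    order = np.argsort(-alpha)                 # impose the neighborhood ordering
    ordered = h_neighbors[order]
    # position-aware weighted sum over the ordered neighborhood
    return (alpha[order, None] * W_pos[: len(ordered)] * ordered).sum(axis=0)
```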
George Dasoulas, Giannis Nikolentzos, Kevin Scaman, Aladin Virmaux, Michalis Vazirgiannis
George Dasoulas, Kevin Scaman, Aladin Virmaux
We present a theoretical analysis of the Lipschitz continuity of attention and show that enforcing Lipschitz continuity through normalization can significantly improve the performance of deep attention models. Read more
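To illustrate the normalization idea (the paper's exact normalizer may differ — this is only a sketch of why rescaling by input norms bounds the scores):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def norm_attention(Q, K, V, eps=1e-9):
    """Hedged sketch: rescale attention scores by input norms instead of
    letting them grow with ||Q|| * ||K||, keeping the score map bounded."""
    # |q_i . k_j| <= max_i ||q_i|| * max_j ||k_j||, so dividing by that
    # product keeps every score in [-1, 1] regardless of input scale.
    scale = np.linalg.norm(Q, axis=1).max() * np.linalg.norm(K, axis=1).max() + eps
    return softmax(Q @ K.T / scale) @ V
```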
George Dasoulas, Johannes Lutzeyer, Michalis Vazirgiannis
We propose a parametrised graph shift operator (PGSO) to encode graphs, providing a unified view of common GSOs, and improve GNN performance by learning the PGSO end-to-end during training. Read more
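The unifying idea can be sketched as follows (parameter names are illustrative, not the paper's exact notation — the point is that one parametrised family covers many standard operators):

```python
import numpy as np

def pgso(A, m1=0.0, m2=1.0, m3=0.0, e1=1.0, e2=-0.5, e3=-0.5, a=1.0):
    """Hedged sketch of a parametrised graph shift operator.
    With these defaults it reduces to the symmetric-normalised adjacency
    with self-loops, D^{-1/2}(A + I)D^{-1/2}; other parameter settings
    recover other common GSOs (raw adjacency, Laplacian-like operators)."""
    n = A.shape[0]
    A_a = A + a * np.eye(n)              # adjacency with weighted self-loops
    d = A_a.sum(axis=1)                  # degrees of the augmented graph
    D = lambda e: np.diag(d ** e)        # degree matrix raised to a power
    return m1 * D(e1) + m2 * D(e2) @ A_a @ D(e3) + m3 * np.eye(n)
```

In a GNN, `m1, m2, m3, e1, e2, e3, a` would be trainable scalars rather than fixed constants.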
George Dasoulas, Giannis Nikolentzos, Kevin Scaman, Aladin Virmaux, Michalis Vazirgiannis
Moving beyond local interactions, nodes can share structural similarities based on their position in the graph. We investigate feature augmentation methods for graph neural networks using structural entropy measures. Read more
George Dasoulas, Ludovic Dos Santos, Kevin Scaman, Aladin Virmaux
Based on topological criteria, and specifically separability, we introduce a universal approximation scheme for continuous functions on graphs. It is based on the disambiguation of identical node attributes. Read more
Stratis Limnios, George Dasoulas, Dimitrios M. Thilikos, Michalis Vazirgiannis
We propose a graph-based initialization of neural networks that extends graph degeneracy observations. Such an initialization can promote neurons that have structural importance in the neural network. Read more
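A rough sketch of the ingredients, under the assumption that core numbers from the k-core decomposition serve as the structural-importance score (the initialization rule below is illustrative, not the paper's exact scheme):

```python
import numpy as np

def core_numbers(adj):
    """k-core decomposition by iterative min-degree peeling."""
    n = len(adj)
    deg = adj.sum(axis=1).astype(int)
    removed = np.zeros(n, dtype=bool)
    core = np.zeros(n, dtype=int)
    k = 0
    for _ in range(n):
        u = int(np.argmin(np.where(removed, n + 1, deg)))  # min remaining degree
        k = max(k, deg[u])
        core[u] = k
        removed[u] = True
        deg = deg - adj[u].astype(int)       # remove u's edges from the rest
    return core

def degeneracy_scaled_init(adj, fan_in, rng):
    """Illustrative rule: scale each unit's init std by its normalised core number."""
    c = core_numbers(adj)
    W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(len(adj), fan_in))
    return W * (c / c.max())[:, None]
```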
Journal Publications
Lev Telyatnikov, Guillermo Bernardez, Marco Montagna, Mustafa Hajij, Martin Carrasco, Pavlo Vasylenko, Mathilde Papillon, Ghada Zamzmi, Michael T. Schaub, Jonas Verhellen, Pavel Snopov, Bertran Miquel-Oliver, Manel Gil-Sorribes, Alexis Molina, Victor Guallar, Theodore Long, Julian Suk, Patryk Rygiel, Alexander Nikitin, Giordan Escalona, Michael Banf, Dominik Filipiak, Max Schattauer, Liliya Imasheva, Alvaro Martinez, Halley Fritze, Marissa Masden, Valentina Sánchez, Manuel Lecha, Andrea Cavallo, Claudio Battiloro, Matt Piekenbrock, Mauricio Tec, George Dasoulas, Nina Miolane, Simone Scardapane, Theodore Papamarkou
George Dasoulas*, Dimitris Vartziotis*, Florian Pausinger
George Dasoulas*, Yasha Ektefaie*, Ayush Noori, Maha Farhat, Marinka Zitnik
Giannis Nikolentzos, George Dasoulas, Michalis Vazirgiannis
We study how to approximate graph distances by aligning adjacency matrices across a corpus of graphs. To enable differentiable optimization, we use soft permutation matrices. Read more
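The standard way to obtain a soft permutation is the Sinkhorn relaxation sketched below; the paper's exact relaxation and objective may differ, so treat this as an illustration of the idea:

```python
import numpy as np

def sinkhorn(S, n_iters=50):
    """Project a score matrix toward a doubly-stochastic ('soft permutation')
    matrix by alternating row and column normalisation."""
    P = np.exp(S)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)    # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)    # columns sum to 1
    return P

def alignment_cost(A1, A2, P):
    """Differentiable surrogate of graph distance: Frobenius mismatch
    between A1 and the P-aligned A2."""
    return np.linalg.norm(A1 - P @ A2 @ P.T) ** 2
```

Because `sinkhorn` is built from differentiable operations, the alignment cost can be minimised with gradient methods over the score matrix `S`.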
Guillaume Salha-Galvan, Johannes Lutzeyer, George Dasoulas, Romain Hennequin, Michalis Vazirgiannis
Solving link prediction and community detection simultaneously is important in recommendation systems. Here, we show how the information that graph autoencoders process can be extended in this direction. Read more
Giannis Nikolentzos, George Dasoulas, Michalis Vazirgiannis
Standard GNNs use a 1-hop aggregation per layer, limiting their ability to capture graph properties. We iteratively extend the aggregation operator of graph neural networks to increase their receptive field. Read more
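The extended receptive field can be sketched as mean aggregation over everything within k hops (a simplified illustration — the paper's operator is learned, not a fixed mean):

```python
import numpy as np

def k_hop_aggregate(A, H, k=2):
    """Hedged sketch: mean-aggregate node features H over all nodes
    reachable by a walk of length <= k in adjacency A."""
    n = A.shape[0]
    R = np.zeros_like(A, dtype=float)
    Ak = np.eye(n)
    for _ in range(k):
        Ak = Ak @ A
        R += (Ak > 0)                        # union of 1..k-step reachability
    R = (R > 0).astype(float)                # within-k-hop indicator
    deg = np.maximum(R.sum(axis=1, keepdims=True), 1)
    return (R / deg) @ H                     # mean over the k-hop neighborhood
```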
