The Chinese University of Hong Kong, Hong Kong, China
Stars
Implementation of the paper "Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification"
This is the official GitHub repo of Think-on-Graph. If you are interested in our work or would like to join our research team in Shenzhen, please feel free to contact us by email (xuchengjin@idea.edu.cn)
This project aims to collect the latest "call for reviewers" links from various top CS/ML/AI conferences/journals
Official implementation for Zhong & Le et al., GNNs Also Deserve Editing, and They Need It More Than Once. ICML 2024
Code and data for the KDD 2024 Research Track paper "ProCom: A Few-shot Targeted Community Detection Algorithm"
PyGDA is a Python library for Graph Domain Adaptation
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
🐝 GPTSwarm: LLM agents as (Optimizable) Graphs
A list of papers that study Novel Class Discovery
A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
[SIGKDD 2024] Rethinking Fair Graph Neural Networks from Re-balancing
[ICML'24] BAT: 🚀 Boost Class-imbalanced Node Classification with <10 Lines of Code | Improving class-imbalanced node classification in 10 lines of code from a topological perspective
Code for IJCAI'24 paper: Gradformer: Graph Transformer with Exponential Decay
The official implementation of the SIGKDD'24 paper: ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs
PKU-DAIR / RAG-Survey
Forked from hymie122/RAG-Survey. Collecting awesome papers of RAG for AIGC. We propose a taxonomy of RAG foundations, enhancements, and applications in the paper "Retrieval-Augmented Generation for AI-Generated Content: A Survey".
Forward-Looking Active REtrieval-augmented generation (FLARE)
This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
✨✨Latest Advances on Multimodal Large Language Models
A collection of resources on LLMs for time series tasks
Continual Learning of Large Language Models: A Comprehensive Survey
Must-read Papers on Knowledge Editing for Large Language Models.
[EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
[IJCAI'2023] "DSL: Denoised Self-Augmented Learning for Social Recommendation"
A curated list of awesome papers on dataset distillation and related applications.
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.