
Large Language Model Based Long Context Modeling Papers and Blogs


This repo includes papers and blogs about Efficient Transformers, Length Extrapolation, Long-Term Memory, Retrieval-Augmented Generation (RAG), and Evaluation for Long Context Modeling.

🔥 Must-read papers for LLM-based Long Context Modeling.

Thanks to all the great contributors on GitHub! 🔥⚡🔥

Contents

📢 News


📜 Papers

You can click directly on a title to jump to the corresponding PDF link.

1. Survey Papers

  1. Efficient Transformers: A Survey. Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler. Arxiv 2022.

  2. A Survey on Long Text Modeling with Transformers. Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao. Arxiv 2023.

  3. Neural Natural Language Processing for Long Texts: A Survey of the State-of-the-Art. Dimitrios Tsirmpas, Ioannis Gkionis, Ioannis Mademlis, Georgios Papadopoulos. Arxiv 2023.

  4. Advancing Transformer Architecture in Long-Context Large Language Models: A Comprehensive Survey. Yunpeng Huang, Jingwei Xu, Zixu Jiang, Junyu Lai, Zenan Li, Yuan Yao, Taolue Chen, Lijuan Yang, Zhou Xin, Xiaoxing Ma. Arxiv 2023.

        GitHub Repo stars

  5. Length Extrapolation of Transformers: A Survey from the Perspective of Position Encoding. Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bing Qin, Ting Liu. Arxiv 2024.

  6. The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey. Saurav Pawar, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Aman Chadha, Amitava Das. Arxiv 2024.

2. Efficient Transformers

2.1 Sparse Transformers

  1. Generating Long Sequences with Sparse Transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever. Arxiv 2019.

  2. Blockwise self-attention for long document understanding. Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, Jie Tang. EMNLP 2020.

        GitHub Repo stars

  3. Longformer: The Long-Document Transformer. Iz Beltagy, Matthew E. Peters, Arman Cohan. Arxiv 2020.

        GitHub Repo stars

  4. ETC: Encoding Long and Structured Inputs in Transformers. Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang. EMNLP 2020.

  5. Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. NeurIPS 2020.

        GitHub Repo stars

  6. Reformer: The efficient transformer. Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. ICLR 2020.

        GitHub Repo stars

  7. Sparse Sinkhorn Attention. Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, Da-Cheng Juan. ICML 2020.

        GitHub Repo stars

  8. Sparse and continuous attention mechanisms. André F. T. Martins, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro M. Q. Aguiar, Mário A. T. Figueiredo. NeurIPS 2020.

  9. Efficient Content-Based Sparse Attention with Routing Transformers. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier. TACL 2021.

        GitHub Repo stars

  10. LongT5: Efficient text-to-text transformer for long sequences. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. NAACL 2022.

        GitHub Repo stars

  11. Efficient Long-Text Understanding with Short-Text Models. Maor Ivgi, Uri Shaham, Jonathan Berant. TACL 2023.

        GitHub Repo stars

  12. Parallel Context Windows for Large Language Models. Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham. ACL 2023.

        GitHub Repo stars

  13. Unlimiformer: Long-Range Transformers with Unlimited Length Input. Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley. Arxiv 2023.

        GitHub Repo stars

  14. Landmark Attention: Random-Access Infinite Context Length for Transformers. Amirkeivan Mohtashami, Martin Jaggi. Arxiv 2023.

        GitHub Repo stars

  15. LONGNET: Scaling Transformers to 1,000,000,000 Tokens. Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei. Arxiv 2023.

        GitHub Repo stars

  16. Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.

        GitHub Repo stars

  17. Blockwise Parallel Transformer for Long Context Large Models. Hao Liu, Pieter Abbeel. Arxiv 2023.

        GitHub Repo stars

  18. MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis. Arxiv 2023.

        GitHub Repo stars

  19. Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers. Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci, Aurelien Lucchi, Thomas Hofmann. Arxiv 2023.

  20. Long-range Language Modeling with Self-retrieval. Ohad Rubin, Jonathan Berant. Arxiv 2023.

  21. Max-Margin Token Selection in Attention Mechanism. Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak. Arxiv 2023.

  22. Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers. Jiawen Xie, Pengyu Cheng, Xiao Liang, Yong Dai, Nan Du. Arxiv 2023.

  23. Sparse Token Transformer with Attention Back Tracking. Heejun Lee, Minki Kang, Youngwan Lee, Sung Ju Hwang. ICLR 2023.

  24. Empower Your Model with Longer and Better Context Comprehension. YiFei Gao, Lei Wang, Jun Fang, Longhua Hu, Jun Cheng. Arxiv 2023.

        GitHub Repo stars

  25. Ring Attention with Blockwise Transformers for Near-Infinite Context. Hao Liu, Matei Zaharia, Pieter Abbeel. Arxiv 2023.

  26. Efficient Streaming Language Models with Attention Sinks. Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis. Arxiv 2023.

        GitHub Repo stars

  27. HyperAttention: Long-context Attention in Near-Linear Time. Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, Amir Zandieh. Arxiv 2023.

  28. Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention. Ziwei He, Jian Yuan, Le Zhou, Jingwen Leng, Bo Jiang. Arxiv 2023.

        GitHub Repo stars

2.2 Linear Transformers

  1. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret. ICML 2020.

        GitHub Repo stars

  2. Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré. Arxiv 2019.

        GitHub Repo stars

  3. Masked language modeling for proteins via linearly scalable long-context transformers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.

  4. Rethinking attention with performers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.

        GitHub Repo stars

  5. Linformer: Self-attention with linear complexity. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma. Arxiv 2020.

        GitHub Repo stars

  6. Random Feature Attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong. Arxiv 2021.

        GitHub Repo stars

  7. Luna: Linear unified nested attention. Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer. Arxiv 2021.

        GitHub Repo stars

  8. Fnet: Mixing tokens with fourier transforms. James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. Arxiv 2021.

        GitHub Repo stars

  9. Gated Linear Attention Transformers with Hardware-Efficient Training. Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim. Arxiv 2023.

        GitHub Repo stars

2.3 Hierarchical Transformers

  1. Neural Legal Judgment Prediction in English. Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras. ACL 2019.

        GitHub Repo stars

  2. Hierarchical Neural Network Approaches for Long Document Classification. Snehal Khandve, Vedangi Wagh, Apurva Wani, Isha Joshi, Raviraj Joshi. ICML 2022.

  3. Hi-transformer: Hierarchical interactive transformer for efficient and effective long document modeling. Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang. ACL-IJCNLP 2021.

  4. ERNIE-Sparse: Learning hierarchical efficient transformer through regularized self-attention. Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. Arxiv 2022.

3. Recurrent Transformers

  1. Transformer-XL: Attentive language models beyond a fixed-length context. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. ACL 2019.

        GitHub Repo stars

  2. Compressive Transformers for Long-Range Sequence Modelling. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap. Arxiv 2019.

        GitHub Repo stars

  3. Memformer: The memory-augmented transformer. Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu. Arxiv 2020.

        GitHub Repo stars

  4. ERNIE-Doc: A Retrospective Long-Document Modeling Transformer. SiYu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. ACL-IJCNLP 2021.

  5. Memorizing Transformers. Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy. Arxiv 2022.

        GitHub Repo stars

  6. Recurrent Attention Networks for Long-text Modeling. Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, Qing Li. ACL 2023.

        GitHub Repo stars

  7. RWKV: Reinventing RNNs for the Transformer Era. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stanislaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu. Arxiv 2023.

        GitHub Repo stars

  8. Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model. Yinghan Long, Sayeed Shafayet Chowdhury, Kaushik Roy. Arxiv 2023.

  9. Scaling Transformer to 1M tokens and beyond with RMT. Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev. Arxiv 2023.

  10. Block-Recurrent Transformers. DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, Behnam Neyshabur. Arxiv 2023.

        GitHub Repo stars

  11. TRAMS: Training-free Memory Selection for Long-range Language Modeling. Haofei Yu, Cunxiang Wang, Yue Zhang, Wei Bi. Arxiv 2023.

        GitHub Repo stars

4. State Space Models

  1. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. Arxiv 2023.

        GitHub Repo stars

  2. MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts. Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, Sebastian Jaszczur. Arxiv 2024.

  3. MambaByte: Token-free Selective State Space Model. Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M Rush. Arxiv 2024.

  4. LOCOST: State-Space Models for Long Document Abstractive Summarization. Florian Le Bronnec, Song Duong, Mathieu Ravaut, Alexandre Allauzen, Nancy F. Chen, Vincent Guigue, Alberto Lumbreras, Laure Soulier, Patrick Gallinari. Arxiv 2024.

5. Length Extrapolation

  1. RoFormer: Enhanced Transformer with Rotary Position Embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. Arxiv 2021.

        GitHub Repo stars

  2. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. Ofir Press, Noah A. Smith, Mike Lewis. ICLR 2022.

        GitHub Repo stars

  3. KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky. Arxiv 2022.

  4. Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky, Peter J. Ramadge. ACL 2023.

  5. A Length-Extrapolatable Transformer. Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, Furu Wei. ACL 2023.

        GitHub Repo stars

  6. Randomized Positional Encodings Boost Length Generalization of Transformers. Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, Joel Veness. ACL 2023.

        GitHub Repo stars

  7. The Impact of Positional Encoding on Length Generalization in Transformers. Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy. Arxiv 2023.

        GitHub Repo stars

  8. Focused Transformer: Contrastive Training for Context Scaling. Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, Piotr Miłoś. Arxiv 2023.

        GitHub Repo stars

  9. Extending Context Window of Large Language Models via Positional Interpolation. Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian. Arxiv 2023.

  10. Exploring Transformer Extrapolation. Zhen Qin, Yiran Zhong, Hui Deng. Arxiv 2023.

        GitHub Repo stars

  11. LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models. Chi Han, Qifan Wang, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang. Arxiv 2023.

        GitHub Repo stars

  12. YaRN: Efficient Context Window Extension of Large Language Models. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole. Arxiv 2023.

        GitHub Repo stars

  13. PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training. Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. Arxiv 2023.

        GitHub Repo stars

  14. LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models. Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia. Arxiv 2023.

        GitHub Repo stars

  15. Scaling Laws of RoPE-based Extrapolation. Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, Dahua Lin. Arxiv 2023.

  16. Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky. Arxiv 2023.

        GitHub Repo stars

  17. CoCA: Fusing position embedding with Collinear Constrained Attention for fine-tuning free context window extending. Shiyi Zhu, Jing Ye, Wei Jiang, Qi Zhang, Yifan Wu, Jianguo Li. Arxiv 2023.

        GitHub Repo stars

  18. Structured Packing in LLM Training Improves Long Context Utilization. Konrad Staniszewski, Szymon Tworkowski, Sebastian Jaszczur, Henryk Michalewski, Łukasz Kuciński, Piotr Miłoś. Arxiv 2024.

  19. LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu. Arxiv 2024.

  20. Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache. Bin Lin, Tao Peng, Chen Zhang, Minmin Sun, Lanbo Li, Hanyu Zhao, Wencong Xiao, Qi Xu, Xiafei Qiu, Shen Li, Zhigang Ji, Yong Li, Wei Lin. Arxiv 2024.

  21. Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.

        GitHub Repo stars

  22. Extending LLMs' Context Window with 100 Samples. Yikai Zhang, Junlong Li, Pengfei Liu. Arxiv 2024.

        GitHub Repo stars

  23. E^2-LLM: Efficient and Extreme Length Extension of Large Language Models. Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Chenchen Zhang, Yu Zhang, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, Bo Zheng. Arxiv 2024.

  24. With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation. Y. Wang, D. Ma, D. Cai. Arxiv 2024.

        GitHub Repo stars

  25. Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation. Zhenyu He, Guhao Feng, Shengjie Luo, Kai Yang, Di He, Jingjing Xu, Zhi Zhang, Hongxia Yang, Liwei Wang. Arxiv 2024.

  26. Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens. Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi. Arxiv 2024.

        GitHub Repo stars

6. Long Term Memory

  1. Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System. Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li. Arxiv 2023.

        GitHub Repo stars

  2. MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. Arxiv 2023.

        GitHub Repo stars

  3. Improve Long-term Memory Learning Through Rescaling the Error Temporally. Shida Wang, Zhanglu Yan. Arxiv 2023.

  4. Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models. Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo. Arxiv 2023.

  5. Empowering Working Memory for Large Language Model Agents. Jing Guo, Nan Li, Jianchuan Qi, Hang Yang, Ruiqiao Li, Yuzhen Feng, Si Zhang, Ming Xu. Arxiv 2024.

  6. Evolving Large Language Model Assistant with Long-Term Conditional Memory. Ruifeng Yuan, Shichao Sun, Zili Wang, Ziqiang Cao, Wenjie Li. Arxiv 2024.

  7. Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement. Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo. Arxiv 2024.

7. RAG

  1. Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. Howard Chen, Ramakanth Pasunuru, Jason Weston, Asli Celikyilmaz. Arxiv 2023.

  2. Attendre: Wait To Attend By Retrieval With Evicted Queries in Memory-Based Transformers for Long Context Processing. Zi Yang, Nan Hua. Arxiv 2024.

8. Compress

  1. Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.

        GitHub Repo stars

  2. Compressing Context to Enhance Inference Efficiency of Large Language Models. Yucheng Li, Bo Dong, Chenghua Lin, Frank Guerin. Arxiv 2023.

        GitHub Repo stars

  3. LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models. Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.

        GitHub Repo stars

  4. LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression. Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.

        GitHub Repo stars

  5. System 2 Attention (is something you might need too). Jason Weston, Sainbayar Sukhbaatar. Arxiv 2023.

  6. DSFormer: Effective Compression of Text-Transformers by Dense-Sparse Weight Factorization. Rahul Chand, Yashoteja Prabhu, Pratyush Kumar. Arxiv 2023.

  7. Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. Arxiv 2024.

        GitHub Repo stars

  8. Flexibly Scaling Large Language Models Contexts Through Extensible Tokenization. Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang. Arxiv 2024.

        GitHub Repo stars

9. Benchmark and Evaluation

  1. Long Range Arena: A Benchmark for Efficient Transformers. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler. ICLR 2021.

        GitHub Repo stars

  2. LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation. Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, Minlie Huang. TACL 2022.

        GitHub Repo stars

  3. SCROLLS: Standardized CompaRison Over Long Language Sequences. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy. EMNLP 2022.

        GitHub Repo stars

  4. MuLD: The Multitask Long Document Benchmark. George Hudson, Noura Al Moubayed. LREC 2022.

        GitHub Repo stars

  5. Lost in the Middle: How Language Models Use Long Contexts. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. Arxiv 2023.

        GitHub Repo stars

  6. L-Eval: Instituting Standardized Evaluation for Long Context Language Models. Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu. Arxiv 2023.

        GitHub Repo stars

  7. LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding. Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. Arxiv 2023.

        GitHub Repo stars

  8. Content Reduction, Surprisal and Information Density Estimation for Long Documents. Shaoxiong Ji, Wei Sun, Pekka Marttinen. Arxiv 2023.

  9. BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models. Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen. Arxiv 2023.

        GitHub Repo stars

  10. Retrieval meets Long Context Large Language Models. Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2023.

  11. LooGLE: Long Context Evaluation for Long-Context Language Models. Jiaqi Li, Mengmeng Wang, Zilong Zheng, Muhan Zhang. Arxiv 2023.

        GitHub Repo stars

  12. The Impact of Reasoning Step Length on Large Language Models. Mingyu Jin, Qinkai Yu, Dong Shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du. Arxiv 2024.

  13. DocFinQA: A Long-Context Financial Reasoning Dataset. Varshini Reddy, Rik Koncel-Kedziorski, Viet Dac Lai, Chris Tanner. Arxiv 2024.

  14. LongFin: A Multimodal Document Understanding Model for Long Financial Domain Documents. Ahmed Masry, Amir Hajian. Arxiv 2024.

  15. PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models. Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, Linqi Song. Arxiv 2024.

  16. LongHealth: A Question Answering Benchmark with Long Clinical Documents. Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo JWL. Aerts, Jakob Nikolas Kather, Daniel Truhn, Keno Bressem. Arxiv 2024.

  17. Long-form evaluation of model editing. Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad. Arxiv 2024.

10. Blogs

  1. Extending Context is Hard…but not Impossible†. kaiokendev. 2023.

  2. NTK-Aware Scaled RoPE. u/bloc97. 2023.

  3. The Secret Sauce behind 100K context window in LLMs: all tricks in one place. Galina Alperovich. 2023.

  4. Transformer Upgrade Path 7: Length Extrapolation and Local Attention (Transformer升级之路:7、长度外推性与局部注意力). 苏剑林 (Jianlin Su). 2023.

  5. Transformer Upgrade Path 9: A New Approach to Global Length Extrapolation (Transformer升级之路:9、一种全局长度外推的新思路). 苏剑林 (Jianlin Su). 2023.

  6. Transformer Upgrade Path 12: ReRoPE for Unbounded Extrapolation (Transformer升级之路:12、无限外推的ReRoPE). 苏剑林 (Jianlin Su). 2023.

  7. Transformer Upgrade Path 14: When HWFA Meets ReRoPE (Transformer升级之路:14、当HWFA遇见ReRoPE). 苏剑林 (Jianlin Su). 2023.

  8. Transformer Upgrade Path 15: Key Normalization Helps Length Extrapolation (Transformer升级之路:15、Key归一化助力长度外推). 苏剑林 (Jianlin Su). 2023.

Acknowledgements

Please contact me if your name is missing from the list, and I will add it as soon as possible!
