Stars
- 2021 takeout free-meal (霸王餐) mini-program, H5, and WeChat Official Account system source code | Meituan/Ele.me free-meal system with fan-fission gameplay, source download - 2021 takeout CPS mini-program project | takeout red-packet CPS system with friend-rebate commission distribution | Ele.me/Meituan alliance source - Takeout CPS distribution-and-rebate backend source code, built on the Laravel 8 framework.
Official Account: OMGA. Baidu Cloud SVIP, free Youku VIP, Mango TV VIP, Bilibili premium membership, Tencent VIP, Baidu Netdisk SVIP, unthrottled proxy nodes, freebies and deal tips, SS, SSR, V2Ray, Baidu Netdisk resource updates, popular TV series and movies on Baidu Netdisk, daily shares of the latest Baidu Netdisk SVIP, Xunlei Super VIP, iQiyi VIP, Youku VIP, Bilibili premium membership, Baidu Wenku VIP, NetEase Cloud Music vinyl VIP, Ximalaya VIP, 千图网 VIP, 包图网 VIP, 摄图网 V…
A curated collection of paid-knowledge materials with download instructions, covering writing, side hustles, Official Accounts, e-commerce, short video, investing, traffic acquisition, operations, communities, Zhihu, monetization, and more.
Public library of space documents and tutorials
[CCS'24] A dataset consisting of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
Open-source, accurate, and easy-to-use video speech recognition and clipping tool, with LLM-based AI clipping integrated.
llama3 implementation one matrix multiplication at a time
Awesome-LLM: a curated list of Large Language Models
Enhance Your English Writing for Science Research - English materials for writing papers
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
Self-hosted alternative to 12ft.io and 1ft.io; bypass paywalls with a proxy ladder and remove CORS headers from any URL.
Read medium.com and Medium-based articles using Google's web cache.
Bypass Paywalls web browser extension for Chrome and Firefox.
Word-learning and English muscle-memory training software designed for keyboard workers
A survey of research and applications in intelligent question answering by the Natural Language Processing research team at Beihang University's Beijing Advanced Innovation Center for Big Data and Brain Computing, covering knowledge-graph question answering (KBQA), text-based question answering (TextQA), table-based question answering (TableQA), visual question answering (VisualQA), and machine reading comprehension (MRC), with academic and industry summaries for each task.
A collection of papers on the topic of "Computer Vision in the Wild (CVinW)"
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models
To speed up LLM inference and enhance LLMs' perception of key information, compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
A unified evaluation framework for large language models
MiniCPM-2B: An end-side LLM outperforming Llama 2-13B.