References

[hu2021lora] Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint arXiv:2106.09685. https://arxiv.org/abs/2106.09685

[dettmers2023qlora] Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs. arXiv preprint arXiv:2305.14314. https://arxiv.org/abs/2305.14314

[sheng2023slora] Sheng, Y., Cao, S., Li, D., Hooper, C., Lee, N., Yang, S., Chou, C., Zhu, B., Zheng, L., Keutzer, K., Gonzalez, J. E., & Stoica, I. (2023). S-LoRA: Serving Thousands of Concurrent LoRA Adapters. arXiv preprint arXiv:2311.03285. https://arxiv.org/abs/2311.03285

[ha2016hypernetworks] Ha, D., Dai, A., & Le, Q. V. (2016). HyperNetworks. arXiv preprint arXiv:1609.09106. https://arxiv.org/abs/1609.09106

[prabhakar2024lorasoups] Prabhakar, A., Li, Y., Narasimhan, K., Kakade, S., Malach, E., & Jelassi, S. (2024). LoRA Soups: Merging LoRAs for Practical Skill Composition Tasks. arXiv preprint arXiv:2410.13025. https://arxiv.org/abs/2410.13025

[pink2025episodic] Pink, M., Wu, Q., Vo, V. A., Turek, J., Mu, J., Huth, A., & Toneva, M. (2025). Position: Episodic Memory is the Missing Piece for Long-Term LLM Agents. arXiv preprint arXiv:2502.06975. https://arxiv.org/abs/2502.06975

[cook2025pbb] Cook, J., Sapora, S., Ahmadian, A., Khan, A., Rocktäschel, T., Foerster, J., & Ruis, L. (2025). Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs. arXiv preprint arXiv:2506.18777. https://arxiv.org/abs/2506.18777

[charakorn2025t2l] Charakorn, R., Cetin, E., Tang, Y., & Lange, R. T. (2025). Text-to-LoRA: Instant Transformer Adaption. arXiv preprint arXiv:2506.06105. https://arxiv.org/abs/2506.06105

[charakorn2026doc2lora] Charakorn, R., Cetin, E., Uesaka, S., & Lange, R. T. (2026). Doc-to-LoRA: Learning to Instantly Internalize Contexts. arXiv preprint arXiv:2602.15902. https://arxiv.org/abs/2602.15902

[liu2026shine] Liu, Y., Wang, X., Mao, Y., Gelbery, Y., Maron, H., & Zhang, M. (2026). SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass. arXiv preprint arXiv:2602.06358. https://arxiv.org/abs/2602.06358

[zhang2025orthogonality] Zhang, A., Ding, X., Wang, H., McDonagh, S., & Kaski, S. (2025). Rethinking Inter-LoRA Orthogonality in Adapter Merging: Insights from Orthogonal Monte Carlo Dropout. arXiv preprint arXiv:2510.03262. https://arxiv.org/abs/2510.03262

[zou2026merging] Zou, J. (2026). Adapter Merging Reactivates Latent Reasoning Traces: A Mechanism Analysis. arXiv preprint arXiv:2601.18350. https://arxiv.org/abs/2601.18350