
Digital Discovery is delighted to welcome papers for its latest themed collection on General purpose models: Large language models and beyond, led by Dr N M Anoop Krishnan (IIT Delhi), Dr Francesca Grisoni (Eindhoven), and Dr Kevin Maik Jablonka (Friedrich Schiller Universität Jena and Helmholtz Center Berlin). If you do not work directly in this field, please feel free to forward this call for papers to any of your colleagues who might be interested in contributing to this themed collection.
Contributions are welcome in both the theory and applications of general-purpose models (GPMs) – LLMs and beyond. We define a GPM as a model pre-trained on a broad, heterogeneous corpus spanning multiple data modalities (e.g., text, images, graphs) or representations (e.g., common names, 3D coordinates, molecular images). GPMs can be applied to a wide spectrum of downstream tasks – spanning different objectives (classification, regression, generation, reasoning), input formats, and domains (from NLP to chemistry and vision) – with little or no task-specific fine-tuning.
We are particularly interested in work that deepens our understanding of what enables broad capability and generalization, including rigorous benchmarking, careful experimental design, and principled analyses of model and agent behaviour. We will consider methods ranging from near-term, practical systems to more conceptual advances, including architectures that move beyond today’s dominant transformer paradigm.
We encourage submissions on topics including, but by no means limited to:
- Novel benchmarks and evaluation protocols for general-purpose capabilities (including robustness, generalization, and cross-domain transfer)
- Careful ablation studies that yield actionable insight into what drives performance, scaling, and emergent behaviours
- Novel training approaches, objectives, curricula, and data strategies (including alignment- and efficiency-oriented methods)
- Agentic systems and setups, including well-controlled studies of tool use, planning, memory, autonomy, and safety/reliability under deployment constraints
- Multimodal GPMs, spanning text, images, graphs, 3D/structured representations, and domain-specific modalities
- Architectures beyond transformers, such as state-space models, diffusion-based text generation, and other emerging modeling paradigms
The deadline for submissions is 31 August 2026.
If you would like to contribute to this collection, please let us know by email at digitaldiscovery-rsc@rsc.org, and we will set up a submission link for you to contribute your article.
Promotion of the collection is scheduled for late 2026, with articles published online as soon as they are accepted. Authors are welcome to submit original research in the form of a Communication or Full Paper. Authors who would like to contribute a Review article should contact the Editorial Office with their proposal. The Editorial Office reserves the right to check the suitability of submissions for both the journal and the scope of the collection, and inclusion of accepted articles in the final themed collection is not guaranteed.
You can find out more detailed information about our journal scope and our valued editorial board members on our website. If you have any questions about the journal or the collection, please contact us at the above address.