Traditional portfolio models treat the relationships between financial assets as roughly constant over time. In steady markets, that assumption works well enough. In a crisis, it collapses completely. As Dr. Sanjay Agal puts it: "Even as a good researcher with 100 years of experience, if I am designing my portfolio in 2019, why would I think there will be a COVID in the future? Anything which is rare is very difficult to predict in AI."
That is exactly the problem the paper set out to address: can a machine learning framework detect regime shifts in financial markets and adjust portfolio weights before the damage arrives?
The architecture layers five components: LSTM networks for volatility forecasting, a regime-switching mechanism for market state transitions, risk-budgeting for dynamic weight adjustment, sparse attention for computational scalability, and SHAP-based attribution so that every decision is explainable to a human risk manager.
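The paper itself does not publish implementation code, but the risk-budgeting idea can be illustrated with a toy sketch: forecast volatilities (in the paper, from the LSTM) set inverse-volatility weights, and a regime signal scales overall risky exposure down in stressed states. The function name, the inverse-volatility rule, and the regime scaling factor below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def risk_budget_weights(forecast_vol, regime_risk_scale=1.0):
    """Toy inverse-volatility risk budgeting (illustrative only).

    Assets with higher forecast volatility receive smaller weights;
    regime_risk_scale shrinks gross exposure when the detected market
    regime is stressed.
    """
    inv_vol = 1.0 / np.asarray(forecast_vol, dtype=float)
    weights = inv_vol / inv_vol.sum()       # weights sum to 1 before scaling
    return weights * regime_risk_scale      # de-risk in a stressed regime

# Calm regime: full exposure, split by inverse volatility.
calm = risk_budget_weights([0.10, 0.20, 0.40], regime_risk_scale=1.0)

# Stressed regime: same relative weights, half the gross exposure.
stressed = risk_budget_weights([0.10, 0.20, 0.40], regime_risk_scale=0.5)
```

In this sketch, a regime switch does not re-rank the assets; it cuts the whole book's risk, which mirrors the behaviour described above of reducing equity exposure ahead of the 2020 drawdown.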
On out-of-sample data from 2017 to 2022, the Sharpe ratio hit 1.38, a 55% improvement over traditional risk parity and 23% over contemporary ML benchmarks. During high-stress periods, the improvement over classical methods was 187%. Maximum drawdown was cut by 41%. And during February and March 2020, the model started reducing equity exposure two full weeks before the market hit its bottom. No human overrode it. No human told it to.
The paper was co-authored with Krishna Raulji from the same department and Niyati Dhirubhai Odedra from Dr. Agal's previous institution. Peer review took six rounds over roughly a year; the first three rounds focused entirely on novelty, the absolute requirement at Q1 level. After publication, emails arrived from MIT and from institutions in California. Not congratulations. Collaboration proposals. "That mail is my self-concept," Dr. Agal says. "That is the real reward."
The Synthetic Learner Dataset: Protecting Privacy While Enabling Research
The second confirmed paper in Scientific Reports addresses a problem that is becoming more urgent as educational institutions collect more student data. Learning analytics requires large datasets. But student data is sensitive. The paper, titled A Privacy Preserving Synthetic Learner Dataset for Learning Analytics in Technology Enhanced Higher Education (DOI: 10.1038/s41598-026-44990-8), proposes a method for generating synthetic datasets that retain the statistical properties necessary for research while ensuring that no individual student can be identified. This work connects directly to Dr. Sanjay Agal's broader educational AI research and the student prediction model that forms the third output.
The Student Prediction Model: 20,000 Profiles, 85 Variables, Four Years
This project began in 2022 and emerged from a practical problem that any large department faces. Dr. Agal’s department manages over 1,000 B.Tech students. The number of high-quality external training slots available is far smaller. Who gets those slots? The answer, he insists, cannot come from any individual person. It has to come from data.
The model tracks 85 variables per student: LMS interaction patterns, assessment scores, attendance, faculty feedback, communication skills, and prior academic history. Students are classified at the point of admission and the classification updates annually based on actual performance. Critically, the system is bidirectional.
A student in the B tier can move to A through genuine improvement. An A-tier student who stops working can drop to B. "Every year we will give a chance," he says. The model was accepted for publication on 16 March 2026 and is currently under review at Scientific Reports. It took four years to build. He mentions this timeline in passing, as though sustained patience were ordinary.
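The bidirectional tier movement described above can be sketched as a simple annual update rule. The tier labels, thresholds, and function below are hypothetical illustrations of the mechanism, not details taken from the published model, which draws on 85 variables rather than a single score.

```python
def update_tier(current_tier, performance_score, promote_at=75, demote_at=50):
    """Hypothetical annual tier update (thresholds are illustrative).

    Promotion and demotion are both possible: improvement moves a
    student up one tier, sustained decline moves them down one.
    """
    tiers = ["C", "B", "A"]
    i = tiers.index(current_tier)
    if performance_score >= promote_at and i < len(tiers) - 1:
        return tiers[i + 1]    # genuine improvement moves the student up
    if performance_score < demote_at and i > 0:
        return tiers[i - 1]    # declining performance moves the student down
    return current_tier        # otherwise the tier is unchanged

update_tier("B", 80)   # a B-tier student who improves is promoted to "A"
update_tier("A", 40)   # an A-tier student who stops working drops to "B"
```

The point of the rule is that no assignment is permanent: every annual review re-evaluates the student against current performance, which is what makes the system data-driven rather than reputation-driven.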
FAQ: Research at Parul University
What journal are these papers published in?
Scientific Reports, a Springer Nature journal. Q1 ranking, impact factor 3.9, and the third most-cited scientific journal in the world. Three research outputs from Dr. Sanjay Agal at Parul University are associated with this journal.
How is the research funded?
Parul University funds every Q1 paper at approximately Rs 3 lakh per publication; not one rupee comes from the researcher's pocket. Active project budgets include Rs 1.25 crore, more than Rs 6 crore, and Rs 50 lakh (ISRO). Total university research funding exceeds Rs 25 crore.