Volume No.: 9 | Issue No.: 2
Article Type: Scholarly Article
Authors: Mr. Pradip S. Ingle, Avishkar A. Jadhao, Pranav V. Dhande, Praniket P. Kolte, Prem R. Kandarkar
Published Date: June 2025
Publisher: Journal of Artificial Intelligence and Cyber Security (JAICS)

References

[1] R. Vavekanand and K. Sam, "Llama 3.1: An In-Depth Analysis of the Next Generation Large Language Model," Datalink Research and Technology Lab, 2024.
[2] H. Touvron et al., "LLaMA: Open and Efficient Foundation Language Models," arXiv preprint arXiv:2302.13971, 2023. [Online]. Available: https://arxiv.org/abs/2302.13971.
[3] A. Radford et al., "Language Models are Unsupervised Multitask Learners," OpenAI, 2019. [Online]. Available: https://cdn.openai.com/researchpreprints/language_models_are_unsupervised_multitask_learners.pdf.
[4] M. Chen et al., "Evaluating Large Language Models Trained on Code," OpenAI, 2021. [Online]. Available: https://openai.com/research/language-models-trained-on-code.
[5] T. Brown et al., "Language Models are Few-Shot Learners," in Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020. [Online]. Available: https://arxiv.org/abs/2005.14165.
[6] A. Radford et al., "Learning Transferable Visual Models From Natural Language Supervision," in Proceedings of the 38th International Conference on Machine Learning, 2021. [Online]. Available: https://arxiv.org/abs/2103.00020.
[7] Z. Yang et al., "XLNet: Generalized Autoregressive Pretraining for Language Understanding," in Advances in Neural Information Processing Systems, vol. 32, 2019. [Online]. Available: https://arxiv.org/abs/1906.08237.
[8] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," in Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019. [Online]. Available: https://arxiv.org/abs/1810.04805.
[9] S. Ruder et al., "Transfer Learning in Natural Language Processing," in Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Tutorials, 2019. [Online]. Available: https://arxiv.org/abs/1903.11260.
[10] D. Hendrycks et al., "Measuring Massive Multitask Language Understanding," in International Conference on Learning Representations, 2021. [Online]. Available: https://arxiv.org/abs/2009.03300.