This project aims to develop an automated grading framework for the Statics course at a Canadian university, which serves over 1,500 students annually. Grading these complex assignments, which often involve handwritten responses and diagrams, currently requires the efforts of over 26 teaching assistants (TAs). The manual system faces challenges including grading inconsistencies, delays in providing feedback, and significant resource strain. To address these limitations, the proposed framework integrates Optical Character Recognition (OCR) and Large Language Models (LLMs) to automate the grading process. The framework begins with anonymization and preprocessing of student submissions, followed by digitization using OCR software such as Mathpix to convert handwritten content into machine-readable formats. The digitized files are evaluated on a grading platform that employs LLMs guided by predefined marking schemes and step-by-step evaluation prompts. The system generates grades, detailed feedback, and consolidated reports for each submission. Preliminary testing demonstrated significant reductions in grading variability and time while maintaining strong alignment with human-assigned grades. The findings show that the Qwen LLM achieves higher consistency across multiple evaluation metrics, whereas GPT-4 provides superior accuracy in specific scenarios. This approach enhances the scalability and efficiency of grading practices, making it a promising solution for large-scale educational settings.
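The rubric-guided evaluation step described above can be sketched in minimal form. This is an illustrative sketch only, not the paper's implementation: the `RubricStep`, `build_grading_prompt`, and `consolidate` names are hypothetical, and the actual system would send the assembled prompt to an LLM (e.g., Qwen or GPT-4) rather than score locally.

```python
from dataclasses import dataclass

@dataclass
class RubricStep:
    """One line of a predefined marking scheme (hypothetical structure)."""
    description: str
    max_points: float

def build_grading_prompt(problem: str, ocr_transcript: str,
                         rubric: list[RubricStep]) -> str:
    """Assemble a step-by-step evaluation prompt from the marking scheme.

    In the described pipeline, `ocr_transcript` would come from OCR
    software such as Mathpix applied to the anonymized submission.
    """
    lines = [
        f"Problem: {problem}",
        "Grade the student's solution against each rubric step.",
        "For each step, award between 0 and the maximum points.",
        "Student solution (OCR transcript):",
        ocr_transcript,
        "Rubric:",
    ]
    for i, step in enumerate(rubric, 1):
        lines.append(f"{i}. {step.description} (max {step.max_points} pts)")
    return "\n".join(lines)

def consolidate(step_scores: list[float], rubric: list[RubricStep]) -> float:
    """Clamp per-step scores (e.g., parsed from an LLM reply) to the
    rubric maxima and total them for the consolidated report."""
    total = 0.0
    for score, step in zip(step_scores, rubric):
        total += max(0.0, min(score, step.max_points))
    return total

if __name__ == "__main__":
    rubric = [
        RubricStep("Draw a correct free-body diagram", 2.0),
        RubricStep("Apply equilibrium equations", 3.0),
    ]
    prompt = build_grading_prompt(
        "Find the reaction at support A.", "R_A = 10 N upward", rubric)
    print(consolidate([3.0, 2.5], rubric))  # over-awarded step is clamped
```

Grading each rubric step separately, then consolidating, mirrors the step-by-step prompting the framework uses to reduce grading variability across submissions.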