Football analysis system using computer vision and machine learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Advanced software for analysing player performance and team tactics is now widely used in TV sports coverage, enabling pundits and coaches to provide detailed insights during or after matches. While systems like Hawk-Eye rely on high-frame-rate cameras and multi-view triangulation, our work presents a cost-effective alternative for tracking players, officials, and the ball in standard frame-rate soccer footage. Using YOLOv11, an object detection model in the YOLO family (whose original architecture was inspired by the GoogLeNet convolutional neural network) and fine-tuned through open-source transfer learning, our system reliably distinguishes between teams, referees, and the ball. By incorporating transformational geometry, optical flow, and perspective transformation, we compensate for camera motion and generate player statistics such as speed and distance covered. Though less sophisticated than broadcast-grade systems, our method performs well on professional match footage, making it viable for lower-tier clubs, semi-professional teams, or fan channels with limited technological resources.
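The speed and distance statistics described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed 3×3 homography `H` mapping image pixels to pitch coordinates in metres (in practice estimated from known pitch landmarks, e.g. with OpenCV's `cv2.getPerspectiveTransform`), and the scale factor and detection coordinates are invented for the example.

```python
import numpy as np

# Hypothetical homography from image pixels to pitch metres.
# Here a pure scaling (1 px = 0.02 m) stands in for a real
# perspective transform estimated from pitch landmarks.
H = np.array([
    [0.02, 0.00, 0.0],
    [0.00, 0.02, 0.0],
    [0.00, 0.00, 1.0],
])

def to_pitch(pt_px):
    """Map a pixel coordinate (x, y) to pitch coordinates in metres."""
    v = H @ np.array([pt_px[0], pt_px[1], 1.0])
    return v[:2] / v[2]  # divide out the homogeneous coordinate

def speed_kmh(p0_px, p1_px, dt_s):
    """Speed between two successive detections of a player, in km/h."""
    dist_m = np.linalg.norm(to_pitch(p1_px) - to_pitch(p0_px))
    return dist_m / dt_s * 3.6  # m/s -> km/h

# Example: a detection that moves 10 px between frames 0.04 s apart
# (25 fps footage) under the assumed 0.02 m/px scale.
print(speed_kmh((120, 300), (130, 300), 0.04))
```

Summing the per-frame `dist_m` values over a match yields the distance-covered statistic; camera-motion compensation (via optical flow) would be applied to the pixel coordinates before this mapping.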
Original language: English
Title of host publication: 11th International Conference on Mathematics in Sport
Subtitle of host publication: MathSport International 2025
Editors: Dries Goossens
Place of Publication: Luxembourg
Publisher: MathSport International
Pages: 117-123
Number of pages: 7
ISBN (Electronic): 9789083581408
Publication status: Published - 6 Jun 2025
Event: 11th International Conference on Mathematics in Sport: MathSport International 2025 - University of Luxembourg, Luxembourg
Duration: 4 Jun 2025 - 6 Jun 2025

Publication series

Name: MathSport International Conference

Conference

Conference: 11th International Conference on Mathematics in Sport
Country/Territory: Luxembourg
Period: 4/06/25 - 6/06/25
