This project implements a GPU-accelerated evaluator for mathematical expressions using Numba CUDA.
Expressions are parsed on the CPU and converted into Reverse Polish Notation (RPN) encoded as integers, which allows them to be efficiently interpreted on the GPU. The program evaluates a large number of expressions over a dataset (loaded from a CSV file), computes the Mean Squared Error (MSE) for each expression, and identifies the best-performing expression. The evaluation is fully parallelized on the GPU using a single CUDA kernel and a 2D grid configuration.
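To illustrate the encoding idea, here is a minimal CPU reference sketch of how an integer-encoded RPN program can be interpreted with a stack and scored by MSE. The opcode values, the constant-table convention, and the function names are assumptions for illustration, not the project's actual encoding; the real evaluator runs this loop inside a Numba CUDA kernel over a 2D grid.

```python
# Hypothetical sketch: integer-encoded RPN evaluation + MSE (CPU reference).
# Opcode values and the constant-table convention are assumptions, not the
# project's actual encoding.
import numpy as np

# Assumed scheme: negative tokens are operators, non-negative tokens index
# a constant table; OP_VAR pushes the current input value x.
OP_ADD, OP_SUB, OP_MUL, OP_VAR = -1, -2, -3, -4

def eval_rpn(code, consts, x):
    """Evaluate one integer-encoded RPN program for a single input x."""
    stack = []
    for token in code:
        if token == OP_VAR:
            stack.append(x)
        elif token == OP_ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == OP_SUB:
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif token == OP_MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:  # non-negative token: index into the constant table
            stack.append(consts[token])
    return stack[-1]

def mse(code, consts, xs, ys):
    """Mean squared error of one expression over the dataset."""
    preds = np.array([eval_rpn(code, consts, x) for x in xs])
    return float(np.mean((preds - ys) ** 2))

# Example: the expression x*x + 1 in RPN is [x, x, *, 1, +]
code = [OP_VAR, OP_VAR, OP_MUL, 0, OP_ADD]
consts = [1.0]
xs = np.array([0.0, 1.0, 2.0])
ys = xs * xs + 1.0
print(mse(code, consts, xs, ys))  # 0.0 for the exact expression
```

On the GPU, one natural mapping for the 2D grid is one axis over expressions and the other over data rows, with each thread evaluating one (expression, row) pair before the per-expression errors are reduced into an MSE.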
- Python 3.10+
- NVIDIA GPU with CUDA support
- CUDA Toolkit
Before running the program, execute the setup script:

```bash
./setup.sh
```

This script:
- Creates a Python virtual environment in the `.venv` directory
- Installs all required dependencies from `requirements.txt`
Note: You only need to run this once.
After the environment is set up, run the program using:
```bash
./run.sh
```

This script runs `main.py` inside the Python virtual environment.
Note: The scripts above expect a Linux environment. If you are on Windows, create a Python virtual environment and install the dependencies manually; then run `python src/main.py`.