Does it have to use Whisper? If so, can't you just run it on that server instead of the Mac? https://github.com/openai/whisper/discussions/1463
If it doesn't, there are a bunch of other speech recognition APIs. Most of them use older tech but might be good enough: https://www.gladia.io/blog/openai-whisper-vs-google-speech-t...
Personally I found Otter.ai works really well for the transcription part, but they don't have an API: https://otter.ai
You can also just upload them all to YouTube in a private playlist and it'll automatically transcribe them for you.
This is a completely shameless plug, but I just published some documentation on automatically building Whisper inference engines with TensorRT-LLM, which has the batch inference you're looking for: https://docs.baseten.co/performance/examples/whisper-trt
We use Whisper Large on NLP Cloud (https://nlpcloud.com/home/playground/asr). It works very well and it's simple to set up, in my opinion. If you have a batch to process, you could simply subscribe to their pay-as-you-go plan for a couple of weeks or months, maybe?
Consider "Whisper Large V3" on console.groq.com; imo it's fast, reliable, and cheap ($0.03/hour transcribed).
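For reference, Groq exposes an OpenAI-compatible endpoint for this, so calling it is just a multipart POST. Here's a stdlib-only sketch; the endpoint path and field names are taken from the OpenAI audio transcription API shape, so double-check them against the console.groq.com docs before relying on this:

```python
# Sketch: transcribe one file via Groq's OpenAI-compatible audio endpoint.
# Endpoint URL and form field names are assumptions based on the OpenAI
# transcription API; verify against the official Groq docs.
import json
import os
import urllib.request
import uuid

GROQ_URL = "https://api.groq.com/openai/v1/audio/transcriptions"

def build_request(audio_path, api_key, model="whisper-large-v3"):
    """Build a multipart/form-data request for the transcription endpoint."""
    boundary = uuid.uuid4().hex
    with open(audio_path, "rb") as f:
        audio = f.read()
    parts = []
    # Plain text fields: model name and response format.
    for name, value in (("model", model), ("response_format", "json")):
        parts.append(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    # The audio file itself.
    parts.append(
        f'--{boundary}\r\nContent-Disposition: form-data; name="file"; '
        f'filename="{os.path.basename(audio_path)}"\r\n'
        f'Content-Type: application/octet-stream\r\n\r\n'.encode()
        + audio + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return urllib.request.Request(
        GROQ_URL,
        data=b"".join(parts),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )

# Usage (needs a real key and audio file):
# req = build_request("clip.mp3", os.environ["GROQ_API_KEY"])
# text = json.loads(urllib.request.urlopen(req).read())["text"]
```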
I transcribed between 3,000 and 4,000 short videos (10s-30s each) every day for almost 2 years, for fun. The setup: a cheap Linux desktop with second-hand ex-mining RTX 3060 and 3080 Ti cards, connected over my home network, running basic Gradio and faster-whisper so they could be exposed as a public API and called from the corporate network. Relatively easy, and much cheaper than the commercial offerings at the time. These GPUs are overpowered for the task: each day they only spent 1 to 2 hours on actual encoding, it's that quick, and that's with the biggest Whisper model plus audio preprocessing and VAD to improve the success rate.
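The core of a setup like that is a short batch loop over faster-whisper with VAD enabled. This is a minimal sketch, not the commenter's exact config; the model name, directory layout, and Gradio wiring are my assumptions. The model is passed in as a parameter so the loop itself doesn't depend on a GPU being present:

```python
# Sketch: batch transcription in the style described above, using
# faster-whisper's transcribe() API with VAD filtering. Model choice
# and file glob are assumptions for illustration.
import glob
import os

def transcribe_batch(model, audio_dir):
    """Transcribe every .mp3 in a directory.
    `model` must expose faster-whisper's transcribe() interface."""
    results = {}
    for path in sorted(glob.glob(os.path.join(audio_dir, "*.mp3"))):
        # vad_filter=True drops silent stretches before decoding,
        # which is the "VAD to improve success rate" trick above.
        segments, _info = model.transcribe(path, vad_filter=True)
        results[path] = " ".join(seg.text.strip() for seg in segments)
    return results

# With the real library (needs a CUDA GPU for these settings):
# from faster_whisper import WhisperModel
# model = WhisperModel("large-v2", device="cuda", compute_type="float16")
# print(transcribe_batch(model, "videos/"))
#
# Exposing it on the home network with Gradio is one more step:
# import gradio as gr
# gr.Interface(
#     fn=lambda f: " ".join(
#         s.text for s in model.transcribe(f, vad_filter=True)[0]),
#     inputs=gr.Audio(type="filepath"),
#     outputs="text",
# ).launch(server_name="0.0.0.0")
```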