Seems like a waste of money to me.
Training a neural network is mostly matrix multiplications. Why would you need custom silicon to take advantage of the software?
I suspect a GPU is already optimized for this operation all the way up to the software level, so I can't imagine Apple will get performance benefits.
Maybe it's cheaper or gives them better access to the hardware, but I don't think performance is at play here.
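
To make the "just matrix multiplications" point concrete, here is a minimal sketch (NumPy, all names and shapes illustrative, a single dense layer with a squared-error loss rather than any particular framework) where both the forward and backward passes reduce to one matmul each:

    import numpy as np

    # Sketch: one dense layer trained by gradient descent.
    # The core work in each step is two matrix multiplications.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 128))       # batch of inputs
    Y = rng.standard_normal((64, 10))        # targets
    W = rng.standard_normal((128, 10)) * 0.01

    lr = 1e-2
    for step in range(100):
        pred = X @ W                         # forward pass: a matmul
        err = pred - Y
        grad = X.T @ err / len(X)            # backward pass: another matmul
        W -= lr * grad                       # parameter update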