I have worked with both the TensorFlow C++ API and the TensorFlow Python API. While the TF Python API is basically just a wrapper around the TF C++ API, it adds a lot on top, e.g. many higher-level functions you would want to use to define neural networks; if you know PyTorch, think of torch.nn. Most crucially, calculating the gradients, i.e. doing backprop/autograd, was also implemented purely in Python. Even the gradient definition for each operation was done in Python; the C++ core did not know anything about this. (I'm not exactly sure how much this has changed with eager mode and gradient tapes, though.)
So that makes implementing training with only the C++ API quite a big task: you would first need to define the gradient for every operation, and then implement backprop/autograd yourself.
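To give a sense of what that means, here is a rough sketch in plain C++ (none of this is actual TensorFlow API; the op and function shapes are made up purely for illustration) of the two pieces you would have to hand-roll for every op: a gradient definition, and the backward pass that applies them in reverse order.

    // Illustrative only -- not TensorFlow API. For every op you want to
    // train through you need a forward function and a matching gradient,
    // plus the bookkeeping that chains them in reverse.
    #include <iostream>

    struct SquareOp {
        static double forward(double x) { return x * x; }
        // dL/dx = dL/dy * dy/dx, with dy/dx = 2x for y = x^2
        static double backward(double x, double dy) { return dy * 2.0 * x; }
    };

    struct ScaleOp {
        static double forward(double x, double a) { return a * x; }
        // dy/dx = a for y = a * x
        static double backward(double a, double dy) { return dy * a; }
    };

    int main() {
        double x = 3.0;
        // Forward pass: z = 0.5 * x^2
        double y = SquareOp::forward(x);
        double z = ScaleOp::forward(y, 0.5);
        // Backward pass, in reverse order, starting from dL/dz = 1
        double dy = ScaleOp::backward(0.5, 1.0);
        double dx = SquareOp::backward(x, dy);
        std::cout << "z=" << z << " dx=" << dx << "\n";  // z=4.5, dx=3
        return 0;
    }

Now multiply that by every op TF supports, and that is roughly the size of the task.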
If this had come five years ago, perhaps TensorFlow could've stood a chance against PyTorch. Switching from TensorFlow to PyTorch was such a breath of fresh air; I definitely could have used something like this back then.
This would have been amazing years ago. At this point, the terrible ergonomics of TensorFlow have moved the industry toward PyTorch, and serving PyTorch models from C++ has a much better story (whether in-process or via a serving framework like Triton).
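For the in-process route, something along these lines with libtorch is roughly all it takes (the model path and input shape here are just placeholders; this assumes you've already exported a TorchScript model from Python):

    #include <torch/script.h>
    #include <iostream>
    #include <vector>

    int main() {
        // Load a TorchScript model exported from Python
        // (the file name is a placeholder).
        torch::jit::script::Module module = torch::jit::load("model.pt");
        module.eval();

        // Dummy input; the shape is an assumption about the model.
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::ones({1, 3, 224, 224}));

        // Run inference and read the result back as a tensor.
        at::Tensor output = module.forward(inputs).toTensor();
        std::cout << output.sizes() << std::endl;
        return 0;
    }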
Why? TensorFlow has been abandoned by Google. Open source uses PyTorch, and internally at Google all new model development is done in JAX. The only parts of TensorFlow still in real use are TensorFlow Serving and tf.data.
"Modern C++" is a phrase that makes me intensely wary.
Love the ergonomics: https://github.com/rdabra/txeo/blob/main/examples/txeo_predi...