Not really. The only real example given in the article is hooking someone up to an fMRI machine, collecting data about how the brain responds when it sees a certain image, and then having a computational statistics program (NOT an artificial brain, in any sense) do some number crunching and output the most likely thing the subject is looking at, chosen from things you specifically trained it to recognize beforehand. We learn precisely nothing from this, no medical or computer science advances are made from it, and it doesn't remotely support the title of the article.
Oversimplifying for brevity (there is definitely more nuance to this), the modeling approach is basically:
1. Have a biological brain do a task, record neuronal data + task performance
2. Copy some of those biological features and implement in an ANN
3. Tune the many free parameters in the ANN on task performance
4. Show that the bio-inspired ANN performs better than SOTA and/or shows "signatures" that are more brain-like.
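The steps above can be sketched in miniature. This is my own toy illustration, not any group's actual pipeline: the scalar "task" (y = 2x), the single free parameter, and the random-search tuner are all hypothetical stand-ins for neuronal recordings and a large ANN.

```python
import random

# Toy stand-ins (all hypothetical): a scalar "task" y = 2x and a single
# free parameter w, instead of neuronal recordings and a large ANN.
def task_performance(w, data):
    # Score a candidate model by negative mean squared error on the task.
    return -sum((w * x - y) ** 2 for x, y in data) / len(data)

def tune(data, trials=200, seed=0):
    # Step 3: tune the free parameter purely on task performance,
    # here by naive random search over candidate "synaptic weights".
    rng = random.Random(seed)
    best_w, best_score = 0.0, float("-inf")
    for _ in range(trials):
        w = rng.uniform(-5, 5)
        score = task_performance(w, data)
        if score > best_score:
            best_w, best_score = w, score
    return best_w

data = [(x, 2.0 * x) for x in range(1, 6)]
w = tune(data)  # lands near the true value 2.0
# Step 4 would then compare the tuned model's internal "signatures"
# against the biological recordings, not just its task score.
```

The point of the sketch is the division of labor: biology supplies the architecture and the task, while the free parameters are fit only to performance, and the brain-likeness comparison comes afterward.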
The major criticisms of Yamins' (and similar) groups are that correlation != causation, that correlation != understanding, or that the whole exercise is tautological (bio-inspired ANNs will, of course, look more biological). I'm not sure how seriously this work is taken vs. true first-principles theory.
Yes indeed. Attempts to simulate human neurons have shown that a single neuron can only be matched by a relatively large ANN consisting of several layers. This suggests that human neurons are more computationally complex and capable than those of other animals, and orders of magnitude more complex than the neurons in ANNs.
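As a toy illustration of why one biological neuron can need a multi-layer ANN (my own sketch, not the actual simulation work): a single artificial neuron is just a thresholded weighted sum, so it cannot compute an XOR-like interaction between its inputs, while a hand-wired two-layer net can. The weights below are illustrative assumptions.

```python
def unit(inputs, weights, bias):
    # One ANN "neuron": a thresholded weighted sum (the standard point-neuron model).
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def two_layer_xor(x1, x2):
    # Hand-wired weights (hypothetical, chosen for illustration):
    h1 = unit((x1, x2), (1, 1), -0.5)    # OR
    h2 = unit((x1, x2), (-1, -1), 1.5)   # NAND
    return unit((h1, h2), (1, 1), -1.5)  # AND(h1, h2) == XOR(x1, x2)
```

No single `unit`, whatever its weights, can reproduce the 0,1,1,0 output pattern; real dendrites perform nonlinear interactions of roughly this kind, which is why matching one biological neuron's input/output behavior took networks several layers deep.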
This sounds like a conclusion that can be reached only when we understand real brains.
For me the most interesting parallel is from (I think) GANs and other generative AIs. This is similar to the idea in psychology that we are really doing a lot of projection, with some correction based on sensory input, as opposed to actually perceiving everything around us.
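A minimal sketch of that projection-plus-correction idea (my own toy model, not from the article; the gain value is an arbitrary assumption):

```python
# Percept = prior prediction, nudged toward the sensory signal rather
# than replaced by it. The gain parameter is an arbitrary assumption.
def perceive(prior, observations, gain=0.3):
    estimate = prior
    trajectory = [estimate]
    for obs in observations:
        estimate += gain * (obs - estimate)  # correct by a fraction of the error
        trajectory.append(estimate)
    return trajectory

traj = perceive(prior=0.0, observations=[1.0] * 5)
# Early percepts stay close to the prior; the signal only dominates over time.
```

With a low gain, what you "see" early on is mostly your prediction, which is the projection-heavy picture the psychology analogy suggests.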
Also, real synapses are one of the most abundant features of real brains and are the direct inspiration for NN weights. I'm not sure the artificial brains help understand real ones, but they do seem to validate some ideas we have about real ones.
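The synapse/weight analogy is often cashed out as a Hebbian-style update rule. A toy sketch with made-up numbers, not a claim about real plasticity:

```python
# "Cells that fire together wire together": strengthen a weight only
# when pre- and post-synaptic activity coincide. Values are made up.
def hebbian_step(weight, pre, post, lr=0.1):
    return weight + lr * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_step(w, pre, post)
# Only the two coincident-activity steps changed w.
```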
As a computational neuroscientist, I find myself both terribly disappointed and unfortunately reminded of Gell-Mann amnesia.
Can someone explain to me why "help understand" is grammatically correct?
Why wouldn't help be followed by a gerund, like on https://www.ef.com/wwen/english-resources/english-grammar/ve... ?
The left hand in that photo should be a humanoid robot hand. Didn’t actually read the article
This article reminded me of looking at clouds, and seeing shapes. Or constellations...
Scientific metaphors have been useful in all fields of science. That doesn't mean they are accurate or anything; they just help you think about a thing in a better way.