Abstract
In this literature review, we examine several deep learning algorithms in the context of biological plausibility and argue that a backprop-like algorithm is the most likely candidate for how learning operates in the brain. Although there are numerous difficulties in implementing the backpropagation algorithm in neural circuitry, slight variations of the algorithm have been found to circumvent biological constraints, and seemingly unrelated algorithms can often be related to it theoretically. After providing general background on learning in biology, in AI, and at their intersection, we examine the literature on feedback alignment, target propagation, and equilibrium propagation. Ultimately, we acknowledge that there is no true consensus on which learning algorithm the brain actually uses, but we suspect that the answer is backprop-like in nature.
This work is licensed under a Creative Commons Attribution 4.0 International License.