Neural Nets, Connectionism, Perceptrons, etc.
11 Apr 2022 12:13
Old notes from c. 2000
I'm mostly interested in them as a means of machine learning or statistical inference. I am particularly interested in their role as models of dynamical systems (via recurrent nets, generally), and as models of transduction. I need to understand better how the analogy to spin glasses works, but then, I need to understand spin glasses better too.
The arguments that connectionist models are superior, for purposes of cognitive science, to more "symbolic" ones I find unconvincing. (Saying that they're more biologically realistic is like saying that cars are better models of animal locomotion than bicycles, because cars have four appendages in contact with the ground and not two.) This is not to say, of course, that some connectionist models of cognition aren't interesting, insightful and valid; but the same is true of many symbolic models, and there seems no compelling reason for abandoning the latter in favor of the former. (For more on this point, see Gary Marcus.)
Of course a cognitive model which cannot be implemented in real brains must be rejected; connecting neurobiology to cognition can hardly be too ardently desired. The point is that the elements in connectionist models called "neurons" bear only the sketchiest resemblance to the real thing, and neural nets are no more than caricatures of real neuronal circuits. Sometimes sketchy resemblances and caricatures are enough to help us learn, which is why Hebb, McCulloch and Neural Computation are important for both connectionism and neurobiology.
Reflections circa 2016
I first learned about neural networks as an undergraduate in the early 1990s, when, judging by the press, Geoff Hinton and his students were going to take over the world. (In "Introduction to Cognitive Science" at Berkeley, we trained a three-layer perceptron to classify characters as "Sharks" or "Jets" using back-propagation; I had no idea what those labels meant because I'd never seen West Side Story.) I then lived through neural nets virtually disappearing from the proceedings of Neural Information Processing Systems, and felt myself very retro for including neural nets the first time I taught data mining in 2006. (I dropped them by 2009.) The recent revival, as "deep learning", is a bit weird for me, especially since none of the public rhetoric has changed. The most interesting thing scientifically about the new wave is that it's led to the discovery of adversarial examples, which I think we still don't understand very well at all. The most interesting thing meta-scientifically is how much the new wave of excitement about neural networks seems to be accompanied by forgetting earlier results, techniques, and baselines.
Reflections in 2022
I would now actually say there are three scientifically interesting phenomena revealed by the current wave of interest in neural networks:
- Adversarial examples (as revealed by Szegedy et al.), and the converse phenomenon of extremely high confidence classification of nonsense images that have no humanly-perceptible resemblance to the class (e.g., Nguyen et al.);
- The ability to generalize to new instances by using humanly-irrelevant features like pixels at the edges of images (e.g., Carter et al.);
- The ability to generalize to new instances despite having the capacity to memorize random training data (e.g., Zhang et al.).
It's not at all clear how specific any of these are to neural networks. (See Belkin's wonderful "Fit without Fear" for a status report on our progress in understanding my item (3) using other models, going back all the way to margin-based understandings of boosting.) It's also not clear how they inter-relate. But they are all clearly extremely important phenomena in machine learning which we do not yet understand, and really, really ought to understand.
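To make item (3) concrete without any neural network in sight, here is a minimal sketch of my own (not anything from Belkin's survey or the papers above), in the random-kitchen-sinks spirit: an overparameterized random-Fourier-features regression, fit by minimum-norm least squares, interpolates noisy training data exactly, and one can then just look at how the test error comes out. Every specific number below (30 training points, 2000 features, the frequency scale, the noise level) is an arbitrary choice for illustration.

```python
# Toy illustration of interpolation without catastrophe, using a
# non-neural-net model: random Fourier features ("random kitchen sinks")
# fit by ridgeless (minimum-norm) least squares.  All parameters are
# made-up illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # The unknown regression function for this toy problem.
    return np.sin(2 * np.pi * x)

# Noisy training data and a clean test grid.
n_train = 30
x_train = rng.uniform(-1, 1, n_train)
y_train = true_f(x_train) + 0.1 * rng.standard_normal(n_train)
x_test = np.linspace(-1, 1, 200)
y_test = true_f(x_test)

def random_features(x, omega, b):
    # Random Fourier features: one column of cos(omega*x + b) per feature.
    return np.cos(np.outer(x, omega) + b)

# Heavily overparameterized: 2000 random features for 30 training points.
n_feat = 2000
omega = rng.normal(0, 8.0, n_feat)
b = rng.uniform(0, 2 * np.pi, n_feat)

Phi_train = random_features(x_train, omega, b)
Phi_test = random_features(x_test, omega, b)

# Minimum-norm interpolating solution via the pseudoinverse ("ridgeless").
w = np.linalg.pinv(Phi_train) @ y_train

train_mse = np.mean((Phi_train @ w - y_train) ** 2)
test_mse = np.mean((Phi_test @ w - y_test) ** 2)
print(f"train MSE: {train_mse:.2e}  (interpolation: the noisy labels are fit exactly)")
print(f"test  MSE: {test_mse:.2e}  (compare to the noise variance, 0.01)")
```

The point of the exercise is just that "capacity to memorize the training noise" and "failure to generalize" are not the same thing, even for this decidedly non-deep model.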
I'd add that I still think there has been a remarkable regression in our understanding of the field's past, and of some of its hard-won lessons. When I hear people conflating "attention" in neural networks with attention in animals, I start muttering about "wishful mnemonics", and "did Drew McDermott live and fight in vain?" Similarly, when I hear graduate students, and even young professors, explaining that Mikolov et al. 2013 invented the idea of representing words by embedding them in a vector space, with proximity in the space tracking patterns of co-occurrence, as though latent semantic indexing (for instance) didn't date from the 1980s, I get kind of indignant. (Maybe the new embedding methods are better for your particular application than Good Old Fashioned Principal Components, or even than kernelized PCA, but argue that, dammit.)
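For contrast, here is a minimal sketch of the Good Old Fashioned recipe, with a made-up five-sentence corpus of my own (this is only my illustration, not anything from Mikolov et al. or from the LSI literature): count word-context co-occurrences, take a truncated SVD, and read off low-dimensional word vectors whose proximity tracks co-occurrence patterns. Nothing below requires more than 1980s-vintage linear algebra.

```python
# LSI/PCA-style word vectors from a toy corpus: build a co-occurrence
# matrix, take a truncated SVD, and compare words by cosine similarity.
# The corpus, window size, and dimension are made up for illustration.
import numpy as np
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks fell as markets panicked",
    "markets rallied and stocks rose",
]

tokenized = [sentence.split() for sentence in corpus]
vocab = sorted({word for sent in tokenized for word in sent})
index = {word: i for i, word in enumerate(vocab)}

# Co-occurrence counts within a +/- 2 word window.
window = 2
counts = Counter()
for sent in tokenized:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[(index[w], index[sent[j]])] += 1

C = np.zeros((len(vocab), len(vocab)))
for (i, j), c in counts.items():
    C[i, j] = c

# Truncated SVD of the (log-damped) co-occurrence matrix gives the vectors.
U, S, Vt = np.linalg.svd(np.log1p(C), full_matrices=False)
k = 2
vectors = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print("cat ~ dog:    ", round(cosine(vectors[index["cat"]], vectors[index["dog"]]), 3))
print("cat ~ stocks: ", round(cosine(vectors[index["cat"]], vectors[index["stocks"]]), 3))
```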
I am quite prepared to believe that part of my reaction here is sour grapes, since deep learning swept all before it right around the time I got tenure, and I am now too inflexible to really jump on the bandwagon.
That is my opinion; and it is further my opinion that you kids should get off of my lawn.
- See also:
- Adversarial Examples
- Artificial Intelligence
- Recommended (big picture):
- Maureen Caudill and Charles Butler, Naturally Intelligent Systems
- Patricia Churchland and Terrence Sejnowski, The Computational Brain
- Chris Eliasmith and Charles Anderson, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems
- Gary F. Marcus, The Algebraic Mind: Integrating Connectionism and Cognitive Science [On the limits of the connectionist approach to cognition, with special reference to language and grammar. Cf. later papers by Marcus below.]
- Brian Ripley, Pattern Recognition and Neural Networks
- Recommended (close-ups; very misc. and not nearly extensive enough):
- Larry Abbott and Terrence Sejnowski (eds.), Neural Codes and Distributed Representations
- Martin Anthony and Peter L. Bartlett, Neural Network Learning: Theoretical Foundations
- Michael A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks
- Dana Ballard, An Introduction to Natural Computation [Review: Not Natural Enough]
- M. J. Barber, J. W. Clark and C. H. Anderson, "Neural Representation of Probabilistic Information", Neural Computation 15 (2003): 1843--1864, arxiv:cond-mat/0108425
- Suzanna Becker, "Unsupervised Learning Procedures for Neural Networks", International Journal of Neural Systems 2 (1991): 17--33
- Mikhail Belkin, "Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation", arxiv:2105.14368
- Surya Ganguli, Dongsung Huh and Haim Sompolinsky, "Memory traces in dynamical systems", Proceedings of the National Academy of Sciences (USA) 105 (2008): 18970--18975
- Geoffrey Hinton and Terrence Sejnowski (eds.), Unsupervised Learning [A sort of "Neural Computation's Greatest Hits" compilation]
- Anders Krogh and Jesper Vedelsby, "Neural Network Ensembles, Cross Validation, and Active Learning", NIPS 7 (1994): 231--238
- Gary Marcus
- "Deep Learning: A Critical Appraisal", arxiv:1801.00631
- "Innateness, AlphaZero, and Artificial Intelligence", arxiv:1801.05667
- Mathukumalli Vidyasagar, A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems [Extensive discussion of the application of statistical learning theory to neural networks, along with the purely computational difficulties. Mini-review]
- T. L. H. Watkin, A. Rau and M. Biehl, "The Statistical Mechanics of Learning a Rule," Reviews of Modern Physics 65 (1993): 499--556
- Achilleas Zapranis and Apostolos-Paul Refenes, Principles of Neural Model Identification, Selection and Adequacy, with Applications to Financial Econometrics [Their English is less than perfect, but they've got very sound ideas about all the important topics]
- Recommended, "your favorite deep neural network sucks":
- Brandon Carter, Siddhartha Jain, Jonas Mueller, David Gifford, "Overinterpretation reveals image classification model pathologies", arxiv:2003.08907
- Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach, "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches", arxiv:1907.06902
- Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin R. Benson, "Combining Label Propagation and Simple Models Out-performs Graph Neural Networks", arxiv:2010.13993
- Anh Nguyen, Jason Yosinski, Jeff Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images", arxiv:1412.1897
- Filip Piekniewski, "Autopsy of a Deep Learning Paper"
- Adityanarayanan Radhakrishnan, Karren Yang, Mikhail Belkin, Caroline Uhler, "Memorization in Overparameterized Autoencoders", arxiv:1810.10333
- Ali Rahimi and Benjamin Recht
- "Reflections on Random Kitchen Sinks" argmin blog, 5 December 2017
- "An Addendum to Alchemy", argmin blog, 11 December 2017
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus, "Intriguing properties of neural networks", arxiv:1312.6199
- Chengxi Ye, Matthew Evanusa, Hua He, Anton Mitrokhin, Tom Goldstein, James A. Yorke, Cornelia Fermüller, Yiannis Aloimonos, "Network Deconvolution", arxiv:1905.11926 [This is just doing principal components analysis, as invented in 1900]
- Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, "Understanding deep learning (still) requires rethinking generalization", Communications of the ACM 64 (2021): 107--115 [previous version: arxiv:1611.03530]
- Recommended, historical:
- Michael A. Arbib, Brains, Machines and Mathematics [1964; a model of clarity in exposition and thought]
- Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory
- Warren S. McCulloch, Embodiments of Mind
- Modesty forbids me to recommend:
- CRS, "Notes on 'Intriguing Properties of Neural Networks', and two other papers (2014)" [On Szegedy et al., Nguyen et al., and Chalupka et al.]
- To read [with abundant thanks to Osame Kinouchi for recommendations]:
- Daniel Amit, Modelling Brain Function
- V. M. Becerra, F. R. Garces, S. J. Nasuto and W. Holderbaum, "An Efficient Parameterization of Dynamic Neural Networks for Nonlinear System Identification", IEEE Transactions on Neural Networks 16 (2005): 983--988
- William Bechtel and Adele Abrahamsen, Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks
- William Bechtel and Robert C. Richardson, Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research
- Randall Beer, Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology
- Hugues Berry and Mathias Quoy, "Structure and Dynamics of Random Recurrent Neural Networks", Adaptive Behavior 14 (2006): 129--137
- Dimitri P. Bertsekas and John N. Tsitsiklis, Neuro-Dynamic Programming
- Michael Biehl, Reimer Kühn, Ion-Olimpiu Stamatescu, "Learning structured data from unspecific reinforcement," cond-mat/0001405
- D. Bollé and P. Kozlowski, "On-line learning and generalisation in coupled perceptrons," cond-mat/0111493
- Christoph Bunzmann, Michael Biehl, and Robert Urbanczik, "Efficient training of multilayer perceptrons using principal component analysis", Physical Review E 72 (2005): 026117
- Gail A. Carpenter and Stephen Grossberg (eds.), Pattern Recognition by Self-Organizing Neural Networks
- Nestor Caticha and Osame Kinouchi, "Time ordering in the evolution of information processing and modulation systems," Philosophical Magazine B 77 (1998): 1565--1574
- Axel Cleeremans, Mechanisms of Implicit Learning: Connectionist Models of Sequence Processing
- A. C. C. Coolen, "Statistical Mechanics of Recurrent Neural Networks": part I, "Statics," cond-mat/0006010 and part II, "Dynamics," cond-mat/0006011
- A. C. C. Coolen, R. Kuehn, and P. Sollich, Theory of Neural Information Processing Systems
- A. C. C. Coolen and D. Saad, "Dynamics of Learning with Restricted Training Sets," Physical Review E 62 (2000): 5444--5487
- Mauro Copelli, Antonio C. Roque, Rodrigo F. Oliveira and Osame Kinouchi, "Enhanced dynamic range in a sensory network of excitable elements," cond-mat/0112395
- Valeria Del Prete and Alessandro Treves, "A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions," cond-mat/0103286
- M. C. P. deSouto, T. B. Ludermir and W. R. deOliveira, "Equivalence Between RAM-Based Neural Networks and Probabilistic Automata", IEEE Transactions on Neural Networks 16 (2005): 996--999
- Eytan Domany, Jan Leonard van Hemmen and Klaus Schulten (eds.), Models of Neural Networks III: Association, Generalization, and Representation
- Viktor Dotsenko, Introduction to the Theory of Spin Glasses and Neural Networks
- Keith L. Downing, Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems
- Ethan Dyer, Guy Gur-Ari, "Asymptotics of Wide Networks from Feynman Diagrams", arxiv:1909.11304
- Liat Ein-Dor and Ido Kanter, "Confidence in prediction by neural networks," Physical Review E 60 (1999): 799--802
- Chris Eliasmith, "A Unified Approach to Building and Controlling Spiking Attractor Networks", Neural Computation 17 (2005): 1276--1314
- Elman et al., Rethinking Innateness
- Frank Emmert-Streib
- "Self-organized annealing in laterally inhibited neural networks shows power law decay", cond-mat/0401633
- "A Heterosynaptic Learning Rule for Neural Networks", cond-mat/0608564
- Andreas Engel and Christian P. L. Van den Broeck, Statistical Mechanics of Learning
- Magnus Enquist and Stefano Ghirlanda, Neural Networks and Animal Behavior
- Michael Feindt, "A Neural Bayesian Estimator for Conditional Probability Densities", physics/0402093
- Gary William Flake, "The Calculus of Jacobian Adaptation" [Not confined to neural nets]
- Leonardo Franco, "A measure for the complexity of Boolean functions related to their implementation in neural networks," cond-mat/0111169
- Jürgen Franke and Michael H. Neumann, "Bootstrapping Neural Networks," Neural Computation 12 (2000): 1929--1949
- Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought
- Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning
- F. A. von Hayek, The Sensory Order
- Michiel Hermans and Benjamin Schrauwen, "Recurrent Kernel Machines: Computing with Infinite Echo State Networks", Neural Computation 24 (2012): 104--133
- D. Herschkowitz and M. Opper, "Retarded Learning: Rigorous Results from Statistical Mechanics," cond-mat/0103275
- Dirk Husmeier, Neural Networks for Conditional Probability Estimation
- Jun-ichi Inoue and A. C. C. Coolen, "Dynamics of on-line Hebbian learning with structurally unrealizable restricted training sets," cond-mat/0105004
- Henrik Jacobsson, "Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review", Neural Computation 17 (2005): 1223--1263
- Jim W. Kay and D. M. Titterington (eds.), Statistics and Neural Networks: Advances at the Interface
- I. Kanter, W. Kinzel and E. Kanter, "Secure exchange of information by synchronization of neural networks," cond-mat/0202112
- Alon Keinan, Ben Sandbank, Claus C. Hilgetag, Isaac Meilijson and Eytan Ruppin, "Fair Attribution of Functional Contribution in Artificial and Biological Networks", Neural Computation 16 (2004): 1887--1915
- Beom Jun Kim, "Performance of networks of artificial neurons: The role of clustering", q-bio.NC/0402045
- Osame Kinouchi and Nestor Caticha, "Optimal Generalization in Perceptrons," Journal of Physics A 25 (1992): 6243--6250
- W. Kinzel
- "Statistical Physics of Neural Networks," Computer Physics Communications, 122 (1999): 86--93
- "Phase transitions of neural networks," Philosophical Magazine B 77 (1998): 1455--1477
- W. Kinzel, R. Metzler and I. Kanter, "Dynamics of Interacting Neural Networks," Journal of Physics A 33 (2000): L141--L147
- Konstantin Klemm, Stefan Bornholdt and Heinz Georg Schuster, "Beyond Hebb: XOR and biological learning," adap-org/9909005
- G.A. Kohring, "Artificial Neurons with Arbitrarily Complex Internal Structures," cs.NE/0108009
- Teuvo Kohonen, Self-Organization and Associative Memory [Start of the huge literature on self-organizing maps, which I ought to get a grip on]
- John F. Kolen (ed.), A Field Guide to Dynamical Recurrent Networks
- Krogh et al., Introduction to the Theory of Neural Computation
- Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman, "Building Machines That Learn and Think Like People", arxiv:1604.00289
- Hannes Leitgeb, "Interpreted Dynamical Systems and Qualitative Laws: From Neural Networks to Evolutionary Systems", Synthese 146 (2005): 189--202 ["Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively"]
- Andrea Loettgers, "Getting Abstract Mathematical Models in Touch with Nature", Science in Context 20 (2007): 97--124 [Intellectual history of the Hopfield model and its reception]
- Yonatan Loewenstein, and H. Sebastian Seung, "Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity", Proceedings of the National Academy of Sciences (USA) 103 (2006): 15224--15229 [The abstract promises a result about all possible neural mechanisms having some fairly generic features; this is clearly the right way to do theoretical neuroscience, but rarely done...]
- Wolfgang Maass (ed.), Pulsed Neural Networks
- Wolfgang Maass and Eduardo D. Sontag, "Neural Systems as Nonlinear Filters," Neural Computation 12 (2000): 1743--1772
- M. S. Mainieri and R. Erichsen Jr, "Retrieval and Chaos in Extremely Diluted Non-Monotonic Neural Networks," cond-mat/0202097
- Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia, "Causal interactions and delays in a neuronal ensemble", cond-mat/0609523
- McClelland and Rumelhart (eds.), Parallel Distributed Processing
- Patrick C. McGuire, Henrik Bohr, John W. Clark, Robert Haschke, Chris Pershing and Johann Rafelski, "Threshold Disorder as a Source of Diverse and Complex Behavior in Random Nets," cond-mat/0202190
- Richard Metzler, Wolfgang Kinzel, Liat Ein-Dor and Ido Kanter, "Generation of anti-predictable time series by a Neural Network," cond-mat/0011302
- R. Metzler, W. Kinzel and I. Kanter, "Interacting Neural Networks," Physical Review E 62 (2000): 2555--2565 [abstract]
- Minsky and Papert, Perceptrons
- Seiji Miyoshi, Kazuyuki Hara, and Masato Okada, "Analysis of ensemble learning using simple perceptrons based on online learning theory", Physical Review E 71 (2005): 036116
- Javier R. Movellan, Paul Mineiro, and R. J. Williams, "A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks," Neural Computation 14 (2002): 1507--1544
- Randall C. O'Reilly, "Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning," Neural Computation 13 (2001): 1199--1241
- Steven Phillips, "Systematic Minds, Unsystematic Models: Learning Transfer in Humans and Networks", Minds and Machines 9 (1999): 383--398
- Patrick D. Roberts, "Dynamics of Temporal Learning Rules," Physical Review E 62 (2000): 4077--4082
- Fabrice Rossi, Brieuc Conan-Guez, "Functional Multi-Layer Perceptron: a Nonlinear Tool for Functional Data Analysis", arxiv:0709.3642
- Fabrice Rossi, Nicolas Delannay, Brieuc Conan-Guez, Michel Verleysen, "Representation of Functional Data in Neural Networks", arxiv:0709.3641
- Ines Samengo, "Independent neurons representing a finite set of stimuli: dependence of the mutual information on the number of units sampled," Network: Computation in Neural Systems, 12 (2000): 21--31, cond-mat/0202023
- Ines Samengo and Alessandro Treves, "Representational capacity of a set of independent neurons," cond-mat/0201588
- Vitaly Schetinin and Anatoly Brazhnikov, "Diagnostic Rule Extraction Using Neural Networks", cs.NE/0504057
- Philip Seliger, Stephen C. Young, and Lev S. Tsimring, "Plasticity and learning in a network of coupled phase oscillators," nlin.AO/0110044
- Paul Smolensky and Géraldine Legendre, The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar
- Dietrich Stauffer and Amnon Aharony, "Efficient Hopfield pattern recognition on a scale-free neural network," cond-mat/0212601
- Samy Tindel, "The stochastic calculus method for spin systems", Annals of Probability 33 (2005): 561--581 = math.PR/0503652 [One of the kinds of spin systems being perceptrons]
- Marc Toussaint
- "On model selection and the disability of neural networks to decompose tasks," nlin.AO/0202038
- "A neural model for multi-expert architectures," nlin.AO/0202039
- T. Uezu and A. C. C. Coolen, "Hierarchical Self-Programming in Recurrent Neural Networks," cond-mat/0109099
- Robert Urbanczik, "Statistical Physics of Feedforward Neural Networks," cond-mat/0201530
- Leslie G. Valiant
- Circuits of the Mind
- "Memorization and Association on a Realistic Neural Model", Neural Computation 17 (2005): 527--555 ["A central open question of computational neuroscience is to identify the data structures and algorithms that are used in mammalian cortex to support successive acts of the basic cognitive tasks of memorization and association. This letter addresses the simultaneous challenges of realizing these two distinct tasks with the same data structure, and doing so while respecting the following four basic quantitative parameters of cortex: the neuron number, the synapse number, the synapse strengths, and the switching times. Previous work has not succeeded in reconciling these opposing constraints, the low values of synapse strengths that are typically observed experimentally having contributed a particular obstacle. In this article, we describe a computational scheme that supports both memory formation and association and is feasible on networks of model neurons that respect the widely observed values of the four quantitative parameters. Our scheme allows for both disjoint and shared representations. The algorithms are simple, and in one version both memorization and association require just one step of vicinal or neighborly influence. The issues of interference among the different circuits that are established, of robustness to noise, and of the stability of the hierarchical memorization process are addressed. A calculus therefore is implied for analyzing the capabilities of particular neural systems and subsystems, in terms of their basic numerical parameters."]
- Frank van der Velde and Marc de Kamps, "Neural blackboard architectures of combinatorial structures in cognition", Behavioral and Brain Sciences 29 (2006): 37--70 [+ peer commentary]
- W. A. van Leeuwen and Bastian Wemmenhove, "Learning by a neural net in a noisy environment - The pseudo-inverse solution revisited," cond-mat/0205550
- Renato Vicente, Osame Kinouchi and Nestor Caticha, "Statistical mechanics of online learning of drifting concepts: A variational approach," Machine Learning 32 (1998): 179--201 [abstract]
- Hiroshi Wakuya and Jacek M. Zurada, "Bi-directional computing architecture for time series prediction," Neural Networks 14 (2001): 1307--1321
- C. Xiang, S. Ding and T. H. Lee, "Geometrical Interpretation and Architecture Selection of MLP", IEEE Transactions on Neural Networks 16 (2005): 84--96 [MLP = multi-layer perceptron]