Fletcher, P. (1989) Nonstandard set theory. Journal of Symbolic Logic, 54, 1000-1008.
Nonstandard set theory is an attempt to generalise nonstandard analysis to cover the whole of classical mathematics. Existing versions (Nelson, Hrbacek, Kawai) are unsatisfactory in that the unlimited idealisation principle conflicts with the wish to have a full theory of external sets.
I re-analyse the underlying requirements of nonstandard set theory and give a new formal system, stratified nonstandard set theory, which seems to meet them better than the other versions.
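For context, the unlimited idealisation principle at issue is, in Nelson's internal set theory, the axiom scheme (I); stated in the standard way (my gloss, not the paper's own formulation):

```latex
% Nelson's idealisation scheme (I): for every internal formula \varphi,
% quantifying over standard finite sets z,
\forall^{\mathrm{st\,fin}} z \;\exists x \;\forall y \in z \;\varphi(x,y)
\;\longleftrightarrow\;
\exists x \;\forall^{\mathrm{st}} y \;\varphi(x,y)
```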
Fletcher, P. (1991) A self-configuring network. Connection Science, 3, no. 1, 35-60.
Connectionist research today is inhibited by two kinds of rigidity. First, a rigidity in architecture: the connectivity of most networks is fixed at the start by the programmer. This limits the universality of learning procedures. Secondly, a mental rigidity: the widespread assumption that a node's activation must be a real number and that activations should be combined using weighted sums. This paper explores the consequences of relaxing these rigidities.
I describe a neural network for unsupervised pattern learning. Given an arbitrary environment of input patterns it grows into a configuration which allows it to represent the high-level regularities in the input. Like the Boltzmann machine, it runs in two phases: observing the environment and simulating the environment. It continually monitors its own performance and grows new nodes as the need for them is identified.
Simulating the environment involves repeatedly choosing states to satisfy many constraints. The usual method is to maximise a `harmony' function, which leads to a merging or blending of constraints: this lacks a clear semantics. My network uses logical inference to settle into a state consistent with as many as possible of the strongest constraints.
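The difference between blending constraints through a harmony function and settling by strength-ordered inference can be shown on a toy example (my own illustration; the constraints, strengths and settling rule are invented, not taken from the network):

```python
from itertools import product

# Toy constraints over three binary units (a, b, c), each a (strength,
# predicate) pair. These are invented for illustration.
constraints = [
    (3.0, lambda s: s[0] == 1),                # strongest: a is on
    (2.0, lambda s: s[0] == 0 and s[1] == 0),  # weak: a and b are off
    (2.0, lambda s: s[0] == 0 and s[2] == 0),  # weak: a and c are off
]

states = list(product([0, 1], repeat=3))

# Harmony approach: maximise a weighted sum, blending all constraints.
def harmony(s):
    return sum(w for w, pred in constraints if pred(s))

harmony_best = max(states, key=harmony)        # (0, 0, 0): weights 2+2 beat 3

# Prioritised approach: satisfy the strongest constraint outright, then
# narrow by weaker constraints only while that remains possible.
def settle(states, constraints):
    candidates = list(states)
    for _, pred in sorted(constraints, key=lambda c: -c[0]):
        narrowed = [s for s in candidates if pred(s)]
        if narrowed:
            candidates = narrowed
    return candidates

settled = settle(states, constraints)          # every settled state has a == 1
```

Here the two weak constraints jointly outvote the strongest one under harmony maximisation, whereas the prioritised settler never sacrifices a satisfiable stronger constraint to weaker ones.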
Fletcher, P. (1992) Principles of node growth and node pruning. Connection Science, 4, no. 2, 125-141.
Networks which add and remove nodes need a very clear idea of the role of each node and how it contributes to global performance. This paper presents two approaches to the theory of unsupervised node growth in self-configuring networks: the first sees the function of a node as constraining the probability distribution of dream states, and the second sees the node as a non-terminal in a grammar describing the input patterns.
I also present two ways of deciding whether to remove nodes. The first estimates the contribution of any desired set of nodes, and may be applied to any number of sets simultaneously; the second is precise but may only be applied to one set of nodes at a time. Both are locally implementable.
Fletcher, P. (1993) Neural networks for learning grammars. In Grammatical Inference: Theory, Applications and Alternatives. First International Colloquium on Grammatical Inference, Essex, U.K., 22nd-23rd April 1993. IEE Digest no. 1993/092. Also available as Technical Report 93.07, Keele University, Computer Science Department.
The straightforward mapping of a grammar onto a connectionist architecture is to make each grammar symbol correspond to a node and each rule correspond to a pattern of connections. The grammar then expresses the `competence' of the network. The (unsupervised) grammatical inference problem is therefore: how can a network learn to configure itself to reflect the syntactic structure in its input patterns?
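That mapping can be sketched directly (the toy grammar below is hypothetical, not one from the paper):

```python
# Each grammar symbol becomes a node; each rule contributes connections
# from its left-hand side to each right-hand-side symbol. The grammar
# here is an invented toy example.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}

nodes = set(rules)          # one node per non-terminal
connections = set()         # (parent, child) pairs realising the rules
for lhs, expansions in rules.items():
    for rhs in expansions:
        for symbol in rhs:
            nodes.add(symbol)            # terminals get nodes too
            connections.add((lhs, symbol))
```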
What makes my learning principles distinctively connectionist is that all tasks (parsing and generating patterns, sprouting and removing nodes) must be distributed amongst the nodes, with no global control or co-ordination. The nodes work in parallel and each node can interact only with its neighbours.
I shall illustrate this approach with a working system that learns recursion-free grammars. Extending this to allow recursion requires a different mapping from grammars to networks; I shall discuss the knowledge representation, specification and design for such a system.
Fletcher, P. (1995) The role of experiments in computer science. Journal of Systems and Software, 30, nos 1-2, 161-163.
It is sometimes suggested that computing research should be conducted as an empirical science, with all its algorithms, designs and models being subjected to quantitative experimental testing. I question the appropriateness of this view of research and argue that much of computing research does not, and should not, embody quantitative empirical claims. Experiments are, however, sometimes appropriate; I point out some ways in which computing experiments can fall short of normal scientific standards of rigour.
Fletcher, P. (1998) Truth, Proof and Infinity: A Theory of Constructions and Constructive Reasoning. Synthese Library (Studies in Epistemology, Logic, Methodology and Philosophy of Science), vol. 276. Dordrecht: Kluwer Academic Publishers. 480 pp. ISBN 0-7923-5262-9.
Constructive mathematics is based on the thesis that the meaning of a mathematical formula is given, not by its truth-conditions, but in terms of what constructions count as a proof of it. However, the meaning of the terms `construction' and `proof' has never been adequately explained (although Kreisel, Goodman and Martin-Löf have attempted axiomatisations). This monograph develops precise (though not wholly formal) definitions of construction and proof, and describes the algorithmic substructure underlying intuitionistic logic. Interpretations of Heyting arithmetic and constructive analysis are given. The philosophical basis of constructivism is explored thoroughly in Part I. The author seeks to answer objections from platonists and to reconcile his position with the central insights of Hilbert's formalism and logicism.
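The thesis being refined is the familiar BHK (Brouwer-Heyting-Kolmogorov) reading of the logical constants; as standard background (not the book's own definitions, which go further), it runs:

```latex
% The BHK reading of the connectives (informal, standard background):
\begin{itemize}
  \item a proof of $A \land B$ is a proof of $A$ paired with a proof of $B$;
  \item a proof of $A \lor B$ is a proof of $A$ or of $B$, labelled to say which;
  \item a proof of $A \to B$ is a construction transforming any proof of $A$
        into a proof of $B$;
  \item a proof of $\exists x\,A(x)$ is a witness $t$ with a proof of $A(t)$;
  \item a proof of $\forall x\,A(x)$ is a construction yielding a proof of
        $A(t)$ for every $t$.
\end{itemize}
```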
Fletcher, P. (2000) The foundations of connectionist computation. Connection Science, 12, no. 2, 163-196.
This paper presents a formal syntax and semantics for computation in neural networks. The main motivation for this is to provide a foundation for rigorous mathematical analysis of the capabilities of neural networks in relation to other types of computational system. A secondary benefit is that it helps to clarify obscurities and controversial issues in the notion of connectionist computation as it is informally understood. The paper reviews the various informal and formal definitions of connectionism in the literature and attempts to identify common principles and areas of disagreement. Central to connectionism is the idea of a system of simple nodes working together to solve a task, where each node acts in a purely local way on its neighbours: the vague words `simple' and `local' are clarified and defined by my formal system in a precise way, free from arbitrary restrictions. The system also defines the semantics of node growth, node pruning and connectivity change - operations that are used in an increasing number of recent connectionist algorithms but are not taken into account by previous definitions of connectionism.
Fletcher, P. (2001) Connectionist learning of regular graph grammars. Connection Science, 13, no. 2, 127-188.
This paper presents a new connectionist approach to grammatical inference. Using only positive examples, the algorithm learns regular graph grammars, representing two-dimensional iterative structures drawn on a discrete Cartesian grid. This work is intended as a case study in connectionist symbol processing and geometric concept-formation.
A grammar is represented by a self-configuring connectionist network that is analogous to a transition diagram except that it can deal with graph grammars as easily as string grammars. Learning starts with a trivial grammar, expressing no grammatical knowledge, which is then refined, by a process of successive node splitting and merging, into a grammar adequate to describe the population of input patterns.
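The refine-by-merging idea can be caricatured in the string-grammar special case (my simplification: build the trivial grammar as a prefix-tree acceptor over positive examples, then merge states whose local signatures coincide; this is far weaker than the paper's splitting and merging on graph grammars):

```python
# Trivial grammar: a prefix-tree acceptor over the positive examples.
def build_prefix_tree(examples):
    transitions = [{}]          # transitions[state][symbol] -> next state
    accepting = set()
    for word in examples:
        state = 0
        for ch in word:
            if ch not in transitions[state]:
                transitions[state][ch] = len(transitions)
                transitions.append({})
            state = transitions[state][ch]
        accepting.add(state)
    return transitions, accepting

# Refinement: repeatedly merge states with identical signatures
# (accepting flag plus outgoing transition map).
def merge_states(transitions, accepting):
    trans = {i: dict(t) for i, t in enumerate(transitions)}
    acc = set(accepting)
    while True:
        sig_to_rep, remap = {}, {}
        for s in sorted(trans):
            sig = (s in acc, tuple(sorted(trans[s].items())))
            if sig in sig_to_rep:
                remap[s] = sig_to_rep[sig]     # merge into representative
            else:
                sig_to_rep[sig] = s
        if not remap:
            return trans, acc
        trans = {s: {c: remap.get(t, t) for c, t in d.items()}
                 for s, d in trans.items() if s not in remap}
        acc = {remap.get(s, s) for s in acc}

transitions, accepting = build_prefix_tree(["ab", "aab", "aaab"])
merged, acc = merge_states(transitions, accepting)   # 7 states -> 5
```

The merge collapses the three equivalent accepting leaves of the prefix tree without changing the accepted language, illustrating how an over-specific initial grammar is compacted.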
In conclusion, I argue that the connectionist style of computation is, in some ways, better suited than sequential computation to the task of representing and manipulating recursive structures.
Fletcher, P. (2002) A constructivist perspective on physics. Philosophia Mathematica, Series III, 10, 26-42.
This paper examines the problem of extending the programme of mathematical constructivism to applied mathematics. I am not concerned with the question of whether conventional mathematical physics makes essential use of the principle of excluded middle, but rather with the more fundamental question of whether the concept of physical infinity is constructively intelligible. I consider two kinds of physical infinity: a countably infinite constellation of stars and the infinitely divisible space-time continuum. I argue (contrary to Hellman) that these do not pose any insuperable problem for constructivism, and that constructivism may have a useful new perspective to offer on physics.
Fletcher, P. (2004) Mathematical Theory of Recursive Symbol Systems. Technical Report GEN-0401, Keele University, Computer Science Department.
This paper introduces a new type of graph grammar for representing recursively structured two-dimensional geometric patterns, for the purposes of syntactic pattern recognition. The grammars are noise-tolerant, in the sense that they can accommodate patterns that contain pixel noise, variability in the geometric relations between symbols, missing parts, or ambiguous parts, and that overlap other patterns. Geometric variability is modelled using fleximaps, drawing on concepts from the theory of Lie algebras and tensor calculus.
The formalism introduced here is intended to be generalisable to all problem domains in which complex, many-layered structures occur in the presence of noise.
Fletcher, P. (2007) Infinity. In D. Jacquette (ed.) Philosophy of Logic, vol. 5 of the Handbook of the Philosophy of Science, pp. 523-585. Amsterdam: Elsevier. ISBN 0-444-51541-0.
This essay surveys the different types of infinity that occur in pure and applied mathematics.
Lam, K.P. & Fletcher, P. (2009) Concurrent grammar inference machines for 2-D pattern recognition: a comparison with the level set approach. In Image Processing: Algorithms and Systems VII, edited by J.T. Astola, K.O. Egiazarian, N.M. Nasrabadi & S.A. Rizvi. Proceedings of SPIE-IS&T Electronic Imaging 2009, published by SPIE-IS&T, SPIE vol. 7245, article no. 724515. DOI 10.1117/12.806035.
Parallel processing promises scalable computing power that can handle the complex data structures of knowledge-representation languages efficiently; sequential architectures, despite rapid advances in computing technology, have yet to provide such power. This paper formulates alternative techniques for grammar learning, based on the parallel and distributed model of connectionism, to support the more cognitively demanding task of pattern understanding. The proposed method is compared with a contemporary approach to shape modelling based on level sets, and demonstrates its potential as a prototype for constructing robust networks on high-performance parallel platforms.
Fletcher, P., Hrbacek, K., Kanovei, V., Katz, M.G., Lobry, C. & Sanders, S. (2017) Approaches to analysis with infinitesimals following Robinson, Nelson, and others. Real Analysis Exchange, 42, no. 2, 193-252.
This is a survey of several approaches to the framework for working with infinitesimals and infinite numbers, originally developed by Abraham Robinson in the 1960s, and their constructive engagement with the Cantor-Dedekind postulate and the Intended Interpretation hypothesis. We highlight some applications, including (1) Loeb's approach to the Lebesgue measure, (2) a radically elementary approach to the vibrating string, and (3) true infinitesimal differential geometry. We explore the relation of Robinson's and related frameworks to the multiverse view as developed by Hamkins.
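For a flavour of the framework: the derivative, for example, is recovered as the standard part (shadow) of an infinitesimal difference quotient, for nonzero infinitesimal $\varepsilon$:

```latex
f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right)
```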