
Fabricatory Depth

February 23rd, 2006

Download full paper (doc - 72kb)

In brief

This is another idea relating to graph theory and laddering. To make an artefact, you need to use tools, materials and processes. Usually, you need to use tools to make those tools, and so forth, through several layers of tools. (The same obviously applies to needing tools and materials to produce the materials.) It’s possible to represent this via graph theory, and if you do this, then you can apply various metrics to the resulting graph. For instance, you can count the number of layers of tools that it takes to produce an artefact, or you can count the number of different tools involved, or you can see how many tools are needed to produce a specified set of artefacts.

This is potentially useful for archaeology, since it gives some useful metrics for the technical complexity of different assemblages. For instance, a handaxe can be represented with a very small graph (at most, you might need to modify an antler to produce a soft hammer for fine work), whereas a polished flint axe may require a substantial set of tools for mining the flint used to make it. This in turn leads into issues such as the co-opting of technologies originally developed for other purposes: for instance, most of the tools required for neolithic mining were already used in agriculture.

This approach provides a way of quantifying the complexity of the technology used by a culture without any value judgements being involved. It is also potentially useful for tracing dependencies, so that you can see what the ripple effects of a disruption in a given resource will be.
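
As a concrete illustration, here is a minimal sketch in Python of how such a graph and its metrics could be computed. The artefact and tool names, and the dependencies between them, are my own illustrative assumptions rather than data from the paper.

from functools import lru_cache

# Each artefact or tool maps to the tools/materials needed to make it.
# An empty list means no made tools are needed (e.g. an unmodified pebble).
# These dependencies are illustrative assumptions, not data from the paper.
REQUIRES = {
    "handaxe": ["hard_hammer", "soft_hammer"],
    "hard_hammer": [],                      # unmodified hammerstone
    "soft_hammer": ["hard_hammer"],         # antler modified for fine work
    "polished_axe": ["rough_axe", "grindstone", "mined_flint"],
    "rough_axe": ["hard_hammer", "mined_flint"],
    "grindstone": [],
    "mined_flint": ["antler_pick", "shovel"],
    "antler_pick": ["hard_hammer"],         # co-opted from agriculture
    "shovel": ["hard_hammer"],              # ox-scapula shovel, likewise
}

@lru_cache(maxsize=None)
def fabricatory_depth(item):
    """Layers of tools beneath an artefact (0 = no made tools needed)."""
    deps = REQUIRES[item]
    if not deps:
        return 0
    return 1 + max(fabricatory_depth(d) for d in deps)

def toolset(items):
    """All distinct tools/materials needed, directly or indirectly."""
    seen, stack = set(), list(items)
    while stack:
        for dep in REQUIRES[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(fabricatory_depth("handaxe"))        # 2: a very small graph
print(fabricatory_depth("polished_axe"))   # 4: layers of mining tools
print(len(toolset(["polished_axe"])))      # 6 distinct tools/materials

Passing several artefacts to toolset() gives the combined technology that a whole assemblage presupposes, which is the third metric mentioned above.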

A related pair of concepts is elucidatory depth and breadth. Elucidatory depth is the number of layers of explanation you need to go through before a technical or subjective term bottoms out in terms that lay people can understand; elucidatory breadth is the number of concepts that a full explanation brings in along the way. Again, this varies between fields and concepts (we haven’t had to go more than seven layers down yet, and I suspect that we’re unlikely to find any substantially deeper examples, but the reasons for that are a bit too lengthy to unpack here). One interesting question is whether some fields have now reached a stage of elucidatory breadth so great that there isn’t time for a single human being to learn all the important concepts in a lifetime, given the apparently fixed upper rate at which humans can learn things. If that’s the case, what (if anything) should we do about it? The obvious answers are review articles, textbooks and indexing of online materials, but there are problems with these, such as cognitive cloning, which is one reason that my book with Marian Petre on PhD research places a lot of emphasis on finding and processing the relevant information for your own particular study.
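
The same kind of sketch works for elucidatory depth and breadth. The toy definition graph below, where each technical term maps to the terms used to explain it, is illustrative rather than drawn from our data; an empty list means the term bottoms out in lay vocabulary.

# Each technical term maps to the terms used to explain it; an empty
# list means the term is already comprehensible to lay people.
EXPLAINED_BY = {
    "entropy": ["microstate", "probability"],
    "microstate": ["particle", "state"],
    "probability": [],
    "particle": [],
    "state": [],
}

def elucidatory_depth(term, seen=frozenset()):
    """Layers of explanation before a term bottoms out in lay terms.

    Terms already on the current explanation path are skipped, so
    definitional loops cannot cause infinite recursion.
    """
    deps = [t for t in EXPLAINED_BY.get(term, []) if t not in seen]
    if not deps:
        return 0
    return 1 + max(elucidatory_depth(t, seen | {term}) for t in deps)

def elucidatory_breadth(term):
    """How many distinct concepts a full explanation brings in."""
    seen, stack = set(), [term]
    while stack:
        for t in EXPLAINED_BY.get(stack.pop(), []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return len(seen)

print(elucidatory_depth("entropy"))    # 2 layers
print(elucidatory_breadth("entropy"))  # 4 concepts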

Answers to likely FAQs

No, I don’t have a piece of software that I use for this. This is a deliberate choice: I find that software implementations are constricting, and don’t let you do some of the things that you need to do when representing this kind of knowledge. Yes, explanations usually do bottom out, rather than spreading out in an infinite journey across the whole set of human knowledge. Yes, you do sometimes get loops, where two or more concepts are defined in relation to each other, but these haven’t been very common in the areas we’ve looked at.
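
On the last point, a simple depth-first search is one possible way of finding such loops: it reports any explanation path that returns to its starting term. The mutually defined terms below are illustrative.

def find_loops(graph):
    """Return every explanation path that returns to an earlier term."""
    loops = []
    def dfs(term, path):
        if term in path:
            # Keep just the cyclic part of the path.
            loops.append(path[path.index(term):] + [term])
            return
        for nxt in graph.get(term, []):
            dfs(nxt, path + [term])
    for start in graph:
        dfs(start, [])
    return loops

print(find_loops({
    "force": ["mass", "acceleration"],
    "mass": ["force"],          # force and mass defined in terms of each other
    "acceleration": [],
}))
# [['force', 'mass', 'force'], ['mass', 'force', 'mass']]
# (the same loop shows up once per starting term)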

