HaPoC Symposium @IACAP

On July 3, 2014, we organized a symposium during the 29th IACAP conference in sunny Thessaloniki. It was a long but satisfying day with lots of discussion. For the symposium we decided to work around three fundamental questions:

  • What are programs, algorithms, and machines, and how do we understand them?
  • What is computing/computation?
  • What is the science in computer science?

We invited three speakers for each of these questions, coming from diverse backgrounds: history, philosophy, computer science, logic and mathematics. One of our speakers, Gonzalo Genova, regretfully had to cancel, but we will soon post a recording of the talk he had prepared for this symposium.

For the first session we had three speakers: Ray Turner, Wilfried Sieg and Robin Hill.

Ray’s talk on the design and construction of computational artefacts reviewed several fundamental issues related to the ontology and epistemology of computational artefacts. Particular attention was given to the dual nature of these technological artefacts as being both structurally and functionally determined, a point of view inspired by the Delft school of the philosophy of technology. Wilfried Sieg, in his reflective talk on the notion of computation, went back to the historically developed notion of computability as we find it in the work of Church, Post and Turing. Rather than entering into the existing discussions on Church’s thesis, however, he focused on general aspects of computability. These can be expressed by “axioms” for an appropriate abstract concept, and a representation theorem can be established. The abstract concept is that of a computable dynamical system; the representation theorem states that the computations of any model of the axioms can be simulated by a Turing machine. The axioms arise naturally out of a suitable generalization of the Post-Turing-Gandy developments (and not out of the Herbrand-Gödel-Kleene tradition).

Finally, Robin Hill gave a thought-provoking talk on the notion of algorithms, presenting a definition of algorithms that derives from practice rather than from theory.

During the second session, Barry Cooper, Nachum Dershowitz and Mate Szabo each contributed to the fundamental question of what computation is. Despite some initial technical problems with the projector, Barry was able to give a talk with the thought-provoking title Computing the Rainbow. Several different notions of computation were reviewed and an anti-reductionist view on the natural world, inspired by higher-type computability, was proposed. Computation is linked to embodiment, and language is understood as something which reflects and perhaps encompasses this structuring of the natural world into types. Nachum Dershowitz shared his views on concurrent computing in a dynamic and visually entertaining talk. In a vein similar to Sieg’s approach, Nachum derived several general characteristics of concurrent computing in order to propose a model that fits into the rich research programme on unconventional models of computing. Finally, Mate Szabo highlighted the work of Emil Post by relating it to Turing’s work. He reviewed how Post in fact anticipated some of the better-known results of the 1930s and how, on this basis, he proposed some particular views on Turing’s thesis.

During the final session, Ksenia Tatarchenko and Simone Martini each offered their reflections on the computer science discipline. Ksenia focused on two important conferences in the history of computer science: the Los Alamos meeting on the history of computing organized by Metropolis and an international gathering in Urgench, Uzbekistan organized by Knuth and Ershov. Both conferences highlight how, against the background of the Cold War, computer scientists from both sides of the so-called Iron Curtain were in fact capable of exchanging ideas, thus contributing to the formation of computer science as an international endeavor. Simone proposed a language-oriented view on computer science, presenting programming languages as the defining modelling languages of computer science, with important connections to translation and levels of abstraction.

We really think this was an exciting, high-level and thought-provoking meeting which, to us, shows that it is possible to bridge the gap between practitioners, historians and philosophers. Of course, the main question “what is computing?” will not have one answer, but this is exactly what allows for the richness of thought in the history and philosophy of computing.

We end this post with some open questions that came up during the discussion:

– How do we bridge the gap between the practice of computation and theoretical developments?
– Can we derive a notion of computation that encompasses both the historically fluctuating notion of computation as well as the theoretical models that we have?
– What is the ontological status of a program? Is it a technological artefact?
– How can we bridge the gap between history and philosophy of computing as conducted by professional historians and philosophers and the actual practice? And what is the best way to organize the discussion around this issue?
– Why have digital computers dominated analog computers since the late 1940s, and do we need to revisit this dominance given recent advances in natural computing?
– What is the difference between the relations among programming languages themselves and the relation between programming languages and machine language?