102

I came across the Curry-Howard isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's a "basic" list of such analogies, off the top of my head:

program/definition        | proof
type/declaration          | proposition
inhabited type            | theorem/lemma
function                  | implication
function argument         | hypothesis/antecedent
function result           | conclusion/consequent
function application      | modus ponens
recursion                 | induction
identity function         | tautology
non-terminating function  | absurdity/contradiction
tuple                     | conjunction (and)
disjoint union            | disjunction (or)          -- corrected by Antal S-Z
parametric polymorphism   | universal quantification

So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician, so I'm sure I've only scratched the surface with this list.

For example, here are some programming notions for which I'm unaware of pithy names in logic:

currying                  | "((a & b) => c) iff (a => (b => c))"
scope                     | "known theory + hypotheses"

And here are some logical concepts that I haven't quite pinned down in programming terms:

primitive type?           | axiom
set of valid programs?    | theory

Edit:

Here are some more equivalences collected from the answers:

function composition      | syllogism                -- from Apocalisp
continuation-passing      | double negation          -- from camccann

10 Answers

36

Since you explicitly asked for the most interesting and obscure ones:

You can extend C-H to many interesting logics and formulations of logics to obtain a really wide variety of correspondences. Here I've tried to focus on some of the more interesting ones rather than on the obscure, plus a couple of fundamental ones that haven't come up yet.

evaluation             | proof normalisation/cut-elimination
variable               | assumption
S K combinators        | axiomatic formulation of logic   
pattern matching       | left-sequent rules 
subtyping              | implicit entailment (not reflected in expressions)
intersection types     | implicit conjunction
union types            | implicit disjunction
open code              | temporal next
closed code            | necessity
effects                | possibility
reachable state        | possible world
monadic metalanguage   | lax logic
non-termination        | truth in an unobservable possible world
distributed programs   | modal logic S5/Hybrid logic
meta variables         | modal assumptions
explicit substitutions | contextual modal necessity
pi-calculus            | linear logic
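
To illustrate the "S K combinators | axiomatic formulation of logic" row above, here is a minimal Haskell sketch (the lowercase names k and s are just illustrative): the types of K and S are exactly the two axiom schemes of a Hilbert-style presentation of the implicational fragment, and function application plays the role of modus ponens.

-- K corresponds to the axiom  A -> (B -> A)
k :: a -> b -> a
k x _ = x

-- S corresponds to the axiom  (A -> (B -> C)) -> ((A -> B) -> (A -> C))
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)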

EDIT: A reference I'd recommend to anyone interested in learning more about extensions of C-H:

"A Judgmental Reconstruction of Modal Logic" http://www.cs.cmu.edu/~fp/papers/mscs00.pdf - this is a great place to start because it starts from first principles and much of it is aimed to be accessible to non-logicians/language theorists. (I'm the second author though, so I'm biased.)

Answered 2010-07-30T09:22:57.580
27

You're muddying things a little bit regarding nontermination. Falsity is represented by uninhabited types, which by definition can't be non-terminating because there's nothing of that type to evaluate in the first place.

Non-termination represents contradiction--an inconsistent logic. An inconsistent logic will of course allow you to prove anything, including falsity, however.

Ignoring inconsistencies, type systems typically correspond to an intuitionistic logic, and are by necessity constructivist, which means certain pieces of classical logic can't be expressed directly, if at all. On the other hand this is useful, because if a type is a valid constructive proof, then a term of that type is a means of constructing whatever you've proven the existence of.

A major feature of the constructivist flavor is that double negation is not equivalent to non-negation. In fact, negation is rarely a primitive in a type system, so instead we can represent it as implying falsehood, e.g., not P becomes P -> Falsity. Double negation would thus be a function with type (P -> Falsity) -> Falsity, which clearly is not equivalent to something of just type P.
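
To make this concrete, here is a minimal Haskell sketch of that encoding (the names Falsity, Not and toDoubleNeg are just illustrative):

{-# LANGUAGE EmptyDataDecls #-}

-- An uninhabited type playing the role of falsity.
data Falsity

-- Negation as "implies falsity".
type Not p = p -> Falsity

-- The constructively valid direction: P implies not (not P).
toDoubleNeg :: p -> Not (Not p)
toDoubleNeg x notP = notP x

-- The converse, Not (Not p) -> p, has no total definition: there is no way
-- to conjure a p out of a function that merely maps into Falsity.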

However, there's an interesting twist on this! In a language with parametric polymorphism, type variables range over all possible types, including uninhabited ones, so a fully polymorphic type such as ∀a. a is, in some sense, almost-false. So what if we write double almost-negation by using polymorphism? We get a type that looks like this: ∀a. (P -> a) -> a. Is that equivalent to something of type P? Indeed it is, merely apply it to the identity function.
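
A sketch of that equivalence in Haskell (using RankNTypes; the helper names are illustrative):

{-# LANGUAGE RankNTypes #-}

-- "Double almost-negation" via polymorphism.
type DN p = forall a. (p -> a) -> a

toDN :: p -> DN p
toDN x k = k x

fromDN :: DN p -> p
fromDN dn = dn id    -- "merely apply it to the identity function"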

But what's the point? Why write a type like that? Does it mean anything in programming terms? Well, you can think of it as a function that already has something of type P somewhere, and needs you to give it a function that takes P as an argument, with the whole thing being polymorphic in the final result type. In a sense, it represents a suspended computation, waiting for the rest to be provided. In this sense, these suspended computations can be composed together, passed around, invoked, whatever. This should begin to sound familiar to fans of some languages, like Scheme or Ruby--because what it means is that double-negation corresponds to continuation-passing style, and in fact the type I gave above is exactly the continuation monad in Haskell.
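
For reference, here is a small sketch of that suspended-computation reading using Haskell's continuation monad (the example values are made up):

import Control.Monad.Cont

-- A value of type p, wrapped as a computation waiting for "the rest of the
-- program"; underneath it is essentially something of type (p -> r) -> r.
suspended :: p -> Cont r p
suspended x = cont (\k -> k x)

-- Suspended computations compose; supplying a final continuation runs them.
example :: Int
example = runCont (do x <- suspended 20
                      y <- suspended 22
                      return (x + y))
                  id    -- example == 42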

Answered 2010-06-03T21:08:01.657
15

Your chart is not quite right; in many cases you have confused types with terms.

function type              implication
function                   proof of implication
function argument          proof of hypothesis
function result            proof of conclusion
function application RULE  modus ponens
recursion                  n/a [1]
fold (foldr for lists)     structural induction
fold for data N = Z | S N  mathematical induction
identity function          proof of A -> A, for all A
non-terminating function   n/a [2]
tuple                      normal proof of conjunction
sum                        disjunction
n/a [3]                    first-order universal quantification
parametric polymorphism    second-order universal quantification
currying                   (A,B) -> C -||- A -> (B -> C), for all A,B,C
primitive type             axiom
types of typeable terms    theory
function composition       syllogism
substitution               cut rule
value                      normal proof

[1] The logic for a Turing-complete functional language is inconsistent. Recursion has no correspondence in consistent theories. In an inconsistent logic/unsound proof theory you could call it a rule which causes inconsistency/unsoundness.

[2] Again, this is a consequence of Turing-completeness. This would be a proof of an anti-theorem if the logic were consistent -- thus, it can't exist.

[3] Doesn't exist in functional languages, since they elide first-order logical features: all quantification and parametrization is done over formulae. If you had first-order features, there would be a kind other than *, * -> *, etc.; the kind of elements of the domain of discourse. For example, in Father(X,Y) :- Parent(X,Y), Male(X), X and Y range over the domain of discourse (call it Dom), and Male :: Dom -> *.
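
To make the fold/induction rows concrete, here is a short Haskell sketch (foldN and add are illustrative names):

-- Peano naturals, as in the table.
data N = Z | S N

-- The fold for N.  Read its type as the induction principle: given a case
-- for Z and a way to take the result for n to the result for S n, we get a
-- result for every natural.
foldN :: a -> (a -> a) -> N -> a
foldN z _ Z     = z
foldN z s (S n) = s (foldN z s n)

-- Usage: addition by induction on the first argument.
add :: N -> N -> N
add m n = foldN n S m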

Answered 2010-06-17T09:32:25.253
14
function composition      | syllogism
Answered 2010-06-03T20:26:36.887
13

Here's a slightly obscure one that I'm surprised wasn't brought up earlier: "classical" functional reactive programming corresponds to temporal logic.

Of course, unless you're a philosopher, mathematician or obsessive functional programmer, this probably brings up several more questions.

So, first off: what is functional reactive programming? It's a declarative way to work with time-varying values. This is useful for writing things like user interfaces because inputs from the user are values that vary over time. "Classical" FRP has two basic data types: events and behaviors.

Events represent values which only exist at discrete times. Keystrokes are a great example: you can think of the inputs from the keyboard as a character at a given time. Each keypress is then just a pair with the character of the key and the time it was pressed.

Behaviors are values that exist constantly but can be changing continuously. The mouse position is a great example: it is just a behavior of x, y coordinates. After all, the mouse always has a position and, conceptually, this position changes continually as you move the mouse. After all, moving the mouse is a single protracted action, not a bunch of discrete steps.

And what is temporal logic? Appropriately enough, it's a set of logical rules for dealing with propositions quantified over time. Essentially, it extends normal first-order logic with two quantifiers: □ and ◇. The first means "always": read □φ as "φ always holds". The second is "eventually": ◇φ means that "φ will eventually hold". This is a particular kind of modal logic. The following two laws relate the quantifiers:

□φ ⇔ ¬◇¬φ
◇φ ⇔ ¬□¬φ

So □ and ◇ are dual to each other in the same way as ∀ and ∃.

These two quantifiers correspond to the two types in FRP. In particular, □ corresponds to behaviors and ◇ corresponds to events. If we think about how these types are inhabited, this should make sense: a behavior is inhabited at every possible time, while an event only happens once.
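
A rough Haskell sketch of the two classical FRP types (the representations, and Time = Double, are simplifying assumptions; real FRP libraries differ):

type Time = Double

-- A behavior is defined at every time: the □-like type.
newtype Behavior a = Behavior (Time -> a)

-- An event occurs only at particular times: the ◇-like type.
newtype Event a = Event [(Time, a)]

-- The mouse position as a behavior (here just a stand-in function of time).
mousePosition :: Behavior (Double, Double)
mousePosition = Behavior (\t -> (sin t, cos t))

-- Keystrokes as an event: characters paired with the times they occurred.
keyPresses :: Event Char
keyPresses = Event [(0.5, 'h'), (1.2, 'i')]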

Answered 2013-05-11T04:32:53.203
8

Related to the connection between continuations and double negation: the type of call/cc (http://en.wikipedia.org/wiki/Call-with-current-continuation) corresponds to Peirce's law.

C-H is usually stated as a correspondence between intuitionistic logic and programs. However, if you add the call-with-current-continuation (callCC) operator, whose type corresponds to Peirce's law, you get a correspondence between classical logic and programs with callCC.
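
A small Haskell sketch of that claim (the wrapper name peirce is illustrative): modulo the Cont wrapper, callCC itself inhabits Peirce's law ((p -> q) -> p) -> p.

import Control.Monad.Cont

-- Peirce's law, read through the Cont monad: callCC inhabits it directly.
peirce :: ((p -> Cont r q) -> Cont r p) -> Cont r p
peirce = callCC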

Answered 2012-09-07T20:23:15.293
6
2-continuation           | Sheffer stroke
n-continuation language  | Existential graph
Recursion                | Mathematical Induction

One thing that is important, but has not yet been investigated, is the relationship between 2-continuations (continuations that take 2 parameters) and the Sheffer stroke. In classical logic, the Sheffer stroke can form a complete logic system by itself (plus some non-operator concepts). This means that the familiar and, or, and not can be implemented using only the Sheffer stroke, or NAND.

This is an important fact for the programming-type correspondence, because it suggests that a single type combinator can be used to form all other types.

The type signature of a 2-continuation is (a,b) -> Void. With this encoding we can define a 1-continuation (a normal continuation) as (a,a) -> Void, a product type as ((a,b)->Void,(a,b)->Void)->Void, and a sum type as ((a,a)->Void,(b,b)->Void)->Void. This gives an impression of its expressive power.
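
A sketch of those encodings as Haskell type synonyms (the names Nand, Not', And and Or are illustrative; the And/Or equivalences are the classical NAND ones described above):

import Data.Void (Void)

-- The 2-continuation: the Curry-Howard reading of NAND(a, b).
type Nand a b = (a, b) -> Void

type Not' a  = Nand a a                      -- not a  =  nand(a, a)
type And a b = Nand (Nand a b) (Nand a b)    -- the product-type encoding above
type Or  a b = Nand (Not' a) (Not' b)        -- the sum-type encoding above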

If we dig further, we will find that Peirce's existential graphs are equivalent to a language whose only data type is the n-continuation, but I haven't seen any existing language in this form. So inventing one could be interesting, I think.

Answered 2012-12-22T14:07:35.787
5

While it's not a simple isomorphism, this discussion of constructive LEM is a very interesting result. In particular, in the conclusion section, Oleg Kiselyov discusses how the use of monads to get double-negation elimination in a constructive logic is analogous to distinguishing computationally decidable propositions (for which LEM is valid in a constructive setting) from all propositions. The notion that monads capture computational effects is an old one, but this instance of the Curry--Howard isomorphism helps put it in perspective and helps get at what double-negation really "means".

Answered 2010-06-04T02:42:11.403
4

Support for first-class continuations allows you to express P ∨ ¬P. The trick is based on the fact that not calling the continuation and exiting with some expression is equivalent to calling the continuation with that same expression.

For a more detailed view, please see: http://www.cs.cmu.edu/~rwh/courses/logic/www-old/handouts/callcc.pdf
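
Here is one way to write that trick in Haskell with callCC (a sketch only, with the "not P" side encoded as a function back into the Cont monad): the computation first offers the right branch, a supposed refutation of p; if that refutation is ever applied to an actual p, control jumps back and the computation offers Left p instead.

import Control.Monad.Cont

-- "P or not P", with negation read as a continuation into the answer type.
lem :: Cont r (Either p (p -> Cont r b))
lem = callCC $ \exit -> return (Right (\p -> exit (Left p)))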

Answered 2012-09-27T08:46:32.327