Alan Turing is often given the unofficial title of original hacker, and he is a worthy choice. His 1950 essay, "Computing Machinery and Intelligence," is still consulted today by those contemplating the gap between software and human brains.
But Turing worked with machines built on the work of others, as decades of hardware progress pushed toward the semiconductor age. And what of the theory behind the hardware? Some form of it had to come first, of course. George Boole, for one, conceived of the logical operations, now known as Boolean operators, that still dot programming today. Their roots trace to Boole's 1854 work, An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities.
Before Boole there were others: Euclid, Euler, Newton and the Bernoullis. They all contributed. Naming one of them the original hacker requires a dash of subjective judgment; a case could be made for a dozen people. But none of them, in this writer's opinion, fills the role quite as well as Gottfried Leibniz, a German who preceded iOS by 360 years.
Leibniz, like his contemporary Newton, was a polymath. His knowledge and curiosity spanned the European continent and most of its interesting subjects. On philosophy, Leibniz said, there are two simple absolutes, God and nothingness, and from these two all other things come. How fitting, then, that Leibniz conceived of a calculating language defined by two and only two figures: 0 and 1.
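Leibniz's two-figure arithmetic is, of course, the binary system that modern machines run on. As a minimal sketch (the function name here is my own, for illustration), this is how an ordinary decimal number reduces to Leibniz's two figures by repeated division:

```python
def to_binary(n: int) -> str:
    """Express a non-negative integer using only Leibniz's two figures, 0 and 1."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2                    # divide by two and repeat
    return "".join(reversed(digits))  # digits were collected least-significant first

# 1703 is the year Leibniz published his essay on binary arithmetic
print(to_binary(1703))
```

Every step asks only one question, even or odd, which is why the notation needs nothing beyond 0 and 1.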
Read the complete article at Forbes.