Reliable Reasoning - Induction and Statistical Learning Theory (Paperback)

Gilbert Harman, Sanjeev Kulkarni

R818

Free Delivery
Ships in 10 - 15 working days



Product Description

The implications for philosophy and cognitive science of developments in statistical learning theory. In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni, a philosopher and an engineer, argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors, a central topic in SLT. After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and propose new models of human reasoning inspired by developments in SLT.
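To make the notion of reliability mentioned above concrete, a standard formulation from statistical learning theory (offered here as an illustration, not a formula quoted from the book) defines the expected error of a classification rule h and its empirical estimate as

\[
\mathrm{Err}(h) = \Pr_{(x,y)\sim P}\big[h(x) \neq y\big],
\qquad
\widehat{\mathrm{Err}}_n(h) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\big[h(x_i) \neq y_i\big],
\]

where P is the unknown distribution generating the data and (x_1, y_1), ..., (x_n, y_n) are the observed examples. VC theory bounds the gap between these two quantities uniformly over a hypothesis class in terms of its VC dimension.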

Customer Reviews

No reviews or ratings yet - be the first to create one!

Product Details

General

Imprint

Bradford Books

Country of origin

United States

Series

Jean Nicod Lectures

Release date

2012

Availability

Expected to ship within 10 - 15 working days

First published

2007

Authors

Gilbert Harman, Sanjeev Kulkarni

Dimensions

203 x 137 x 6mm (L x W x T)

Format

Paperback - Trade

Pages

120

ISBN-13

978-0-262-51734-8

Barcode

9780262517348

LSN

0-262-51734-5


