CH751 Special Lecture on Fuzzy Systems

Uncertainties in Intelligent Systems



Spring Semester 2004

Teaching Staff


Instructor

조성배 (Engineering Building C515; 2123-2720; sbcho@cs.yonsei.ac.kr)

Course page

http://sclab.yonsei.ac.kr/Courses/04FuS



Lecture hours

Periods 6, 7 / 7 (room C520)

Office hours

Periods 8, 9 / 9

Teaching assistant

황금성

Uncertainties in Intelligent Systems


Dealing with uncertain and imprecise information has been one of
the major issues in almost all intelligent systems

Decision making systems, diagnostic systems, intelligent
agent systems, planning systems, data mining, etc.

Various approaches to cope with uncertain, imprecise, vague,
and even inconsistent information

Bayesian and probabilistic methods, belief networks,
soft computing, etc.

Soft computing

Neural networks, fuzzy theory, approximate reasoning,
derivative-free optimization methods (GA), etc.

Synergy allows SC to incorporate human knowledge effectively,
deal with imprecision and uncertainty, and learn to adapt to
unknown or changing environments for better performance

Intelligent systems to mimic human intelligence in thinking,
learning, reasoning, etc.
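To make the fuzzy-theory side of soft computing concrete, here is a minimal Python sketch of triangular membership functions and a toy two-rule controller with weighted-average defuzzification. The linguistic terms, rule weights, and temperatures are invented for illustration and are not taken from the course material.

```python
# Minimal sketch of fuzzy membership and rule evaluation (illustrative only;
# the membership functions, linguistic terms, and rules are made up).

def triangular(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for a room-temperature controller.
def cold(t):
    return triangular(t, -5.0, 5.0, 15.0)

def warm(t):
    return triangular(t, 10.0, 20.0, 30.0)

def heater_power(t):
    """Evaluate two toy rules and defuzzify with a weighted average.

    Rule 1: IF temperature is cold THEN power is high (1.0)
    Rule 2: IF temperature is warm THEN power is low  (0.2)
    """
    w1, w2 = cold(t), warm(t)
    if w1 + w2 == 0.0:
        return 0.0
    return (w1 * 1.0 + w2 * 0.2) / (w1 + w2)

if __name__ == "__main__":
    for t in (0.0, 12.0, 25.0):
        print(f"temperature={t:5.1f}  power={heater_power(t):.2f}")
```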

Course Schedule (1)

1. 3/2, 3/4 : Course introduction; overview of SC/AI/KE
2. 3/9, 3/11 : Rule-based systems, expert systems, fuzzy systems
3. 3/16, 3/18 : Knowledge representation
4. 3/23, 3/25 : Uncertainties in knowledge-based systems
5. 3/30, 4/1 : Bayesian network special lecture
6. 4/6, 4/8 : Programming assignment 1
7. 4/13, 4/15 : Machine learning methods for knowledge engineering
8. 4/20 : Midterm exam
9. 4/29 : Project proposal presentations
10. 5/6 : Fuzzy sets and fuzzy logic
11. 5/13 : Fuzzy systems
12. 5/20 : Fuzzy system applications
13. 5/27 : Introduction to neural networks
14. 6/3 : Hybrid systems
15. 6/10 : Project result presentations
16. 6/15 : Final exam

Course Schedule (2)

http://www.csse.monash.edu.au/~annn/443/443.html


3/25 : Introduction to Bayesian AI

3/30, 4/1 : Introduction to Bayesian Networks

4/6, 4/8 : Inference in Bayesian Networks

4/13 : Decision Networks

4/20 : Midterm exam

4/27 : BN Knowledge Engineering

5/4 : BN Case Studies

5/11 : Intro to Machine Learning and Bayesian Confirmation Theory

5/18 : Linear Causal Models, Conditional Independence Learning

5/25 : Parameter Learning, Metric Learning: Bayesian, MDL, MML

6/1 : Search and the Evaluation of BN Learners

6/8, 6/10 : Project result presentations

6/15 : Final exam


Reasoning Under Uncertainty

Part I : This part of the course will focus on two representations for modelling
and reasoning under uncertainty: Bayesian (or belief) networks and Markov
Decision Processes. Bayesian networks have rapidly become one of the
leading technologies for applying AI to real-world problems. This follows the
work of Pearl, Lauritzen, and others in the late 1980s showing that Bayesian
reasoning can be tractable in practice (although in principle it is NP-hard).
We begin with a brief examination of the philosophy of Bayesianism,
motivating the use of probabilities in decision making, agent modeling and
data analysis, and contrasting Bayesian methods with certainty factors, fuzzy
logic and the Dempster-Shafer calculus. We introduce Bayesian networks,
their inference techniques and approximation methods. We look at an
extension to Bayesian networks, called decision networks, which supports
decision making. Several BN software packages will be introduced and used
throughout the course. We will look at the general problem of "knowledge
engineering" of Bayesian networks, and consider practical issues of eliciting
domain knowledge from experts. These issues will be illustrated through
the use of several real-world case studies, including Bayesian poker,
seabreeze prediction and an intelligent tutoring system for decimal
misconceptions. This part of the course will conclude with a brief look at
another representation of uncertainty, Markov Decision Processes, together
with basic dynamic programming solution methods.
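As a rough illustration of the kind of inference a Bayesian network supports, the following minimal Python sketch computes a posterior by enumerating a made-up two-node network (Rain influencing WetGrass). The structure, variable names, and probabilities are invented for illustration; real exercises would use the BN software packages mentioned above.

```python
# Minimal sketch of exact inference in a tiny Bayesian network (illustrative
# only; the network, variable names, and probabilities are made up).
# Network: Rain -> WetGrass.  Both variables are boolean.

P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """P(Rain=rain, WetGrass=wet) via the chain rule of the network."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

def posterior_rain(wet_observed=True):
    """P(Rain | WetGrass=wet_observed) by enumerating the joint and normalizing."""
    unnormalized = {r: joint(r, wet_observed) for r in (True, False)}
    z = sum(unnormalized.values())
    return {r: p / z for r, p in unnormalized.items()}

if __name__ == "__main__":
    post = posterior_rain(wet_observed=True)
    print(f"P(Rain | WetGrass=true) = {post[True]:.3f}")  # about 0.529
```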

Reasoning Under Uncertainty

Part II : There are many difficulties with constructing AI models (such
as BNs or MDPs) using human domain knowledge, including a lack of
human domain expertise, difficulties in eliciting causal structure, and
inconsistent probabilities. This has led to a strong interest in
automating the learning of such models from statistical data, which
is the focus of the second part of the course. We will start with an
introduction to machine learning concepts, including Bayesian
confirmation theory, and their application to classifier systems and
to MDPs with reinforcement learning. We will then examine
parameter learning in the context of Bayesian net parameterization.
These techniques allow much of the difficult part of knowledge
engineering with Bayesian nets to be automated, but leave the
problem of sorting out Bayesian net structure untouched, so we will
continue with Bayesian net structure learning. Some of the
techniques have been around for a century; we will look briefly at
the tradition of structural equation modeling and causal modeling in
the social sciences. Then we examine very recently developed
Bayesian, MDL and MML methods for learning causal structure.
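As a rough illustration of parameter learning for a Bayesian net, the following minimal Python sketch estimates one conditional probability table from complete, made-up boolean data, using counts with a simple add-one (Laplace) prior. The variables and observations are invented for illustration, and this is not the specific MDL/MML machinery discussed in the course.

```python
# Minimal sketch of Bayesian network parameter learning from complete data
# (illustrative only; the variables, observations, and prior counts are made up).
# We estimate P(WetGrass | Rain) from boolean observations by counting,
# with a Laplace (add-one) prior giving a simple Bayesian estimate.

from collections import Counter

# Hypothetical complete data: (rain, wet_grass) observations.
data = [(True, True), (True, True), (True, False),
        (False, False), (False, False), (False, True), (False, False)]

def estimate_cpt(observations, prior_count=1.0):
    """Return P(WetGrass=true | Rain=r) for r in {True, False}."""
    counts = Counter(observations)  # counts of (rain, wet) pairs
    cpt = {}
    for rain in (True, False):
        n_true = counts[(rain, True)] + prior_count
        n_false = counts[(rain, False)] + prior_count
        cpt[rain] = n_true / (n_true + n_false)
    return cpt

if __name__ == "__main__":
    cpt = estimate_cpt(data)
    for rain, p in cpt.items():
        print(f"P(WetGrass=true | Rain={rain}) = {p:.2f}")
```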