Artificial intelligence

Unit 1

What is AI?

- There are many ways in which we can define the term _AI_.
- The most important thing is how we define it.
- We can say it is acting humanly.
- We can also say thinking humanly, but "thinking" ties back to [Turing's paper](https://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence).

**Def:** A machine that can think and act humanly and rationally.

What is rational?

- _Maximally achieving pre-defined goals_
- We can make the machine think rationally and act rationally.
- Here we can define goals as **Utilities**.
- The world, known as the **State Space**, is highly *uncertain*.
- Being rational means _maximizing expected utility_.
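A minimal sketch of "maximizing expected utility": for each action, weight each outcome's utility by its probability and pick the action with the highest total. The actions, probabilities, and utilities below are made-up illustration values, not from the notes.

```python
# Expected utility of an action over uncertain outcomes:
# EU(a) = sum over outcomes s of P(s | a) * U(s).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical actions; the rational choice maximizes expected utility.
actions = {
    "safe":  [(1.0, 40)],                # certain payoff of 40
    "risky": [(0.5, 100), (0.5, 0)],     # coin flip: 100 or nothing
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
# here "risky" wins: 0.5*100 + 0.5*0 = 50 > 40
```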

Extras
=====
    - Human Brains are complex and we can't make machines that complex.

History of AI

- Turing paper
- The checkers-playing program (probably the first game-playing AI)
- Knowledge-Based Approaches
- Statistics-Based Approaches

Agent

- The **Main** Guy.
- It is an entity that perceives the environment via sensors and acts via actuators (_lame uni def_).
- As we are talking about a **Rational Agent**, there are four aspects that need to be considered for the _actions_ of the rational agent:
- **PAGE (Percepts, Actions, Goal Test and Environment)**
- The above four characteristics are important for the action of the agent.

Environments

Types of Agents

-   **Simple Reflex Agents**

    - No history: it just takes the current percept and performs an action.
    - The most basic agent we can make.

-    **Model Based Agents**

    - This one holds an **Internal State**, i.e. some history of the previous states.
    - Based on the previous states it builds a model of the state space and uses it to choose actions.

-   **Goal Based Agents**

    - Knowing only the past is not enough on its own, so we need a goal; hence we introduce a **Goal Test**.
    - It compares actions by whether they are suitable for the goal test.
    - It tries to pick an action that will lead to the goal state.

-   **Utility Based Agents**
    - It checks whether an action is the most optimal one available (**Performance Measure**).
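To make the simplest of the four types concrete, here is a sketch of a simple reflex agent: it maps the current percept directly to an action, with no memory. The vacuum-world percepts and actions are illustrative assumptions, not from the notes.

```python
# Simple reflex agent: action depends only on the current percept.
# Percept format (assumed): (location, status), e.g. ("A", "Dirty").

def simple_reflex_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # otherwise move to the other square
    return "Right" if location == "A" else "Left"
```

A model-based agent would differ only in keeping an internal state updated from the percept history before choosing the action.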

Unit 2

Agent States

    - Atomic => The state is a black box with no internal structure, like a plain node.
    - Factored => The state is a set of variables/attributes with values.
    - Structured => The state contains objects and the relations between them.
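One way to picture the three representations in code (the travel-domain values below are purely illustrative):

```python
# Atomic: the state is an opaque label, like a plain graph node.
atomic = "city_B"

# Factored: the state is a set of variables with values.
factored = {"location": "city_B", "fuel": 0.6, "time": 14}

# Structured: the state contains objects and relations between them.
structured = {
    "objects": ["truck1", "package1", "city_B"],
    "relations": [("at", "truck1", "city_B"), ("in", "package1", "truck1")],
}
```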

Problem Solving Agent

    - Here the environment is known and deterministic.
    - The process of looking for a sequence of actions that reaches the goal is called **Search**.
   

State vs Nodes

    - A **State** is a physical configuration
    - A **Node** is a data structure that consists of _parent, children, depth, path cost_, etc.
    - A State does not have the above.
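A minimal sketch of that distinction: the state is just a label, while the node wraps it with the bookkeeping fields listed above. The field names follow the notes; the example states are made up.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the physical configuration
    parent: Optional["Node"] = None   # node we came from
    action: Any = None                # action that generated this node
    depth: int = 0
    path_cost: float = 0.0

    def path(self):
        """Walk parent pointers back to the root; return states in order."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("S")
child = Node("A", parent=root, action="go-A", depth=1, path_cost=3.0)
```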

Search Strategies

- Strategies are evaluated along the four dimensions below:
    
    - Completeness => Does it always find a solution if one exists?
    - Time Complexity => Number of nodes generated/expanded
    - Space Complexity => Maximum number of nodes held in memory
    - Optimality => Does it always find a least-cost solution?

- The time and space complexities are measured in terms of
    
    - _b_ -> Maximum Branching Factor
    - _d_ -> depth of least cost solution
    - _m_ -> Maximum depth of State Space

Types of Search Strategies

- Uninformed
    
    - BFS
    - DFS
    - UCS
    - DLS
    - IDS
    - BDS

- Informed

Breadth First Search (Tree)

- **Expand each and every node, level by level, from left to right**
- Complete => Yes, if _b_ is finite.
- Time Complexity => _1 + b + b^2 + ... + b^d = O(b^d)_
- Space Complexity => _O(b^d)_, puts every node in memory
- Optimal => Only if all step costs are equal (it finds the shallowest goal, not necessarily the cheapest).
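A minimal tree-style BFS sketch on a small made-up graph (adjacency dict). For simplicity, the frontier holds whole paths, so the solution path falls out directly.

```python
from collections import deque

def bfs(graph, start, goal):
    """Expand nodes level by level; return the first (shallowest) path found."""
    frontier = deque([[start]])           # FIFO queue of paths
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": []}
# bfs(graph, "S", "C") finds the shallowest path ["S", "A", "C"]
```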

Uniform Cost Search (Tree)

- **Expand the least-cost unexpanded node (by path cost)**
- Here we consider the entire path cost of a node: if a node reached with cost 5 is expanded, its children get costs 5+6, 5+10, etc. We need to accumulate costs in this way.
- After finding the goal state we backtrack to the initial state to recover the path.
- Complete => Yes (assuming step costs are bounded below by some ε > 0)
- Optimal => Yes
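A sketch of UCS with a priority queue ordered by accumulated path cost g(n). The edge costs in the example graph are made up; for simplicity the path is carried along with each frontier entry rather than recovered by backtracking.

```python
import heapq

def ucs(graph, start, goal):
    """Always pop the cheapest frontier node; return (cost, path) to goal."""
    frontier = [(0, start, [start])]      # (path_cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {"S": [("A", 5), ("B", 2)], "A": [("G", 1)], "B": [("G", 6)], "G": []}
# ucs picks S->A->G (cost 6) over S->B->G (cost 8)
```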

Depth First Search (Tree)

- **Just keep expanding down one branch (depth first)**
- A loop is possible here, so we need to eliminate it (e.g. keep the states on the current path, or an explored set, and skip repeats).
- Complete => No (it can run forever down an infinite or looping branch)
- Time Complexity => _O(b^m)_, bad when _m_ is much larger than _d_
- Space Complexity => _O(bm)_, linear space
- Optimal => No
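A recursive DFS sketch on a made-up graph; checking the current path is one simple way to break the loops mentioned above.

```python
def dfs(graph, node, goal, path=None):
    """Go deep along one branch; skip states already on the current path."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for child in graph.get(node, []):
        if child not in path:             # avoid cycling back along this path
            result = dfs(graph, child, goal, path)
            if result:
                return result
    return None

# "A" points back to "S": without the path check this would loop forever.
graph = {"S": ["A", "B"], "A": ["S", "C"], "B": ["C"], "C": []}
```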
Depth Limited Search (Tree)

- It is Depth First Search with a depth limit _l_.
- Time Complexity => _O(b^l)_
- Space Complexity => _O(bl)_
- Optimal => No, not if _l > d_ (and incomplete if _l < d_)
Iterative Deepening Search

- Here we increase the depth limit on every iteration.
- Complete => Yes
- Time Complexity => _O(b^d)_
- Space Complexity => _O(bd)_
- Optimal => Yes, if Step Cost = 1
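The iterative deepening scheme can be sketched as depth-limited DFS run with limits 0, 1, 2, ... until the goal is found, combining DFS's linear space with BFS's shallowest-solution guarantee. The example graph is made up.

```python
def depth_limited(graph, node, goal, limit, path=None):
    """DFS that refuses to descend below the given depth limit."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1, path)
        if result:
            return result
    return None

def ids(graph, start, goal, max_depth=20):
    """Retry depth-limited search with an ever-growing limit."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": [], "C": []}
# ids finds ["S", "A", "C"] on the limit-2 iteration
```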
Bidirectional Search

- Here we search forward from the initial state and backward from the goal state, and the two searches meet in the middle.
- Here we typically use **BFS** for both directions of the bidirectional search.
- Complete => Yes, if b is finite in both directions.
- Time Complexity => _O(b^(d/2))_
- Space Complexity => _O(b^(d/2))_
- Optimal => Yes, if step costs are identical.
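A sketch of bidirectional BFS: expand one level from each side in turn and stop when the frontiers meet. It assumes an undirected graph (so the backward search can reuse the same adjacency dict); the example graph is made up, and only the path length is returned for brevity.

```python
from collections import deque

def bidirectional(graph, start, goal):
    """Level-by-level BFS from both ends; return the meeting path length."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}       # distances from each side
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for dist, other, q in ((dist_f, dist_b, qf), (dist_b, dist_f, qb)):
            for _ in range(len(q)):              # expand one full level
                node = q.popleft()
                for child in graph.get(node, []):
                    if child in other:           # the frontiers have met
                        return dist[node] + 1 + other[child]
                    if child not in dist:
                        dist[child] = dist[node] + 1
                        q.append(child)
    return None

graph = {
    "S": ["A"], "A": ["S", "B"], "B": ["A", "C"],
    "C": ["B", "G"], "G": ["C"],
}
# the two searches meet in the middle of the S-A-B-C-G chain (length 4)
```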

Extras
=====

    - Tree Based -> Keeps no explored set (nodes can be revisited)
    - Graph Based -> Keeps an explored set

Informed Searches

- Here we have a heuristic _h(n)_ => an estimate of how close a state is to a goal state.

Greedy Best First Search

- Picks the node with the smallest heuristic value among the frontier nodes.
- Uses a priority queue ordered by heuristic value (smallest first).
- Its evaluation function is _f(n) = h(n)_.
- It is called Greedy Best First because it **doesn't** consider the path cost, only the heuristic.
- It is fast: it heads straight toward the goal state.
- Complete => No, might get stuck in loops
- Time => _O(b^m)_
- Space => _O(b^m)_
- Optimal => No
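A sketch of greedy best-first search: the priority queue is keyed on h(n) alone, ignoring path cost, which is exactly why the result need not be optimal. The graph and h values are made up.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest heuristic h(n)."""
    frontier = [(h[start], start, [start])]      # keyed on h only, not g
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 3, "B": 1, "G": 0}
# B looks closer than A (h=1 vs h=3), so greedy goes S -> B -> G
```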

A* Search Algorithm

- Here it considers both path cost and heuristic => _f(n) = g(n) + h(n)_
- It might not give the optimal solution every time.
- It is optimal only when _h(n)_ is admissible, i.e. it never overestimates the cost to reach the goal.
- To guarantee an optimal solution we need the heuristic ≤ the actual cost.
- A* is optimal with tree search if _h(n)_ is admissible; A* is optimal with graph search if _h(n)_ is consistent.
- Complete => Yes, unless there are infinitely many nodes with _f ≤ f(G)_ (think of the f-contours)
- Time => Exponential
- Space => Keeps all nodes in memory
- Optimal => Yes (under the conditions above)
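An A* sketch ordering the frontier by f(n) = g(n) + h(n). The edge costs and h values below are made up, but h is admissible (it never overestimates), so the returned path is optimal here.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand by f = g + h; return (cost, path) of the goal found."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 2, "B": 1, "G": 0}
# S->A looks cheap at first, but A* still returns the optimal S->B->G (cost 5)
```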

Extras
=====

    - There is something called **path max**, where we take the maximum of _f_ so that the f-contours are non-decreasing:
    - _f(n') = max{g(n') + h(n'), f(n)}_

Iterative Deepening A*

Sliding-Tile Puzzle (8-Puzzle)

- Two types of heuristics
    - Number of misplaced tiles (h1)
    - Total Manhattan distance (h2)

- Here h2 is better because it generates fewer nodes, but only when a tile can move just to an adjacent square.
- If a tile could move to any square, then h1 would be the better heuristic.
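The two heuristics can be computed directly; here is a sketch on one made-up 8-puzzle board (states are tuples read row by row, with 0 marking the blank).

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted one square right
```

Since every move shifts one tile one square, h2 never overestimates when only adjacent moves are allowed; it always dominates h1 (h2 ≥ h1), which is why it expands fewer nodes.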