| @#55CCEE: context     | @#55CCEE: $ B $ |
| @#55CCEE: context     | @#55CCEE: $ \langle Y, \le \rangle $ ... Non-strict partially ordered set |
| @#55CCEE: context     | @#55CCEE: $ r:B\to Y $ |
| @#FF9944: definition  | @#FF9944: $ O_r := \{\beta\in B\mid \forall(b\in B).\,r(\beta)\le r(b)\}$ |
  
-----
If ${\mathrm{min}(r)}\subseteq Y$ denotes the set of minimum values of $r$, then
  
$O_r = r^{-1}({\mathrm{min}(r)})$
  
with the preimage map $r^{-1}:{\mathcal P}Y\to{\mathcal P}B$.
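
For a finite index set $B$ this identity is easy to check by machine. A minimal Python sketch (the set $B$ and the values of $r$ below are made-up illustration data, not from this page) computes $O_r$ both ways:

<code python>
# Finite stand-ins for the objects above: B an index set, r : B -> Y a map
# into a poset (here the integers with their usual order).
B = ["a", "b", "c", "d"]
r = {"a": 3, "b": 1, "c": 2, "d": 1}

# Directly from the definition: O_r = all beta with r(beta) <= r(b) for every b in B.
O_r_def = {beta for beta in B if all(r[beta] <= r[b] for b in B)}

# Via the identity: O_r = r^{-1}(min(r)), the preimage of the minimum value.
min_value = min(r[b] for b in B)
O_r_pre = {beta for beta in B if r[beta] == min_value}

assert O_r_def == O_r_pre == {"b", "d"}
</code>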
  
Compare with [[Solution set]].

== Example ==
For

$r:{\mathbb R}\to{\mathbb R}$

$r(x):=(x-7)^2$

we get

$O_r=\{7\}$

=== Parametrized regression ===
where $x_0$ somehow depends on $y_0$.
  
Use a $B$-family of fit functions
  
$f:B\to(X\to Y)$

(the indexed subspace of $X\to Y$ is called the hypothesis space)
  
and from this set find the optimal fit (given by an optimal $\beta\in B$) w.r.t. a loss function $V:Y\times Y\to Y$ by optimizing
  
$r(\beta):=V(f(\beta,x),y)$
  
As a remark, given a function $f$ (resp. a $\beta$), the value $V(f(\beta,x_0),y_0)$ (or a multiple thereof) is called "empirical risk" in Statistical learning theory.
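
To make the regression setup concrete, here is a brute-force Python sketch; the data, the one-parameter linear family and the squared-error loss are all hypothetical choices, the risk is summed over a few sample points, and $B$ is replaced by a finite grid so the argmin can be found by enumeration:

<code python>
xs = [1.0, 2.0, 3.0]   # sample inputs (made up), roughly following y = 2x
ys = [2.1, 3.9, 6.2]   # sample outputs (made up)

def f(beta):
    """B-indexed family of fit functions f : B -> (X -> Y); here f(beta)(x) = beta*x."""
    return lambda x: beta * x

def V(y_pred, y_true):
    """Loss function V; here the squared error."""
    return (y_pred - y_true) ** 2

def r(beta):
    """r(beta) := loss of the fit indexed by beta (the empirical risk)."""
    return sum(V(f(beta)(x), y) for x, y in zip(xs, ys))

# B replaced by a finite grid of candidate parameters; O_r is the argmin over it.
grid = [i / 100 for i in range(401)]
best = min(grid, key=r)
print(best)  # about 2.04, the least-squares slope for this data
</code>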