===== Optimization set =====

==== Set ====
| @#55CCEE: context | @#55CCEE: $ B $ |
| @#55CCEE: context | @#55CCEE: $ \langle Y, \le \rangle $ ... Non-strict partially ordered set |
| @#55CCEE: context | @#55CCEE: $ r:B\to Y $ |
| @#FF9944: definition | @#FF9944: $ O_r := \{\beta\in B\mid \forall(b\in B).\,r(\beta)\le r(b)\} $ |

-----

>todo
>#tag
>If $p$ are parameters and $c_p(x)$ are curves with $x_{\mathrm{min}}(c_p)=f(p)$ known, try to find $x_{\mathrm{min}}(c')$ by fitting $c_p$ to $c'$. Now what is $p$ here? Is there a scheme so that the list of parameters $p$ can be extended with the guarantee that eventually $c_p=c'$?

If $\mathrm{min}(r)\subseteq Y$ denotes the set of minimum values of $r$, then $O_r = r^{-1}(\mathrm{min}(r))$, where $r^{-1}:{\mathcal P}Y\to{\mathcal P}B$ is the preimage map.

Compare with [[Solution set]].

=== Parametrized regression ===
Consider a test pair $\langle x_0,y_0\rangle \in X\times Y$, where $y_0$ somehow depends on $x_0$.
Use a $B$-indexed family of fit functions $f:B\to(X\to Y)$ (the indexed subspace of $X\to Y$ is called the hypothesis space) and find in this family the optimal fit (given by an optimal $\beta\in B$) w.r.t. a loss function $V:Y\times Y\to Y$ by optimizing

$r(\beta):=V(f(\beta,x_0),y_0)$

As a remark, given a function $f$ (resp. a $\beta$), the value $V(f(\beta,x_0),y_0)$ (or a multiple thereof) is called the "empirical risk" in Statistical learning theory.

== Linear regression w.r.t. least squares ==
$f(\beta,x):=\beta_0+\sum_{i=1}^N\beta_i x_i$

with loss function

$V({\hat y},y)=({\hat y}-y)\cdot({\hat y}-y)$

In practice, the $x_i$ may be vectors and then $V$ is taken to be an inner product. Short numerical sketches of $O_r$ and of this least-squares fit are appended at the end of this entry.

=== Reference ===

-----
=== Context ===
[[Non-strict partial order]]

=== Related ===
[[Solution set]]
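
As an illustration of the set $O_r$ defined above: a minimal sketch in Python that computes $O_r$ directly from the definition for a finite $B$ whose $r$-values support comparison. The names optimization_set, B and r are illustrative assumptions, not part of this entry.

<code python>
# Minimal sketch, assuming a finite parameter set B and r-values that can be
# compared with <= (e.g. floats). Names are illustrative, not fixed by the entry.

def optimization_set(B, r):
    """Return O_r = {beta in B | r(beta) <= r(b) for all b in B}."""
    values = {beta: r(beta) for beta in B}
    return {beta for beta, v in values.items()
            if all(v <= w for w in values.values())}

# Example: r(beta) = (beta - 2)^2 on B = {0, 1, 2, 3, 4} gives O_r = {2}.
print(optimization_set({0, 1, 2, 3, 4}, lambda beta: (beta - 2) ** 2))
</code>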
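For the least-squares linear regression above, a minimal numerical sketch, assuming $X=\mathbb{R}^N$ with $N=3$, $Y=\mathbb{R}$, a finite batch of pairs $\langle x_k,y_k\rangle$ and summed squared loss. The use of numpy and all variable names are assumptions made for illustration, not prescribed by this entry.

<code python>
# Minimal sketch of the least-squares fit, assuming a finite batch of pairs
# (x_k, y_k) with x_k in R^3 and summed squared loss; numpy and all names here
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                           # 100 samples, N = 3 features
beta_true = np.array([1.5, -2.0, 0.5])
y = 0.7 + X @ beta_true + 0.1 * rng.normal(size=100)    # y ~ beta_0 + sum_i beta_i x_i

# f(beta, x) = beta_0 + sum_i beta_i x_i; prepend a column of ones for beta_0.
A = np.hstack([np.ones((X.shape[0], 1)), X])

# The optimal beta minimizes r(beta) = sum_k (f(beta, x_k) - y_k)^2.
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)   # approximately [0.7, 1.5, -2.0, 0.5]
</code>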