Optimization set
Set
context | $B$ |
context | $\langle Y, \leq \rangle$ … Non-strict partially ordered set |
context | $r : B \to Y$ |
definition | $O_r := \{\beta \in B \mid \forall (b \in B).\ r(\beta) \leq r(b)\}$ |
todo
#tag
If $p$ are parameters and $c_p(x)$ are curves with known minimizers $x_{\min}(c_p) = f(p)$, one can try to find $x_{\min}(c')$ by fitting the family $c_p$ to $c'$ (see the sketch below). Now what is $p$ here? Is there a scheme to extend the parameter list $p$ so that it is guaranteed that eventually $c_p = c'$ for some parameters?
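A minimal sketch of this idea, assuming a quadratic family $c_p(x) = p_2 (x - p_1)^2 + p_0$ with $x_{\min}(c_p) = p_1$ known in closed form; the family and the target curve `c_prime` are illustrative choices, not fixed by the note:

```python
import numpy as np
from scipy.optimize import curve_fit

# Quadratic family c_p; its minimizer x_min(c_p) = p1 is known in closed form
def c_p(x, p0, p1, p2):
    return p2 * (x - p1) ** 2 + p0

# Target curve c' whose minimizer we want to locate (here x_min = 1.5)
def c_prime(x):
    return (x - 1.5) ** 2 + 0.3

xs = np.linspace(0.0, 3.0, 50)
popt, _ = curve_fit(c_p, xs, c_prime(xs), p0=[0.0, 0.0, 1.0])
print(popt[1])  # fitted p1, i.e. the estimate of x_min(c'); here ~1.5
```

Here $c'$ happens to lie in the family, so the fit recovers the minimizer exactly; the open question above is how to grow the family so that this is eventually guaranteed for an arbitrary $c'$.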
If $\min(r) \subseteq Y$ denotes the set of minimal values of $r$, then
$$O_r = r^{-1}(\min(r))$$
with the preimage map $r^{-1} : \mathcal{P}Y \to \mathcal{P}B$.
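For finite $B$ this characterization can be computed directly; a minimal sketch in Python, where the finiteness of $B$ and a total order on $Y$ are assumptions made for the example, not part of the definition:

```python
def optimization_set(B, r):
    """Return O_r = r^{-1}(min(r)) for a finite domain B.

    Assumes the values r(beta) are mutually comparable
    (a total order), so that min() is well-defined.
    """
    values = {beta: r(beta) for beta in B}
    m = min(values.values())  # min(r)
    return {beta for beta, v in values.items() if v == m}

# Example: minimizers of r(beta) = (beta - 2)**2 over B = {0,...,4}
print(optimization_set(range(5), lambda beta: (beta - 2) ** 2))  # {2}
```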
Compare with Solution set.
Parametrized regression
Consider a test pair
$$\langle x_0, y_0 \rangle \in X \times Y,$$
where $y_0$ somehow depends on $x_0$.
Use a $B$-family of fit functions
$$f : B \to (X \to Y)$$
(the indexed subspace of $X \to Y$ is called the hypothesis space)
and find from this family the optimal fit (given by an optimal $\beta \in B$) w.r.t. a loss function
$$V : Y \times Y \to Y$$
by optimizing
$$r(\beta) := V(f(\beta, x_0), y_0).$$
As a remark, given a function $f$ (resp. a $\beta$), the value $V(f(\beta, x_0), y_0)$ (or a multiple thereof) is called the “empirical risk” in Statistical learning theory.
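A minimal sketch of this optimization over $\beta$, using `scipy.optimize.minimize` as one possible optimizer; the affine family, the single test pair and the squared loss are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

x_0, y_0 = 2.0, 5.0                 # test pair <x_0, y_0>

def f(beta, x):                     # B-family of fit functions, B = R^2
    return beta[0] + beta[1] * x

def V(y_hat, y):                    # loss function V : Y x Y -> Y
    return (y_hat - y) ** 2

def r(beta):                        # r(beta) := V(f(beta, x_0), y_0)
    return V(f(beta, x_0), y_0)

result = minimize(r, np.zeros(2))
print(result.x)                     # one optimal beta
```

Note that with a single test pair and two parameters the optimal $\beta$ is not unique: every $\beta$ with $\beta_0 + \beta_1 x_0 = y_0$ is optimal, so the optimization set $O_r$ from above has more than one element.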
Linear regression w.r.t. least squares
$$f(\beta, x) := \beta_0 + \sum_{i=1}^N \beta_i x_i$$
with loss function
$$V(\hat{y}, y) = (\hat{y} - y) \cdot (\hat{y} - y)$$
In practice, the $x_i$ may be vectors, and then the product in $V$ is taken to be an inner product.
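A minimal sketch of this concrete case, solving for $\beta$ with `numpy.linalg.lstsq`; the data is synthetic, generated only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 3, 100
X = rng.normal(size=(samples, N))
beta_true = np.array([1.0, -2.0, 0.5])
y = 4.0 + X @ beta_true + 0.01 * rng.normal(size=samples)

# Prepend a column of ones so that beta_0 acts as the intercept
A = np.hstack([np.ones((samples, 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # approximately [4.0, 1.0, -2.0, 0.5]
```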