Deriving Bellman's equation in reinforcement learning


Answers:


7

This is the answer for everybody who wonders about the clean, structured math behind it (i.e. if you belong to the group of people who knows what a random variable is and that you have to show or assume that a random variable has a density, then this is the answer for you ;-)):

First of all we need the Markov Decision Process to have only a finite number of $L^1$-rewards, i.e. we need a finite set $E$ of densities, each belonging to an $L^1$ variable, i.e. $\int_{\mathbb{R}} x \cdot e(x)\, dx < \infty$ for all $e \in E$, and a map $F : A \times S \to E$ such that

$$p(r_t \mid a_t, s_t) = F(a_t, s_t)(r_t)$$

(i.e. the automaton behind the MDP may have infinitely many states, but there are only finitely many $L^1$-reward distributions attached to the possibly infinitely many transitions between the states).

Theorem 1: Let $X \in L^1(\Omega)$ (i.e. an integrable real random variable) and let $Y$ be another random variable such that $X, Y$ have a common density. Then

$$E[X \mid Y=y] = \int_{\mathbb{R}} x\, p(x \mid y)\, dx$$

Proof: Essentially proven here by Stefan Hansen.

Theorem 2: Let $X \in L^1(\Omega)$ and let $Y, Z$ be further random variables such that $X, Y, Z$ have a common density. Then

$$E[X \mid Y=y] = \int_{\mathcal{Z}} p(z \mid y)\, E[X \mid Y=y, Z=z]\, dz$$

where $\mathcal{Z}$ is the range of $Z$.

Proof:

$$\begin{align}
E[X \mid Y=y] &= \int_{\mathbb{R}} x\, p(x \mid y)\, dx &&\text{(by Thm. 1)}\\
&= \int_{\mathbb{R}} x\, \frac{p(x,y)}{p(y)}\, dx\\
&= \int_{\mathbb{R}} x\, \frac{\int_{\mathcal{Z}} p(x,y,z)\, dz}{p(y)}\, dx\\
&= \int_{\mathcal{Z}} \int_{\mathbb{R}} x\, \frac{p(x,y,z)}{p(y)}\, dx\, dz\\
&= \int_{\mathcal{Z}} \int_{\mathbb{R}} x\, p(x \mid y,z)\, p(z \mid y)\, dx\, dz\\
&= \int_{\mathcal{Z}} p(z \mid y) \int_{\mathbb{R}} x\, p(x \mid y,z)\, dx\, dz\\
&= \int_{\mathcal{Z}} p(z \mid y)\, E[X \mid Y=y, Z=z]\, dz &&\text{(by Thm. 1)}
\end{align}$$

Put $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k}$ and put $G_t^{(K)} = \sum_{k=0}^{K} \gamma^k R_{t+k}$. Then one can show (using the fact that the MDP has only finitely many $L^1$-rewards) that $G_t^{(K)}$ converges and that, since the function $\sum_{k=0}^{\infty} \gamma^k |R_{t+k}|$ is still in $L^1(\Omega)$ (i.e. integrable), one can also show (by the usual combination of the monotone convergence theorem and then dominated convergence applied to the defining equations for [the factorizations of] the conditional expectation) that

$$\lim_{K \to \infty} E[G_t^{(K)} \mid S_t=s_t] = E[G_t \mid S_t=s_t]$$

One now shows that

$$E[G_t^{(K)} \mid S_t=s_t] = E[R_t \mid S_t=s_t] + \gamma \int_S p(s_{t+1} \mid s_t)\, E[G_{t+1}^{(K-1)} \mid S_{t+1}=s_{t+1}]\, ds_{t+1}$$

using $G_t^{(K)} = R_t + \gamma G_{t+1}^{(K-1)}$, Thm. 2 above, then Thm. 1 on $E[G_{t+1}^{(K-1)} \mid S_{t+1}=s', S_t=s_t]$, and then, via a straightforward marginalization argument, that $p(r_q \mid s_{t+1}, s_t) = p(r_q \mid s_{t+1})$ for all $q \ge t+1$. Now we have to apply the limit $K \to \infty$ to both sides of the equation. In order to pull the limit into the integral over the state space $S$ we need to make some additional assumptions:

Either the state space is finite (then $\int_S = \sum_S$ and the sum is finite), or all the rewards are positive (then we use monotone convergence), or all the rewards are negative (then we put a minus sign in front of the equation and use monotone convergence again), or all the rewards are bounded (then we use dominated convergence). Then (by applying $\lim_{K \to \infty}$ to both sides of the partial / finite Bellman equation above):

$$E[G_t \mid S_t=s_t] = \lim_{K \to \infty} E[G_t^{(K)} \mid S_t=s_t] = E[R_t \mid S_t=s_t] + \gamma \int_S p(s_{t+1} \mid s_t)\, E[G_{t+1} \mid S_{t+1}=s_{t+1}]\, ds_{t+1}$$

The rest is the usual density manipulation.
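To make the limiting argument above concrete, here is a minimal numerical sketch (not part of the original argument; it assumes NumPy and an invented two-state MDP with bounded rewards) comparing the truncated expectation $E[G_t^{(K)} \mid S_t=s]$ with the fixed point of the Bellman equation as $K$ grows:

```python
import numpy as np

# Invented 2-state, 2-action MDP with bounded rewards (illustration only).
# P[a, s, s'] is the transition kernel, R[a, s] the expected immediate
# reward for action a in state s, pi[s, a] the policy.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])
gamma = 0.9

# Collapse the action choice into a state-to-state chain under pi.
P_pi = np.einsum('sa,asz->sz', pi, P)   # P_pi[s, s'] = sum_a pi(a|s) P(s'|s,a)
r_pi = np.einsum('sa,as->s', pi, R)     # r_pi[s]     = sum_a pi(a|s) R(a, s)

# Exact fixed point of the Bellman equation: v = r_pi + gamma * P_pi v.
v_inf = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

def truncated_value(K):
    """E[G_t^(K) | S_t = s] = sum_{k=0}^{K} gamma^k E[R_{t+k} | S_t = s]."""
    v, Pk = np.zeros(2), np.eye(2)
    for k in range(K + 1):
        v += gamma ** k * (Pk @ r_pi)
        Pk = Pk @ P_pi
    return v

for K in [0, 5, 20, 100]:
    print(K, np.max(np.abs(truncated_value(K) - v_inf)))  # error shrinks like gamma^K
```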

REMARK: Even in very simple tasks the state space can be infinite! One example is the 'balancing a pole' task. The state is essentially the angle of the pole (a value in $[0, 2\pi)$, an uncountably infinite set!).

REMARK: People might comment 'the math would be much simpler if we just used the density of $G_t$ directly and showed $p(g_{t+1} \mid s_{t+1}, s_t) = p(g_{t+1} \mid s_{t+1})$' ... but ... my questions are:

  1. How do you even know that $G_{t+1}$ has a density?
  2. How do you know that $G_{t+1}$ has a common density together with $S_{t+1}, S_t$?
  3. How do you infer that $p(g_{t+1} \mid s_{t+1}, s_t) = p(g_{t+1} \mid s_{t+1})$? This is not just the Markov property: the Markov property only tells you something about the marginal distributions, but these do not necessarily determine the whole distribution (see e.g. multivariate Gaussians)!

10

Let $G_t$ be the total discounted reward from time $t$ onwards:
$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots$$

The utility value of starting in state $s$ at time $t$ is the expected sum of discounted rewards $R$ from executing policy $\pi$ starting from state $s$ onwards:

$$\begin{align}
U_\pi(S_t=s) &= E_\pi[G_t \mid S_t=s]\\
&= E_\pi[(R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots) \mid S_t=s] &&\text{by definition of } G_t\\
&= E_\pi[(R_{t+1} + \gamma(R_{t+2} + \gamma R_{t+3} + \cdots)) \mid S_t=s]\\
&= E_\pi[(R_{t+1} + \gamma G_{t+1}) \mid S_t=s]\\
&= E_\pi[R_{t+1} \mid S_t=s] + \gamma E_\pi[G_{t+1} \mid S_t=s] &&\text{by linearity of expectation}\\
&= E_\pi[R_{t+1} \mid S_t=s] + \gamma E_\pi[E_\pi(G_{t+1} \mid S_{t+1}=s') \mid S_t=s] &&\text{by the law of total expectation}\\
&= E_\pi[R_{t+1} \mid S_t=s] + \gamma E_\pi[U_\pi(S_{t+1}=s') \mid S_t=s] &&\text{by definition of } U_\pi\\
&= E_\pi[R_{t+1} + \gamma U_\pi(S_{t+1}=s') \mid S_t=s] &&\text{by linearity of expectation}
\end{align}$$

Assume that the process satisfies the Markov property:
probability $Pr$ of ending up in state $s'$ having started from state $s$ and taken action $a$,
$$Pr(s' \mid s, a) = Pr(S_{t+1}=s' \mid S_t=s, A_t=a)$$
and reward $R$ of ending up in state $s'$ having started from state $s$ and taken action $a$,
$$R(s, a, s') = E[R_{t+1} \mid S_t=s, A_t=a, S_{t+1}=s']$$

Therefore, we can rewrite the utility equation above as
$$U_\pi(S_t=s) = \sum_a \pi(a \mid s) \sum_{s'} Pr(s' \mid s, a)\left[R(s, a, s') + \gamma U_\pi(S_{t+1}=s')\right]$$

where $\pi(a \mid s)$ is the probability of taking action $a$ when in state $s$ under a stochastic policy. For a deterministic policy, $\sum_a \pi(a \mid s) = 1$.
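To see the final equation at work, here is a small sketch (my own illustration, assuming NumPy and an invented two-state MDP written in the $Pr(s' \mid s,a)$, $R(s,a,s')$ notation above) that solves for $U_\pi$ by iterating the right-hand side until it stops changing:

```python
import numpy as np

# Invented toy MDP in the notation above:
# Pr[s, a, s1] = Pr(s'|s,a), Rsas[s, a, s1] = R(s,a,s'), pi[s, a] = pi(a|s).
Pr = np.array([[[0.8, 0.2], [0.1, 0.9]],
               [[0.5, 0.5], [0.3, 0.7]]])
Rsas = np.array([[[1.0, 0.0], [0.0, 2.0]],
                 [[0.5, 0.5], [1.0, -1.0]]])
pi = np.array([[0.6, 0.4],
               [0.5, 0.5]])
gamma = 0.95

# Fixed-point iteration on
#   U(s) = sum_a pi(a|s) sum_s' Pr(s'|s,a) [R(s,a,s') + gamma * U(s')]
U = np.zeros(2)
for _ in range(1000):
    U_new = (np.einsum('sa,saz,saz->s', pi, Pr, Rsas)
             + gamma * np.einsum('sa,saz,z->s', pi, Pr, U))
    if np.max(np.abs(U_new - U)) < 1e-12:
        break
    U = U_new
print(U)  # converges because the update is a gamma-contraction
```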


Just a few notes: the sum over $\pi(a \mid s)$ equals 1 even for a stochastic policy, but in a deterministic policy there is exactly one action that receives the entire weight (i.e. $\pi(a \mid s) = 1$) while the rest receive weight 0, so those terms drop out of the equation. Also, in the line where the law of total expectation is used, the order of the conditioning is reversed.
Gilad Peleg

1
I am pretty sure this answer is wrong: let us follow the equations up to the line involving the law of total expectation. Then the left-hand side does not depend on $s'$ while the right-hand side does... i.e. if the equations are correct, then for which $s'$ are they correct? You must already have some kind of integration over $s'$ at that stage. The reason is probably a confusion between $E[X \mid Y]$ (a random variable) and its factorization $E[X \mid Y=y]$ (a deterministic function).
Fabian Werner

@FabianWerner I agree that this is not correct. Jie Shi's answer is the right one.
teucer

@teucer this answer can be fixed because there is just some "symbolization" missing, e.g. $E[A \mid C=c] = \int_{\text{range}(B)} p(b \mid c)\, E[A \mid B=b, C=c]\, dP_B(b)$, but still the question is the same as with Jie Shi's answer: why is $E[G_{t+1} \mid S_{t+1}=s_{t+1}, S_t=s_t] = E[G_{t+1} \mid S_{t+1}=s_{t+1}]$? That is not just the Markov property, because $G_{t+1}$ is a really complicated RV: does it even converge? If so, to what? What is the common density $p(g_{t+1}, s_{t+1}, s_t)$? We only know this expression for finite sums (a complicated convolution), but do we also know it for the infinite case?
Fabian Werner

@FabianWerner not sure I can answer all the questions; some pointers below. For the convergence of $G_{t+1}$: given that it is a sum of discounted rewards, it is reasonable to assume that the series converges (the discount factor is $<1$, and where it converges does not really matter). I do not care about the density (one can always define a joint density as long as there are random variables); what matters is whether it is well defined, and it is in this case.
teucer

8

Here is my proof. It is based on the manipulation of conditional distributions, which makes it easier to follow. Hope this helps you.

$$\begin{align}
v_\pi(s) &= E[G_t \mid S_t=s]\\
&= E[R_{t+1} + \gamma G_{t+1} \mid S_t=s]\\
&= \sum_{s'} \sum_r \sum_{g_{t+1}} \sum_a p(s', r, g_{t+1}, a \mid s)\,(r + \gamma g_{t+1})\\
&= \sum_a p(a \mid s) \sum_{s'} \sum_r \sum_{g_{t+1}} p(s', r, g_{t+1} \mid a, s)\,(r + \gamma g_{t+1})\\
&= \sum_a p(a \mid s) \sum_{s'} \sum_r \sum_{g_{t+1}} p(s', r \mid a, s)\, p(g_{t+1} \mid s', r, a, s)\,(r + \gamma g_{t+1})\\
&\qquad \text{Note that } p(g_{t+1} \mid s', r, a, s) = p(g_{t+1} \mid s') \text{ by the MDP assumption}\\
&= \sum_a p(a \mid s) \sum_{s'} \sum_r p(s', r \mid a, s) \sum_{g_{t+1}} p(g_{t+1} \mid s')\,(r + \gamma g_{t+1})\\
&= \sum_a p(a \mid s) \sum_{s'} \sum_r p(s', r \mid a, s)\left(r + \gamma \sum_{g_{t+1}} p(g_{t+1} \mid s')\, g_{t+1}\right)\\
&= \sum_a p(a \mid s) \sum_{s'} \sum_r p(s', r \mid a, s)\left(r + \gamma v_\pi(s')\right)
\end{align}$$

This is the famous Bellman equation.
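As a sanity check on this identity (my own addition, not part of the answer; it assumes NumPy and an invented two-state MDP given as a table $p(s', r \mid s, a)$ over a finite reward set), one can estimate $E[G_t \mid S_t=s]$ from simulated rollouts and compare it with the right-hand side evaluated at the exact $v_\pi$; the two should agree up to Monte Carlo noise:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
rewards = np.array([0.0, 1.0])                    # finite reward set
# p[s, a, s1, ri] = p(s', r | s, a); all numbers invented for illustration.
p = np.array([[[[0.6, 0.1], [0.2, 0.1]],
               [[0.1, 0.4], [0.3, 0.2]]],
              [[[0.3, 0.3], [0.2, 0.2]],
               [[0.1, 0.1], [0.4, 0.4]]]])
pi = np.array([[0.5, 0.5], [0.2, 0.8]])           # pi(a | s)

# Exact v_pi from the Bellman equation, solved as a linear system.
P_pi = np.einsum('sa,sazr->sz', pi, p)            # state-to-state kernel under pi
r_pi = np.einsum('sa,sazr,r->s', pi, p, rewards)  # expected next reward under pi
v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

def rollout(s, steps=200):
    """One simulated discounted return starting from state s."""
    g, discount = 0.0, 1.0
    for _ in range(steps):
        a = rng.choice(2, p=pi[s])
        idx = rng.choice(4, p=p[s, a].ravel())    # joint draw of (s', r)
        s, ri = divmod(idx, 2)
        g += discount * rewards[ri]
        discount *= gamma
    return g

for s in range(2):
    mc = np.mean([rollout(s) for _ in range(3000)])   # estimate of E[G_t | S_t=s]
    rhs = sum(pi[s, a] * p[s, a, s1, ri] * (rewards[ri] + gamma * v[s1])
              for a in range(2) for s1 in range(2) for ri in range(2))
    print(s, mc, rhs)                                 # should roughly match
```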


Do you mind explaining this comment 'Note that ...' a little more? Why do these random variables $G_{t+1}$ and the state and action variables even have a common density? If so, why do you know this property that you are using? I can see that it is true for a finite sum but if the random variable is a limit... ???
Fabian Werner

To Fabian: First let's recall what $G_{t+1}$ is: $G_{t+1} = R_{t+2} + R_{t+3} + \cdots$. Note that $R_{t+2}$ only directly depends on $S_{t+1}$ and $A_{t+1}$, since $p(s', r \mid s, a)$ captures all the transition information of an MDP (more precisely, $R_{t+2}$ is independent of all states, actions, and rewards before time $t+1$ given $S_{t+1}$ and $A_{t+1}$). Similarly, $R_{t+3}$ only depends on $S_{t+2}$ and $A_{t+2}$. As a result, $G_{t+1}$ is independent of $S_t$, $A_t$, and $R_t$ given $S_{t+1}$, which explains that line.
Jie Shi

Sorry, that only 'motivates' it, it doesn't actually explain anything. For example: What is the density of $G_{t+1}$? Why are you sure that $p(g_{t+1} \mid s_{t+1}, s_t) = p(g_{t+1} \mid s_{t+1})$? Why do these random variables even have a common density? You know that a sum transforms into a convolution in densities, so what... $G_{t+1}$ should have an infinite amount of integrals in the density??? There is absolutely no candidate for the density!
Fabian Werner

To Fabian: I do not get your question. 1. You want the exact form of the marginal distribution $p(g_{t+1})$? I do not know it and we do not need it in this proof. 2. Why $p(g_{t+1} \mid s_{t+1}, s_t) = p(g_{t+1} \mid s_{t+1})$? Because, as I mentioned earlier, $g_{t+1}$ and $s_t$ are independent given $s_{t+1}$. 3. What do you mean by "common density"? You mean joint distribution? You want to know why these random variables have a joint distribution? All random variables in this universe can have a joint distribution. If this is your question, I would suggest you find a probability theory book and read it.
Jie Shi


2

What's with the following approach?

$$\begin{align}
v_\pi(s) &= E_\pi[G_t \mid S_t=s]\\
&= E_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t=s]\\
&= \sum_a \pi(a \mid s) \sum_{s'} \sum_r p(s', r \mid s, a)\, E_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t=s, A_{t+1}=a, S_{t+1}=s', R_{t+1}=r]\\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\,[r + \gamma v_\pi(s')].
\end{align}$$

The sums are introduced in order to retrieve $a$, $s'$ and $r$ from $s$. After all, the possible actions and possible next states can be several. With these extra conditions, the linearity of the expectation leads to the result almost directly.

I am not sure how rigorous my argument is mathematically, though. I am open for improvements.


The last line only works because of the MDP property.
teucer

2

This is just a comment/addition to the accepted answer.

I was confused at the line where law of total expectation is being applied. I don't think the main form of law of total expectation can help here. A variant of that is in fact needed here.

If $X, Y, Z$ are random variables and assuming all the expectations exist, then the following identity holds:

$$E[X \mid Y] = E\big[E[X \mid Y, Z] \,\big|\, Y\big]$$

In this case, $X = G_{t+1}$, $Y = S_t$ and $Z = S_{t+1}$. Then

$$E[G_{t+1} \mid S_t=s] = E\big[E[G_{t+1} \mid S_t=s, S_{t+1}=s'] \,\big|\, S_t=s\big],$$ which by the Markov property equals $E\big[E[G_{t+1} \mid S_{t+1}=s'] \,\big|\, S_t=s\big]$.

From there, one could follow the rest of the proof from the answer.
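A quick way to convince oneself of this identity (a toy check of my own, assuming NumPy) is to enumerate a small discrete joint distribution of $(X, Y, Z)$ and compare both sides of $E[X \mid Y=y] = \sum_z p(z \mid y)\, E[X \mid Y=y, Z=z]$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary joint pmf over (X, Y, Z) with small supports (invented numbers).
xs = np.array([0.0, 1.0, 3.0])
joint = rng.random((3, 2, 2))      # axes: x, y, z
joint /= joint.sum()

for yi in range(2):
    p_y = joint[:, yi, :].sum()
    # Left-hand side: E[X | Y=y]
    lhs = (xs[:, None] * joint[:, yi, :]).sum() / p_y
    # Right-hand side: sum_z p(z|y) E[X | Y=y, Z=z]
    rhs = 0.0
    for zi in range(2):
        p_yz = joint[:, yi, zi].sum()
        rhs += (p_yz / p_y) * (xs * joint[:, yi, zi]).sum() / p_yz
    print(yi, lhs, rhs)            # the two columns coincide
```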


1
Welcome to CV! Please use the answers only for answering the question. Once you have enough reputation (50), you can add comments.
Frans Rodenburg

Thank you. Yes, since I could not comment due to not having enough reputation, I thought it might be useful to add the explanation to the answers. But I will keep that in mind.
Mehdi Golari

I upvoted but still, this answer is missing details: even if $E[X \mid Y]$ satisfies this crazy relationship, nobody guarantees that this is true for the factorizations of the conditional expectations as well! I.e. as in the case of Ntabgoba's answer: the left-hand side does not depend on $s'$ while the right-hand side does. This equation cannot be correct!
Fabian Werner

1

$E_\pi(\cdot)$ usually denotes the expectation assuming the agent follows policy $\pi$. In this case $\pi(a \mid s)$ seems non-deterministic, i.e. it returns the probability that the agent takes action $a$ when in state $s$.

It looks like $r$, lower-case, is replacing $R_{t+1}$, a random variable. The second expectation replaces the infinite sum, to reflect the assumption that we continue to follow $\pi$ for all future $t$. $\sum_{s', r} r\, p(s', r \mid s, a)$ is then the expected immediate reward on the next time step; the second expectation—which becomes $v_\pi$—is the expected value of the next state, weighted by the probability of winding up in state $s'$ having taken $a$ from $s$.

Thus, the expectation accounts for the policy probability as well as the transition and reward functions, here expressed together as $p(s', r \mid s, a)$.
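To make the "expressed together" remark concrete, here is a small sketch (my own, assuming NumPy and an invented table $p(s', r \mid s, a)$ over a finite reward set) showing how the transition probabilities and the expected rewards can be read off the joint four-argument distribution:

```python
import numpy as np

rewards = np.array([0.0, 1.0])
# p[s, a, s1, ri] = p(s', r | s, a), the four-argument dynamics
# (numbers invented for illustration).
p = np.array([[[[0.6, 0.1], [0.2, 0.1]],
               [[0.1, 0.4], [0.3, 0.2]]],
              [[[0.3, 0.3], [0.2, 0.2]],
               [[0.1, 0.1], [0.4, 0.4]]]])

s, a = 0, 1
# Transition function: p(s' | s, a) = sum_r p(s', r | s, a)
p_sprime = p[s, a].sum(axis=1)
# Expected immediate reward: r(s, a) = sum_{s', r} r * p(s', r | s, a)
r_sa = np.einsum('zr,r->', p[s, a], rewards)
# Reward given the next state: r(s, a, s') = sum_r r * p(s', r | s, a) / p(s' | s, a)
r_sas = (p[s, a] @ rewards) / p_sprime
print(p_sprime, r_sa, r_sas)
```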


Thanks. Yes, what you mentioned about $\pi(a \mid s)$ is correct (it's the probability of the agent taking action $a$ when in state $s$).
Amelio Vazquez-Reina

What I don't follow is what terms exactly get expanded into what terms in the second step (I'm familiar with probability factorization and marginalization, but not so much with RL). Is $R_t$ the term being expanded? I.e. what exactly in the previous step equals what exactly in the next step?
Amelio Vazquez-Reina

1
It looks like $r$, lower-case, is replacing $R_{t+1}$, a random variable, and the second expectation replaces the infinite sum (probably to reflect the assumption that we continue to follow $\pi$ for all future $t$). $\sum_{s', r} p(s', r \mid s, a)\, r$ is then the expected immediate reward on the next time step, and the second expectation—which becomes $v_\pi$—is the expected value of the next state, weighted by the probability of winding up in state $s'$ having taken $a$ from $s$.
Sean Easter

1

Even though the correct answer has already been given and some time has passed, I thought the following step-by-step guide might be useful:
By linearity of the expected value we can split $E[R_{t+1} + \gamma G_{t+1} \mid S_t=s]$ into $E[R_{t+1} \mid S_t=s]$ and $\gamma E[G_{t+1} \mid S_t=s]$.
I will outline the steps only for the first part, as the second part follows by the same steps combined with the Law of Total Expectation.

$$\begin{align}
E[R_{t+1} \mid S_t=s] &= \sum_r r\, P[R_{t+1}=r \mid S_t=s]\\
&= \sum_a \sum_r r\, P[R_{t+1}=r, A_t=a \mid S_t=s]\\
&= \sum_a \sum_r r\, P[R_{t+1}=r \mid A_t=a, S_t=s]\, P[A_t=a \mid S_t=s] &&\text{(III)}\\
&= \sum_{s'} \sum_a \sum_r r\, P[S_{t+1}=s', R_{t+1}=r \mid A_t=a, S_t=s]\, P[A_t=a \mid S_t=s]\\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\, r
\end{align}$$

Whereas (III) follows from:

$$P[A, B \mid C] = \frac{P[A, B, C]}{P[C]} = \frac{P[A, B, C]}{P[C]} \cdot \frac{P[B, C]}{P[B, C]} = \frac{P[A, B, C]}{P[B, C]} \cdot \frac{P[B, C]}{P[C]} = P[A \mid B, C]\, P[B \mid C]$$
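For anyone who wants to see these manipulations numerically, here is a tiny sketch (my own addition; it assumes NumPy and an invented finite MDP) comparing $E[R_{t+1} \mid S_t=s]$ computed from the marginal $P[R_{t+1}=r \mid S_t=s]$ with the final expression $\sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\, r$:

```python
import numpy as np

rewards = np.array([-1.0, 2.0])
# p[s, a, s1, ri] = p(s', r | s, a) and pi[s, a] = pi(a|s); numbers invented.
p = np.array([[[[0.5, 0.0], [0.3, 0.2]],
               [[0.2, 0.2], [0.1, 0.5]]],
              [[[0.4, 0.1], [0.4, 0.1]],
               [[0.0, 0.3], [0.3, 0.4]]]])
pi = np.array([[0.3, 0.7], [0.9, 0.1]])

for s in range(2):
    # Direct route: marginalize to P[R_{t+1}=r | S_t=s], then take sum_r r * P.
    p_r_given_s = np.einsum('a,azr->r', pi[s], p[s])
    direct = (rewards * p_r_given_s).sum()
    # Expanded route, the last line of the derivation above.
    expanded = sum(pi[s, a] * p[s, a, s1, ri] * rewards[ri]
                   for a in range(2) for s1 in range(2) for ri in range(2))
    print(s, direct, expanded)     # both routes give the same number
```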


1

I know there is already an accepted answer, but I wish to provide a probably more concrete derivation. I would also like to mention that although @Jie Shi's trick somewhat makes sense, it makes me feel very uncomfortable :(. We need to consider the time dimension to make this work. And it is important to note that the expectation is actually taken over the entire infinite horizon, rather than just over $s$ and $s'$. Let us assume we start from $t=0$ (in fact, the derivation is the same regardless of the starting time; I do not want to contaminate the equations with another subscript $k$):

$$\begin{align}
v_\pi(s_0) &= E_\pi[G_0 \mid s_0]\\
G_0 &= \sum_{t=0}^{T-1} \gamma^t R_{t+1}\\
E_\pi[G_0 \mid s_0] &= \sum_{a_0} \pi(a_0 \mid s_0) \sum_{a_1, \dots, a_T} \sum_{s_1, \dots, s_T} \sum_{r_1, \dots, r_T} \left( \prod_{t=0}^{T-1} \pi(a_{t+1} \mid s_{t+1})\, p(s_{t+1}, r_{t+1} \mid s_t, a_t) \times \left( \sum_{t=0}^{T-1} \gamma^t r_{t+1} \right) \right)\\
&= \sum_{a_0} \pi(a_0 \mid s_0) \sum_{a_1, \dots, a_T} \sum_{s_1, \dots, s_T} \sum_{r_1, \dots, r_T} \left( \prod_{t=0}^{T-1} \pi(a_{t+1} \mid s_{t+1})\, p(s_{t+1}, r_{t+1} \mid s_t, a_t) \times \left( r_1 + \gamma \sum_{t=0}^{T-2} \gamma^t r_{t+2} \right) \right)
\end{align}$$
NOTE THAT THE ABOVE EQUATION HOLDS EVEN IF $T \to \infty$; IN FACT IT WILL BE TRUE UNTIL THE END OF THE UNIVERSE (maybe a bit exaggerated :) )
At this stage, I believe most of us should already have in mind how the above leads to the final expression--we just need to apply the sum-product rule ($\sum_a \sum_b \sum_c a b c \equiv \sum_a a \sum_b b \sum_c c$) painstakingly. Let us apply the law of linearity of expectation to each term inside $\left( r_1 + \gamma \sum_{t=0}^{T-2} \gamma^t r_{t+2} \right)$.

Part 1

$$\sum_{a_0} \pi(a_0 \mid s_0) \sum_{a_1, \dots, a_T} \sum_{s_1, \dots, s_T} \sum_{r_1, \dots, r_T} \left( \prod_{t=0}^{T-1} \pi(a_{t+1} \mid s_{t+1})\, p(s_{t+1}, r_{t+1} \mid s_t, a_t) \times r_1 \right)$$

Well, this is rather trivial: all probabilities disappear (actually sum to 1) except those related to $r_1$. Therefore, we have

$$\sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1, r_1} p(s_1, r_1 \mid s_0, a_0) \times r_1$$

Part 2
Guess what, this part is even more trivial--it only involves rearranging the sequence of summations.

$$\sum_{a_0} \pi(a_0 \mid s_0) \sum_{a_1, \dots, a_T} \sum_{s_1, \dots, s_T} \sum_{r_1, \dots, r_T} \left( \prod_{t=0}^{T-1} \pi(a_{t+1} \mid s_{t+1})\, p(s_{t+1}, r_{t+1} \mid s_t, a_t) \right) = \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1, r_1} p(s_1, r_1 \mid s_0, a_0) \left( \sum_{a_1} \pi(a_1 \mid s_1) \sum_{a_2, \dots, a_T} \sum_{s_2, \dots, s_T} \sum_{r_2, \dots, r_T} \left( \prod_{t=0}^{T-2} \pi(a_{t+2} \mid s_{t+2})\, p(s_{t+2}, r_{t+2} \mid s_{t+1}, a_{t+1}) \right) \right)$$

And Eureka!! we recover a recursive pattern inside the big parentheses. Let us combine it with $\gamma \sum_{t=0}^{T-2} \gamma^t r_{t+2}$, and we obtain $v_\pi(s_1) = E_\pi[G_1 \mid s_1]$:

$$\gamma E_\pi[G_1 \mid s_1] = \sum_{a_1} \pi(a_1 \mid s_1) \sum_{a_2, \dots, a_T} \sum_{s_2, \dots, s_T} \sum_{r_2, \dots, r_T} \left( \prod_{t=0}^{T-2} \pi(a_{t+2} \mid s_{t+2})\, p(s_{t+2}, r_{t+2} \mid s_{t+1}, a_{t+1}) \right) \left( \gamma \sum_{t=0}^{T-2} \gamma^t r_{t+2} \right)$$

and Part 2 becomes

$$\sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1, r_1} p(s_1, r_1 \mid s_0, a_0) \times \gamma v_\pi(s_1)$$

Part 1 + Part 2

$$v_\pi(s_0) = \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1, r_1} p(s_1, r_1 \mid s_0, a_0) \times \left( r_1 + \gamma v_\pi(s_1) \right)$$

And now we can tuck the time dimension back in and recover the general recursive formula

$$v_\pi(s) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \times \left( r + \gamma v_\pi(s') \right)$$

Final confession, I laughed when I saw people above mention the use of law of total expectation. So here I am
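To mirror the whole-trajectory, finite-horizon expectation used in this answer, here is a brute-force sketch (my own, assuming NumPy and a made-up two-state MDP): it enumerates every length-$T$ trajectory, weights it by $\prod \pi(a \mid s)\, p(s', r \mid s, a)$, and compares the resulting $E_\pi[G_0 \mid s_0]$ with a finite-horizon version of the recursive formula:

```python
import itertools
import numpy as np

gamma, T = 0.9, 4
rewards = np.array([0.0, 1.0])
# p[s, a, s1, ri] = p(s', r | s, a), pi[s, a] = pi(a | s); invented numbers.
p = np.array([[[[0.4, 0.2], [0.3, 0.1]],
               [[0.2, 0.3], [0.1, 0.4]]],
              [[[0.5, 0.0], [0.25, 0.25]],
               [[0.1, 0.2], [0.3, 0.4]]]])
pi = np.array([[0.6, 0.4], [0.3, 0.7]])

def brute_force(s0):
    """Sum over all length-T trajectories of prob(trajectory) * return."""
    total = 0.0
    steps = list(itertools.product(range(2), range(2), range(2)))  # (a, s', ri)
    for traj in itertools.product(steps, repeat=T):
        prob, g, s = 1.0, 0.0, s0
        for t, (a, s1, ri) in enumerate(traj):
            prob *= pi[s, a] * p[s, a, s1, ri]
            g += gamma ** t * rewards[ri]
            s = s1
        total += prob * g
    return total

def recursive(s, t):
    """Finite-horizon Bellman recursion over the same T steps."""
    if t == T:
        return 0.0
    return sum(pi[s, a] * p[s, a, s1, ri] *
               (rewards[ri] + gamma * recursive(s1, t + 1))
               for a in range(2) for s1 in range(2) for ri in range(2))

for s0 in range(2):
    print(s0, brute_force(s0), recursive(s0, 0))  # the two numbers coincide
```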


Erm... what is the symbol '$a_0, \dots, a_\infty$' supposed to mean? There is no $a_\infty$...
Fabian Werner

Another question: Why is the very first equation true? I know $E[f(X) \mid Y=y] = \int_{\mathcal{X}} f(x)\, p(x \mid y)\, dx$, but in our case $X$ would be an infinite sequence of random variables $(R_0, R_1, R_2, \dots)$, so we would need to compute the density of this variable (consisting of an infinite amount of variables of which we know the density) together with something else (namely the state)... how exactly do you do that? I.e. what is $p(r_0, r_1, \dots)$?
Fabian Werner

@FabianWerner. Take a deep breath to calm your brain first :). Let me answer your first question: $a_0, \dots, a_\infty$ means $a_0, a_1, \dots, a_\infty$. If you recall the definition of the value function, it is actually a summation of discounted future rewards. If we consider an infinite horizon for our future rewards, we then need to sum an infinite number of times. A reward is the result of taking an action from a state; since there is an infinite number of rewards, there should be an infinite number of actions, hence $a_\infty$.
Karlsson Yu

1
let us assume that I agree that there is some weird $a_\infty$ (which I still doubt; usually, students in the very first semester in math tend to confuse the limit with some construction that actually involves an infinite element)... I still have one simple question: how is "$\sum_{a_1} \cdots \sum_{a_\infty}$" defined? I know what this expression is supposed to mean with a finite amount of sums... but infinitely many of them? What do you understand that this expression does?
Fabian Werner

1
internet. Could you refer me to a page or any place that defines your expression? If not then you actually defined something new and there is no point in discussing that because it is just a symbol that you made up (but there is no meaning behind it)... you agree that we are only able to discuss about the symbol if we both know what it means, right? So, I do not know what it means, please explain...
Fabian Werner

1

There are already a great many answers to this question, but most involve few words describing what is going on in the manipulations. I'm going to answer it using way more words, I think. To start,

$$G_t \doteq \sum_{k=t+1}^{T} \gamma^{k-t-1} R_k$$

is defined in equation 3.11 of Sutton and Barto, with a constant discount factor $0 \leq \gamma \leq 1$; we can have $T = \infty$ or $\gamma = 1$, but not both. Since the rewards $R_k$ are random variables, so is $G_t$, as it is merely a linear combination of random variables.

$$v_\pi(s) \doteq E_\pi[G_t \mid S_t=s] = E_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t=s] = E_\pi[R_{t+1} \mid S_t=s] + \gamma E_\pi[G_{t+1} \mid S_t=s]$$

That last line follows from the linearity of expectation values. $R_{t+1}$ is the reward the agent gains after taking an action at time step $t$. For simplicity, I assume that it can take on a finite number of values $r \in \mathcal{R}$.

Work on the first term. In words, I need to compute the expectation value of $R_{t+1}$ given that we know that the current state is $s$. The formula for this is

$$E_\pi[R_{t+1} \mid S_t=s] = \sum_{r \in \mathcal{R}} r\, p(r \mid s).$$

In other words, the probability of the appearance of reward $r$ is conditioned on the state $s$; different states may have different rewards. This $p(r \mid s)$ distribution is a marginal distribution of a distribution that also contained the variables $a$ and $s'$, the action taken at time $t$ and the state at time $t+1$ after the action, respectively:

$$p(r \mid s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s', a, r \mid s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \pi(a \mid s)\, p(s', r \mid a, s).$$

Where I have used $\pi(a \mid s) \doteq p(a \mid s)$, following the book's convention. If that last equality is confusing, forget the sums, suppress the $s$ (the probability now looks like a joint probability), use the law of multiplication and finally reintroduce the condition on $s$ in all the new terms. It is now easy to see that the first term is

$$E_\pi[R_{t+1} \mid S_t=s] = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} r\, \pi(a \mid s)\, p(s', r \mid a, s),$$

as required. On to the second term, where I assume that $G_{t+1}$ is a random variable that takes on a finite number of values $g \in \Gamma$. Just like the first term:

$$E_\pi[G_{t+1} \mid S_t=s] = \sum_{g \in \Gamma} g\, p(g \mid s). \qquad (*)$$

Once again, I "un-marginalize" the probability distribution by writing (law of multiplication again)

$$\begin{align}
p(g \mid s) &= \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s', r, a, g \mid s)\\
&= \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g \mid s', r, a, s)\, p(s', r, a \mid s)\\
&= \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g \mid s', r, a, s)\, p(s', r \mid a, s)\, \pi(a \mid s)\\
&= \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g \mid s')\, p(s', r \mid a, s)\, \pi(a \mid s) \qquad (**)
\end{align}$$

The last line in there follows from the Markovian property. Remember that $G_{t+1}$ is the sum of all the future (discounted) rewards that the agent receives after state $s'$. The Markovian property is that the process is memory-less with regard to previous states, actions and rewards. Future actions (and the rewards they reap) depend only on the state in which the action is taken, so $p(g \mid s', r, a, s) = p(g \mid s')$, by assumption. Ok, so the second term in the proof is now

$$\begin{align}
\gamma E_\pi[G_{t+1} \mid S_t=s] &= \gamma \sum_{g \in \Gamma} \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} g\, p(g \mid s')\, p(s', r \mid a, s)\, \pi(a \mid s)\\
&= \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} E_\pi[G_{t+1} \mid S_{t+1}=s']\, p(s', r \mid a, s)\, \pi(a \mid s)\\
&= \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} v_\pi(s')\, p(s', r \mid a, s)\, \pi(a \mid s)
\end{align}$$

as required, once again. Combining the two terms completes the proof

$$v_\pi(s) \doteq E_\pi[G_t \mid S_t=s] = \sum_{a \in \mathcal{A}} \pi(a \mid s) \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} p(s', r \mid a, s)\left[ r + \gamma v_\pi(s') \right].$$

UPDATE

I want to address what might look like a sleight of hand in the derivation of the second term. In the equation marked with $(*)$, I use a term $p(g \mid s)$, and then later, in the equation marked $(**)$, I claim that $g$ doesn't depend on $s$, by arguing the Markovian property. So, you might say that if this is the case, then $p(g \mid s) = p(g)$. But this is not true. I can take $p(g \mid s', r, a, s) \to p(g \mid s')$ because the probability on the left side of that statement says that this is the probability of $g$ conditioned on $s'$, $a$, $r$, and $s$. Because we either know or assume the state $s'$, none of the other conditionals matter, because of the Markovian property. If you do not know or assume the state $s'$, then the future rewards (the meaning of $g$) will depend on which state you begin at, because that will determine (based on the policy) which state $s'$ you start at when computing $g$.

If that argument doesn't convince you, try to compute what $p(g)$ is:

$$\begin{align}
p(g) &= \sum_{s' \in \mathcal{S}} p(g, s') = \sum_{s' \in \mathcal{S}} p(g \mid s')\, p(s')\\
&= \sum_{s' \in \mathcal{S}} p(g \mid s') \sum_{s, a, r} p(s', a, r, s)\\
&= \sum_{s' \in \mathcal{S}} p(g \mid s') \sum_{s, a, r} p(s', r \mid a, s)\, p(a, s)\\
&= \sum_{s \in \mathcal{S}} p(s) \sum_{s' \in \mathcal{S}} p(g \mid s') \sum_{a, r} p(s', r \mid a, s)\, \pi(a \mid s)\\
&\neq \sum_{s \in \mathcal{S}} p(s)\, p(g \mid s) = \sum_{s \in \mathcal{S}} p(g, s) = p(g).
\end{align}$$

As can be seen in the last line, it is not true that $p(g \mid s) = p(g)$. The expected value of $g$ depends on which state you start in (i.e. the identity of $s$), if you do not know or assume the state $s'$.
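A concrete way to see that $p(g \mid s')$ cannot equal $p(g)$ in general (an illustrative sketch of my own, assuming NumPy): in a two-state chain, the conditional mean $E[G \mid S'=s'] = v_\pi(s')$ already differs between the two states, so the conditional distributions of $g$ differ from each other and therefore cannot both coincide with the marginal:

```python
import numpy as np

gamma = 0.9
# Markov reward process under a fixed policy (invented numbers):
# P_pi[s, s'] = transition probability, r_pi[s] = expected reward from s.
P_pi = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
r_pi = np.array([0.0, 1.0])

# v_pi(s') = E[G | S'=s'], i.e. the mean of p(g | s').
v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Mean of the marginal p(g) under some start distribution over S'.
p_sprime = np.array([0.5, 0.5])
print(v, p_sprime @ v)
# v[0] != v[1]: the conditional distributions p(g|s'=0) and p(g|s'=1) have
# different means, so they cannot both equal the marginal p(g).
```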
