GSTDTAP

Browse/search results: 3 items total, showing 1–3

A community nitrogen footprint analysis of Baltimore City, Maryland (journal article)
ENVIRONMENTAL RESEARCH LETTERS, 2020, 15 (7)
Authors: Dukes, Elizabeth S. M.; Galloway, James N.; Band, Lawrence E.; Cattaneo, Lia R.; Groffman, Peter M.; Leach, Allison M.; Castner, Elizabeth A.
Submitted: 2020/08/18
Keywords: nitrogen; nitrogen footprints; sustainability; diet; community
Increasing contribution of peatlands to boreal evapotranspiration in a warming climate (journal article)
NATURE CLIMATE CHANGE, 2020, 10 (6): 555+
Authors: Helbig, Manuel; Waddington, James Michael; Alekseychik, Pavel; Amiro, Brian D.; Aurela, Mika; Barr, Alan G.; Black, T. Andrew; Blanken, Peter D.; Carey, Sean K.; Chen, Jiquan; Chi, Jinshu; Desai, Ankur R.; Dunn, Allison; Euskirchen, Eugenie S.; Flanagan, Lawrence B.; Forbrich, Inke; Friborg, Thomas; Grelle, Achim; Harder, Silvie; Heliasz, Michal; Humphreys, Elyn R.; Ikawa, Hiroki; Isabelle, Pierre-Erik; Iwata, Hiroki; Jassal, Rachhpal; Korkiakoski, Mika; Kurbatova, Juliya; Kutzbach, Lars; Lindroth, Anders; Lofvenius, Mikaell Ottosson; Lohila, Annalea; Mammarella, Ivan; Marsh, Philip; Maximov, Trofim; Melton, Joe R.; Moore, Paul A.; Nadeau, Daniel F.; Nicholls, Erin M.; Nilsson, Mats B.; Ohta, Takeshi; Peichl, Matthias; Petrone, Richard M.; Petrov, Roman; Prokushkin, Anatoly; Quinton, William L.; Reed, David E.; Roulet, Nigel T.; Runkle, Benjamin R. K.; Sonnentag, Oliver; Strachan, Ian B.; Taillardat, Pierre; Tuittila, Eeva-Stiina; Tuovinen, Juha-Pekka; Turner, Jessica; Ueyama, Masahito; Varlagin, Andrej; Wilmking, Martin; Wofsy, Steven C.; Zyrianov, Vyacheslav
Submitted: 2020/05/13
A distributional code for value in dopamine-based reinforcement learning (journal article)
NATURE, 2020, 577 (7792): 671+
Authors: Dabney, Will; Kurth-Nelson, Zeb; Uchida, Naoshige; Starkweather, Clara Kwon; Hassabis, Demis; Munos, Remi; Botvinick, Matthew
Submitted: 2020/07/03

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain(1-3). According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning(4-6). We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
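The core mechanism behind distributional reinforcement learning of this kind can be illustrated with a toy simulation: a population of value predictors, each updating with a different asymmetry between positive and negative prediction errors, converges to different expectiles of a stochastic reward distribution rather than to a single mean. This is a hedged illustrative sketch, not the authors' analysis code; all names (`taus`, `values`, the reward mixture) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated predictor has its own asymmetry tau in (0, 1),
# analogous to a neuron's ratio of positive- to negative-error scaling.
taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
values = np.zeros_like(taus)   # current reward predictions, one per predictor
alpha = 0.02                   # base learning rate

# Stochastic reward: a 50/50 mixture of 1.0 and 5.0 (mean = 3.0)
for _ in range(20000):
    r = rng.choice([1.0, 5.0])
    delta = r - values  # prediction error for every predictor at once
    # Asymmetric update: positive errors are scaled by tau,
    # negative errors by (1 - tau).
    values += alpha * np.where(delta > 0, taus, 1.0 - taus) * delta

# After learning, values approximate the tau-expectiles of the reward
# distribution: low-tau predictors are "pessimistic" (near 1.0),
# high-tau predictors "optimistic" (near 5.0), tau = 0.5 near the mean.
```

Together the converged predictions span the reward distribution, which is the sense in which a population with diverse asymmetries represents "multiple future outcomes simultaneously and in parallel."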


Analyses of single-cell recordings from mouse ventral tegmental area are consistent with a model of reinforcement learning in which the brain represents possible future rewards not as a single mean of stochastic outcomes, as in the canonical model, but instead as a probability distribution.