Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems

Code: 04834934


by Sébastien Bubeck, Nicolò Cesa-Bianchi


442.53



Book synopsis

A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player faces many slot machines at once (a "multi-armed bandit") and must repeatedly choose where to insert the next coin.

Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off: the balance between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in many modern applications, such as ad placement, website optimization, and packet routing.

Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. This book focuses on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.
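To make the exploration-exploitation trade-off concrete, here is a minimal sketch (not taken from the monograph) of one of the simplest strategies for the i.i.d. setting: ε-greedy on Bernoulli arms. The function name and parameters are invented for this illustration; the strategy mostly plays the arm with the best empirical mean, but with probability ε it explores a random arm.

```python
import random

def eps_greedy_bandit(means, horizon, eps=0.1, seed=0):
    """Run an epsilon-greedy strategy on a stochastic bandit.

    `means` holds the (unknown to the learner) success probabilities
    of Bernoulli arms.  Returns the total payoff collected and the
    regret against always playing the best arm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # number of pulls per arm
    estimates = [0.0] * k     # empirical mean payoff per arm
    total = 0.0
    for t in range(horizon):
        if rng.random() < eps or t < k:
            # Explore: try each arm once first, then pick uniformly at random.
            arm = t if t < k else rng.randrange(k)
        else:
            # Exploit: play the arm with the best empirical mean so far.
            arm = max(range(k), key=lambda i: estimates[i])
        payoff = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the running mean for this arm.
        estimates[arm] += (payoff - estimates[arm]) / counts[arm]
        total += payoff
    regret = horizon * max(means) - total
    return total, regret
```

With a fixed ε, this strategy keeps paying a constant exploration cost forever, so its regret grows linearly in the horizon; the algorithms analyzed in the book (such as UCB in the i.i.d. case) achieve logarithmic regret by shrinking exploration adaptively.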

Book details

Book category: Books in English > Computing & information technology > Computer science > Mathematical theory of computation
