Agones Game Server Allocation advanced filtering: player count, state, reallocation
✔️ Accepted Answer
Objective
Already defined above
Background
These are terms that are currently used in Player Tracking:
- Player Capacity: Max number of players that could end up in a GameServer
- Player Count: The number of players that are in a GameServer currently
- Player Availability: Capacity - Count. (this is new for this ticket)
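A minimal sketch of that relationship in Go, using a simplified stand-in type rather than the actual Agones player tracking status types:

```go
package main

import "fmt"

// playerStatus is a simplified stand-in for a GameServer's player
// tracking status: the configured capacity and the current count.
type playerStatus struct {
	Capacity int64
	Count    int64
}

// availability is Capacity - Count: the number of free player slots.
func (p playerStatus) availability() int64 {
	return p.Capacity - p.Count
}

func main() {
	p := playerStatus{Capacity: 10, Count: 7}
	fmt.Println(p.availability()) // prints 3
}
```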
Requirements and Scale
- Is a high throughput operation, so will need to have an implementation that allows for this.
- Will need similar scaling characteristics to current Allocation policy.
- Will sit behind a feature flag of “PlayerAvailableAllocation”
- Will still need to adhere to Packing and Distributed optimisation patterns
- Will still need to be filterable on required and preferred match labels/expressions
Design Ideas
We extend the required and preferred sections of GameServerAllocation to allow for player availability and game
server state filtering (Credit), as well as allowing allocation to "re-allocate" an already Allocated GameServer.
This should be extensible to other types of allocation strategies down the line, as well as having applicability
outside of only player tracking re-allocation. This approach also doesn’t require a breaking change to the existing
allocation implementation.
For example, the current way of allocating against Ready GameServers (which would be the default, and therefore backward compatible) would become:
apiVersion: "allocation.agones.dev/v1"
kind: GameServerAllocation
spec:
  required:
    matchLabels:
      agones.dev/fleet: simple-udp
    gameServerState: Ready # Allocate out of the Ready Pool (which would be default, so backward compatible)
Wherein:
- gameServerState is the state of the GameServers to search, limited to Ready and Allocated. By default this would be Ready, which is the same as the current implementation.
Therefore, to attempt to find a GameServer that is already Allocated and has room for between 2 and 10 players, and, if none is found, to then find a Ready one with the same amount of available capacity or more and move it to Allocated, the request would look like the following:
apiVersion: "allocation.agones.dev/v1"
kind: GameServerAllocation
spec:
  preferred:
    - matchLabels:
        agones.dev/fleet: simple-udp
      gameServerState: Allocated # new state filter: allocate from Allocated servers
      players: # new player availability filter
        minAvailable: 2
        maxAvailable: 10
  required:
    matchLabels:
      agones.dev/fleet: simple-udp
    players: # new player availability filter
      minAvailable: 2
      maxAvailable: 10
    gameServerState: Ready # Allocate out of the Ready Pool (which would be default, so backward compatible)
Wherein:
- players.minAvailable is the minimum number of available player slots to match against (default: 0)
- players.maxAvailable is the maximum number of available player slots to match against (default: max(int))
This works since the preferred section would be searched first (matching currently Allocated GameServers), and if it failed to match, the search would then move on to the required section.
This would also need to eventually be expanded to the gRPC Allocation Service API, but the same format could be reused.
Removing GameServers from the pool on re-allocation
One concern with this design is that you could continually get the same GameServer over and over again on re-allocation, which could send way more players to a particular GameServer than may be desired.
To solve that problem, we can use existing Agones allocation constructs: matchLabels selectors, combined with the ability to set annotations and labels on a GameServer at allocation time.
For example, we can extend the example above and also add a user-defined label agones.dev/sdk-available to our
GameServers that indicates whether there are slots available for players to fill.
apiVersion: "allocation.agones.dev/v1"
kind: GameServerAllocation
spec:
  preferred:
    - matchLabels:
        agones.dev/fleet: simple-udp
        agones.dev/sdk-available: "true" # this is important
      gameServerState: Allocated # new state filter: allocate from Allocated servers
      players: # new player availability filter
        minAvailable: 2
        maxAvailable: 10
  required:
    matchLabels:
      agones.dev/fleet: simple-udp
    gameServerState: Ready # Allocate out of the Ready Pool (which would be default, so backward compatible)
  metadata:
    annotations:
      waitForPlayers: "2" # user defined data to pass to the game server so it knows when to switch back the agones.dev/sdk-available label
    labels:
      agones.dev/sdk-available: "false" # this removes it from the pool
Upon allocation the value of agones.dev/sdk-available is set to false, thereby taking it out of the preferred
re-allocation pool. Using the annotation data we're passing through (in this case, telling the game server binary to
potentially wait for 2 players to connect), the game server binary can then use SDK.SetLabel(...) to set agones.dev/sdk-available back to "true" once it has room for more players again.
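As a rough sketch (not the definitive implementation) of what the game server side of this could look like with the Agones Go SDK, assuming the waitForPlayers annotation name from the example above and leaving the game's own player counting elided:

```go
package main

import (
	"log"
	"strconv"

	sdk "agones.dev/agones/sdks/go"
)

func main() {
	s, err := sdk.NewSDK()
	if err != nil {
		log.Fatalf("could not connect to the Agones SDK: %v", err)
	}

	// Read the user defined annotation passed through at allocation time
	// (waitForPlayers is just the illustrative name used in the example above).
	gs, err := s.GameServer()
	if err != nil {
		log.Fatalf("could not get GameServer details: %v", err)
	}
	wait, _ := strconv.Atoi(gs.ObjectMeta.Annotations["waitForPlayers"])

	// ... the game's own logic: accept connections until `wait` players
	// have joined, or until whatever condition makes sense for the game ...
	_ = wait

	// When there is room for players again, flip the label back.
	// SetLabel prefixes keys with "agones.dev/sdk-", so this sets the
	// agones.dev/sdk-available label used in the allocation above.
	if err := s.SetLabel("available", "true"); err != nil {
		log.Printf("could not set the availability label: %v", err)
	}
}
```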
This strategy will need to be documented, but it also has applicability outside of allocating by player count
-- for example, it could also be used where GameServer containers host multiple instances of a GameSession.
It also allows users to utilise their own labelling names and systems that make sense for their game.
The downside here is that the onus is on the user to pass the required information down to the GameServer so it knows when to add itself back into the pool. Long term, we could look at doing some of this automatically, but it would be best to get user feedback on this initial implementation before going down that path.
Technical Implementation
- Much like we keep a ready cache, we should also keep a cache of Allocated game servers that have a player availability greater than 0. This may be able to be pre-sorted based on packing rules. This should be the first place that is searched for Allocations that match the label selectors.
- If we cannot find a previously Allocated GameServer that matches the criteria required, then we fall back to searching for a Ready GameServer - but with the additional requirement of making sure the players available of the Ready GameServer is greater than or equal to that which is requested.
- This is likely going to require some refactoring of the ListenAndAllocate function, as it is currently optimised for only the Ready policy.
- When allocating a GameServer, whether Ready or Allocated, an annotation of agones.dev/last-allocated with a timestamp of that moment will be applied. This has two pieces of functionality: (a) SDK.WatchGameServer() (or any listener to events) can see when a GameServer is re-allocated, and (b) we can take advantage of CRD generational locking so that when re-allocating we know the resource is up to date and is still available to be re-allocated.
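For illustration only, a very rough sketch in Go of that two-stage search, using heavily simplified stand-in types rather than the real allocator caches (the actual change would land in the existing allocation code around ListenAndAllocate):

```go
package main

import "fmt"

// cachedGameServer is a simplified stand-in for a cached GameServer entry.
type cachedGameServer struct {
	Name      string
	Labels    map[string]string
	Available int64 // player capacity minus current player count
}

// playerSelector is a simplified stand-in for a required/preferred selector
// carrying the proposed player availability filter.
type playerSelector struct {
	MatchLabels  map[string]string
	MinAvailable int64
	MaxAvailable int64
}

func (s playerSelector) matches(gs cachedGameServer) bool {
	for k, v := range s.MatchLabels {
		if gs.Labels[k] != v {
			return false
		}
	}
	return gs.Available >= s.MinAvailable && gs.Available <= s.MaxAvailable
}

// allocate searches the cache of Allocated GameServers with availability
// first, and only falls back to the Ready cache if nothing matches,
// mirroring the preferred/required behaviour described above.
func allocate(allocated, ready []cachedGameServer, sel playerSelector) *cachedGameServer {
	for i := range allocated {
		if sel.matches(allocated[i]) {
			return &allocated[i]
		}
	}
	for i := range ready {
		if sel.matches(ready[i]) {
			return &ready[i]
		}
	}
	return nil // nothing matched; the allocation would report "not found"
}

func main() {
	fleet := map[string]string{"agones.dev/fleet": "simple-udp"}
	allocated := []cachedGameServer{{Name: "gs-allocated", Labels: fleet, Available: 4}}
	ready := []cachedGameServer{{Name: "gs-ready", Labels: fleet, Available: 10}}
	sel := playerSelector{MatchLabels: fleet, MinAvailable: 2, MaxAvailable: 10}

	if gs := allocate(allocated, ready, sel); gs != nil {
		fmt.Println("chose:", gs.Name) // prefers the already Allocated server
	}
}
```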
Alternatives Considered
New Allocation Pathway
We could implement a whole new Allocation pathway (which we discussed earlier), which seems untenable.
Allocation "policy"
Previous design used an allocation policy to switch out strategies. That was deemed too inflexible when compared to
this current design.
Dependent on #1033
(Not a detailed design ticket, just putting down the basic idea to get the conversation started)
Is your feature request related to a problem? Please describe.
In both session based games and persistent games, it would be useful to be able to do something akin to:
Just give me a GameServer that has room for 2 (or n number) players, regardless whether it is Allocated or not
Which would pass back a GameServer if it had capacity available, and if not, would Allocate a whole new one (with enough initial capacity), and return that instead.
This would be useful for:
Describe the solution you'd like
Some kind of adjustment to GameServerAllocation that accounts for the player capacity that is available. Maybe a way to choose which State(s) are in the Allocation pool (Ready or Allocated)? This probably requires much thought, and would want to tie into #1197 so they don't conflict with each other.
Describe alternatives you've considered
Having a totally different Allocation path for player-capacity-based allocations -- but then we would likely end up with a huge number of paths for different types of allocations.
Additional context
Can't think of anything else.