In this section you can check out what I'm currently working on, practically in real time.
If some of this seems cryptic, it's because the logs seen here come directly from my personal productivity tool prio, and are really just "notes to self".
Nevertheless, if you are interested in what I'm currently working on, this space will certainly be updated FAR more often than the news section.
00:00:00 up2 LOG
-Fixed use of SoundTrack to allow sound to be turned off.
-Reworked powerups so that the client requests to take a powerup, and the server confirms it before it
is taken locally.
-This required a unique id for each powerup, which is assigned at creation time on the client.
-Currently, the server simply has an array of flags which are all initially true, which correspond to
the active state of each powerup.
-When a client grabs a powerup, he does nothing but send a request to the server with the id of the powerup.
-The server gets this, makes sure that it indeed was a valid request (i.e. the powerup was active),
deactivates the powerup, and sends a confirmation back to all clients (client id and powerup id).
-Clients will all deactivate this powerup, and if the client id matches their own, they will award themselves
with a new powerup.
-All checks for "need" are done locally before the Take Powerup command is sent (i.e. can't take
armor if full), so there is of course a lot of potential for cheating here, as the server doesn't
care about any of the relevant local state of a client.
-The server then sets a respawn time for the powerup in question, and when it elapses the server sends
a respawn command to all clients.
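The server-authoritative powerup scheme above can be sketched as a simple flag array. This is a hypothetical reconstruction of the idea, not the game's actual code; the class and method names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: server keeps one active-flag per powerup (all initially true)
// and validates each TakePowerup request against it.
class PowerupFlags {
public:
    explicit PowerupFlags(std::size_t count) : active_(count, true) {}

    // Returns true (and deactivates the powerup) only if the request was
    // valid, i.e. the powerup was still active. The caller would then
    // broadcast (clientId, powerupId) to all clients as confirmation.
    bool TakePowerup(std::size_t powerupId) {
        if (powerupId >= active_.size() || !active_[powerupId])
            return false;           // stale or duplicate request: ignore
        active_[powerupId] = false; // server is authoritative
        return true;
    }

    // Called when the respawn timer elapses; the server would then send
    // a respawn command for this id to all clients.
    void Respawn(std::size_t powerupId) {
        if (powerupId < active_.size())
            active_[powerupId] = true;
    }

    bool IsActive(std::size_t powerupId) const {
        return powerupId < active_.size() && active_[powerupId];
    }

private:
    std::vector<bool> active_;
};
```

The important property is that a second request for the same powerup (e.g. two clients grabbing it in the same interval) is simply denied, so only one confirmation ever goes out.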
-Fixed a memory leak where the powerups in the spawnlist weren't deleted.
-Implemented entire death reporting pipeline and promptly realised that it was the wrong thing to do.
-I remembered earlier design thoughts about sending an alive flag with each network update, so death
and spawn could be detected automatically.
-What does this tell you about coding before you design? :(
-Even further errors, as I just saw that the armor is being sent along as well.... :(
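The alive-flag idea mentioned above can be sketched as edge detection on the flag carried in each update: spawn and death fall out automatically with no dedicated death-report messages. The class name is my own, purely illustrative.

```cpp
#include <cassert>

// Sketch: detect spawn/death as transitions of the per-update alive bit.
class AliveTracker {
public:
    AliveTracker() : prevAlive_(false) {}

    enum Event { None, Spawned, Died };

    // Called once per received network update.
    Event OnUpdate(bool alive) {
        Event e = None;
        if (alive && !prevAlive_)
            e = Spawned;          // false -> true edge
        else if (!alive && prevAlive_)
            e = Died;             // true -> false edge
        prevAlive_ = alive;
        return e;
    }

private:
    bool prevAlive_;
};
```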
-Got basic spawn, play, die pipe in, along with powerup grabbing.
-Should be able to begin implementation of remote case soon, however, need to solve the question
of Pilot and Hud usage in Ship's implementation for the remote ships.
-For the purposes of multiplayer, I have made the old Pilot class into LocalPilot.
-Ship will be based on the new abstract Pilot interface, and most likely CL_Info will inherit from
Pilot to serve as the Pilot object for the MPRemoteShips.
-Something similar for Huds, but I think that the remote hud will simply be a proxy object
00:00:00 up2 LOG
-Numerous things discovered today.
-Got the basic network update pipe in, so that you can see a remote ship. This is currently using the update
from the local ship, offset a bit.
-Have tested a few different update intervals (from the server) and some things have come to light.
-The remote ships will NOT be the same class as the local ship, which is now MPShip : public Ship
for the addition of a function to export a CL_Update for network transport.
-The remote ships will be a simple graphical thingy that shares code for rotation lerping,
thrusting, and weapon fire with the normal Ship class (a baseclass to Ship and RemoteShip,
probably without any public interface at all).
-With this in place, I can keep tweaking the RemoteShip with the data from the local update
bounced back from the server at low intervals, to simulate network delay.
-This should allow me to get a good lerp going before testing the remote case.
-The hierarchy will probably have to be:
-Thing -> ShipBase -> Ship -> MPShip
-ShipBase -> MPRemoteShip
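The hierarchy above can be written out as a C++ skeleton, under the assumption from the log that ShipBase holds the shared rotation-lerp/thrust/fire code and exposes essentially no public interface of its own. All bodies here are stubs.

```cpp
#include <cassert>
#include <type_traits>

class Thing {
public:
    virtual ~Thing() {}
};

// Shared code for rotation lerping, thrusting, and weapon fire;
// no public interface of its own.
class ShipBase : public Thing {
protected:
    void LerpRotation(float /*dt*/) {}
};

// Full local simulation.
class Ship : public ShipBase {};

// Adds export of a CL_Update for network transport.
class MPShip : public Ship {
    // CL_Update ExportUpdate() const;  // named in the log, stubbed here
};

// Lightweight graphical proxy driven by server updates.
class MPRemoteShip : public ShipBase {};
```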
-Also, the network idea is to have the Client call back the MPGameState for all things Received from the
server. Also, per-frame updates are currently pulled from the MPGameState by the Client, this might be
a bad idea and may change.
-All Send to server stuff is being pushed through the Client by the MPGameState. This is convenient
for things like menu changes to clientinfo and config (because the MPGameState's menu triggers the
changes, and can thus push them to the Client at the right time).
-This is still all nice, since it uses the abstract interface.
-Got a basic lerp going now (had forgotten to lerp on all updates!!!)
-As suspected, lerp quality is greatly influenced by packet-loss.
-This is worse than low update rates, as a consistent time interval is better for the algorithm.
-Am however parameterizing the deltas with the update interval as received.
-Will aim for a standard server update of 50 hz or higher.
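The "parameterizing the deltas with the update interval as received" idea can be sketched as a per-frame lerp whose step scales with frame time relative to the measured server interval. This is an assumed reconstruction, not the game's actual interpolation code; the clamping behavior is my own choice.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Move the rendered position a fraction of the way toward the latest
// snapshot each frame; the fraction is parameterized by the update
// interval as actually received, so uneven packet arrival degrades
// gracefully instead of breaking the lerp.
Vec2 LerpToward(Vec2 current, Vec2 target, float frameDt, float updateInterval) {
    float t = frameDt / updateInterval; // parameterized delta
    if (t > 1.0f)
        t = 1.0f;                       // clamp when a packet is late/lost
    return Vec2{ current.x + (target.x - current.x) * t,
                 current.y + (target.y - current.y) * t };
}
```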
-Current CL_Update is 27 bytes. Could probably compress vectors to lots less.
-Positions as unsigned shorts (2 * 2 * 2)
-Speed as unsigned chars? (1 * 2)
-This gives 8 + 2 + 2 + 1 = 13 bytes
-Check if armor values can use unsigned char instead (saves one byte)
-However, 27 * 32 = 864, and 13 * 32 = 416, so this can probably be sent in a single UDP packet.
-Still way too much for a low-end modem at 50 hz.
-Can probably run 10 hz reasonably, if all above compression is done.
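The position-to-unsigned-short compression above amounts to quantizing each float coordinate over the known world range. A minimal sketch, assuming a hypothetical world extent of 4096 units (the real range would come from the map size):

```cpp
#include <cassert>
#include <cstdint>

const float kWorldMax = 4096.0f; // ASSUMED world extent, not from the log

// Quantize a coordinate in [0, kWorldMax] into 16 bits.
std::uint16_t PackPos(float v) {
    float n = v / kWorldMax; // normalize to 0..1
    if (n < 0.0f) n = 0.0f;
    if (n > 1.0f) n = 1.0f;
    return static_cast<std::uint16_t>(n * 65535.0f + 0.5f); // round to nearest
}

float UnpackPos(std::uint16_t q) {
    return (q / 65535.0f) * kWorldMax;
}
```

With these numbers the worst-case quantization error is about 4096 / 65535 ≈ 0.06 world units per axis, which is far below anything visible, so dropping from 4-byte floats to 2-byte shorts costs essentially nothing.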
-Should keep a remote version of the local ship for ping tests and "ghosting". Good debug tool for testing.
00:00:00 up2 LOG
-Now I'm 27!
-Continuing work on splitting GameState subclasses.
-Moving bosses and enemies to the SPGameRule implementation, and making the necessary interface changes
to other classes to make them more transparent.
-Subclassed PowerupContainer for multiplayer, and implemented powerup respawning.
-They respawn after a scripted amount of time (same for all), and with a custom sound.
-Changed Efx::Powerup() to use 2d sound instead of normal play.
-Implemented basic pipe for sending and receiving Ship updates.
-The client will send an update packet each frame. This will be UDP in the remote version.
-The server will cache this update if it arrives.
-Each SV_Client will send out these cached updates to each client each time the update interval elapses.
-This update interval will probably be variable, maybe decided by the client as part of the
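The cache-the-latest-update scheme above can be sketched per client slot: packets may arrive several times per interval (or not at all), and only the newest one survives until the send pass. Names are illustrative guesses at the design, not the real classes.

```cpp
#include <cassert>

// Hypothetical shape of the per-client update; the log says the real
// CL_Update is 27 bytes.
struct CL_Update { float x, y; int seq; };

class SV_ClientSlot {
public:
    SV_ClientSlot() : hasUpdate_(false) {}

    // Called whenever a packet arrives; overwrites any older cached
    // update, so the send pass always fans out the freshest state.
    void CacheUpdate(const CL_Update& u) {
        latest_ = u;
        hasUpdate_ = true;
    }

    bool HasUpdate() const { return hasUpdate_; }
    const CL_Update& Latest() const { return latest_; }

private:
    CL_Update latest_;
    bool hasUpdate_;
};
```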
00:00:00 up2 LOG
-Got MPState working totally.
-SV_Config now includes a MissionContainer, just like the planet, and loads the same kinds of mission configs.
-Fixed MPState to exit and restart when a map change is detected. Very clean.
-The game still continues, as the client and server are external :)
-This will rule! :)
00:00:00 up2 LOG
-Put all GameState2 code (the singleplayer mode) into GameState.
-SPGameState is the new singleplayer subclass, and MPGameState is the multiplayer version.
-Will be shifting code over to SPGameState as the need arises, and making certain functions virtual
in GameState, so as to facilitate as much reuse as possible of the existing game.
-Completed moving the basic single player specifics of GameState to SPGameState.
-Got the MPGameState up and running as well in the new hierarchy.
00:00:00 up2 LOG
-The beta has gone well: only one user could make it crash, and that had something to do with the music. Have not tried to reproduce it yet.
-Commencing multiplayer work because the product needs it.
-Going to version 22.214.171.124.
-Implemented basics of MultiMenu, and the core of the ServerList classes.
-Finished core of connection (transport protocol level), authorization (game protocol level), and configuration
(game world state) process for local client.
-I have decided to let the concept of Client be the game level multiplayer communication abstraction.
-This allows me to do without a local network abstraction, as the LocalClient and SV_LocalClient communicate
directly with each other via function callbacks.
-This makes the whole transport representation abstract, so there is no need to pack things into messages for
local communication, which is nice, and it moves all the network encryption work from the Server to the
SV_Client subclasses, which is also cleaner than old designs.
-The SV_Client will also handle all intermediate states for clients, allowing the local version to completely
bypass emulating remote states, which is really great.
-There is a LocalServer with a somewhat extended interface compared to Server, but this is only for use by
the LocalClient which is parameterized with it (a simple "virtual connection" interface). The LocalServer
otherwise extends no functionality from the Server, and the core code for all client handling is transparent
(between local and remote) in Server.
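The "virtual connection" idea above can be sketched with an abstract server interface whose local implementation is just a direct function call, so nothing gets packed into messages, and validation happens at the point of the call. The interface and method names below are my guesses at the spirit of the design, not the real code.

```cpp
#include <cassert>
#include <string>

// Abstract game-level server interface; a remote implementation would
// serialize the call into a packet, the local one calls straight through.
class ServerInterface {
public:
    virtual ~ServerInterface() {}
    virtual bool RequestMapChange(int clientId, const std::string& map) = 0;
};

class LocalServer : public ServerInterface {
public:
    LocalServer() : hostId_(0), map_("intro") {}

    // Direct call: validation happens here, and on success the effect is
    // instantly reflected in server state (no network cache step).
    bool RequestMapChange(int clientId, const std::string& map) override {
        if (clientId != hostId_)
            return false; // only the host may switch maps
        map_ = map;
        return true;
    }

    const std::string& CurrentMap() const { return map_; }

private:
    int hostId_;
    std::string map_;
};
```

Because the LocalClient is parameterized with the interface, the same client code works whether the call lands in a LocalServer or gets serialized over UDP.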
-Implemented a bare bones MultiState, and went ahead on the MPGameMenu.
-Implemented player list, based on local client info.
-Using the tiny font here, very Unreal Tournament! :)
-Implemented map list.
-Implemented loading of MPFactory config data from script.
-This is the default, in case there are no multiplayer configs to load data from.
-Implemented map switching via gui -> server -> back to all clients
-This led to some server validation of client authority to do map switches, and the security in the
local case here is quite strict, but that is good I guess.
-This network solution (the local abstraction) is proving to be quite interesting.
-It feels like it differs a lot from the previous stuff I have done where I had the abstraction below
the application packet level. In these solutions, the entire system was heavily geared towards the limitations
of UDP packets. Of course, this solution will also use UDP at some level, but since I am implementing the
local version first, and the fact that the abstraction is much higher up, this feels quite different.
-The main thing is that the local client and server don't cache their send and receives to and from each other,
but rather the effects are instantly reflected in the state on the receiving end. The update passes thus need
to be constructed so that they can detect that changes to state have already occurred, and need to be handled.
-This isn't a problem even in the remote case, as all the local functions are doing now is setting state and
validating that the state could be set, and then the update pass works on this new data; essentially the
same thing as before, only the step where data is offloaded from the local network cache is removed.
-In previous designs, this step was where incoming data was analyzed BEFORE making any state changes.
-This is the same in principle as the current setup, since for example when a host client wants to change
the SV_Config, the function wherein this is done has validation steps to make sure the function call
fails if there is something wrong with the data.
-The remote implementation will do a Receive on each remote client at the beginning of the frame. This will
result in data being pulled off the local net cache (since data can arrive async), and then the SV_RemoteClient
will be in the same position as the SV_LocalClient is when a function is called to change server state.
-Loopwise, the changes in state are acted upon at the same time. Since the clients are updated after the server
(in the MPFactory), that means that any state changes that they do directly (locally) to the server will happen
before the next Update call. Note that this is also true of the remote case, where Receive() is called before
the Update of the world.
-On the output side (server send), any Write_ functions that the server calls due to world processing will
locally result in a direct function call to change client state, and remotely result in a buffer write in the
outgoing cache, to be sent en masse at the end of the frame.
-The whole key is that I am used to buffering network stuff to be processed (received or sent) at fixed points
in the loop. This makes both cases the same, even if they are implemented in radically different ways (as
compared to previous solutions).
-One interesting case is the problem I just had with map switching.
-The client menu changed the local copy of the server config, intending to send this to the server when
the user pressed ok in the menu. What happened instead was that the map was loaded as soon as the local
config changed, which seemed strange.
-It wasn't strange though, due to the fact that the local loop was set up to always compare the config map
name to the currently loaded map, and thus switched right away.
-The problem here is really not the design, but the fact that the menu should act like a standard modal
dialog, and the changes shouldn't happen until you press "ok". Making changes directly in the target
data structure is the problem here, since this data is being monitored all the time.
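The modal-dialog fix described above is essentially "edit a temp copy, commit on OK", so the monitored live config never changes mid-edit. A minimal sketch with illustrative names:

```cpp
#include <cassert>
#include <string>

struct ServerConfig { std::string mapName; };

class ConfigMenu {
public:
    // The menu holds a reference to the live (monitored) config, but all
    // edits go to a private temp copy.
    explicit ConfigMenu(ServerConfig& live) : live_(live), temp_(live) {}

    // The loop watching live_.mapName sees nothing here, so no premature
    // map switch is triggered.
    void SetMap(const std::string& map) { temp_.mapName = map; }

    void PressOk()     { live_ = temp_; } // commit all edits at once
    void PressCancel() { temp_ = live_; } // discard edits

private:
    ServerConfig& live_;
    ServerConfig  temp_;
};
```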
-Removed equality check between clientinfos when validating clients, since when a client wants to change clientinfo
details (like name), this is done directly in local client data (the menu modifies directly)
-This is so that the name change shows up immediately in the gui (is read from local memory of course)
-So, when the server receives the info change, the info has already been changed in the local client memory,
and so the clientinfos don't match.
-This could be solved by changing so the menu gets the localclient info from a temp copy of the info (like
the config works), but this would be special casing for the local case, because all this validation is only
possible in the local case (as the server only has its local copy of client info when the client is remote,
and can't compare directly to the "remote end" like LocalServer can with the local clients which are actually
passed into the function).
-It might be better to just pass the id of the local client into these functions, and only validate that
host operations are done by someone that the server thinks is the host.
-This is simple to spoof locally, since you can send any valid client id, but why would you want to
mess with your own server? You will always be host on it anyway, and you can of course hack the hell
out of the local part anyway.
-Having to send a LocalClient pointer is better, but not much, since you could probably fake the interface
pointer in some way, but it all boils down to messing with your local server, which is not really an issue.
-Following on the above discussion, I removed the validation completely by only passing the id of the calling
client into LocalServer functions.
-No hack problem here, and remote validation will be based on local server info as well as network addresses.
-For the time being, all host operations can uniformly be denied in the remote cases, since the only case
where you will have a remote host is when running a dedicated server.
-Another interesting aspect of this design is that you actually have a place to do extra work depending on
if the client is local or remote. The previous designs were such that you couldn't know the concrete connection
type that a client was using, since all the relevant work was being done above the level of abstraction.
-Now instead all operations being sent from server to client are handled by the SV_LocalClient operating
directly on the LocalClient interface, and all operations being sent from client to server are handled by
the LocalClient operating directly on the LocalServer interface.
-This allows for as much custom messing around as anyone could need, as the conversions are hidden
behind abstract interfaces.
-There will probably be more code written in total, but there are no cases where the local version
needs to emulate a remote version just to be compatible. The previous designs were very much lowest common
denominator in that way, and that isn't very robust or flexible.