User-specific data handling strategy, please comment?

#1

I am creating a single-user game that will (hopefully) have many people playing it. Each user has their own data set.

With my app I am adopting the strategy of using Pengines. Each box in my server farm will have its own Pengines instance. I want each Pengine instance to be decoupled from, and independent of, any specific user’s data.

Prolog code partitioning

Each instance will start up with the game logic code resident, but no user data. So the Prolog code will be partitioned into two distinct sets: the game logic code and the user data code. All the user data is in predicate form and is wrapped in an outer predicate named user_data/1. For example:

user_data(item_location(player_1, kitchen)).
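
As a sketch of what this wrapping could look like on the Node.js side before the blob is stored or shipped to a Pengine, here is a hypothetical helper (the function name and sample facts are made up for illustration):

```javascript
// Hypothetical helper: wrap an array of Prolog term strings in
// user_data/1 facts, producing the single text blob described above.
function toUserDataBlob(facts) {
  // facts: e.g. ["item_location(player_1, kitchen)", ...]
  return facts.map((f) => `user_data(${f}).`).join("\n");
}

const blob = toUserDataBlob([
  "item_location(player_1, kitchen)",
  "score(player_1, 120)",
]);
console.log(blob);
// user_data(item_location(player_1, kitchen)).
// user_data(score(player_1, 120)).
```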

User Data Load/Save

Here are the steps I expect to take whenever a user executes a transaction with my game. Each transaction is independent of the others, so the user is not “logged in” in a formal sense. Each transaction executes the following steps:

  • (Node.js) Using the user ID, retrieve the user data from a database server. The data is a list of user_data/1 facts, as described above, retrieved from the server as a single text blob.
  • (Pengine) The Prolog code in the Pengine instance has a predicate that retracts all user_data/1 facts, thereby clearing out the previous user’s data while leaving the game logic code alone.
  • (Pengine) The user’s data is loaded into the target Pengine instance. That instance was created at startup by the Node.js host app, and its reference is stored in a global, permanent variable. The Pengine instance and the Node.js instance are co-resident on the same box, so everything is accessed via localhost URLs. When the Node.js app shuts down, the Pengine instance is released.
  • (Pengine) The current user request/action is executed against the Pengine instance to effect game play.
  • (Node.js) To complete the transaction, the Node.js app requests the current listing of the user_data/1 facts, which is saved to the database server, replacing the previous content for the current user ID.
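
The steps above can be sketched as one Node.js function with the database and Pengine accesses injected, so the sequencing is explicit. Every name here (runTransaction, loadBlob, ask, saveBlob, the Prolog goal strings) is hypothetical, not an actual Pengines API:

```javascript
// Sketch of one stateless game transaction. The db and Pengine
// operations are passed in, which also makes the flow testable.
async function runTransaction(userId, action, { loadBlob, ask, saveBlob }) {
  // 1. Fetch the user's user_data/1 facts from the database.
  const blob = await loadBlob(userId);

  // 2 + 3. Clear the previous user's facts, then load this user's facts.
  await ask("clear_user_data");
  await ask(`load_user_data("${blob}")`);

  // 4. Run the requested game action against the Pengine.
  const result = await ask(action);

  // 5. List the (possibly updated) facts and write them back.
  const updated = await ask("list_user_data");
  await saveBlob(userId, updated);
  return result;
}

// Demo with in-memory stubs so the call ordering is visible.
const calls = [];
const stubs = {
  loadBlob: async () => { calls.push("load"); return "user_data(x)."; },
  ask: async (q) => { calls.push(q); return "ok"; },
  saveBlob: async () => { calls.push("save"); },
};
runTransaction("user-1", "move(north)", stubs).then(() =>
  console.log(calls.join(",")));
```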

My questions are:

  • What is the best way to get the user_data/1 facts over to the Pengine instance? Do I simply pass in a giant string with a Pengines call, or is it better to create a file, even if only in memory, and have the Pengine instance consult it? (Remember, the Node.js app and the Pengines server are on the same box.)
  • Does Prolog have a gzip utility or other compression command? My guess is that the user data will compress quite well due to the frequent repetition of similarly named predicates. If not, I will do the compression on the Node.js side.
  • What do you think about the general strategy?

Note: I realize that a fair amount of complexity comes from making the Pengine instances “agnostic” of each user’s data, thereby forcing the need to transport the user data around. But this is the general architecture I have seen for most high-capacity server farms, since any server can fail with minimal impact on the overall application universe. In addition, it makes load balancing and dynamic server allocation a lot easier, since you don’t need to target any particular box. My opinion.

Note 2: The user game transactions are voice-driven, so there is no need for the fast, low-latency response times of a first-person shooter. There is plenty of time to do the operations above.

#2

Just a comment since you asked for comments.

I don’t know exactly what you are doing, because I haven’t done something similar, but if you need to load large Prolog data files fast, then use the Quick Load File (.qlf) format.

Here are real stats from a few weeks ago on loading large files. Notice that the load time for a 2.30 GB file went from 11 minutes to 24 seconds.

#3

Thanks Eric.

#4

While I appreciate the thanks, I didn’t really earn it, as I only made a reference to the feature.

Thank Jan and the others who created it.
