This weekend I woke up late Saturday night and was working on an unrelated web project. That project was giving me issues that required some HTTPS debugging using a tool called Fiddler, a great tool for reading HTTPS traffic. During my debugging I caught a glimpse of a UE4 ajax marketplace call that pulled down marketplace data as a JSON object. I immediately thought “I bet I could do something with this”, and my long-standing annoyance with the marketplace launcher’s lack of search functionality led me to start a jam session on “could I rebuild the marketplace launcher?” Turns out, you can replicate quite a lot of it easily.

I hate web development and I thought this would be a fun break from things. Not only did it end up being fun, it ended up being quite educational!

The Goal

My goal was simple: create something that resembles the marketplace frontend found in the UE4 launcher, but with a search feature. On this journey, search actually ended up being the last thing I implemented, on top of many other features I thought would be nice to have.

First Steps

The Development Platform

I am a big fan of Popcorn Time and know that it’s basically a form of Chromium wrapped around node.js. I’m a big fan of node.js too. Looking into how this type of development is done, I stumbled upon nw.js, which turned out to be exactly what I wanted. I saw that others were using the request node module to do web requests with nw.js, so I started a blank project and popped that module in. This, combined with cURL for Windows, gave me a base for sending arbitrary web requests.

Figuring Out the Authentication Process

From the very beginning I wanted to access the marketplace using my credentials so I could tell which assets I owned and which I didn’t. I underestimated how much I didn’t know about this process, and this step took most of my time.

My first approach was to use Fiddler to capture the login process of the web version of the marketplace and then simply recreate it. I’m not skilled at this sort of thing, so after a few hours I gave up. I kept getting ‘400 Bad Request’ responses any time I attempted to hit the login endpoint using cURL.

I then tried various ways to sniff web traffic, including the actual launcher’s traffic. Here I learned the launcher goes through a full OAuth process, whereas logging in from the web only goes through what appears to be a partial one. I tried abusing my way through the OAuth chain and replicating what the launcher was doing, but Fiddler wasn’t showing me the data the launcher was sending to Epic’s MCP/webserver, and I’m not skilled enough in reverse engineering to dig deeper. Setting up a more advanced HTTPS ‘man-in-the-middle’ proxy server would probably work, but the Google results on how to do so were scary. I went back to capturing the web login process.

I found I had two major problems: cookie management and unknown API requirements.

Battling Cookies

Storing the cookies needed during the login process was a big problem for me, and I still can’t figure out how to get Epic’s cookies to save in nw.js’s built-in cookie support. I decided to say ‘I don’t know what I’m doing, let’s accept the fact I have to write some bad code’. With this sin in mind, I simply turned off all cookie and redirect support in nw.js and decided to handle it myself. Any time a web request wanted to set a cookie, I just set a key-value pair in a global object. Not the most elegant solution, but hey, I was finally storing cookie data.

Not knowing much about web security or the HTTP protocol, I thought sending these cookies back to Epic and having them accepted would be a pain in the ass, if not impossible. It turns out the ‘cookie-access’ rules I know of, the ones that prevent various ‘cookie exploits’, are enforced almost solely by browsers. nw.js isn’t really a browser, and you can get it to do pretty much whatever you want. The request module and cURL both make it easy to set your ‘Cookie’ header string directly. Once I was sending good cookie data back to Epic, its webservers were much happier with me.
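The whole ‘global cookie jar’ hack can be sketched in a few lines. This is illustrative rather than the actual project code: real Set-Cookie headers carry attributes (Path, Expires, HttpOnly) that this deliberately throws away, and the cookie names are made up.

```javascript
// Rough sketch of the 'global cookie jar' approach described above.
var cookieJar = {};

// Record every Set-Cookie header a response hands us as a key-value pair.
function storeCookies(setCookieHeaders) {
    (setCookieHeaders || []).forEach(function (header) {
        var pair = header.split(';')[0];   // drop Path=, Expires=, HttpOnly, etc.
        var eq = pair.indexOf('=');
        if (eq > 0) {
            cookieJar[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
        }
    });
}

// Build the 'Cookie' request header string to send back on later requests.
function cookieString() {
    return Object.keys(cookieJar).map(function (name) {
        return name + '=' + cookieJar[name];
    }).join('; ');
}

storeCookies(['EPIC_SSO=abc123; Path=/; HttpOnly', 'csrf=xyz; Path=/']);
console.log(cookieString()); // "EPIC_SSO=abc123; csrf=xyz"
```

The string `cookieString()` produces is what goes into the `Cookie` header of the request module’s options object (or cURL’s `--cookie` flag).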

Fighting the API

Now that cookie management was solved, I just needed to figure out the right sequence of endpoints and what data goes where to facilitate the login process. Fiddler helped immensely with this and all I had to do was make proper use of what Fiddler was telling me.

I kept my web requests as small as possible and added data to each request one piece at a time, trying to establish all the parameters Epic’s API expects. During this process, I accidentally clicked the Submit button too fast when logging in to the web marketplace and was told I had failed to log in because I submitted the form twice. Looking at the form data, sure enough there are some hidden synchronizer tokens that get updated every time the login form loads. Instead of piecing all this form data together myself, I decided to simply load Epic’s login form from their web server, rip out the unneeded bits like ‘reset password’ and ‘register’, and have that form run my own request instead of submitting to Epic.
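The ‘load their form and rip out the bits I don’t want’ idea can be sketched as plain string surgery. The real project did this with DOM manipulation inside nw.js; the class names, markup, and regexes here are hypothetical stand-ins.

```javascript
// Hypothetical sketch: take the fetched login form HTML (which carries its
// fresh hidden synchronizer tokens), strip the links we don't want, and
// neuter the action so our own code handles submission instead of Epic.
function prepareLoginForm(formHtml) {
    return formHtml
        .replace(/<a[^>]*class="reset-password"[\s\S]*?<\/a>/g, '') // drop 'reset password'
        .replace(/<a[^>]*class="register"[\s\S]*?<\/a>/g, '')       // drop 'register'
        .replace(/action="[^"]*"/, 'action="javascript:void(0)"');  // don't submit to Epic
}

var sample = '<form action="https://accounts.example.com/login">' +
    '<input type="hidden" name="token" value="abc">' +
    '<a class="register" href="/register">Sign up</a></form>';
console.log(prepareLoginForm(sample));
```

The key point is that the hidden token inputs survive untouched, so submitting the form data yourself still passes Epic’s double-submit check.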

Successful Login

JavaScript and node.js made this really easy to do, and once I got my first successful login on the first auth endpoint, the rest was easy. The web marketplace appears to go through an OAuth login process as well after you get your ‘Single Sign-On (SSO)’ cookie, and I’m not sure why. I decided to go through this process too, but the OAuth token-authorizing step doesn’t return anything. Looking at Fiddler, the same thing happens during the ‘real’ web marketplace login, so I am assuming that OAuth logins for the web ‘client_id’ are blocked. This gives me hope that one day there will be a proper ‘3rd party’ OAuth chain so people like me can script up safe and secure ways to use other people’s data associated with Epic, such as “do they own my Marketplace asset?”

Here is what my fancy login UI looks like:

Login

Grabbing the Marketplace Data

Now that I was logging in correctly, it was time to see if pulling down the marketplace data was even feasible. Admittedly this should have been the first thing I tried, but I rushed in without thinking because I was anxious to get a successful login working. At this point I added a button that would ‘skip login’ and go directly to data fetching, and I kicked myself a bit. Still, the ability to log in allows for some great features later.

Fiddling with ajax-get-categories

If you look at the page source of any web marketplace page using a ‘dumb’ client such as cURL, you’ll see that it’s extremely light and that most of the asset data is somehow being pulled in by JavaScript. I tried opening up the source JavaScript and unminifying it to see if I could simply tap into existing API functions. I spent a few hours looking through it, as I found it fascinating, and I learned a lot about the different API calls Epic has set up, but with it being minified it was hard to find exactly what I wanted. Along the way I also found out that they are using Handlebars to generate HTML from templates in JavaScript. This was a great sign, as it means the data probably exists in a form where I can generate my own HTML using my own Handlebars templates.

Looking in Fiddler, there was only one web request that looked like it had anything to do with fetching data. As soon as I inspected it, it was very clear that there is definitely a way to get marketplace data as a JSON object. Now I just had to figure out the API.

Ajax JSON Jackpot

Fetching All Marketplace Data

I tried navigating around the web marketplace a bit more to see if any more API endpoints would show up, but they didn’t. Everything I could find goes through ajax-get-categories. The problem with this endpoint is that it only returns 25 assets at a time, and only for a given category. I tried adding all sorts of parameters I could think of, such as count, limit, end, num, length, and all, and tried arbitrary endpoints such as ajax-get-assets, but I couldn’t get anything to work.

I’m not that great in the JavaScript world and I knew that fetching all this data asynchronously would be a pain in the ass for me, so I decided I’d just get all the data at once. It isn’t pretty: I ended up with some ugly API code and a whole lot of bad, bad, bad global variables, but it did exactly what I needed it to do.

In a nutshell, this process is:

  1. Call ajax-get-categories to get a list of all available categories and how many assets are in each category
  2. Keep calling ajax-get-categories, adjusting the start parameter as needed, until we have all assets for each category
  3. Merge all this data together into one giant JavaScript object
  4. Once the number of assets we have matches the counts from Step 1, we know we have all the available data

In the form of some nasty ass code:

api.prototype.getAllAssets = function() {
    global.fetching = true;
    // Grabbing environments will allow us to get a full list of categories
    module.exports.getAssetsInCategory('assets/environments', 0, false, function (json) {
        
        var categoriesLeft = json.categories.length;
        global.categories = json.categories;
                
        // Build Category List
        for (var i = 0; i < json.categories.length; ++i) {
            marketplace[json.categories[i].path] = { name: json.categories[i].name };
            module.exports.getAssetsInCategory(json.categories[i].path, 0, true, function (json, path, finished) { 
                if(finished) {
                    categoriesLeft--;
                    if (categoriesLeft == 0) {
                        global.fetching = false;
                    }
                }
            });
        }		
    });
}

You’ll see a few bad practices in that snippet alone. The fact that I’m using module.exports to reference other functions in the same module is probably absolutely terrible. It’s not so apparent here, but in getAssetsInCategory I abuse the global space pretty badly. I am quite skilled in C++ and UE4 in general, but when it comes to JavaScript and me: if it works, it works. You can look at the full code for this in api.js.
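For reference, getAssetsInCategory (whose body isn’t shown above) boils down to a paging loop over ajax-get-categories. Below is a simplified, hypothetical reconstruction: `fetchPage` stands in for the actual HTTP call, and I’m assuming the endpoint’s response carries a total count alongside each page of 25 assets.

```javascript
// Hypothetical paging loop: keep bumping `start` until we've collected
// `total` assets for the category, then hand the whole list to `done`.
function getAssetsInCategory(path, fetchPage, done) {
    var assets = [];
    function next(start) {
        fetchPage(path, start, function (json) {
            assets = assets.concat(json.elements);
            if (assets.length < json.total) {
                next(start + json.elements.length); // grab the next page of 25
            } else {
                done(assets);
            }
        });
    }
    next(0);
}

// Fake fetcher simulating a 60-asset category served in pages of 25:
function fakeFetch(path, start, cb) {
    var page = [];
    for (var i = start; i < Math.min(start + 25, 60); ++i) { page.push({ id: i }); }
    cb({ total: 60, elements: page });
}

getAssetsInCategory('assets/environments', fakeFetch, function (assets) {
    console.log(assets.length); // 60
});
```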

Manipulating the Marketplace Data

The API functions I wrote produce a global object that contains all the marketplace data. To inspect it, I just logged it to Chrome’s console whenever fetching completed. This turned out to be an extremely powerful way to analyze the marketplace data and figure out what I needed and where.

Asset Data

Getting this into usable HTML form was trivial using Handlebars, which is probably why Epic uses it too. To render a category and all of its assets, I created this Handlebars template. Once the HTML layout was done, popping in the values from the marketplace data was as easy as… riding a bike? If I were wittier, I could think of a better Handlebars pun.

	<!-- Template for showing a category and all the assets in said category.
	     Note: the {{expressions}} were lost when this post was extracted; the
	     field names below are reconstructed guesses, not Epic's actual data keys. -->
    <script id="category-template" type="text/x-handlebars-template">
      <div id="{{path}}" class="categorylist jumptarget">
        <h1>{{name}}</h1>
        <hr>
        <div class="wrapper">
          <ul>
            {{#each assets}}
            <li id="{{id}}" class="asset" data-effectivedate={{effectiveDate}} data-owned={{owned}} data-price={{price}} data-rating={{rating}} data-raters={{raters}}>
              <a href="#" onclick="showDetailsFor('{{../path}}','{{id}}')"><img class="thumb" src={{thumbnail}}></a>
              <span class="title">{{title}}</span>
              <span class="author">{{author}}</span>
              {{#if raters}}
              <span class="raters">{{rating}}/5 [{{raters}}]<span class="glyph glyph-star"></span></span>
              {{else}}
              <span class="raters">Unrated</span>
              {{/if}}
              <span class="price">{{#if owned}}Owned{{else}}{{#if price}}{{price}}{{else}}FREE{{/if}}{{/if}}</span>
            </li>
            {{/each}}
          </ul>
          <br/>
        </div>
      </div><!-- {{path}} -->
    </script>
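Hooking a template like this up is just a matter of compiling it once and feeding it each category object from the marketplace data. The real code uses Handlebars proper (roughly `Handlebars.compile($('#category-template').html())`); since Handlebars needs an npm install, this sketch swaps in a toy {{placeholder}} substitutor purely to show the flow.

```javascript
// Toy stand-in for Handlebars.compile: returns a render function that
// swaps each {{key}} for the matching property of the data object.
function compile(template) {
    return function (data) {
        return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
            return data[key] !== undefined ? data[key] : '';
        });
    };
}

var categoryTemplate = compile('<h1>{{name}}</h1><span class="count">{{count}} assets</span>');
console.log(categoryTemplate({ name: 'Environments', count: 42 }));
// <h1>Environments</h1><span class="count">42 assets</span>
```

With the real library, the render function works the same way; you just get `{{#each}}`, `{{#if}}`, and friends on top.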

Replicating the Frontend

I won’t go into much detail here. Once you have the marketplace data, laying out the frontend is just your average HTML + CSS + JavaScript/jQuery development. For this project I used LESS to compile my CSS, but most of the work was just looking at Epic’s launcher for reference and writing CSS to match. I could have sniffed around and used Epic’s actual stylesheets, but learning to work with their compressed versions would have been more work than writing my own from scratch. The design work was already done, aside from a few features I added, so all that remained was grunt work.

Unexpected Snags

I ran into a few issues I did not expect when manipulating the marketplace data. The three worth mentioning are:

Asset ‘Categories’ Data Is Wrong

The first step in getting assets is to ask for the assets in a category. These ‘top-level’ categories are what all assets must fall into to be shown in the launcher or web marketplace. Assets themselves have a categories property as well, perhaps so that an asset can identify itself as belonging to two categories. Whatever the case, the categories property within assets is outdated and/or wrong. This means if you try to look up information about the category an asset belongs to, as opposed to looking up the assets that belong to a category, you’re going to be met with some challenges.

Asset Categories Mismatch
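One quick way to see the mismatch: fetch a category’s assets, then check each asset’s own categories property against the category you fetched it under. The property names below mirror what ajax-get-categories appears to return, but treat them as assumptions.

```javascript
// Return assets whose own `categories` data doesn't include the
// category path they were actually fetched under (i.e. stale data).
function findMismatchedAssets(fetchedUnderPath, assets) {
    return assets.filter(function (asset) {
        return !(asset.categories || []).some(function (cat) {
            return cat.path === fetchedUnderPath;
        });
    });
}

var assets = [
    { title: 'Old Ruins', categories: [{ path: 'assets/props' }] },        // stale
    { title: 'Pine Forest', categories: [{ path: 'assets/environments' }] } // consistent
];
console.log(findMismatchedAssets('assets/environments', assets).length); // 1
```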

Asset Descriptions Are Malformed HTML

It seems like the majority, if not all, of marketplace asset descriptions have some basic HTML injected into them. This isn’t a problem in itself; in fact it’s even helpful. The problem is that however these HTML tags are being created, they are being created wrong, which means some HTML fixup is required if you need to display them properly. To put the issue clearly: there’s a good chance an asset’s description has an HTML closing tag that doesn’t specify an HTML element.

Simply put, often <a href="...">Text</> is used instead of <a href="...">Text</a>. See the difference in the closing tags? Trying to render this HTML directly can cause problems if you don’t do any pre-processing of it first.

I don’t know the best way to fix this issue, but I wrote a dirty function that seems to do the job.

    // Fix Epic's broken ass malformed closing tags i.e. <a></> instead of <a></a>
    // I wrote this using some really hacky logic and the assumption that jQuery's ".parseHTML"
    // results in a 'good enough DOM' where I can extract the tags I need to close. I don't know if this
    // fixes every case, but it appears to be 'good enough' for now.
    // It also replaces new lines with <br> and adds <hr> to any </h1>
    // It then takes the fixed HTML and makes all links open in a new browser
    $('.fix-html').each(function(index) {
        var fixed = $(this).html();
        fixed = fixed.replace(/&lt;/g, '<');
        fixed = fixed.replace(/&gt;/g, '>');
        var badTagIndex = fixed.indexOf('</>');
        while (badTagIndex != -1) {
            var badDOM = $.parseHTML(fixed.substring(0, badTagIndex));
            var elementTag = $(badDOM).last().get(0).tagName.toLowerCase();
            fixed = fixed.replace('</>', '</' + elementTag + '>')
            badTagIndex = fixed.indexOf('</>');
        }
        fixed = fixed.replace(/(?:\r\n|\r|\n)/g, "<br>"); // Makes newlines pretty
        fixed = fixed.replace(/<\/h1>/g, "</h1><hr>"); // Adds <hr> to <h1> i.e. Contact and Support
        fixed = fixed.replace(/<br><br><h1>/g, "<br><h1>"); // Removes extra newline before <h1>'s
        fixed = fixed.replace(/<hr><br>/g, "<hr>"); // Removes extra newline after <h1>'s        
        $(this).html(fixed);
        
        // All links in these descriptions should open in a new browser
        $(this).find('a').on('click', function(){
            open(this.href);
            return false;
        });
        
    ...

Contact and Support Not Treated As ‘Proper’ Data

Not the biggest issue, but important to note nonetheless. Have you ever noticed that the Contact and Support sections of asset descriptions are kind of inconsistent and prone to errors? This is because the Contact and Support data is actually a hacky addition to the same data property that holds ‘Technical Details’, instead of being a proper data point on an asset. I’d really like to see ‘Contact and Support’ folded into proper, easier-to-maintain data properties.

Some assets, such as Crumbling Ruins at the time of this writing, have extra or duplicate ‘Contact and Support’ data, which is unfortunately visible in the launcher, the web marketplace, and my custom frontend.
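To illustrate what ‘hacky addition’ means in practice: if you want Contact and Support separate from Technical Details, you have to split the combined blob yourself. This is a rough, hypothetical sketch; the heading markup Epic actually embeds, and the property it lives in, may differ.

```javascript
// Split a combined 'Technical Details' blob at an assumed
// '<h1>Contact and Support</h1>' heading. Purely illustrative.
function splitTechnicalDetails(technicalDetails) {
    var marker = technicalDetails.search(/<h1>\s*Contact\s*(and|&amp;|&)\s*Support\s*<\/h1>/i);
    if (marker === -1) {
        return { technical: technicalDetails, support: '' };
    }
    return {
        technical: technicalDetails.slice(0, marker),
        support: technicalDetails.slice(marker)
    };
}

var blob = 'Polycount: 10k<h1>Contact and Support</h1><a href="mailto:x@y.z">Email</a>';
var parts = splitTechnicalDetails(blob);
console.log(parts.technical); // Polycount: 10k
```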

Seller Videos as ‘Proper’ Data

Similar to the Contact and Support snag, I wish sellers’ videos were stored as proper data instead of having to be parsed out of the description text. Videos are important.
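Until that happens, parsing is the only option. As an example of what that parsing might look like, here is a hedged sketch that pulls YouTube links out of a description’s HTML; it only handles the common URL shapes, and anything else (Vimeo, embeds) would need more patterns.

```javascript
// Extract unique YouTube video IDs from a description's HTML.
// Handles youtube.com/watch?v= and youtu.be/ style links only.
function extractYouTubeLinks(descriptionHtml) {
    var re = /https?:\/\/(?:www\.)?(?:youtube\.com\/watch\?v=|youtu\.be\/)([\w-]{11})/g;
    var ids = [], m;
    while ((m = re.exec(descriptionHtml)) !== null) {
        if (ids.indexOf(m[1]) === -1) { ids.push(m[1]); } // de-dupe repeats
    }
    return ids;
}

var desc = 'Demo: <a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">video</a>';
console.log(extractYouTubeLinks(desc)); // [ 'dQw4w9WgXcQ' ]
```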

The Result

You can read more about the end product in this blog post if you’re interested in exactly what features exist and how far I got. Most likely you read that post before coming to this one, though. A binary release you can download and try right now is available in my repo’s releases. The source code for this project is also available here on my GitHub.

Even if this project means nothing to anyone, it was definitely a fun and worthwhile Sunday for me.

With the release of 4.10, the Epic Launcher seems to have improved its ability to pop up when you don’t want it. Not only does the editor open the Launcher if you don’t have it open, it now forces the Launcher to take focus and brings it to the top of your desktop. There is no user-facing setting to control this behavior at the time of this writing.

You can disable this auto-launching of the Launcher behavior without making any serious changes to your engine.

To do this, simply create a new text file called PerforceBuild.txt inside your Engine\Build folder. Make this text file non-empty: open it up and type something into it, such as “Stop opening the launcher.” Now when you open the editor, the Launcher should no longer open and steal focus automatically.

This does come with some side effects though:

  1. Any time the version of the editor is listed, it’ll show a much longer ‘more technical’ string.
  2. It removes the Recompile C++ button in the Editor (I frankly never use this)
  3. Disables “INI overrides”, a feature rarely used by anyone. You might want to remove this text file if it ends up in your distributed builds, though it shouldn’t affect anything.
  4. Bug and analytic reports to Epic will indicate you are running a ‘Perforce Build’
  5. Crashes that don’t report a runtime callstack might now report a runtime callstack (a good thing?)

For those who haven’t discovered this magic yet, blueprint alignment lets you clean up your blueprint graphs real quick.

Align Sample

This alignment feature comes with no keyboard shortcuts bound out of the box, which is a real shame. If you want to step up your blueprint game though, I highly recommend these easy to use custom shortcuts. You can set up custom shortcuts by navigating your editor to Edit -> Editor Preferences -> Keyboard Shortcuts.

Straighten Shortcuts

This, I believe, is easily the single most powerful shortcut in the Blueprint editor now. Select a bunch of nodes, hit Q, and all the wires straighten. I’m a firm supporter of ‘straight wires’ as opposed to ‘aligned nodes’ in the ‘how you should lay out your blueprints’ debate, so 90% of the time hitting Q is all I need.

Some cases require node alignment though.

Align Shortcuts

This shortcut scheme for alignment is super easy to learn. You are most likely familiar with the WASD paradigm; with this scheme, all you have to do is hold down Shift while pressing W, A, S, or D. Doing so will align nodes along the edge corresponding to your ‘WASD direction’, i.e. Shift+W aligns all top edges, Shift+A aligns all left edges, etc.

When confronted with the rare need of aligning horizontal or vertical centers, just use Alt+Shift+W and Alt+Shift+S to horizontal center align and vertical center align respectively.

This document describes how to set up completely open, absolutely insecure, fully accessible Samba shares on Ubuntu Server. This is incredibly useful for rapid deployment testing from a Windows machine to an Ubuntu Server target, as a deployment can then be done with a simple robocopy or even a “drag and drop” within Windows.

WARNING: This will remove all pre-existing Samba shares on the server. It should only ever be used on non-critical Ubuntu Servers, machines you can nuke at any time. It should never be used on a public-facing machine, as the entire world could then have access to your Samba share and potentially other dangerous things. Use this only when you know that just you, or a trusted network, can access the share.

Requirements

  • A Linux server you have full access to. This guide covers Ubuntu Server, but the instructions should be similar for most Linux distributions.
  • Accepting responsibility for creating an insecure file share on your server
  • Access to a shell on the server, whether it’s a local shell or a remote one (i.e. PuTTY on Windows)

Automated Version

I wrote a script that will do this for you automatically. It will create a Samba share called Drop at /home/Drop. If you don’t care where your shared folder is located or what it is named and just want an insecure Samba share, this is the method for you. Otherwise follow the manual instructions.

If you want to see the source of the automated version, it can be found here on my GitHub.

To do this automatically, log into a shell on your server, then execute the following lines:

wget https://raw.githubusercontent.com/Allar/automated-insecure-samba-share/master/automated-insecure-samba-share.sh -O automated-insecure-samba-share.sh
chmod +x automated-insecure-samba-share.sh
./automated-insecure-samba-share.sh

Your server should restart and you should have a Samba share named Drop ready to be accessed. See the end of this guide for details on how to access it.

Manually Setting up the Samba Share

1. Install Samba if it isn’t installed already. This can be done with:

sudo apt-get install samba

2. Delete the default Samba configuration file.

sudo rm /etc/samba/smb.conf

3. Make a directory for your Samba share. I will be using the directory /home/Drop

sudo mkdir /home/Drop

4. As my chosen directory is outside my user directory, it was created by the root user (sudo does that for us). We don’t want outsiders accessing the Samba share as root, so we’ll make this directory owned by the current shell user instead (assuming you are not logged in as root). Replace YOUR_USERNAME_HERE with your user name.

sudo chown YOUR_USERNAME_HERE /home/Drop

5. Begin editing a new Samba configuration file. Replace YOUR_USERNAME_HERE with the same user name you used in the previous step. Replace YOUR_SERVER_HOSTNAME_HERE with the hostname of your server.

[global]
guest account = YOUR_USERNAME_HERE
map to guest = bad user
workgroup = WORKGROUP
server string = YOUR_SERVER_HOSTNAME_HERE
security = user
name resolve order = hosts lmhosts
create mask = 0777
directory mask = 0777

[Drop]
path = /home/Drop
guest ok = yes
read only = no
writable = yes
public = yes

6. If you want to add additional Samba shares, copy the entire [Drop] definition and paste it at the bottom of the configuration file. Then change [Drop] to [THE_NAME_OF_YOUR_SHARE] and change path = /home/Drop to the path of your new share’s directory.
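For example, a second share named Builds that exposes /srv/builds (both names illustrative) would look like:

```
[Builds]
path = /srv/builds
guest ok = yes
read only = no
writable = yes
public = yes
```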

7. Be sure to allow Samba access through your firewall. I’m using UFW here as it is pretty straightforward.

sudo ufw allow samba

8. Reboot the server.

sudo reboot

Accessing the new Samba Share

You should be able to access your Samba share in Windows under Network in any Explorer window. If you do not see it, type \\ServerHostName into the address bar; you should see your new Samba share and have full access without any form of credentials.

Samba Share access in Windows

This document covers the bare basics on how to get your Unreal Engine 4 game project able to build both Windows and Linux dedicated server builds, using just a Windows machine for compiling.

Requirements

Adding Dedicated Server Support

Note: The word Project in any referenced file name or code will refer to your project’s name. For example, my project for this tutorial is named GenShooter, so in my case Project.Target.cs refers to GenShooter.Target.cs. ProjectTarget in my case would be GenShooterTarget.

  1. Navigate to your Project’s Source folder. You should see some .Target.cs files.
  2. Make a copy of the Project.Target.cs file and rename it ProjectServer.Target.cs; be sure not to grab ProjectEditor.Target.cs.
  3. Open up ProjectServer.Target.cs in your favorite text editor. I’ll be using Visual Studio here.
  4. Rename all instances of ProjectTarget to ProjectServerTarget.
  5. Change Type = TargetType.Game; to Type = TargetType.Server;.
  6. Save this file. Your ProjectServer.Target.cs file should look something like this now:
// Your Copyright Text Here

using UnrealBuildTool;
using System.Collections.Generic;

public class GenShooterServerTarget : TargetRules
{
	public GenShooterServerTarget(TargetInfo Target)
	{
		Type = TargetType.Server;
	}

	//
	// TargetRules interface.
	//

	public override void SetupBinaries(
		TargetInfo Target,
		ref List<UEBuildBinaryConfiguration> OutBuildBinaryConfigurations,
		ref List<string> OutExtraModuleNames
		)
	{
		OutExtraModuleNames.AddRange( new string[] { "GenShooter" } );
	}
}

Building your Dedicated Server

  1. Right-click your project’s .uproject file in your project’s folder and “Generate Visual Studio project files”.
  2. Now we need to build our project in Visual Studio with the Development Server configuration for the Windows platform, and for the Linux platform as well if you have the Linux x86 Cross-Compile Toolchain installed. To do this, build your game project just as we built it in the past tutorials but this time with the Development Server build configuration. When the Windows server is done building, your output should look like this. Here is the build output for the Linux server.

Now your project supports building dedicated servers for all platforms, including Linux. Whether Linux will compile depends on whether your Linux x86 Cross-Compile Toolchain is set up correctly.

Packaging Your Dedicated Server

  1. Open up your project in the UE4 Editor.
  2. Open up the Project Launcher using Window -> Project Launcher. This should greet you with a window that looks like this. This window allows for launching various project deployment configurations.
  3. To build your project in dedicated server form, we need to make a custom build profile. Click the “Add a new custom launch profile” button in the bottom panel that looks like a plus sign. This should open up the custom profile editing screen.
  4. Choose your Project in the Project drop down. If you do not see it, click browse and feed it your project’s .uproject file.
  5. Change Cook mode from On the fly to By the Book. Select the WindowsServer platform under Cooked Platforms. Select the LinuxServer platform as well if you have the Linux x86 Cross-Compile Toolchain installed. Also select en under Cooked Cultures, or select your base language if your project is not English centric. Click here to see what these settings look like.
  6. Change Package mode from Do not package to Package & store locally. Leave all the settings in here blank by default.
  7. Change Deploy mode to Do not deploy.
  8. Click “Back” on the top right of this window to go back to the main Project Launcher Window.
  9. Click the “Launch This Profile” button next to your new custom profile. This button looks like the Play button in the level editor window.
  10. This will begin the process of cooking and packaging your dedicated servers for your selected platforms. This will take a while. When it is done, it should look like this.

Locating your Dedicated Server Builds

Now that you have packaged your dedicated server builds, you can find them in your project’s Saved\StagedBuild directory. If you have packaged your regular game builds, you’ll see them listed here as WindowsNoEditor and LinuxNoEditor as well. You are free to copy these builds to your target machines and distribute them as you like.

Note about running the Windows Dedicated Server

If you load the Windows Dedicated Server, it will seem that nothing loads up and that there is no UI or command prompt of any kind. If you open up your Windows Task Manager, you will see that your server is in fact running, but it is invisible. If you would like to see the log output of your Windows Dedicated Server, you need to run it with the -log command argument. The easiest way to do this is:

  1. Hold Shift and Right-click the folder your Windows Dedicated Server is in and choose “Open command window here.”
  2. Type ProjectServer.exe -log and hit Enter. In my case, this is GenShooterServer.exe -log.
  3. This will load your Windows Dedicated Server with a log window.

Note about running the Linux Dedicated Server

After copying your files to your Linux server (which is outside the scope of this tutorial), you will need to run ProjectServer, located in your build’s Project/Binaries/Linux/ folder.

In my case, loading it from a terminal would look like:

GenShooter/Binaries/Linux/GenShooterServer

If you want to load it and then send it to the background so that it will not terminate when you close your terminal session, you can load it with:

nohup GenShooter/Binaries/Linux/GenShooterServer &

To kill a server that has been sent to the background, find its process name using the command top, then route that name to pkill, which would look like this:

pkill GenShooterServe

Your process name is usually your server binary’s name truncated to 15 characters, since Linux limits process names to a 16-byte buffer (including the terminator).