Earlier this year I bootstrapped a project called OpenSpartan. Its primary intent is to fill a gap for those who want to tap into Halo Infinite data - the developers that build tools such as Halo Data Hive and others. As part of this work I stumbled across an interesting piece of data - every game mode and map, when queried through the Halo Infinite API, returns some rudimentary telemetry, showing the number of recent and all-time plays, along with the number of ratings, bookmarks, and the average rating.
It comes as no surprise if you read my blog or follow me on Twitter (or maybe you're even following OpenSpartan if you're that into niche content) that the Halo Infinite API has been somewhat of a focus domain for me for the past year. It has single-handedly taken over most of my free engineering time because it's that interesting to me as a frequent Halo player.
If you've been following some of my recent work, you might've caught my latest blog post on enabling hidden game modes and maps in Halo Infinite. Well, clearly my curiosity got the best of me, because this post is very much a continuation of that story.
Over the past couple of months since the Halo Infinite release, you might've seen some news and rumors about upcoming game modes in the game. I wondered how those folks got their hands on the new experiences and just assumed it was typical data mining of game files to look for new assets. That was until I really got hooked on the Halo Infinite story, started exploring the undocumented Halo Infinite API, and realized that the answer to my question was here all along!
If you've been following me on Twitter, you probably already know that I spend inordinate amounts of time reverse engineering the Halo Infinite API. As I am working on my .NET wrapper for it (astutely called Grunt), I realized that putting together a nice-to-use blanket over the many GET HTTP APIs is relatively straightforward. There are some permissions here and there that I need to figure out, or in some cases undocumented query parameters to fiddle with.
Last year I was upgrading my computer's power supply unit (PSU) in an effort to prepare for the GeForce RTX 3090. It so happened that I swapped PSU manufacturers as well while I was at it.
NOTE This post is part of a series about the Halo Infinite Web API. You can read more about how I started in the first post, where I talk about the process of figuring out the data endpoints, as well as more about the authentication process. You can also explore the .NET wrapper for the API that makes endpoint interaction a bit easier. If you’ve been following my blog, you know that I’ve been fiddling quite a bit with the Halo Infinite API.
I like reading papers on arXiv, but I like discovering them more through Andrej Karpathy's arxiv-sanity-lite. The little challenge with the latter is that there is no way to get those papers in an RSS feed that I can then hook up to an RSS reader, like Feedly or NetNewsWire if you're on a macOS machine. So, as a starting step, I thought I'd try and fix this with an open-source project, called arxiv-sanity-feeds.
Now that I've gotten authentication out of the way, it's time to actually get something useful done with the Halo Infinite API. This whole saga started with me wanting to get match stats so that I could analyze them outside the game, and that's what I thought I'd tackle first.
A week ago I was finally able to figure out which endpoints the Halo Infinite Web API uses. Now the challenge became figuring out how to properly request data from them, as there were two component pieces to every request - a Spartan token and a clearance. After fiddling with the API a bit, and looking at the endpoint that aggregates all other endpoints, I learned that there is a straightforward way to get all the right tokens through a number of chained requests, which are documented in this blog post.
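To give a flavor of what that chain looks like, here is a minimal sketch in Python. Because the API is undocumented, the endpoint URLs, header names, and response fields below are assumptions for illustration (pieced together from traffic inspection), not confirmed values:

```python
import requests

# Hypothetical endpoint URLs - treat these as placeholders, not confirmed values.
SPARTAN_TOKEN_URL = "https://settings.svc.halowaypoint.com/spartan-token"
CLEARANCE_URL = "https://settings.svc.halowaypoint.com/oban/flight-configurations/active"

def build_halo_headers(spartan_token: str, clearance: str) -> dict:
    """Both tokens ride along as headers on every subsequent API request."""
    return {
        "x-343-authorization-spartan": spartan_token,  # assumed header name
        "343-clearance": clearance,                    # assumed header name
    }

def get_spartan_token(session: requests.Session, xsts_token: str) -> str:
    # First link in the chain: exchange an XSTS (Xbox Live) token for a Spartan token.
    resp = session.post(SPARTAN_TOKEN_URL, json={"TokenType": "Xbox", "Token": xsts_token})
    resp.raise_for_status()
    return resp.json()["SpartanToken"]  # assumed response field

def get_clearance(session: requests.Session, spartan_token: str) -> str:
    # Second link: use the Spartan token to request the active clearance (flight) ID.
    resp = session.get(CLEARANCE_URL, headers={"x-343-authorization-spartan": spartan_token})
    resp.raise_for_status()
    return resp.json()["FlightConfigurationId"]  # assumed response field
```

The key takeaway is the shape of the chain: each request's output feeds the next request's input, and the final pair of tokens is attached to every data call.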
As with most of my reverse engineering stories, this one starts with "Hmm... I wonder if I can get this data anyway?" I mentioned in my previous blog post that I had just finished the Halo Infinite campaign, and the next step was multiplayer, which also meant that I wanted to keep track of my stats to see just how badly I play against real people and aimbots.
I recently moved most of my websites over to Netlify, because, well - I work there, and I want to be dogfooding as much of our product as possible. As part of this, I enabled my sites to use Netlify Analytics, which has been a fantastic lens to look at the site usage from a server-side perspective.
Spotify's foray into podcasting may be fairly recent, but I'm already discovering some interesting APIs that I can play with. The podcaster dashboard is tremendously useful and offers way more data than Apple and Google combined (with better reliability too). The more I use it, the more I think it would be helpful to build some kind of automation mechanism to ingest the data into my own storage and then process it outside the default dashboard boundaries. In the process, I also spotted a little thing that I wanted to share with readers - the ability to enable experimental features inside your podcast's private view.
Twitter just announced that they are re-launching their verification program, and now you can check whether you are eligible directly from your Twitter account settings. Neat! Which naturally made me curious as to what they use behind the scenes, since previous iterations of the process relied on a form that was likely manually reviewed. Before I dive in, I want to call out - no, I am not important enough to be verified, nor do I care about getting the blue tick.
Recently my Xbox started yelling at me every time I started taking a new game capture, reminding me that the storage for my account is full on the Xbox Live network. When I first got it, I thought that it had something to do with the fact that I am blocking outbound telemetry requests through PiHole, and all of a sudden, my local cache filled up.
Every year (unless you're one of those Apple Music people) music fans rejoice to get their Spotify Wrapped, or - the musical year in review. It's a fun way to explore the most frequently listened to songs and artists. And every year up until this one, if memory serves me right, the experience could be viewed in the browser. And then, I paid the site a visit in the year 2021.
The other day, Clint Rutkas (yes, that Clint Rutkas) tweeted about a potential scenario that GitHub does not support built-in, but that could be useful for folks who want a deeper look at the activity of their repositories - identifying "center of gravity" issues. What that means is essentially finding the issues that are cross-referenced the most from other issues.
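One way to sketch this is with GitHub's issue timeline API, which emits a `cross-referenced` event every time another issue or pull request mentions the one you're looking at. The repo name and token handling below are illustrative, and pagination is omitted to keep the sketch short:

```python
from collections import Counter

import requests

def count_cross_references(events: list) -> int:
    """Count 'cross-referenced' events in a single issue's timeline."""
    return sum(1 for e in events if e.get("event") == "cross-referenced")

def centers_of_gravity(repo: str, issue_numbers: list, token: str) -> list:
    """Rank issues by how often other issues reference them.

    `repo` is "owner/name". Heavily-referenced issues would need a
    pagination loop, which is left out of this sketch.
    """
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    }
    counts = Counter()
    for number in issue_numbers:
        url = f"https://api.github.com/repos/{repo}/issues/{number}/timeline"
        events = requests.get(url, headers=headers).json()
        counts[number] = count_cross_references(events)
    # Most cross-referenced issues first - the "centers of gravity".
    return counts.most_common()
```

Sorting by the cross-reference count surfaces the issues everything else keeps pointing at.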
It's ridiculous that I have to write this, but it looks like there is no hope of this ever becoming an actual feature of the Microsoft Edge web browser - setting a blank new tab. Not a tab with minimal ads, but one that is just blank. You know, the thing that you could set in Firefox for ages.
I got into the habit of signing my GitHub commits. It’s awesome - anyone that looks at my repositories is able to tell that it really came from my account (and not someone just using my email). As an added bonus, I get a fancy badge associated with my commits, which makes me feel special (since I am not really “verified” anywhere else).
I've had a Stream Deck for a while. It's basically keyboard shortcuts, mapped to an individual button, with a visual set up for that button. It's convenient, it's easy to use, and it looks really good on the desk among the multitude of other RGB lights that are embedded in modern computing technology.
I encountered a unique challenge today - I needed to cut out part of a video hosted online with Azure Media Services for reference. The video in question is Into Focus, the 'show within a show' that aired at Microsoft's Ignite conference earlier this week.
2021 turns out to be a good year for folks like myself who love collecting their own personal metrics. Earlier, I chatted about collecting air quality data and Twitter data - and now, GitHub contribution data. In this post I will describe a simple approach to grabbing your own GitHub contribution statistics without having to jump through too many hoops.
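The contribution calendar is exposed through GitHub's GraphQL API via `contributionsCollection`. A minimal sketch, assuming you have a personal access token handy:

```python
import requests

# GraphQL query for the contribution calendar behind the profile graph.
CONTRIB_QUERY = """
query($login: String!) {
  user(login: $login) {
    contributionsCollection {
      contributionCalendar {
        totalContributions
        weeks { contributionDays { date contributionCount } }
      }
    }
  }
}
"""

def fetch_calendar(login: str, token: str) -> dict:
    """Fetch the contribution calendar for a given user."""
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": CONTRIB_QUERY, "variables": {"login": login}},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data["user"]["contributionsCollection"]["contributionCalendar"]

def flatten_days(calendar: dict) -> list:
    """Turn the nested weeks/days structure into a flat (date, count) list."""
    return [
        (day["date"], day["contributionCount"])
        for week in calendar["weeks"]
        for day in week["contributionDays"]
    ]
```

Once flattened, the per-day counts are trivial to dump into a CSV or a local database for year-over-year comparisons.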
I've used my Synology NAS for some time now - about two years and counting, and it's been a great tool to backup information locally (e.g. from my phones or shared computers). Then, I got to thinking - it's pretty much a mini-computer. It has a quad-core 1.4GHz CPU, a whopping 2GB RAM, and _plenty_ of storage. I can do more with it than just use it for occasional data dumps. That is - I could use it for frequent data dumps.
Back in 2010, Nikon built a nice 14.2MP entry-level DSLR camera - the D3100, which I've been using for some time now. It's a nice camera, and for 99.9% of cases it works perfectly well. It takes nice shots, supports interchangeable lenses, can film decent quality (by that I mean 1080p) video, and has some extensibility points.
I just recently got a Stream Deck - it's a wonderful tool to automate some of the more boring (read: routine) tasks. Literally with a click of a button I can kick off a bunch of automation. Apparently it can do everything _but_ launch Windows Store applications.
As I was fiddling with some automation scenarios at home, I thought of putting the Synology Network Attached Storage (NAS) to good use. That is, in addition to all the photo backup stuff it's already doing. At the end of last year, I wrote a blog post about building a simple system to maintain evergreen notes, based on Hugo, Docker, and, well, that's it - there are only Markdown files in the mix. Evergreen notes in this context are nothing other than a personal Wikipedia of sorts.
I had a chat with a friend the other day, and he mentioned off-hand a rather unique problem in his life - the lack of a short username for his Twitter account. It seems like everything good is already taken, which makes sense considering that Twitter itself is 14 years old. You can bet that in 14 years, a lot of people got very creative with usernames.
Way back in 2018, I coded up a little project that allowed me to record my Nest camera stream in a _very_ hacky way. I wanted to get the raw video off of the camera without paying for a Nest Aware subscription.
I am naturally curious about the APIs that the devices in my house use, so when I got an air quality monitor, one of the first things I did was fiddle with the REST APIs that were made available through the device.
Some of the more traditional approaches to taking digital notes work quite well in 99% of cases - I think most of the tools on the market are doing a marvelous job. But I often caught myself needing something more, specifically for notes that I wanted to write once and frequently refer to later (e.g. details about specific projects that don't change often).
In 2020, it might seem like the art of crafting your own personal site became a thing of the past. Most of the engagement happens on social networks or inside walled gardens.
It's the perfect time to drink coffee, sit inside, and code. There was just one problem with that for me - I actually need to step away from my machine from time to time to brew some coffee, and while I was gone, the computer would go to sleep, and I'd have to wake it up and enter my credentials all over again.
It finally happened - after almost five years of sticking with macOS and a MacBook, I gave up and built my own desktop computer.
Got an interesting problem today - had to re-image a Surface Pro 3, but only had a 64GB flash drive handy. Following the typical dance, I installed the Windows 7 USB/DVD Download Tool, downloaded a Windows 10 ISO from my Visual Studio subscription download site, used WUDT to put the ISO on the flash drive and… nothing. My Surface Pro 3 would just refuse to even look at the USB for boot information.
Curiosity got the best of me about a year ago when Pokemon Go came out, so I had to dig up ways to inspect traffic from iOS on a Mac. Since then, time has passed and today I decided to do it again, but couldn’t find a decent guide on how to do that (clearly I was missing some steps), so once I figured out what goes where, I thought I would do a write-up for posterity, and so that I can re-use it later.
This is one of those questions that gets asked every week or so - I want to build documentation for my package the same way docs.microsoft.com does, but on my own server/cluster. While today we do not provide the entire infrastructure as a single open-source entity (but you can certainly read up on what we do behind the scenes), I thought I would write a short guide on how you can document your own NuGet packages and then publish the documentation on GitHub pages.
With the release of Windows 10, all photos are now opened by default with the help of the Photos app. I like the Photos app, but I also enjoy the UI of the traditional Windows Photo Viewer.
I recently overhauled my network setup to get better WiFi coverage, as well as more visibility into what traffic actually flows through my local network. After some relatively short conversations with my colleagues, I landed on Ubiquiti gear.
As part of the project that I am working on, I need to make sure that I allow the user to specify what GitHub repository they want to bind to their Visual Studio Team Services build definitions. I am using the library for that, but no matter what I tried, the repository just did not show up.
Gone are the days when you could get away with not serving a website through HTTPS. Whether you are handling private information or not, there is no excuse to have a site residing in plain HTTP land. That said, this tutorial assumes that you, the reader, already have some knowledge as to why HTTPS is necessary.
The Windows Phone team recently announced that it will remove the Windows Phone apps section from the Zune Desktop client, because users mostly access those through the web interface or the mobile client directly on the device.
If you own a Samsung Windows Phone device, you probably noticed that there is an update available for the stock Diagnostics application. The default build is 1004 and the new one is 0210.
For a college project I had to set up an Ubuntu box and work on a network analysis assignment. I've worked on this kind of task on Windows, and got some pretty interesting results by sniffing Windows Phone, Xbox, and Windows 8 traffic with Wireshark. Ubuntu is a new environment for me, and I figured that the actual capture process is set up a bit differently.
With the new update, Dell also updated their EM application, which, surprisingly, is still pretty horrible. It didn't change much - some functionality is gone while other parts were unnecessarily hidden.
I wrote an article on this topic not long ago, so this video was created both to show how it's done correctly and to demonstrate how it is possible to use a local Isolated Storage Explorer on a Windows Phone 7 device.
I just released a new video that shows how it is possible to access system applications inside a locked emulator image. I already described the details in one of my articles, so this video is more of a proof and a hands-on demo.
As you probably know, the Windows Phone SDK comes with an emulator that is locked down to the maximum – the developer only has access to Internet Explorer and to a limited number of settings. However, today I found out an indirect (and maybe not that optimal) way to access various applications that are blocklisted, but are still available on the device.
Dropped phones are not that uncommon of a phenomenon. My idea was to find a way to detect when this happens while a third-party application is running on Windows Phone. No phones were destroyed or damaged during the experiment.
The Isolated Storage Explorer Tool is new with the Mango SDK (7.1). In this video I talk about the general capabilities of the application and why you should use it if Isolated Storage is a component of your application.
The new Windows Phone Mango SDK introduced a testing tool tied to the device emulator, allowing developers to simulate location data and readings from the accelerometer sensor (it now also allows taking screenshots).
Each Windows Phone OS-powered device has its own way of communicating with its hardware. In a non-public environment, this is done through a COM (Component Object Model) layer. A DLL providing this layer is usually shipped as part of the OS or as part of an official OEM application. When it is distributed the second way, it is fairly easy to intercept the XAP and extract the DLL. And that's when the experiments begin.
For those who were developing for Windows Phone for quite a while, you probably know that the emulator itself exposes quite a few gems. The one I found today is rather useless at this point, but it’s interesting nonetheless.
I found out that the Xbox Live Game Marketplace content is syndicated via a web service tied to the Xbox Live CDN. The service returns enough information to build my own syndication client, which will be able to read game data about various titles that are currently available to be downloaded through the Xbox Live Marketplace. Here, I will explain some details on how exactly the queries can be built.
Have you ever wondered if the default YouTube application can be replaced? With tight system integration to the level where it has its own URI scheme registered, it seems like a sealed deal and developers can't do anything about it. What developers don't know is that it is possible to fully replace the default YouTube application as long as you take its identity. Apparently the Windows Phone OS recognizes applications by ID only.
Windows Phone 7 comes with built-in support for YouTube. The system has a dedicated URI scheme registered for it, and I talked about it a while ago. It is pretty cool if the developer knows the URI scheme so that the application can be initiated from inside another application, but it is even cooler to disassemble the default YouTube application itself and attempt to integrate Microsoft-built capabilities in your own application.
I recently started working with the Netduino microcontroller, and one of the initial projects I decided to tackle was creating a better sample for a LED matrix shield. It wasn't really complicated - overall, it took me around an hour to put everything together and test it on a real device. (Image lost in the transition to the new blog.) Here are some of the things I added to the updated sample: automatically initializing the I2CDevice instance when the LEDMatrix class is instantiated.
By default the Windows Phone emulator is pretty limited in terms of applications that are available out-of-the-box. In fact, Internet Explorer is the only application that is available – the rest are apps that are side-loaded. I already talked about a way to invoke the default YouTube application and about some other hidden call-related features. Today I found an interesting new access point that allows me to work with the Maps application without actually having the app accessible in the main menu.
I use Zune a lot, having started with the 4GB player, and now it’s available on Windows Phone 7. Though the WP7 player is labeled as Music Hub, it appears under the Zune icon and incorporates Zune’s organization, providing many of the same capabilities as the desktop client.