tl;dr The amount of Nullsec manufacturing has increased from July 23rd to August 23rd.
The evolution towards independent, self-sustaining manufacturing in Nullsec is an area of high interest to me. I want trade and manufacturing hubs to develop beyond what we have today, but given the need to import materials for invention and Tech 2 production, I don’t see that happening under the current industrial framework.
I wanted to see if the steps taken in Crius were moving us in the right direction so I turned to my data. I’ve been recording the system Index data from the API on an almost daily basis so I joined this data with sovereignty data to see if Nullsec manufacturing has been increasing.
The top 50 corporations, ranked by total production, have increased by 46% on average. Across all corporations the change is 114%.
The full table is available below with conditional formatting placed on a narrow band excluding outliers to better visualize the changes.
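The comparison behind these numbers boils down to a percent-change calculation per corporation. Here is a minimal sketch of that calculation; the corporation names and production values below are made-up placeholders, not my actual data or schema.

```python
# Hypothetical sketch of the month-over-month comparison.
# snapshot[corp] = summed production value for that corporation.

def percent_change(old, new):
    """Percent change from old to new, e.g. 10 -> 15 is +50.0."""
    return (new - old) / old * 100.0

july = {"CorpA": 1000.0, "CorpB": 400.0, "CorpC": 50.0}
august = {"CorpA": 1600.0, "CorpB": 500.0, "CorpC": 140.0}

# Change per corporation present in both snapshots.
changes = {corp: percent_change(july[corp], august[corp])
           for corp in july if corp in august}

# Top corporations by current production, then the average of their changes.
top = sorted(august, key=august.get, reverse=True)[:50]
top_changes = [changes[c] for c in top if c in changes]
avg_change = sum(top_changes) / len(top_changes)
```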
Ishtars have always been a solid performer, generating on average about 31.2M profit per trade. The nerf coming in the Hyperion release doesn’t faze me. I’ve learned that you have to roll with the punches and not let adjustments like this impact your overall trading strategy. Items move in and out of popularity over time. I’ll just stop working with the Ishtar hull for a while and see how demand changes.
If you have any interest in 3rd party development or databases, this post should be entertaining, as I share my current lackluster architecture for saving some of the new Eve API data.
I have a Raspberry Pi running MySQL that I use as a basic storage location for various databases. One of them is a new database that contains index values from the CREST API endpoint with a timestamp, so I can start to develop an archive of the values.
The current solution I have for importing the new index data isn’t very elegant. I’m using Lockefox’s Crius Toolset sheet to grab a CSV every day, adding two columns for a primary key and a timestamp, and importing it, which produces a table that looks like this:
"transactionID"  "transactionDateTime"  "solarSystemName"  "solarSystemID"  "manufacturing"
"103724"  "2014-08-19 13:28:00"  "Jouvulen"    "30011392"  "0.0339317910000"
"103725"  "2014-08-19 13:28:00"  "Urhinichi"   "30040141"  "0.0236668590000"
"103726"  "2014-08-19 13:28:00"  "Akiainavas"  "30011407"  "0.0285709850000"
"103727"  "2014-08-19 13:28:00"  "Seitam"      "30041672"  "0.0162879230000"
"103728"  "2014-08-19 13:28:00"  "BK4-YC"      "30003757"  "0.0143238350000"
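The daily import step amounts to the following sketch. sqlite3 stands in for my MySQL instance here, and the CSV column names are assumptions based on the table above, not the toolset's actual export format.

```python
import csv
import sqlite3
from datetime import datetime

# systemCosts mirrors the table shown above; sqlite3 stands in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS systemCosts (
        transactionID INTEGER PRIMARY KEY AUTOINCREMENT,
        transactionDateTime TEXT,
        solarSystemName TEXT,
        solarSystemID INTEGER,
        manufacturing REAL
    )""")

def import_csv(conn, path, stamp=None):
    """Load one day's CSV, tacking on the timestamp column.

    The primary key auto-increments, so only the remaining four
    columns need to be supplied.
    """
    stamp = stamp or datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, newline="") as f:
        rows = [(stamp, r["solarSystemName"], int(r["solarSystemID"]),
                 float(r["manufacturing"]))
                for r in csv.DictReader(f)]
    conn.executemany(
        "INSERT INTO systemCosts (transactionDateTime, solarSystemName, "
        "solarSystemID, manufacturing) VALUES (?, ?, ?, ?)", rows)
    conn.commit()
```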
The table is growing and starting to show how under-powered the Raspberry Pi is for data processing. Most of my issue stems from a lack of scalable design on my part. I have no table indexes and am joining with the bulky mapDenormalize table.
I have a love-hate relationship with the mapDenormalize table. If you have ever worked with it, you know that it is a beast: 502,558 rows and 15 columns, five of them DOUBLE values, coming in at 214MB. Normally that’s not a problem for a server with plenty of CPU cycles and RAM, but the 700MHz ARM processor on the Raspberry Pi has a hard time with multiple JOIN and GROUP BY operations.
Here’s a query I was running against my dataset that ran for 15.5 minutes (!).
SELECT systemCosts.solarSystemName, systemCosts.transactionDateTime, systemCosts.manufacturing, mapRegions.regionName, mapDenormalize.security
FROM systemCosts
JOIN mapDenormalize ON (systemCosts.solarSystemID = mapDenormalize.solarSystemID)
JOIN mapRegions ON (mapDenormalize.regionID = mapRegions.regionID)
GROUP BY transactionDateTime, solarSystemName;
So: two full-table JOINs, a GROUP BY on two columns, and not an index in sight. Oof. I sent a screenshot off to James, my development partner.
My first solution was to remove extraneous data from the mapDenormalize table. After removing the groupID, constellationID, orbitID, x, y, z, radius, celestialIndex, and orbitIndex columns, I trimmed the table down even further by deleting all entries that were not star systems. What was left was 7,929 rows coming in at around 1MB.
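The slimming step can be sketched as one pass of SQL, shown here running against sqlite as a stand-in for MySQL. The groupID filter is an assumption on my part (group 5 should be "Solar System" in the SDE, but verify against your own invGroups table before deleting anything):

```python
import sqlite3

def slim_map(conn):
    """Replace mapDenormalize with a copy holding only what the
    cost query touches: solar-system rows and a handful of columns."""
    conn.executescript("""
        CREATE TABLE mapSlim AS
            SELECT itemID, typeID, solarSystemID, regionID,
                   itemName, security
            FROM mapDenormalize
            WHERE groupID = 5;   -- 5 = Solar System in the SDE (verify!)
        DROP TABLE mapDenormalize;
        ALTER TABLE mapSlim RENAME TO mapDenormalize;
    """)
```

Building a fresh table and renaming it avoids a long string of ALTER TABLE DROP COLUMN statements, and leaves the original untouched if anything fails partway.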
I’m glad to report that my terrible query is now running in 20 seconds. This was a small step to getting my growing index dataset to a usable state while I write something more permanent.
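The "more permanent" fix starts with the indexes I admitted to not having. A sketch of the obvious candidates, covering the join and grouping keys (again with sqlite standing in for MySQL; the names are my own):

```python
import sqlite3

def add_indexes(conn):
    """Index the join and GROUP BY keys so lookups stop scanning
    the full tables."""
    conn.executescript("""
        CREATE INDEX IF NOT EXISTS idx_costs_system
            ON systemCosts (solarSystemID);
        CREATE INDEX IF NOT EXISTS idx_costs_when
            ON systemCosts (transactionDateTime);
        CREATE INDEX IF NOT EXISTS idx_map_system
            ON mapDenormalize (solarSystemID);
    """)
```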
We are currently experiencing login issues with Tranquility. More information will be made available as we work on solving the problems!
— EVE Online Status (@EVE_status) August 12, 2014
Today we saw a large outage to CCP’s servers in London as the number of BGP routes advertised on the Internet passed a critical milestone. If you are unfamiliar with BGP, the easiest definition is that it is the protocol that allows major ISPs to talk to each other and share information on where to send traffic in order for it to reach its destination. Without ISPs peering using BGP, routers would not know where to send traffic and nothing would reach the target host.
Certain models of Cisco routers that have not been modified from their default configuration became unstable after accepting more than 512,000 routes. Users all across the Internet saw strange behavior as routers began to drop traffic, slowly pass traffic through software routing, or crash entirely. I saw CCP’s server in London become unreachable from San Francisco for several hours.
This issue was written about months ago, but it seems that a lot of people were caught by surprise. There is even a Cisco-approved interim fix to buy more time by allocating additional memory space to store additional IPv4 routes (1).
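From what I've read, the interim fix on the affected Catalyst 6500/7600 platforms shifts TCAM entries from the MPLS/IPv6 region over to IPv4. Roughly, it looks like this; the value is platform-dependent and this is from memory, so follow Cisco's procedure document (1) rather than this sketch:

```
! Reallocate TCAM capacity toward IPv4 routes (units are thousands
! of routes). A reload is required for the change to take effect.
mls cef maximum-routes ip 1000
```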
Further reading on this topic can be found on this r/networking post.
(1) CAT 6500 and 7600 Series Routers and Switches TCAM Allocation Adjustment Procedures [link]
Interactive fast-paced games that operate over networks present many challenges to game designers who want to deliver a fluid user experience. I recently stumbled on a paper written by J.M.P. van Waveren in 2006 that details the advancements in network architecture in Doom III over its predecessors such as Quake I, II, and III.
If you want some insight into what CCP has to consider when trying to maintain a cohesive grid for our ships to fight on, read the Abstract, Section 1, and Section 2. Further sections go into Doom III’s specific implementation of transmitting data between server and client.
J.M.P. van Waveren. “The DOOM III Network Architecture”. Id Software, Inc. 2006. [PDF]
The color shift indicates that manufacturing is spreading out and the landscape is starting to adjust to new levels. 67 systems have gone up more than 1% and 21 are down more than 1%.
Here’s a snapshot of the top 50 systems.
Looking at a 24-hour range in the manufacturing index data, it looks like people have started to move out of the major manufacturing centers: the top 50 list shows a drop across the board.
Here is a look at the biggest changes in the one-day period. Many different systems have seen large increases, which again suggests that manufacturing is spreading out.
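The day-over-day comparison comes straight out of the systemCosts table with a self-join between two snapshot dates. A sketch of the query, with sqlite standing in for MySQL and the dates as examples only:

```python
import sqlite3

def index_changes(conn, day1, day2):
    """Return {solarSystemName: percent change in the manufacturing
    index} between two snapshot timestamps in systemCosts."""
    rows = conn.execute("""
        SELECT a.solarSystemName,
               (b.manufacturing - a.manufacturing)
                   / a.manufacturing * 100.0
        FROM systemCosts a
        JOIN systemCosts b
          ON a.solarSystemID = b.solarSystemID
        WHERE a.transactionDateTime = ?
          AND b.transactionDateTime = ?
    """, (day1, day2)).fetchall()
    return dict(rows)
```

Sorting the resulting dictionary by value gives the biggest movers in either direction.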