I eagerly await the presentation of your research. I'm sure you've formed a wonderful hypothesis, isolated and controlled your experimental variables correctly, and aren't totally stuck at the "observe phenomena" stage of the process.
It's a dataset analysis, so there aren't experiments or controls. Luckily you don't need much peer review for that. If you have an experimental design you'd like to suggest, I'll include it. Serious question: where should it go? Access Guide, Rules Suggestions? I'm not sure.
Which would require a semi-reliable environment (some option for skill-based matchmaking, etc.) and a sufficiently sized data sample. Both are impossible for lack of a sufficiently large playerbase. Working with a 52% rate based on a sample size of 17 is usually not great.

We did, however, do some testing: betting on individual players after the turn 1 rolls in Kill Missions, which was correct 28/30. The method was: horrible matchup (e.g. a 10-order Druze list against a real list) > better player (subjectively, in our opinion; notably not Fred the casual gamer, who has been playing for 5 years but only cares about painting) > first turn wins, for lack of other criteria (i.e. when we didn't know either player). The two matches we didn't call correctly were ties, by the way.

The hypothesis to check was Skill > First Turn, but first turn is a hell of a drug if you know what you're doing. Conclusion: 28/30 is many standard deviations away from chance. 60-70% correct is my success rate for any sort of online game I'm semi-invested in (Battle Pass stuff, etc.); 93.3% is outrageously good for semi-informed, low-stakes guessing. This is not a very balanced game at the moment. Especially in a tournament format.
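For anyone who wants to check the "far from chance" claim, here's a quick sketch (the helper function is mine, not part of our testing) of the odds of calling 28 or more of 30 games correctly by pure coin-flipping:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of calling 28+ of 30 matches correctly by pure 50/50 guessing.
# For context: the guessing mean is 15 correct, sd = sqrt(30 * 0.25) ~ 2.74,
# so 28 sits roughly 4.7 standard deviations above chance.
p_guessing = binom_tail(30, 28)
print(f"{p_guessing:.2e}")  # roughly 4.3e-07
```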
Back to where we started. You are, at best, identifying what is based on current data (observation of the phenomena), not why, or how to manipulate it (that pesky model/theory part).
Let's just jump to the end: you want to be able to dismiss anything you don't like, so you're demanding an unreasonable level of proof. You have not and will not meet your own proposed threshold. Your argument reeks not just of moving goalposts but of relocating them to another country.

You accused a particular group of groupthink and said that they engaged in low-level statistical analysis to back up their claims. You called this half-assed and demanded a level of research that would go into a scientific paper submitted to a journal. This is laughable. It would render any thought or observation about the game subjective. Even basic comparisons would be rendered pointless and moot, as there would be no point in everyone standing around saying "in my opinion" if no basis of fact could be established.

Now someone says they're working on a statistical analysis, and you're dismissing it without having seen it, again demanding a strict methodology that you proposed but haven't followed in any advice or guide. Congratulations on becoming the circus.
I'm not sure what field you're involved in, but this is incorrect. Many fields of study are built on quantifiable observations of complex natural (i.e. predefined) systems. Data on variables is collected and built into datasets that model the system. Those datasets can then be experimentally manipulated to determine how well the model replicates observed reality. For example, an experiment might consist of changing one variable on a hacker's profile (e.g. WIP) and determining how that changes the probabilities of all potential hacking matchups, with its normal matchups as the control. Observed reality might be things such as player consensus on a faction, list construction, tournament results, etc.

Here is a preview of the dataset. I'm going to do a write-up before I start a new thread. https://docs.google.com/spreadsheets/d/1FCSJmT9MYimTd5TCV4udRRm4mJ6sUhRq8W6ieaQnSqU/edit#gid=0
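To sketch the shape of that "change one variable" experiment: simplified here to a single normal WIP roll on a d20 (real hacking matchups are face-to-face rolls with MODs, so this is only the skeleton, and the function names are mine):

```python
def normal_roll_success(target):
    """Simplified Infinity normal roll: succeed on a d20 roll of target or less."""
    return min(target, 20) / 20

# "Experiment": bump a hacker's WIP by one and compare success rates,
# holding everything else constant (the control).
baseline = normal_roll_success(13)   # e.g. a WIP 13 hacker
modified = normal_roll_success(14)   # the same profile with WIP 14
print(baseline, modified)  # 0.65 0.7
```

The same comparison, run over every matchup in the dataset instead of one roll, is what the proposed experiment would amount to.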
So I'm reading it now, actually. And the first question I ask is: what's the conclusion you reach? Because the data as you presented it to me suggests the conclusion that Nomads have access to the most repeater profiles, they have the greater proportion of high-BTS profiles (alongside Combined), followed by Yu Jing and PanOceania, and they sit in the middle of the factions with respect to points per WIP for their hackers (Haqqislam being a notable exception). You've done a great job of illustrating why Nomad hacking is powerful, and indeed which tools Nomads have access to that make that the case. But I have to ask how this relates to the broader context of the game being in a "solved" state.

In particular, what is missing is the impact that the knowledge that Nomads have access to the "best hackers", across the range of their choices, has on game balance. I'd suggest including the relative frequency of hacking vs non-hacking interactions in a game, the impact of the limitations of hacking against units which are non-hackable, and the distribution of those units throughout the factions. And finally, a comparison of the "best in faction tool" for hacking and their capabilities. Having access to the most profiles doesn't mean much if you can only take 1 or 2 in a list. Though I would not then draw the conclusion that hacking access is responsible for actual faction power without far more work and analysis/testing.

Final question: your data for repeaters, and what you've limited that to, seems to be incomplete. PanOceania alone has more than 2 repeaters available in the list, with the Bulleteer, Peacemaker, and Fugazi as examples, but you have the frequency as 2.

Edit: and if, as I suspect, you have excluded universally available tools, what is your justification for that? Access to the Fugazi does not have a net neutral or negative impact on PanOceania's hacking capabilities just because another faction also has access to those tools.
Availability probably should have some role. To take two extremes, MO and Neoterra both have the Fugazi, the former at AVA 1, the latter at AVA 3.
Absolutely. Not to mention greater highlighting of the relationship between skills. PanO may only have 2 bespoke repeaters, but one of them is 20 pts and starts 8" up the table. The impact of that is possibly far greater than any number of DZ-bound repeaters or DepReps. And I say "possibly" only because you would need to relate access (available from the data) with impact (the context, which is not identifiable from this dataset). And this illustrates the point I was initially making with regard to the approach used to "solve" the game.
And just so we are clear, this is not moving the goalposts @Judge Dredd. This is the difference between a surface analysis of what is observable and a fundamental investigation into why the observed phenomenon exists. And that's the part that is pretty much always missing. It's missing not only because of the effort required to undertake the work correctly, but also because sometimes the data does not currently exist, requiring even more work to obtain it.

Edit: as the questions above get answered, there will be more questions asked and more considerations to be made. That's exactly how this works.
First, thank you for reading it. I appreciate your feedback and will take it all into consideration. Second, this dataset doesn't directly relate to the "solved meta" debate; that's a whole different thing. You could make some inferences about it, but that wasn't my specific goal. Sorry for the confusion.

There are a lot of conclusions you can draw from the dataset, but I think the most important one is that the data reflects general player consensus about hacking within and across factions. For example: the Haqq and O-12 hacking game plan relies on stacking multiple efficient hackers. CA and Aleph pay a premium for access to a handful of all-star hackers. YJ has poor hacking but impressive hacking defenses. Nomads are... Nomads; and PanO.

The data also helps to visualize some interesting facts about hacking that are not easily identifiable by just scrolling through the army builder. For example: the average cost of a CA hacker is not as high as might be expected, considering the number of expensive profiles. Haqq and O-12 have surprisingly powerful and efficient hacking, despite how few profiles are available to them. The PanO discount on BS might not be as large as is commonly believed. Nomad access to hacking tools is honestly staggering. This dataset can provide a good model that can be easily and reliably experimented with by the community.

Unfortunately I don't have access to a lot of that data, and some of the results would likely be ambiguous. I will look into your suggestions, though. Generic profiles were excluded from this data; I wanted to look only at the units which are unique to each faction. Everyone has access to 3 more repeater units, in the form of the flashbot, sensorbot, and baggagebot. Loaner units available to sectorials from other factions are also not included, for the same reason. Other notable exceptions: the Szalamadra pilot is not included, and Posthumans were counted as two separate units, so the total of Aleph hackers is 13 or 14, depending on how they are considered. Finally, the data considers only "Units", not "Profiles", point values and sectorial-specific profiles excepted.
I could take a look at AVA, but I don't think the information will be as relevant. Haqq and O-12 manage to make use of a strong repeater-based game plan in spite of having access to relatively few repeaters; instead they rely on the impressive efficiency of Barids and Cyberghosts. This suggests that the availability of a repeater profile is not as important as its efficiency. Further, the total AVA of repeaters in a faction should directly correlate with the number of repeater-carrying units. Edit: Also, to do this I would have to look at each sectorial individually, which would make this all exponentially more complicated.
Thank you for this analysis. I think a critical aspect of hacking is the ability to project it, especially with forward deployment or infiltration. This is what makes Morans and Pathfinders so powerful. Deployable Repeaters are also massively buffed on an infiltrating platform, even more so if it has camo or minelayer (Guilangs). All of these are big hacking multipliers. Maybe there is a way to quantify that?
So I'm working on this now. I definitely agree: hacking area projection is critically important. It is doable, but the calculations become a bit arbitrary. The question becomes how to weight the equation for the inclusion of equipment. How much better is a hacker with a repeater vs the same hacker without one? Is it 10% better, 30% better? I don't know; in cases like this it's usually best to rely on the professional method of just guessing.

For example, I rank repeaters as follows: peripheral repeater, FastPanda, pitcher, deployable repeater, native repeater. I consider a hacker with a peripheral repeater to be almost twice as efficient as the same profile without it, so we might give it a modifier of 0.6. We could then assume a modifier of 0.7 for FastPandas, 0.8 for pitchers, 0.9 for DepReps, and no modifier for a hacker with a native repeater. If Points/WIP can be used as a metric for the efficiency of a hacker (as shown on Sheet2), then the presence of various repeaters can be expressed as a weighted modifier.

Deployment skills such as camo and infiltration are much more difficult to account for, because the added efficiency can't really be expressed by an equation. Determining the effect of repeaters and skills on units besides a hacker is similarly problematic. Thankfully, there are relatively few repeaters carried by non-hackers.
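As a rough sketch of how those guessed modifiers could be applied to the Points/WIP metric (modifier values are the ones from the post; the function and the example profile are hypothetical):

```python
# Guessed modifiers from the ranking above: lower = more efficient delivery.
REPEATER_MODIFIER = {
    "peripheral": 0.6,
    "fastpanda": 0.7,
    "pitcher": 0.8,
    "deployable": 0.9,
    "native": 1.0,
}

def weighted_efficiency(points, wip, repeater=None):
    """Points per WIP, scaled by the repeater modifier (lower score = better)."""
    base = points / wip
    return base * REPEATER_MODIFIER.get(repeater, 1.0)

# e.g. a hypothetical 20 pt, WIP 13 hacker, bare vs with a pitcher:
print(round(weighted_efficiency(20, 13), 2))             # 1.54
print(round(weighted_efficiency(20, 13, "pitcher"), 2))  # 1.23
```

The weighting itself is still just the guess it was before; the code only makes the guess explicit and easy to argue with.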