Quote:
Originally Posted by BigGayAl
So lets say kaiba is like CBA, and actually lacks the intelligence to debate on your level ... does that make his opinion wrong? Let me ask you ... honestly ... before you sat down and thought about this deeply, were you for or against a 30-50man alliance cap?
|
I can't tell you exactly what my thoughts were, since for me this goes back some 6-7 years. However, it stems from a conclusion we (me, LordN, and some others) came to during the middle years of PIA: there simply weren't enough leaders and officers around at the time to support the recent wave of alliances wanting to compete for the top ranks. As a result, the better-led Dragons steamrolled them.
Now, firstly I'd like to note that these alliances sprang up without any need for alliance size limits. Secondly, most of them collapsed quite soon after, because they couldn't sustain themselves due to a lack of leadership.
They did not fail because they were outnumbered; we simply had the better leaders and the better players. Had they formed fewer alliances, concentrating their players and their competent leaders, they could have given us a much better run for our money.
In effect, more alliances made it easier for the best alliance to win, and made it all quite boring for us at times.
Quote:
Originally Posted by BigGayAl
Do you honestly think you have thought about everything impartially?
|
Well, obviously my viewpoint is shaped by my experiences and what I know, so like everyone I have cognitive biases. However, I am very aware of this, and I go out of my way to control for it, gathering as much information as possible from different viewpoints before making any direct statements.
In this particular case I even stated my bias early on, in my first post: for me, it's about what is good for new players, as I believe strongly (and always have) that this game needs to integrate new players to survive and flourish. My bias here doesn't lie with the best alliances; I have a long history of promoting solutions to problems that many top players were abusing, newbie bashing among them. Mz or someone from the old PIA forums can back me up on this one.
Quote:
Originally Posted by BigGayAl
Would you agree it is almost impossible to prove you wrong without actually testing it in practise?
I don't care much for mz's graphs either (long time since I actually saw them, skipped over that bit today) - I do remember thinking "interesting", followed by "data set is too small to actually legitimise the results". Of course, nothing can be done about that, its the only data available - but to use it as "proof" to "help win" an argument isn't really on, and would get laughed out of physics, hopefully economics too...
|
As for the testing, I believe it is in fact impossible to prove anything through testing. That's how the "soft sciences" work: you can't prove things the way you do in physics; you can only say you have a theory and that it fits the data. More data only makes the theory more likely to be true. A theory that fits all the data perfectly is essentially impossible in economics and related fields.
As for the data we have on this issue, I believe it reasonably supports the conclusions that have been drawn, though it is not conclusive. It doesn't, however, seem to support the opposing theories well at all. There is large variance, but that seems to me more likely the result of other factors that were not controlled for. Mz made an effort to explain some of the major discrepancies, and I believe he did a good job of it.
Now, it is possible that the data only appears to support our conclusions, or indeed, as Gerbie2 argued, that we have some of our causality mixed up. This seems unlikely to me: the logic behind the causality seems solid, and so do the conclusions. In fact the conclusions, at least in my case (and, since I know my past rants were read by many of those who now agree with me, I believe this is true for them as well), predate the data. The data fits the prediction, not the other way around.
Usually, when a conclusion is wrong, it is because it was made after the fact: people fit their explanations to a conclusion they have already reached, rather than building a proper logical framework. This is a bit hard to explain, but a good example is recounted in the book Freakonomics by Levitt. It concerns how statisticians explained the sudden drop in the U.S. crime rate during the 1990s. They fell for a proximity fallacy, assuming that whatever actions had been taken most recently (in this case, being tough on crime) caused the drop. In reality, Levitt and Donohue later showed that those supposed causes were out of sync with the change, and that the real cause was legalized abortion 10+ years earlier.
Legalized abortion and crime effect
So anyway, the data isn't perfect, but good logic supported by reasonable data, especially when the logic came before the data, makes a relatively strong case in my view.