
Our problem is punishers didn't take over before the defectors

http://sciam.com/print_version.cfm?articleID=788CF452-E7F2-99DF-3EBC599C3A9F1C6F - The Roots of Punishment

<quote>
Essentially, the researchers detail an oscillating cycle: Some cooperators may emerge from a group of nonparticipants, who increase their bounty relative to the group and make their cooperative practice the norm. Then, the group of cooperators may become dominated by defectors, who ruin everything for everyone. Punishers would be unwelcome when defectors take over, as they would have to police the entire group—at high cost to themselves. But what the algorithm has pointed out is that if punishers become dominant before defectors take over a group, they can ensure long-term cooperation … at least until a new innovation comes along—say, processed food—and the cycle begins anew.
</quote>
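The cycle the quote describes can be sketched as a toy agent-based simulation of a voluntary public-goods game with four strategies: loners (nonparticipants), cooperators, defectors, and punishers. Everything below — the parameter values (`r`, `cost`, `sigma`, `fine`, `fee`), the group size, and the imitate-the-best update rule — is an illustrative guess, not the researchers' actual model.

```python
import random

# Strategies in a toy voluntary public-goods game with punishment.
LONER, COOP, DEFECT, PUNISH = "loner", "cooperator", "defector", "punisher"

def payoffs(group, r=3.0, cost=1.0, sigma=0.5, fine=1.0, fee=0.3):
    """Payoff for each position in a sampled group for one round."""
    players = [i for i, s in enumerate(group) if s != LONER]
    if len(players) < 2:                       # no game without partners
        return {i: sigma for i in range(len(group))}
    pay = {i: sigma for i, s in enumerate(group) if s == LONER}
    contributors = [i for i in players if group[i] in (COOP, PUNISH)]
    defectors = [i for i in players if group[i] == DEFECT]
    share = r * cost * len(contributors) / len(players)
    for i in players:
        pay[i] = share - (cost if i in contributors else 0.0)
        if group[i] == PUNISH:                 # punishing is costly...
            pay[i] -= fee * len(defectors)
    punishers = sum(1 for i in players if group[i] == PUNISH)
    for d in defectors:                        # ...and so is being punished
        pay[d] -= fine * punishers
    return pay

def step(pop, group_size=5, mu=0.01, rng=random):
    """Imitate the round's best earner in a random group, with rare mutation."""
    idx = rng.sample(range(len(pop)), group_size)
    group = [pop[i] for i in idx]
    pay = payoffs(group)
    learner = rng.randrange(group_size)
    if rng.random() < mu:                      # rare random experimentation
        pop[idx[learner]] = rng.choice([LONER, COOP, DEFECT, PUNISH])
    else:                                      # copy the best earner
        pop[idx[learner]] = group[max(range(group_size), key=pay.get)]

random.seed(1)
pop = [LONER] * 60
for _ in range(5000):
    step(pop)
```

Run long enough, a model of this shape drifts between phases where one strategy dominates — which is the oscillating cycle the article is talking about — though the exact dynamics depend heavily on the parameters chosen.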

In the real world, the defectors are the elite who have discovered they can make more money from Halliburton-type scams than from good old-fashioned hunting and gathering.

The problem is we haven't punished them, because these people have co-opted the mechanisms of punishment. They own the government, the media, and the corporations -- all of which should be the sources of control. But instead they just suck off the rest of us.
Permalink son of parnas 
July 6th, 2007 11:05am
Somehow, I feel this applies to Moderation of Forums.
Permalink SaveTheHubble 
July 6th, 2007 11:08am
Well now, I think about 1/2 of us would have much less of a problem if they were only sucking us off. It's the bending over without so much as a reach around that gets to us.
Permalink Send private email JoC 
July 6th, 2007 11:08am
Feedback loops are corrupted. And the republican'ts continue to pervert them.
Permalink Peter 
July 6th, 2007 11:13am
Solution: punish them outside of the system, since they own the system. Improve the world: kill a Halliburton employee before breakfast, and an executive before dinner.
Permalink Send private email Clay Dowling 
July 6th, 2007 11:14am
And now, having finished reading the article --

It's unfortunate that they called the "Slacker Police" "Punishers" -- since "Punishers" implies punishment -- a jailer, if you will. 

What they CALLED "Punishers" really served to prevent the Slackers from slacking off. True, what motivated the Slackers was "fear of punishment" -- but that's different from ACTUAL punishment.

I think "slacker police" would be a much less confusing term.  Why, when I was in bootcamp, the Company Commander served as an extremely motivating "slacker police".
Permalink SaveTheHubble 
July 6th, 2007 11:15am
And I agree with SoP -- the problem is that the elite slackers have NO 'fear of punishment', because they've co-opted and redefined the role of "slacker police".

That, and they never admit any wrongdoing of any kind whatsoever.
Permalink SaveTheHubble 
July 6th, 2007 11:18am
> kill a Halliburton employee before breakfast, and an executive before dinner

Then the terrorists have won. LOL.
Permalink son of parnas 
July 6th, 2007 11:19am
"It's the bending over without so much as a reach around that gets to us."

Sidebar!
Permalink  
July 6th, 2007 11:19am
Taking a bit of a tangent here, I consider all research of this sort to be completely bogus. A bunch of guys write a 'computer model' of 'human behavior', get the results they wanted, then publish their 'new findings' that 'prove' this and that.

Fuck, their computer models of the weather can't even predict with any accuracy whether it will rain 3 days from now. And they call it 'science' to run the same sort of models on human behavior, which is infinitely more complicated than weather patterns? That is bullshit, my friends!
Permalink Practical Economist 
July 6th, 2007 3:06pm
> That is bullshit, my friends!

We knew you'd say that.
Permalink Send private email computer models 
July 6th, 2007 3:09pm
Show us your tits, computer models!
Permalink Practical Economist 
July 6th, 2007 3:12pm
No really, this research is fascinating. It's part of the quest for emergent properties in human social groups -- i.e., simple rules that elicit complex, or at least non-obvious, behaviors.

Justice has been with us for a long time. And yes, so has enforcement of justice. Something like half or 3/4 of humanity's stories (even non-religious ones, like the stuff with Paris Hilton) are about "getting what one deserves".

I'm lazy, but someone should look through these threads; I bet the more popular threads have to do with some sort of karma enforcement.
Permalink Send private email strawdog 
July 6th, 2007 3:14pm
Let's rewrite the paragraph to be more accurate and see if that helps:

Essentially, the AI engine created by the researchers details an oscillating cycle of behavior within their virtual system: Some virtual actor cooperators may emerge from a group of virtual actor nonparticipants, who increase their bounty relative to the virtual actor group and make their cooperative practice the norm. Then, the group of virtual actor cooperators may become dominated by virtual actor defectors, who ruin everything for all the virtual actors. Virtual actor punishers would be unwelcome when virtual actor defectors take over, as they would have to police the entire virtual actor group—at high cost to their virtual actor selves. But what the algorithm has pointed out is that if virtual actor punishers become dominant before virtual actor defectors take over a virtual actor group, they can ensure long-term virtual actor virtual cooperation … at least until a new parameter is added to the model—say, processed food—and the virtual simulation begins anew.
Permalink Practical Economist 
July 6th, 2007 3:19pm
I think the hypothesis is not "these simple rules will make this complex behavior happen". I think it is "this behavior is modelable by some set of simple rules". That *is* interesting, no?
Permalink Send private email strawdog 
July 6th, 2007 3:27pm
I think that's right, straw.

Supposedly there are complicated routing mechanisms that work like that in communications systems. I don't think they ever set out trying to make a system that 'thinks precisely like an ant'. They were really after the 'take the best path to get where you need to be' behavior of ants.
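The ant-inspired routing idea can be illustrated with a tiny mean-field sketch. Nothing here comes from a real routing protocol -- the path lengths, evaporation rate, and update rule are all made up for illustration: traffic splits in proportion to pheromone, each path's trail is reinforced by its traffic share divided by its length, and the shorter path ends up carrying the trail.

```python
def pheromone_routing(lengths, steps=200, evap=0.1):
    """Mean-field ant-style path selection: traffic splits in
    proportion to pheromone, and each path is reinforced by its
    traffic share divided by its length, so shorter paths get
    stronger reinforcement per trip and come to dominate."""
    tau = [1.0] * len(lengths)                    # initial pheromone per path
    for _ in range(steps):
        total = sum(tau)
        tau = [(1.0 - evap) * t + (t / total) / l # evaporate, then deposit
               for t, l in zip(tau, lengths)]
    return tau

trails = pheromone_routing([1.0, 3.0])            # short path vs. long path
```

No agent in this sketch "thinks like an ant"; the shortest-path behavior emerges from the reinforcement rule alone, which is the point JoC is making.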
Permalink Send private email JoC 
July 6th, 2007 3:31pm
No.

Where is the data set of actual measurements of human behavior they are comparing their model against?

There isn't one. The entire study is just this computer model written by a mathematician.

Publishing this in a science journal is absurd. It's as if I published my TheSims algorithms in the Journal of Human Behavior.
Permalink Practical Economist 
July 6th, 2007 4:56pm

This topic is archived. No further replies will be accepted.
