Quote:
My impression was that the Streetfighter article was trying to show that there is a positive correlation between winning, according to a given rule set and victory criterion, and trying to win according to that same rule set and victory criterion. Groundbreaking stuff.
where have i heard something like this before? oh, right, from a hypothetical scrub:
They say "that guy didn't do anything new, so he is no good." the author did not claim he'd discovered a heretofore-unheard-of concept or phenomenon. what he did was take a very general and abstract conceptual relationship and apply it to the specifics of a novel and popular domain in a clear and accessible way. i think that qualifies as praiseworthy.
as for your (mis)interpretation of the article: do you think you failed to achieve the ''victory criterion'' of applying what the author was saying to the topic of this thread because you're incapable of doing so, or because your intentions (your evolving goal-states) while reading the article were in conflict with conceptual cross-pollination? how would you go about answering that question? incidentally, how we should test the more general and inane conjectures of research psychologists depends on your answer to the latter question.
Quote:
VanVeen, are you this Sirlin guy?
no, but sirlin is probably the smartest dude talking about games (of any kind) on the internet in a way that is accessible to the typical gamer.
Quote:
The interesting concept, IMO, is how often people have conscious "goals" that are disconnected from underlying emotional priorities
at least someone can read.
intra-psychic and interpersonal goal conflicts are ubiquitous and self-evident. how do we determine where they exist and to what degree? how do we fix conflicts between goals when our meta-goals dictate that we should? this is very far from trivial, especially for those whose meta-goal (often this means long-term goal) is truth discovery, a domain-specific competency, or the avoidance of self-defeating behavior ("utility maximization" through "goal modification and realignment" is a favorite topic of mine). it's also very far from obvious, since virtually no one uses ''goal-states'' to frame their discussions and arguments except in the most vague and disingenuous ways.
that's why this article was so popular with blogging decision-theorists and programmers of self-modifying AI (...). it's a (very) well-written introduction to an extremely useful conceptual framework.