Quote:
Originally Posted by spino1i
So just to be 100% clear, you are denying that you hired people to bot for you on Full Tilt, and that you were caught and all of your accounts frozen about 2 years ago?
I suspect you saw the link on our forum to some guys claiming I botted. I'd like to keep this thread focused on support for our software beta launch. However, I'll answer your question (which is a fair one), and hopefully I'll be disciplined enough to keep the rest of this thread about support.
About two years ago, I believed our AI had become excellent at deep-stack play. I thought the monumental effort to make a computer play good deep-stack poker (i.e., the foundation for the software discussed in this thread) was nearing completion. I wanted to know how good it was. Moreover, I needed to know where its leaks were.
I came to conclude that this was the primary problem the University of Alberta had: they could not test their AI in real-money play, so they had no way to know whether their program had leaks. Imagine trying to learn poker without ever playing for money. In my opinion, this is what caused them to fail after a major decade-long effort. I did not foresee this problem, so I had to make a decision: (a) give up my dream of making great poker AI and great training software, or (b) find out how it played against people.
So rather than produce crummy training software, I decided to play online poker using our software as an advisor. To do it quickly, I worked with some others. I told myself I was not breaching the terms and conditions because other advisors were allowed and people were pushing the buttons. I also tried to convince myself it was okay because I was using an analytical tool I had created myself. However, I quickly learned that I was wrong. The first site closed the accounts and took my money. Then I tried it on another site and had a similar experience. I lost a lot of money on this effort, but I did learn a lot.
Since then, I have discovered a far better way to test our AI than using it as an advisor. (I wish I had thought of this earlier, but I didn't.) We replay hands played by great pros, then analyze the deviations, much as our program does.
I do feel bad about having used it as an advisor, but I did. And what's supposed to happen happened. Presumably the sites returned the money to the players, and no one was hurt. I have not done this since, nor will I, particularly now that I am no longer a lone gun but part of a professional enterprise with employees, a partner, and investors, and have a better alternative. To test our AI, we can benchmark it against pros. This has proven far more effective at improving our AI than my early and dumb effort to get real data.
So I made a mistake, paid a price, learned from it, and have moved on.