Card dealing efficiency question

06-04-2017, 12:43 PM
There are two different schemes for dealing cards in Python that I've tried (code included below):
1) Dealing and looking for collisions
2) Creating a deck and reducing it as you go

With the first method, I know that as we get deeper into the deck the probability of a collision goes up, so I expect each additional card to take longer and longer to deal. With the second method, the total time should grow basically linearly with the number of cards dealt.
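As a quick sanity check on that intuition (my own back-of-the-envelope addition, not part of the timing test): with k cards already dealt, a fresh draw is new with probability (52-k)/52, so dealing the next card takes 52/(52-k) draws on average.
Code:
# Expected total random.randint() draws to deal n cards by collision-checking;
# the (k+1)-th card needs 52/(52-k) draws on average (geometric distribution).
from fractions import Fraction

def expected_draws(n):
    return float(sum(Fraction(52, 52 - k) for k in range(n)))

print(expected_draws(26))  # ~35.5 draws to deal out half the deck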

Here's the result (small samples because my computer is old and slow, but it's good enough to get an estimate):
Code:
simulations = 10000
Collisions:
1 card(s): 0.04687333106994629
2 card(s): 0.0937509536743164
3 card(s): 0.12500262260437012
4 card(s): 0.17187166213989258
5 card(s): 0.2187519073486328
6 card(s): 0.2499990463256836
7 card(s): 0.3281261920928955
8 card(s): 0.3593747615814209
9 card(s): 0.3906266689300537
10 card(s): 0.46875
11 card(s): 0.4843757152557373
12 card(s): 0.5625007152557373
13 card(s): 0.6250011920928955
14 card(s): 0.6744186878204346
15 card(s): 0.7187488079071045
16 card(s): 0.7812511920928955
17 card(s): 0.8593766689300537
18 card(s): 0.9062516689300537
19 card(s): 0.9843764305114746
20 card(s): 1.0312485694885254
21 card(s): 1.1562550067901611
22 card(s): 1.171877145767212
23 card(s): 1.2656269073486328
24 card(s): 1.343752384185791
25 card(s): 1.421877145767212
26 card(s): 1.5044364929199219
List Reduction:
1 card(s): 0.12499833106994629
2 card(s): 0.1875014305114746
3 card(s): 0.2343747615814209
4 card(s): 0.3125002384185791
5 card(s): 0.3437507152557373
6 card(s): 0.3885951042175293
7 card(s): 0.4687509536743164
8 card(s): 0.5099546909332275
9 card(s): 0.5625009536743164
10 card(s): 0.6093757152557373
11 card(s): 0.6718761920928955
12 card(s): 0.7343728542327881
13 card(s): 0.7812545299530029
14 card(s): 0.8125028610229492
15 card(s): 0.8759596347808838
16 card(s): 0.9501519203186035
17 card(s): 0.9930555820465088
18 card(s): 1.047055959701538
19 card(s): 1.0980629920959473
20 card(s): 1.1530654430389404
21 card(s): 1.1959989070892334
22 card(s): 1.2500026226043701
23 card(s): 1.2812516689300537
24 card(s): 1.343752145767212
25 card(s): 1.3906352519989014
26 card(s): 1.4531207084655762
So it looks like I'd be dealing out roughly half the deck before it becomes more efficient to model the entire deck instead of individual cards. But is there something else I should try? I could shuffle the whole deck every time, but I think that would be even less efficient all the way around.
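For concreteness, the full-shuffle version I have in mind would be something like this (untested sketch):
Code:
import random

def deal_shuffle(num_cards):
    deck = list(range(52))
    random.shuffle(deck)    # shuffles all 52 cards no matter how few we need
    return deck[:num_cards]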

Also, are there ways to improve the efficiency of my code?

Code:
import time
import random

max_cards = 26 + 1  # deal up to half the deck; range() upper bound is exclusive
simulations = 10000

print('simulations = {}'.format(simulations))

# Dealing speed test

print('Collisions:')

for num_cards in range(1, max_cards):
    start_time = time.time()
    for _ in range(simulations):
        # Deal the first card, then redraw whenever we collide with a dealt card
        dealt_cards = [random.randint(0, 51)]
        for _ in range(num_cards - 1):
            card = random.randint(0, 51)
            while card in dealt_cards:  # linear scan; slower as the hand grows
                card = random.randint(0, 51)
            dealt_cards.append(card)

    elapsed_time = time.time() - start_time
    print('{} card(s): {}'.format(num_cards, elapsed_time))

print('List Reduction:')
for num_cards in range(1, max_cards):
    start_time = time.time()
    for _ in range(simulations):
        deck = list(range(52))
        dealt_cards = []
        for _ in range(num_cards):
            # Pick a random index into whatever remains of the deck
            card = random.randint(0, len(deck) - 1)
            dealt_cards.append(deck[card])
            deck.pop(card)  # remove by index; remove() would rescan the list

    elapsed_time = time.time() - start_time
    print('{} card(s): {}'.format(num_cards, elapsed_time))
06-04-2017, 01:32 PM
With Python, if there is a library function that does exactly what you want, it will usually be the fastest option. This works pretty well:

Code:
def dealt(num):
    return random.sample(range(52), num)
This was a bit more than 4x faster than your methods at the high end (at the low end of the card counts, only about 2x as fast).
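A quick usage check (my addition, not Rusty's): random.sample() draws without replacement, so the dealt cards are always distinct.
Code:
hand = dealt(5)
assert len(set(hand)) == 5                # no duplicate cards
assert all(0 <= c <= 51 for c in hand)    # every card is in the 52-card deck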
06-04-2017, 03:07 PM
Quote:
Originally Posted by RustyBrooks
With Python, if there is a library function that does exactly what you want, it will usually be the fastest option. This works pretty well:

Code:
def dealt(num):
    return random.sample(range(52), num)
This was a bit more than 4x faster than your methods at the high end (at the low end of the card counts, only about 2x as fast).
Thanks. Interestingly, I got a different timing result. It's definitely faster at the high end, but at the low end the collision method is outperforming the library function. Do you have any ideas as to why it would be running a factor of two slower for me than for you?

Code:
simulations = 50000
Collisions - 1 card(s): 0.23437952995300293
List Reduction - 1 card(s): 0.6249935626983643
Library - 1 card(s): 0.5937542915344238

Collisions - 2 card(s): 0.4218738079071045
List Reduction - 2 card(s): 0.8954880237579346
Library - 2 card(s): 0.7031168937683105

Collisions - 3 card(s): 0.6250073909759521
List Reduction - 3 card(s): 1.248410701751709
Library - 3 card(s): 0.8130462169647217

Collisions - 4 card(s): 0.874042272567749
List Reduction - 4 card(s): 1.4190893173217773
Library - 4 card(s): 0.89805006980896

Collisions - 5 card(s): 1.0370588302612305
List Reduction - 5 card(s): 1.7110977172851562
Library - 5 card(s): 1.0040502548217773

Collisions - 6 card(s): 1.2450778484344482
List Reduction - 6 card(s): 1.9511113166809082
Library - 6 card(s): 1.2250592708587646

Collisions - 7 card(s): 1.4630939960479736
List Reduction - 7 card(s): 2.208117961883545
Library - 7 card(s): 1.3180804252624512

Collisions - 8 card(s): 1.6860997676849365
List Reduction - 8 card(s): 2.461132526397705
Library - 8 card(s): 1.416088342666626

Collisions - 9 card(s): 1.9431030750274658
List Reduction - 9 card(s): 2.7381608486175537
Library - 9 card(s): 1.5000884532928467

Collisions - 10 card(s): 2.175116539001465
List Reduction - 10 card(s): 3.015171527862549
Library - 10 card(s): 1.5971012115478516

Collisions - 11 card(s): 2.414134979248047
List Reduction - 11 card(s): 3.24918532371521
Library - 11 card(s): 1.6950886249542236

Collisions - 12 card(s): 2.6751601696014404
List Reduction - 12 card(s): 3.4911911487579346
Library - 12 card(s): 1.796102523803711

Collisions - 13 card(s): 2.9231741428375244
List Reduction - 13 card(s): 3.7462158203125
Library - 13 card(s): 1.8820970058441162

Collisions - 14 card(s): 3.197178840637207
List Reduction - 14 card(s): 4.002238988876343
Library - 14 card(s): 2.0061097145080566

Collisions - 15 card(s): 3.4711945056915283
List Reduction - 15 card(s): 4.269243478775024
Library - 15 card(s): 2.0741183757781982

Collisions - 16 card(s): 3.7562108039855957
List Reduction - 16 card(s): 4.504267454147339
Library - 16 card(s): 2.176123857498169

Collisions - 17 card(s): 4.080280303955078
List Reduction - 17 card(s): 4.757181882858276
Library - 17 card(s): 2.343864917755127

Collisions - 18 card(s): 4.386770725250244
List Reduction - 18 card(s): 5.037399053573608
Library - 18 card(s): 2.351652145385742

Collisions - 19 card(s): 4.6757118701934814
List Reduction - 19 card(s): 5.257134914398193
Library - 19 card(s): 2.4673912525177

Collisions - 20 card(s): 4.987916946411133
List Reduction - 20 card(s): 5.520564079284668
Library - 20 card(s): 2.5570244789123535

Collisions - 21 card(s): 5.338906526565552
List Reduction - 21 card(s): 5.769258260726929
Library - 21 card(s): 2.65639066696167

Collisions - 22 card(s): 5.687526702880859
List Reduction - 22 card(s): 6.012403964996338
Library - 22 card(s): 2.760930299758911

Collisions - 23 card(s): 6.042987585067749
List Reduction - 23 card(s): 6.234477519989014
Library - 23 card(s): 2.8286144733428955

Collisions - 24 card(s): 6.397864818572998
List Reduction - 24 card(s): 6.493626117706299
Library - 24 card(s): 2.92207670211792

Collisions - 25 card(s): 6.80842924118042
List Reduction - 25 card(s): 6.7416651248931885
Library - 25 card(s): 3.0473031997680664

Collisions - 26 card(s): 7.212479829788208
List Reduction - 26 card(s): 7.050436019897461
Library - 26 card(s): 3.109863758087158
06-04-2017, 03:53 PM
I didn't run that many simulations, just the 10k in your original code. Also, I refactored your code somewhat, putting everything into functions. I don't expect that would make a huge difference, but maybe.

At 10k sims, your two functions both took about 0.45s; the random.sample method took about 0.10s.

The best version I came up with myself took about 0.30s.

I was using Python 2.7, if that matters.
06-04-2017, 03:53 PM
I'm not at my computer anymore, but I can post my code later.
06-04-2017, 07:15 PM
I have some sample code here
https://gist.github.com/rustybrooks/...e4f1648f542b24
with a few of the methods you tried. I have some variations on your methods and also some different ones. I haven't found anything better than random.sample().

Note that I am reporting results in thousands of iterations per second, since I find that a more useful measure: what I want to know is, "if I ran this a million times, how long would it take?"

I also used the timeit.repeat() method, doing 10k samples 5 times and taking the shortest one. This is a decent way to remove "noise" from the samples.
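In code, the harness looks something like this (a minimal sketch of the setup, assuming a dealt() function like the one posted above):
Code:
import timeit

# Run the 10k-iteration benchmark 5 times and keep the fastest run;
# the minimum is the least noisy estimate of the true per-call cost.
best = min(timeit.repeat('dealt(13)',
                         setup='from __main__ import dealt',
                         repeat=5, number=10000))
print('{:.1f}k iterations/s'.format(10000 / best / 1000))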

You might like some of my examples because they show ways to make the function very brief - some people like this style, some don't.

I have written several poker simulation programs over the years, and frankly, I wrote most of it in C++ and then wrote Python (and other language) modules for it. This is an optimization you don't need to make until you are well on your way, though. Python is really great for iterating quickly; C++, not so much, and by adding C++ code you make your code more difficult to build and distribute. Avoid adding dependencies until you have to.

I wrote a lot of my poker simulation stuff 10+ years ago, when computing resources were much less powerful.
06-04-2017, 07:50 PM
Quote:
Originally Posted by RustyBrooks
I have some sample code here
https://gist.github.com/rustybrooks/...e4f1648f542b24
with a few of the methods you tried. I have some variations on your methods and also some different ones. I haven't found anything better than random.sample().

Note that I am reporting results in thousands of iterations per second, since I find that a more useful measure: what I want to know is, "if I ran this a million times, how long would it take?"

I also used the timeit.repeat() method, doing 10k samples 5 times and taking the shortest one. This is a decent way to remove "noise" from the samples.
Thanks for this. It's very interesting. I had to modify it slightly because I'm running Python 3.5. (range() no longer returns a list, so I had to turn it into one.)
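For example (my illustration of the kind of change needed, not the actual gist code):
Code:
deck = list(range(52))   # Python 3: range() is a lazy sequence, not a list
deck.remove(13)          # mutating list methods like remove() need a real list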

I'm intrigued that I'm still getting something different (quite different) from you: random.sample() does not fare nearly so well for me. It's probably something to do with the Python versions. (If it makes any difference, I'm running Windows 10 on a laptop that's about 7-8 years old.)

Code:
simulations = 10000
------------------------------------------------ 1
                               Collision: 221.4k/s
                          Dict Collision: 194.6k/s
                          List reduction: 118.2k/s
               Simplified List reduction: 279.1k/s
                          dict reduction: 197.6k/s
                           random sample: 84.4k/s
------------------------------------------------ 6
                               Collision: 42.3k/s
                          Dict Collision: 42.8k/s
                          List reduction: 30.0k/s
               Simplified List reduction: 270.1k/s
                          dict reduction: 197.0k/s
                           random sample: 41.0k/s
------------------------------------------------ 11
                               Collision: 22.1k/s
                          Dict Collision: 23.3k/s
                          List reduction: 17.4k/s
               Simplified List reduction: 271.3k/s
                          dict reduction: 195.1k/s
                           random sample: 29.7k/s
------------------------------------------------ 16
                               Collision: 14.2k/s
                          Dict Collision: 15.2k/s
                          List reduction: 12.3k/s
               Simplified List reduction: 267.1k/s
                          dict reduction: 195.1k/s
                           random sample: 23.2k/s
------------------------------------------------ 21
                               Collision: 10.0k/s
                          Dict Collision: 11.0k/s
                          List reduction: 9.5k/s
               Simplified List reduction: 270.5k/s
                          dict reduction: 197.2k/s
                           random sample: 18.9k/s
------------------------------------------------ 26
                               Collision: 7.4k/s
                          Dict Collision: 8.2k/s
                          List reduction: 7.8k/s
               Simplified List reduction: 270.8k/s
                          dict reduction: 195.8k/s
                           random sample: 16.2k/s
Quote:
You might like some of my examples because they show ways to make the function very brief - some people like this style, some don't.
I like being exposed to new things. I did not really think of using lambda functions, probably because I don't have enough experience with them for them to be on the list of things I think about.

Quote:
I have written several poker simulation programs over the years, and frankly, I wrote most of it in C++ and then wrote python (and other language) modules for it. This is an optimization you don't need to make until you are well on your way, though. Python is really great for iterating quickly, c++, not so much, and by adding c++ code you make your code more difficult to build and distribute. Avoid adding dependencies, until you have to.

I wrote a lot of my poker simulation stuff 10+ years ago, when computing resources were much less powerful.
At this point, I'm just goofing around with stuff. There's a thing in my head that I want to see/do/create, but I don't foresee myself having either the time or the patience to see it all the way through to the end. So I'm just going to saunter down the path and learn whatever I manage to learn along the way.
06-04-2017, 08:11 PM
I think I worked part of it out. The methods that use map() are returning a map object, which hasn't been converted into a list. Changing the map to a list cuts the speed in half.

I tried to do the same with method3, but it's giving me problems: the dictionary keys come back as a set-like view object (dict_keys), which doesn't support indexing directly in Python 3.

Code:
TypeError: 'dict_keys' object does not support indexing
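The workaround (my own sketch) is to materialize the view before indexing:
Code:
d = dict.fromkeys(range(52))
keys = d.keys()
# keys[0]          # TypeError: 'dict_keys' object does not support indexing
list(keys)[0]      # works: list() materializes the view first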
06-04-2017, 08:26 PM
When I force the Simplified List reduction approach to return a list to be consistent with the other techniques, random sample pulls ahead rather quickly.

So I guess that leaves me with an interesting question. My intention was to take the list of values and plug them into a function. So if I ended up with a list like dealt = [0, 1, 2] (say, for a 3-card poker game), I would then pass those values to some sort of evaluation function, e.g. as hand_eval(*dealt):

Code:
def hand_eval(card1, card2, card3):
    # STUFF HAPPENS
    return hand_ranking_information
If creating a map object is so much faster, would it be better to try to pass that along? If so, how? Or does that simply put off the expense until later when I need to get the values out of the map object?

Or... is the reason it doesn't suffer any slowdown as the number of cards increases that it hasn't actually evaluated anything yet? My knowledge of the inner workings of Python is very limited.
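(That suspicion is easy to test; in this sketch of mine, with hypothetical names, map() does no work until something consumes it, so the cost just moves to wherever the values are pulled out.)
Code:
import random

# Lazy: returns instantly, no cards drawn yet (collisions ignored for brevity)
dealt = map(lambda _: random.randint(0, 51), range(3))

def hand_eval(card1, card2, card3):
    return sorted((card1, card2, card3))   # placeholder evaluation

print(hand_eval(*dealt))   # * unpacking consumes the map; the draws happen here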

Code:
simulations = 10000
------------------------------------------------ 1
                               Collision: 222.9k/s
                          Dict Collision: 194.3k/s
                          List reduction: 116.7k/s
               Simplified List reduction: 276.7k/s
         list(Simplified List reduction): 108.4k/s
                           random sample: 84.2k/s
------------------------------------------------ 6
                               Collision: 42.4k/s
                          Dict Collision: 42.1k/s
                          List reduction: 29.6k/s
               Simplified List reduction: 265.9k/s
         list(Simplified List reduction): 34.9k/s
                           random sample: 41.0k/s
------------------------------------------------ 11
                               Collision: 21.8k/s
                          Dict Collision: 22.8k/s
                          List reduction: 16.5k/s
               Simplified List reduction: 256.2k/s
         list(Simplified List reduction): 20.2k/s
                           random sample: 26.6k/s
------------------------------------------------ 16
                               Collision: 14.0k/s
                          Dict Collision: 14.1k/s
                          List reduction: 11.1k/s
               Simplified List reduction: 263.2k/s
         list(Simplified List reduction): 14.3k/s
                           random sample: 22.3k/s
------------------------------------------------ 21
                               Collision: 9.9k/s
                          Dict Collision: 10.8k/s
                          List reduction: 9.3k/s
               Simplified List reduction: 269.4k/s
         list(Simplified List reduction): 11.0k/s
                           random sample: 18.8k/s
------------------------------------------------ 26
                               Collision: 7.3k/s
                          Dict Collision: 8.1k/s
                          List reduction: 7.7k/s
               Simplified List reduction: 264.2k/s
         list(Simplified List reduction): 9.1k/s
                           random sample: 15.2k/s
06-04-2017, 08:39 PM
Well, one thing to consider is that I didn't do much checking that the functions return reasonable results. But man, that does not look right. Simplified list reduction has to have a serious flaw (in Python 3) that is not evident.

Code:
simulations = 10000
--------------- 1
                               Collision: 679.9k/s
                          Dict Collision: 627.0k/s
                          List reduction: 350.3k/s
               Simplified List reduction: 338.1k/s
                          dict reduction: 365.5k/s
                           random sample: 434.3k/s
--------------- 11
                               Collision: 68.3k/s
                          Dict Collision: 72.6k/s
                          List reduction: 55.9k/s
               Simplified List reduction: 68.1k/s
                          dict reduction: 80.9k/s
                           random sample: 182.8k/s
--------------- 21
                               Collision: 31.7k/s
                          Dict Collision: 37.5k/s
                          List reduction: 31.8k/s
               Simplified List reduction: 38.0k/s
                          dict reduction: 44.6k/s
                           random sample: 119.2k/s
06-04-2017, 08:41 PM
I'm a few beers in; gonna have to fool with this, for Python 3, tomorrow.

ETA: but I feel like you've demonstrated neatly why people don't try to convert their code to Python 3.