There are two different schemes for dealing cards in Python that I've tried (code included below):
1) Dealing and looking for collisions
2) Creating a deck and reducing it as you go
With the first method, I know the probability of a collision rises as more cards are dealt (once k cards are out, each new card takes on average 52/(52-k) draws), so I expect the time per card to keep increasing as the number of cards dealt grows. With the second method, I expect roughly linear time growth per card.
Here's the result (small samples because my computer is old and slow, but it's good enough to get an estimate):
Output:
simulations = 10000
Collisions:
1 card(s): 0.04687333106994629
2 card(s): 0.0937509536743164
3 card(s): 0.12500262260437012
4 card(s): 0.17187166213989258
5 card(s): 0.2187519073486328
6 card(s): 0.2499990463256836
7 card(s): 0.3281261920928955
8 card(s): 0.3593747615814209
9 card(s): 0.3906266689300537
10 card(s): 0.46875
11 card(s): 0.4843757152557373
12 card(s): 0.5625007152557373
13 card(s): 0.6250011920928955
14 card(s): 0.6744186878204346
15 card(s): 0.7187488079071045
16 card(s): 0.7812511920928955
17 card(s): 0.8593766689300537
18 card(s): 0.9062516689300537
19 card(s): 0.9843764305114746
20 card(s): 1.0312485694885254
21 card(s): 1.1562550067901611
22 card(s): 1.171877145767212
23 card(s): 1.2656269073486328
24 card(s): 1.343752384185791
25 card(s): 1.421877145767212
26 card(s): 1.5044364929199219
List Reduction:
1 card(s): 0.12499833106994629
2 card(s): 0.1875014305114746
3 card(s): 0.2343747615814209
4 card(s): 0.3125002384185791
5 card(s): 0.3437507152557373
6 card(s): 0.3885951042175293
7 card(s): 0.4687509536743164
8 card(s): 0.5099546909332275
9 card(s): 0.5625009536743164
10 card(s): 0.6093757152557373
11 card(s): 0.6718761920928955
12 card(s): 0.7343728542327881
13 card(s): 0.7812545299530029
14 card(s): 0.8125028610229492
15 card(s): 0.8759596347808838
16 card(s): 0.9501519203186035
17 card(s): 0.9930555820465088
18 card(s): 1.047055959701538
19 card(s): 1.0980629920959473
20 card(s): 1.1530654430389404
21 card(s): 1.1959989070892334
22 card(s): 1.2500026226043701
23 card(s): 1.2812516689300537
24 card(s): 1.343752145767212
25 card(s): 1.3906352519989014
26 card(s): 1.4531207084655762
So it looks like the crossover comes at roughly half the deck: beyond that point, modeling the entire deck is more efficient than dealing individual cards and checking for collisions. But is there something else I should try? I could shuffle the whole deck every time, but I suspect that would be even less efficient all the way around.
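For what it's worth, here's roughly what I mean by the shuffle idea, plus a random.sample variant that does the without-replacement bookkeeping internally (the function names are just mine, and I haven't timed these yet):

```python
import random

def deal_by_shuffle(num_cards, deck_size=52):
    # Shuffle a fresh deck in place, then take the top num_cards cards.
    deck = list(range(deck_size))
    random.shuffle(deck)
    return deck[:num_cards]

def deal_by_sample(num_cards, deck_size=52):
    # random.sample draws without replacement, so there is no
    # explicit collision checking or list reduction to do.
    return random.sample(range(deck_size), num_cards)
```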
Also, are there ways to improve the efficiency of my code?
Code:
import time
import random

max_cards = 26 + 1
simulations = 10000
print('simulations = {}'.format(simulations))

# Dealing speed test
print('Collisions:')
for num_cards in range(1, max_cards):
    start_time = time.time()
    for i in range(simulations):
        dealt_cards = []
        dealt_cards.append(random.randint(0, 51))
        for i in range(num_cards - 1):
            card = random.randint(0, 51)
            while card in dealt_cards:
                card = random.randint(0, 51)
            dealt_cards.append(card)
    elapsed_time = time.time() - start_time
    print('{} card(s): {}'.format(num_cards, elapsed_time))

print('List Reduction:')
for num_cards in range(1, max_cards):
    start_time = time.time()
    for i in range(simulations):
        deck = [i for i in range(52)]
        dealt_cards = []
        for i in range(num_cards):
            card = random.randint(0, len(deck) - 1)
            dealt_cards.append(deck[card])
            deck.remove(deck[card])
    elapsed_time = time.time() - start_time
    print('{} card(s): {}'.format(num_cards, elapsed_time))
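One tweak I've been wondering about for the collision version: keeping the dealt cards in a set, so the `card in dealt_cards` test doesn't have to scan a list. A sketch (note it loses the order in which the cards were dealt):

```python
import random

def deal_with_set(num_cards, deck_size=52):
    # Same rejection scheme as the collision version above, but
    # set membership is O(1) instead of an O(n) list scan.
    dealt = set()
    while len(dealt) < num_cards:
        dealt.add(random.randint(0, deck_size - 1))
    # Converting to a list discards the deal order.
    return list(dealt)
```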