Quote:
Originally Posted by ChazDazzle
Code:
import copy
b = copy.deepcopy(a)
This works for any generic or nested data structure.
Right, but it's really, really, really slow. On my machine it took an astonishing 8 seconds to deepcopy a list of 1M ints, compared to 0.14 seconds for b = a[:].
I don't know why it's THAT ridiculously slow, but Google shows it's not something specific to my machine or Python implementation.
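For anyone who wants to reproduce this, here's a quick timeit sketch (exact numbers will differ by machine; the ratio is the interesting part). Part of the answer to "why so slow" is in the comments:
Code:
import copy
import timeit

a = list(range(1000000))  # a flat list of 1M ints

# deepcopy recursively visits every element and maintains a memo dict
# to handle cycles and shared references -- that bookkeeping is the cost.
# A slice copy just allocates one new list and copies the references.
t_deep = timeit.timeit(lambda: copy.deepcopy(a), number=1)
t_slice = timeit.timeit(lambda: a[:], number=1)
print("deepcopy: %.2fs   slice copy: %.2fs" % (t_deep, t_slice))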
Random tangent:
This is a good reminder for Python users that Python's not built for performance, and that making things perform their best in Python often requires making them non-Pythonic. Even numpy and scipy can be amazingly inefficient at times.

For example, I was recently working with some code that (among a lot of other things) calculated the distance between a bunch of vectors (np arrays) using the p-Minkowski distance. I was using np.sum(np.abs(x - y)**p)**(1.0/p). I wasn't happy with the run-time, and since I'm trying to use more pre-written functions (I find that cleaner), I switched to scipy's function that does the exact same thing, scipy.spatial.distance.minkowski(x, y, p). It turned out that this SLOWED DOWN my distance calcs by a factor of 10. That's because scipy's function is essentially identical to mine, except that it first calls the very expensive private function _validate_vector, which essentially rebuilds the array to make sure it's a proper numpy array and does some other bookkeeping.
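If you want to see that gap on your own setup, here's a rough sketch of the comparison (x, y, and p below are made-up stand-ins, and the exact slowdown factor will depend on vector size and scipy version):
Code:
import timeit
import numpy as np
from scipy.spatial.distance import minkowski

x = np.random.rand(100)
y = np.random.rand(100)
p = 3

def my_minkowski(x, y, p):
    # hand-rolled p-Minkowski distance, no input validation
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

t_mine = timeit.timeit(lambda: my_minkowski(x, y, p), number=10000)
t_scipy = timeit.timeit(lambda: minkowski(x, y, p), number=10000)
print("hand-rolled: %.3fs   scipy: %.3fs" % (t_mine, t_scipy))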
Python is sllooooowwwwww. It gets even slower when you use built-in functions that are made to handle lots of special cases so that everything "just works". So, if you know your data, you can often do much better than a built-in function. Of course, that's not always true, since a lot of built-in functions are implemented in C.
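To make "know your data" concrete with the copying example from earlier in the thread: if you know a list is exactly two levels deep (say, a list of lists of ints), a comprehension of slice-copies gives you an independent copy without paying for deepcopy's fully general recursion (again a sketch; sizes and timings are illustrative):
Code:
import copy
import timeit

a = [[0] * 100 for _ in range(10000)]  # 10k rows of 100 ints

# deepcopy handles arbitrary nesting, cycles, and shared references;
# the comprehension only handles this exact shape -- and wins because of it.
t_deep = timeit.timeit(lambda: copy.deepcopy(a), number=1)
t_comp = timeit.timeit(lambda: [row[:] for row in a], number=1)
print("deepcopy: %.3fs   slice comprehension: %.3fs" % (t_deep, t_comp))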