Iterating through pixels in an image is terribly slow with python (OpenCV)
I know how to iterate through pixels and access their values using OpenCV from C++. Now I am trying to learn Python, and I tried to do the same in Python. But when I run the following code, it takes a long time (~7-10 seconds) to display the image, and the script keeps running for a few seconds even after the image is displayed.
I found a similar question here on SO, but I can't figure out how to use numpy in my case (because I'm new to Python). Is it really required?
Explanation of code: I am just trying to put black pixels on the left and right side of the image.
import numpy as np
import cv2 as cv

# reading an image
img = cv.imread('image.jpg')
height, width, depth = img.shape

# black out the left quarter
for i in range(0, height):
    for j in range(0, width // 4):
        img[i, j] = [0, 0, 0]

# black out the right quarter
for i in range(0, height):
    for j in range(3 * (width // 4), width):
        img[i, j] = [0, 0, 0]

cv.imshow('image', img)
cv.waitKey(0)

(Note: `width / 4` was changed to `width // 4`, since `range` needs an integer in Python 3.)
(Note: I'm not familiar with opencv, but this is a numpy problem.)

The "terribly slow" part is that you are looping in Python bytecode instead of letting numpy loop at C speed.
Try directly assigning a (3-D) slice that masks the area you want to zero out.
import numpy as np

example = np.ones([500, 500, 500], dtype=np.uint8)

def slow():
    img = example.copy()
    height, width, depth = img.shape
    for i in range(0, height):             # looping at python speed...
        for j in range(0, width // 4):     # ...
            for k in range(0, depth):      # ...
                img[i, j, k] = 0
    return img

def fast():
    img = example.copy()
    height, width, depth = img.shape
    img[0:height, 0:width // 4, 0:depth] = 0  # DO THIS INSTEAD
    return img
np.alltrue(slow() == fast())
Out[22]: True
%timeit slow()
1 loops, best of 3: 6.13 s per loop
%timeit fast()
10 loops, best of 3: 40 ms per loop
The above only zeros the left side; doing the same for the right side is left as an exercise for the reader.
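For completeness, here is one possible sketch of both sides with slice assignment, applied to a small dummy array (the shape and the quarter-width split are assumptions matching the question's loops); a negative slice start counts from the right edge, which saves computing `3 * (width // 4)` by hand:

```python
import numpy as np

# small stand-in for the image from the question
img = np.ones((6, 8, 3), dtype=np.uint8)
height, width, depth = img.shape

img[:, :width // 4] = 0        # left quarter: all rows, all channels
img[:, -(width // 4):] = 0     # right quarter, counted from the end
```

Note that numpy broadcasts the scalar `0` over the whole sliced region, so there is no need to write out `[0, 0, 0]` per pixel.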
If the numpy slicing syntax is unfamiliar, I suggest reading the indexing documentation.
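If it helps, a minimal illustration of slice assignment (a toy array of my own, not from the question):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)  # [[0,1,2,3], [4,5,6,7], [8,9,10,11]]
a[:, :2] = 0                     # zero the first two columns of every row
# first row is now [0, 0, 2, 3]
```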