def get_pixel_colour(self, i_x, i_y):
    i_desktop_window_id = win32gui.GetDesktopWindow()
    i_desktop_window_dc = win32gui.GetWindowDC(i_desktop_window_id)
    long_colour = win32gui.GetPixel(i_desktop_window_dc, i_x, i_y)
    i_colour = int(long_colour)
    return (i_colour & 0xff), ((i_colour >> 8) & 0xff), ((i_colour >> 16) & 0xff)
This is basically what I have, though someone else wrote it.
I am aware of this approach, and I have written a version of my code using PIL; it works fine.
However, I am trying to improve the performance of my code. Is there a way to do this
differently? I am trying to avoid PIL and per-pixel calls like
color = GetPixel(hDC, p.x, p.y)
Is there an approach using "BitBlt"?
Three things you can do to improve performance:
If you are invoking your
get_pixel_colour function in a tight loop (e.g. once for every pixel), you are incurring redundant overhead by calling GetDesktopWindow and GetWindowDC for each pixel. Fetch the DC once per frame instead of once per pixel.
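As a sketch of that pattern (assuming pywin32 on Windows; grab_row_colours and colourref_to_rgb are hypothetical names), the desktop DC can be fetched once and reused across every GetPixel call:

```python
try:
    import win32gui  # pywin32; Windows-only
except ImportError:
    win32gui = None  # keeps the pure helper below importable elsewhere

def colourref_to_rgb(c):
    """Unpack a Win32 COLORREF (0x00BBGGRR) into an (R, G, B) tuple."""
    return (c & 0xFF, (c >> 8) & 0xFF, (c >> 16) & 0xFF)

def grab_row_colours(y, width):
    """Read one row of pixels, fetching the desktop DC only once."""
    hwnd = win32gui.GetDesktopWindow()
    hdc = win32gui.GetWindowDC(hwnd)      # one DC for the whole row
    try:
        return [colourref_to_rgb(win32gui.GetPixel(hdc, x, y))
                for x in range(width)]
    finally:
        win32gui.ReleaseDC(hwnd, hdc)     # always release what you fetch
```

Note that the original code never releases the DC it fetches; doing that once per pixel also leaks GDI handles over time.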
Further, the function call overhead itself can skew results. Python does little to optimize this away, and a function call per pixel is a killer; it is often better to inline code that calls
GetPixel directly instead of wrapping it in a function.
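A pure-Python illustration of that overhead (the helper names here are hypothetical): the two functions below compute the same result, but the first pays one call frame per pixel while the second inlines the unpacking expression.

```python
import timeit

def unpack(c):
    """Per-pixel helper: unpack a COLORREF-style int into (R, G, B)."""
    return (c & 0xFF, (c >> 8) & 0xFF, (c >> 16) & 0xFF)

def via_function(pixels):
    # One Python call frame per pixel.
    return [unpack(c) for c in pixels]

def inlined(pixels):
    # Same work with the expression inlined into the loop body.
    return [(c & 0xFF, (c >> 8) & 0xFF, (c >> 16) & 0xFF) for c in pixels]

pixels = list(range(100_000))
t_fn = timeit.timeit(lambda: via_function(pixels), number=10)
t_in = timeit.timeit(lambda: inlined(pixels), number=10)
# The inlined version typically wins because it skips 100k call frames per pass.
```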
Finally, GetPixel is likely a heavyweight call. A better approach is to blit the entire desktop into a memory buffer once per frame, then iterate over the RGB bytes in that buffer. Look at this answer: How to get a screen image into a memory buffer?
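A rough sketch of that blit-once approach, assuming pywin32 on Windows (grab_screen_bytes and pixel_at are hypothetical names, and the buffer layout assumed is the usual 32-bit BGRX):

```python
try:
    import win32con, win32gui, win32ui  # pywin32; Windows-only
except ImportError:
    win32con = win32gui = win32ui = None  # keeps pixel_at importable elsewhere

def grab_screen_bytes(width, height):
    """BitBlt the top-left width x height region of the desktop into a
    memory bitmap and return its raw bytes: one blit per frame instead of
    one GetPixel call per pixel."""
    hwnd = win32gui.GetDesktopWindow()
    hdc = win32gui.GetWindowDC(hwnd)
    src = win32ui.CreateDCFromHandle(hdc)
    mem = src.CreateCompatibleDC()
    bmp = win32ui.CreateBitmap()
    bmp.CreateCompatibleBitmap(src, width, height)
    mem.SelectObject(bmp)
    mem.BitBlt((0, 0), (width, height), src, (0, 0), win32con.SRCCOPY)
    data = bmp.GetBitmapBits(True)  # bytes; 32-bit BGRX on typical displays
    # Release GDI objects in reverse order of creation.
    win32gui.DeleteObject(bmp.GetHandle())
    mem.DeleteDC()
    src.DeleteDC()
    win32gui.ReleaseDC(hwnd, hdc)
    return data

def pixel_at(buf, x, y, width):
    """Index an (R, G, B) tuple out of a 32-bit BGRX byte buffer."""
    i = (y * width + x) * 4
    return (buf[i + 2], buf[i + 1], buf[i])
```

Once the frame is in memory, per-pixel reads are plain byte indexing, which is orders of magnitude cheaper than a GDI round trip per pixel.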