
Intrinsic Matrix for simulated camera

Posted: Mon Jan 14, 2019 7:03 pm
by timeforscience
Hello all! I'm currently trying to do some computer vision tests using the pybullet simulated camera. I can get the image fine, but what I really need for my development is a mapping from world space to pixel space. I don't fully understand the view matrix and projection matrix, however. I'm used to an intrinsic matrix where pixel = intrinsic_matrix * transformation_matrix * point_vector. Could someone guide me toward building a matrix that maps from world space to pixel space, given that I have all the other necessary parameters?

I've tried pixel = projection_matrix * view_matrix * point_vector, but that didn't seem to work. Any guidance would be greatly appreciated.

Re: Intrinsic Matrix for simulated camera

Posted: Wed Sep 25, 2019 12:52 am
by shantythaks
@timeforscience, have you figured out a way to find the matrix to map from world space to pixels? If yes can you please share.

Thanks!

Re: Intrinsic Matrix for simulated camera

Posted: Thu Oct 10, 2019 2:00 am
by hyyou
I studied coordinate conversion

Code:

 model coordinate --> world coordinate --> view coordinate --> projected coordinate.

... from here: https://solarianprogrammer.com/2013/05/ ... iew-model/

Once you have the projected (clip-space) coordinate, divide by its w component to get normalized device coordinates in [-1, 1], then scale and shift by the screen resolution to get the pixel coordinate. The perspective divide is the step that makes pixel = projection * view * point alone not work.
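A rough sketch of that full world --> pixel pipeline in plain NumPy, building a look-at view matrix and an OpenGL-style perspective projection by hand (the camera parameters here are just illustrative, not from anyone's actual setup; if you use pybullet's computeViewMatrix / computeProjectionMatrixFOV, note they return the 16 values in column-major order, so reshape with order='F'):

```python
import numpy as np

def look_at(eye, target, up):
    """Right-handed look-at view matrix (OpenGL/pybullet convention)."""
    f = target - eye
    f /= np.linalg.norm(f)              # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)              # right
    u = np.cross(s, f)                  # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye   # translate world origin into view space
    return view

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix."""
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = t / aspect
    proj[1, 1] = t
    proj[2, 2] = (far + near) / (near - far)
    proj[2, 3] = 2.0 * far * near / (near - far)
    proj[3, 2] = -1.0
    return proj

def world_to_pixel(point, view, proj, width, height):
    """Map a 3D world point to pixel coordinates (origin at top-left)."""
    clip = proj @ view @ np.append(point, 1.0)   # homogeneous clip coords
    ndc = clip[:3] / clip[3]                     # perspective divide -> [-1, 1]
    px = (ndc[0] * 0.5 + 0.5) * width
    py = (1.0 - (ndc[1] * 0.5 + 0.5)) * height   # flip y: NDC y is up, pixel y is down
    return px, py

width, height = 640, 480
view = look_at(eye=np.array([0.0, 0.0, 3.0]),
               target=np.zeros(3),
               up=np.array([0.0, 1.0, 0.0]))
proj = perspective(60.0, width / height, 0.1, 100.0)

# A point at the camera target should land at the image center.
print(world_to_pixel(np.array([0.0, 0.0, 0.0]), view, proj, width, height))
```

This is the same math the rasterizer applies internally; the last two lines of world_to_pixel are the "viewport transform" that turns NDC into pixels.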

Re: Intrinsic Matrix for simulated camera

Posted: Mon Mar 29, 2021 12:09 pm
by aaron
Hi, I am trying to figure this out as well. It seems to be the same as the OpenGL transformation pipeline: https://learnopengl.com/Getting-started/Transformations. There should be four matrices, but pybullet only takes in projection and view. Does that mean the rest are all identity?