## Intrinsic Matrix for simulated camera

timeforscience
Posts: 3
Joined: Sat Jan 12, 2019 2:11 am

### Intrinsic Matrix for simulated camera

Hello all! I'm currently trying to do some computer vision tests using the pybullet simulated camera. I can get the image fine, but what I really need for my development is a mapping from world space to pixel space. However, I don't fully understand the view matrix and projection matrix. I'm used to an intrinsic matrix, where pixel = intrinsic_matrix * transformation_matrix * point_vector. Could someone guide me toward building a matrix that maps from world space to pixel space, given that I have all the other necessary parameters?

I've tried pixel = projection_matrix * view_matrix * point_vector, but that didn't seem to work. Any guidance would be greatly appreciated.
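For reference, this is the pinhole model I mean (a minimal numpy sketch; the `K`, `R`, and `t` values below are made up purely for illustration, not taken from pybullet):

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: world -> camera rotation R and translation t.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])  # camera 2 m from the origin along its axis

point_world = np.array([0.1, 0.0, 0.0])  # a world-space point
p = K @ (R @ point_world + t)            # homogeneous pixel coordinates
u, v = p[0] / p[2], p[1] / p[2]          # divide by depth to get pixels
```

With these made-up numbers the point lands slightly right of the image center, since it sits 0.1 m off the camera axis.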
shantythaks
Posts: 1
Joined: Wed Sep 25, 2019 12:50 am

### Re: Intrinsic Matrix for simulated camera

@timeforscience, have you figured out a way to find the matrix to map from world space to pixels? If yes can you please share.

Thanks!
hyyou
Posts: 96
Joined: Wed Mar 16, 2016 10:11 am

### Re: Intrinsic Matrix for simulated camera

I studied coordinate conversion:

``model coordinate --> world coordinate --> view coordinate --> projected coordinate``
... from here: https://solarianprogrammer.com/2013/05/ ... iew-model/

Once you have projected (clip) coordinates, divide by the w component to get normalized device coordinates in [-1, 1], then scale by the screen resolution to get pixel coordinates.
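Putting that pipeline into code, here is a numpy-only sketch. It assumes pybullet's view and projection matrices follow the standard OpenGL conventions; the `look_at` and `perspective` helpers below are stand-ins for what `pybullet.computeViewMatrix` and `computeProjectionMatrixFOV` are believed to produce, not the actual bindings:

```python
import numpy as np

def look_at(eye, target, up):
    # OpenGL-style view matrix (world -> camera), gluLookAt formula.
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = s, u, -f
    V[:3, 3] = -V[:3, :3] @ eye
    return V

def perspective(fov_deg, aspect, near, far):
    # OpenGL-style projection matrix, gluPerspective formula.
    t = 1.0 / np.tan(np.radians(fov_deg) / 2)
    P = np.zeros((4, 4))
    P[0, 0] = t / aspect
    P[1, 1] = t
    P[2, 2] = (far + near) / (near - far)
    P[2, 3] = 2 * far * near / (near - far)
    P[3, 2] = -1.0
    return P

def world_to_pixel(point, V, P, width, height):
    clip = P @ V @ np.append(point, 1.0)  # clip coordinates
    ndc = clip[:3] / clip[3]              # perspective divide -> [-1, 1]
    # Viewport transform; flip y because image rows grow downward.
    u = (ndc[0] + 1) / 2 * width
    v = (1 - ndc[1]) / 2 * height
    return u, v

# Usage: a point on the camera axis should land at the image center.
V = look_at(np.array([0.0, 0.0, 2.0]), np.zeros(3), np.array([0.0, 1.0, 0.0]))
P = perspective(60.0, 640 / 480, 0.1, 10.0)
u, v = world_to_pixel(np.array([0.0, 0.0, 0.0]), V, P, 640, 480)
```

Whether the y-flip is needed depends on how you index the image buffer pybullet returns, so check that against a known point in your scene.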
aaron
Posts: 1
Joined: Mon Mar 29, 2021 12:04 pm

### Re: Intrinsic Matrix for simulated camera

Hi, I am trying to figure this out as well. It seems to be the same as the OpenGL transformations: https://learnopengl.com/Getting-started/Transformations. There should be four matrices, but pybullet only takes in projection and view. Does that mean the rest are all identities?
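To make the question concrete, the viewport step can itself be written as a fourth matrix, applied after the perspective divide. This sketch assumes NDC in [-1, 1] and a top-left pixel origin, which I have not confirmed against pybullet:

```python
import numpy as np

def viewport_matrix(width, height):
    # Maps NDC in [-1, 1] to pixel coordinates, flipping y so that row 0
    # is the top of the image (assumption about pybullet's image layout).
    return np.array([[width / 2,  0.0,        0.0, width / 2],
                     [0.0,       -height / 2, 0.0, height / 2],
                     [0.0,        0.0,        0.5, 0.5],
                     [0.0,        0.0,        0.0, 1.0]])

# The full chain would then be: clip = projection @ view @ model @ point,
# then divide clip by its w component, then apply the viewport matrix.
W = viewport_matrix(640, 480)
px = W @ np.array([0.0, 0.0, 0.0, 1.0])  # the NDC origin -> image center
```

If the model matrix is the identity (points already in world space), only view, projection, and this viewport step remain, which may be why pybullet asks for just the first two.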