Replicate ai generated animation back in pybullet.
Posted: Sat Jan 28, 2023 9:25 am
Wassup fellow programmers. I have been working on a project for a while now and decided this is the right time to get some advice, help, or anything else. Long story short: I created and have been training my own AI for a specific movement simulation, recording that movement to a JSON file. This runs through generations and generations to produce better results, and it did. But working with PyBullet taught me that no matter what time parameters I give it, replaying the recorded joint info on its own just runs through the whole animation in about a second. So I recorded timestamps too, but Python is pretty darn slow at hitting the exact time to execute each movement at the exact moment it was recorded, and rounding the times was a disaster: the body would flinch and stop every split second. I tried linear interpolation, setting the time step, and numerous combinations for calculating the time, and still nothing worked. I'm asking you guys because I'm sure this task has been done at least once by one of you, and I want to learn from you: what is your sauce?
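For context, the linear interpolation I mentioned boils down to something like this (a simplified sketch with made-up keyframe data; `interpolate_joints` and the `(time, angles)` pair layout are just how I'm illustrating it here, not my actual code):

```python
import bisect

def interpolate_joints(keyframes, t):
    """Linearly interpolate joint angles at time t.

    keyframes: sorted list of (time, {joint: angle}) pairs.
    Returns a dict of interpolated joint angles.
    """
    times = [kt for kt, _ in keyframes]
    # Clamp to the recording's time range
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    # Find the two keyframes bracketing t
    i = bisect.bisect_right(times, t)
    t0, a0 = keyframes[i - 1]
    t1, a1 = keyframes[i]
    alpha = (t - t0) / (t1 - t0)
    # Blend each joint angle between the two keyframes
    return {j: a0[j] + alpha * (a1[j] - a0[j]) for j in a0}

# Tiny example with two fake keyframes for joint "0"
frames = [(0.0, {"0": 0.0}), (1.0, {"0": 1.0})]
print(interpolate_joints(frames, 0.25))  # → {'0': 0.25}
```

Even with smooth targets like this, the problem of feeding them to the simulation at the right wall-clock moments remains.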
I know you might not be able to fully replicate what I'm doing, given the sheer amount of data you would need, so rather than this being solved like some Stack Overflow problem, I'd just like to hear about your experience doing something similar (Python scripts are very appreciated). The model is an XSensStyleModel generated from Xsens motion capture, which I took from GitHub: https://github.com/robotology/human-gazebo. It has 50 joints; some are a bit glitchy on their own, but they never really glitched in my simulations, I guess mostly because every movement was designed not to exceed the joint limits.
To give you more of a look at what I'm doing, here's a single key out of the JSON file. The full file covers all 50 joints per key, and such keys go on into the hundreds.
Code: Select all
{
    "1.2839901447296143": {
        "0": {"joint_positions": -0.2660407240964722},
        "1": {"joint_positions": 0.5304539359064856},
        "2": {"joint_positions": -0.44852838864481787},
        "3": {"joint_positions": -0.14016010671763662},
        "4": {"joint_positions": 0.5558784919638431},
        "5": {"joint_positions": 0.463813963178519},
        "6": {"joint_positions": 0.1232185850913412},
        "7": {"joint_positions": 0.5509734074329725},
        "8": {"joint_positions": -0.5095558255751145},
        "9": {"joint_positions": 0.08173290868566063},
        "10": {"joint_positions": 0.3949344400183078}
    }
}
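Parsing a file in that layout into sorted keyframes looks roughly like this (a sketch with shortened sample values; the `keyframes` list of `(time, angles)` pairs is just how I illustrate the structure here):

```python
import json

# Parse the recording into a sorted list of (time, {joint_index: angle}) pairs.
# Assumes the same layout as the sample above, with shortened values.
raw = json.loads('''{"1.2839901447296143": {
    "0": {"joint_positions": -0.266},
    "1": {"joint_positions": 0.530}}}''')

keyframes = sorted(
    (float(t), {int(j): d["joint_positions"] for j, d in joints.items()})
    for t, joints in raw.items()
)
print(keyframes[0][0])  # → 1.2839901447296143
print(keyframes[0][1])  # → {0: -0.266, 1: 0.53}
```

Sorting by the float timestamps matters, since JSON object keys carry no ordering guarantee of their own.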
The script that tries to replicate this is very simple
Code: Select all
import pybullet as p
import time
import json

# Connect to the PyBullet physics server
p.connect(p.GUI)
p.setRealTimeSimulation(0)

# Load the humanoid model
skeleton = p.loadURDF("skeleton.urdf", useFixedBase=False)
p.setTimeStep(0.01)

# Add a ground plane
plane_id = p.createCollisionShape(p.GEOM_PLANE)
ground = p.createMultiBody(baseCollisionShapeIndex=plane_id, basePosition=[0, 0, 0])

# Enable collisions between the humanoid model and the ground plane
p.setCollisionFilterPair(skeleton, ground, -1, -1, 1)

# Set the gravity
p.setGravity(0, 0, -9.8)

# Load the joint angles from the JSON file
with open("joint_angles.json", "r") as file:
    joint_angles = json.load(file)

# Lift the model so it starts above the ground plane
offset = [0, 0, 1]
position, orientation = p.getBasePositionAndOrientation(skeleton)
new_position = [position[i] + offset[i] for i in range(3)]
p.resetBasePositionAndOrientation(skeleton, new_position, orientation)

# Get the start time
start_time = time.time()

# Iterate through the keyframes and set the joint angles
for keyframe_time in joint_angles:
    # Sleep until this keyframe's timestamp comes up
    time_to_sleep = float(keyframe_time) - (time.time() - start_time)
    if time_to_sleep > 0:
        time.sleep(time_to_sleep)
    # Set the target joint angles for this keyframe
    for joint_num, joint_data in joint_angles[keyframe_time].items():
        p.setJointMotorControl2(bodyIndex=skeleton,
                                jointIndex=int(joint_num),
                                controlMode=p.POSITION_CONTROL,
                                targetPosition=joint_data['joint_positions'])
    # Step the simulation
    p.stepSimulation()
    print(time.time() - start_time)

p.disconnect()
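To make the timing mismatch concrete: with a fixed time step, each keyframe interval corresponds to a whole number of stepSimulation calls, and one of the combinations I tried calculated that count roughly like this (a sketch; `dt` and `steps_for_interval` are just for illustration, with `dt` assumed to match the `setTimeStep` value in the script above):

```python
# Sketch: advance the physics a whole interval per keyframe instead of sleeping.
# With a fixed time step dt, each keyframe interval maps to roughly
# round(interval / dt) calls to p.stepSimulation().
dt = 0.01  # assumed to match p.setTimeStep(dt)

def steps_for_interval(t_prev, t_next, dt=dt):
    """Number of fixed-size physics steps covering [t_prev, t_next]."""
    return max(1, round((t_next - t_prev) / dt))

print(steps_for_interval(1.28, 1.33))  # → 5 steps for a 50 ms gap
```

Calling stepSimulation that many times per keyframe keeps simulated time in sync with the recording, but it still didn't line up with wall-clock time for me.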
TLDR:
Share your experience replicating animations, preferably in Python (C++ scares me).