Problem description:
When you add a body, remove it, and then add a new body, the body id may be
reused by Bullet. Because the visual shape data was not removed when a body was
removed, requesting the visual shape data for a 'recycled' body id returns both
the visual shape data of the new body and that of the old one.
Removing the visual shape data when the body is removed fixes this.
Changes the btAlignedObjectArray for visual shapes to a hashmap, so
that removing is faster. Additionally, functions like getNumVisualShape
don't perform a linear search anymore.
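A minimal pybullet sketch of the problem scenario (assuming pybullet and the bundled pybullet_data assets are installed; the URDF names are just examples):

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

# Add a body, remove it, then add a different body; Bullet may hand the
# new body the old body's unique id.
first = p.loadURDF("cube.urdf")
p.removeBody(first)
second = p.loadURDF("sphere2.urdf")

# Before the fix, a recycled id could return the visual shapes of both the
# removed cube and the new sphere; now only the sphere's data is returned.
print(p.getVisualShapeData(second))

p.disconnect()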
Adds a GRPC client and a PyBullet GRPC server plugin.
These will cover most/all SharedMemoryCommand/SharedMemoryStatus messages.
Run the server, then test using pybullet_client.py.
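A rough sketch of exercising the server from Python; the GRPC connection mode, host, and port below are assumptions, not confirmed by this change:

import pybullet as p

# Host and port are placeholders; use whatever the GRPC server was started with.
cid = p.connect(p.GRPC, "localhost", 12345)
if cid < 0:
    raise RuntimeError("could not connect to the GRPC server")

# Each pybullet call is forwarded as a SharedMemoryCommand message and the
# reply comes back as a SharedMemoryStatus message.
p.stepSimulation()
print(p.getNumBodies())
p.disconnect()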
Allowing the solver to terminate early based on the least-squares residual avoids issues with systems with large mass ratios.
Test: add this to BasicDemo/BasicExample.cpp in initPhysics:
m_dynamicsWorld->getSolverInfo().m_numIterations = 1000;
m_dynamicsWorld->getSolverInfo().m_leastSquaresResidualThreshold = 1e-4;
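The same settings can be tried from pybullet; a minimal sketch, assuming the residual threshold is exposed via the solverResidualThreshold parameter of setPhysicsEngineParameter:

import pybullet as p

p.connect(p.DIRECT)
# Mirror the BasicExample test: many iterations, but let the solver exit early
# once the least-squares residual drops below the threshold.
p.setPhysicsEngineParameter(numSolverIterations=1000, solverResidualThreshold=1e-4)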
This removes the need to specify the body id/link index when retrieving a user data entry.
Additionally, user data can now optionally be attached to visual shapes as well.
The following public pybullet APIs have changed (backwards incompatible); see the usage sketch after this list.
- addUserData and getUserDataId
  - Makes the linkIndex parameter optional (default value is -1)
  - Adds an optional visualShapeIndex parameter (default value is -1)
- getUserData and removeUserData
  - Removes the required bodyUniqueId and linkIndex parameters
- getNumUserData
  - Removes the required bodyUniqueId parameter
- getUserDataInfo
  - Removes the required linkIndex parameter
  - Changes the returned tuple from (userDataId, key) to (userDataId, key, bodyUniqueId, linkIndex, visualShapeIndex)
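A usage sketch of the updated calls (pybullet and pybullet_data assumed installed; r2d2.urdf is just an example asset, and exact return types may differ by version):

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
body = p.loadURDF("r2d2.urdf")

# linkIndex / visualShapeIndex default to -1, so base-level user data no
# longer needs them spelled out.
uid = p.addUserData(body, "MyKey", "MyValue")

# Look-ups and removal now go through the user data id alone.
print(p.getUserData(uid))
print(p.getUserDataId(body, "MyKey"))

# The info tuple now also carries the owner:
# (userDataId, key, bodyUniqueId, linkIndex, visualShapeIndex)
print(p.getUserDataInfo(body, 0))

p.removeUserData(uid)
p.disconnect()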
- Limits the maximum number of threads to 64, since btThreadSupportPosix
and btThreadSupportWin32 don't support more than 64 threads at the moment,
due to the use of UINT64 bitmasks. This could be fixed by using
std::bitset or some other alternative.
- Introduces a threadpool class, b3ThreadPool, a simple wrapper around
btThreadSupportInterface, and uses it instead of the global task scheduler
for parallel raycasting. This is quite a bit faster than the task scheduler
(~10-15% in my tests for parallel raycasts), since the advanced features
(parallelFor) are not needed for parallel raycasts.
- Puts the 16*1024 in the MAX_RAY_INTERSECTION_BATCH_SIZE_STREAMING definition
in parentheses, since without them it causes problems with other operators of
equal precedence, and introduces a smaller constant for Apple targets.
- Refactors the parallel raycasts code and adds some more profiling.
Use the b3RaycastBatchAddRays API to allow up to MAX_RAY_INTERSECTION_BATCH_SIZE_STREAMING rays per batch.
The old API (b3RaycastBatchAddRay) remains limited to 256 rays (MAX_RAY_INTERSECTION_BATCH_SIZE).
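From pybullet, rayTestBatch uses the batched path when whole arrays of rays are passed in; a sketch (plane.urdf from pybullet_data, and the numThreads semantics, are assumptions):

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")

# Build a batch larger than the old 256-ray limit; the streaming constant
# allows up to 16*1024 rays per call (less on Apple targets).
n = 1024
ray_from = [[i * 0.01, 0, 1] for i in range(n)]
ray_to = [[i * 0.01, 0, -1] for i in range(n)]

# numThreads asks for parallel raycasting via the thread pool; see the
# pybullet docs for the exact meaning of the value.
results = p.rayTestBatch(ray_from, ray_to, numThreads=0)
print(len(results), results[0][0])  # hit object unique id of the first ray

p.disconnect()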