- OpenAI
- San Francisco, CA
- liruiw.github.io
- @LiruiWang1
- in/lirui-wang
Stars
Learning Real-World Action-Video Dynamics with Heterogeneous Masked Autoregression
Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence
Heterogeneous Pre-trained Transformer (HPT) as a scalable policy learner.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Google Robot, WidowX+Bridge) (CoRL 2024)
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
Generative Models by Stability AI
Code Release for AdaptSim: Task-Driven Simulation Adaptation for Sim-to-Real Transfer
F3RM: Feature Fields for Robotic Manipulation. Official repo for the paper "Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation" (CoRL 2023).
Tool-use Robotic Benchmark built with Drake Simulation
[ICCV 2023] Tracking Anything with Decoupled Video Segmentation
PyTorch extensions for high performance and large scale training.
LLM-based CLI utility for creating simulation worlds.
Generating Robotic Simulation Tasks via Large Language Models
[CVPR 2023] BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects
Simple area to test out concepts or try to reproduce isolated issues.
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
An open source implementation of CLIP.
[RSS 2023] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion