Ego-Exo4D is a diverse, large-scale, multi-modal, multi-view video dataset and benchmark collected across 13 cities worldwide by 740 camera wearers, capturing 1286.3 hours of video of skilled human activities.

SCENARIOS

  • 5035 takes
  • 740 participants
  • 123 sites
  • 1286.3 ego+exo hours

Scenario        # takes  # participants  # sites  Hours
Cooking             678             173       60  564.13h
Music               276              59        8  180.08h
Soccer              282              78       14   66.97h
Health              397             122       24  114.50h
Basketball          910             113        5   78.01h
Dance               728              93        7  106.57h
Bike Repair         363              32        8   82.15h
Rock Climbing      1401              98        2   93.90h

CHALLENGES

We are hosting competitions, with cash prizes, in collaboration with eval.ai. Please see the Documentation Site for more details.

BENCHMARKS

QUESTIONS / ANSWERS

Where can I find additional information?

For further details, refer to our manuscript:

K. Grauman et al. Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives. PDF | Supplementary PDF | arXiv paper

How can I download the dataset?

The dataset is now publicly available. Please refer to the Getting Started page on the Documentation Site for details on how to download it. To summarize, you will need to:

  1. Sign the Ego-Exo4D licenses; approval takes about two days.
  2. Download the data via the CLI downloader (see the docs for instructions).
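As an illustration, a typical download session might look like the following. The command name and flags (`egoexo`, `-o`, `--parts`) reflect the Documentation Site at the time of writing and may change, so treat this as a sketch and check the docs for the current interface.

```shell
# Install the downloader (shipped with the ego4d Python package).
pip install ego4d --upgrade

# Download metadata and annotations into ./ego-exo4d/.
# Requires the credentials issued after your license is approved.
egoexo -o ./ego-exo4d --parts metadata annotations
```

Full video downloads are large (hundreds of gigabytes and up), so starting with the metadata is a reasonable way to verify your access before pulling the complete dataset.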

I have already downloaded and signed Ego4D licenses before. Do I need to sign again?

Yes. Because new university partners have joined this effort, a new license must be signed.

Who collected this data?

The data was collected from 740 participants. Below we show the age and gender distribution of those participants who volunteered to self-identify their demographics.

Figure: Ego-Exo4D participant demographics

Does the data contain identifying information of individuals?

Yes. Because the scenarios were captured in closed environments (e.g., no passersby), nearly all video is released without de-identification. Refer to our paper for details of our privacy and data collection pipeline.

What coverage of scenarios do you have?

A sample visualisation of our scenarios is below.

Figure: Ego-Exo4D scenario coverage

What metadata, features and models are available with the dataset?

We will be releasing additional information with the dataset before the end of the year.

How does this dataset differ from Ego4D?

Ego4D collected unscripted daily-life activities. Here, instead, we focus specifically on skilled activities, ensuring that participants of diverse skill levels are recorded performing the same scenario, while preserving the natural variations that arise when a scenario is carried out in a new environment. We cover both procedural and physical skilled activities. While some of these general scenarios appear in Ego4D (there are indeed examples of cooking, climbing, or soccer), those recordings are disjoint and do not capture the same scenario, making them unsuitable for tasks such as keystep recognition or skill assessment.

Importantly, Ego-Exo4D is a multi-view dataset with a synchronized egocentric view and multiple exocentric views. It is the largest and most diverse effort for multi-view ego-exo data to date.

The egocentric views are captured using Project Aria devices.

Are there challenges and benchmarks associated with this dataset?

Yes! Please refer to the Challenges page on the Documentation Site.

EGO-EXO4D Team

Logo of Carnegie Mellon University

Pittsburgh, U.S.

  • Kris Kitani (PI)
  • Gene Byrne
  • Sean Crane
  • Rawal Khirodkar
  • Zhengyi Luo
Logo of CMU Africa

Rwanda, Africa

  • Abrham Gebreselasie
Logo of King Abdullah University of Science and Technology

Kingdom of Saudi Arabia

  • Bernard Ghanem (PI)
  • Chen Zhao
  • Merey Ramazanova
Logo of University of Minnesota

Minneapolis, U.S.

  • Hyun Soo Park (PI)
  • Zach Chavis
  • Anush Kumar
Logo of IIIT Hyderabad

Hyderabad, India

  • C. V. Jawahar (PI)
  • Avijit Dasgupta
Logo of Indiana University

Bloomington, U.S.

  • David Crandall (PI)
  • Yuchen Wang
  • Weslie Khoo
  • Ziwei Zhao
Logo of University of North Carolina at Chapel Hill

U.S.

  • Gedas Bertasius (PI)
  • Md Mohaiminul Islam
  • Oluwatumininu Oguntola
  • Wei Shan
  • Jeff Zhuo
Logo of UT Austin

U.S.

  • Santhosh Kumar Ramakrishnan
  • Arjun Somayazulu
  • Changan Chen
  • Romy Luo
Logo of University of Catania

Italy

  • Giovanni Maria Farinella (PI)
  • Antonino Furnari
  • Francesco Ragusa
  • Luigi Seminara
Logo of University of Tokyo

Japan

  • Yoichi Sato (PI)
  • Ryosuke Furuta
  • Yifei Huang
  • Masatoshi Tateno
  • Takuma Yagi
  • Zecheng Yu
Logo of University of Bristol

UK

  • Dima Damen (PI)
  • Michael Wray
  • Siddhant Bansal
  • Zhifan Zhu
Logo of National University of Singapore

Singapore

  • Mike Zheng Shou (PI)
  • Joya Chen
  • Jia-Wei Liu
  • Xinzhu Fu
  • Chenan Song
Logo of Georgia Institute of Technology

U.S.

  • James Rehg (PI)
  • Fiona Ryan
  • Audrey Southerland
  • Judy Hoffman
Logo of University of Pennsylvania

U.S.

  • Jianbo Shi (PI)
  • Shan Su
  • Edward Zhang
  • Jinxu Zhang
  • Yiming Huang
Logo of UIUC

U.S.

  • Bikram Boote
Logo of Universidad de los Andes

Bogotá, Colombia

  • Pablo Arbelaez (PI)
  • Maria Escobar
  • Cristhian Forigua
  • Angela Castillo
  • Cristina Gonzalez
Logo of Simon Fraser University

Canada

  • Manolis Savva (PI)
  • Sanjay Haresh
  • Yongsen Mao
Logo of FAIR

International

FAIR

  • Kristen Grauman (PI)
  • Andrew Westbury
  • Lorenzo Torresani
  • Kris Kitani (PI)
  • Jitendra Malik
  • Triantafyllos Afouras
  • Ashutosh Kumar
  • Feng Cheng
  • Fu-Jen Chu
  • Jing Huang
  • Suyog Jain
  • Devansh Kukreja
  • Kevin Liang
  • Sagnik Majumder
  • Miguel Martin
  • Effrosyni Mavroudi
  • Tushar Nagarajan
  • Yale Song
  • Sherry Xue
  • Miao Liu
  • Brighid Meredith
  • Austin Miller
  • Huiyu Wang
  • Xitong Yang
  • Shengxin Cindy Zha

Aria

  • Vijay Baiyya
  • Jing Dong
  • Prince Gupta
  • Sach Lakhavani
  • Xiaqing Pan
  • Kiran Somasundaram
  • Mingfei Yan
  • Jakob Engel
  • Richard Newcombe

HALO

  • Jiabo Hu
  • Robert Kuo
  • Penny Peng

Research Intern

  • Shraman Pramanick

DOWNLOAD
EGO-EXO4D

The dataset was released on 13 December 2023 and is now publicly available.

License forms must be signed to access and download the dataset.
Sign Ego-Exo4D Licenses ↗
Refer to "Getting Started" on the Documentation Site
Getting Started ↗

CONTACT
EGO-EXO4D

Email us at: info@ego4d-data.org