
Experimental Setup For Large Language Model Video Generation

by @teleplay

The research pretrains on 2T tokens, fine-tunes the model for text-to-video tasks, and evaluates it zero-shot on benchmarks including MSR-VTT, UCF-101, and Kinetics 600.

Authors:

(1) Dan Kondratyuk, Google Research (equal contribution);

(2) Lijun Yu, Google Research and Carnegie Mellon University (equal contribution);

(3) Xiuye Gu, Google Research (equal contribution);

(4) Jose Lezama, Google Research (equal contribution);

(5) Jonathan Huang, Google Research (equal contribution);

(6) Grant Schindler, Google Research;

(7) Rachel Hornung, Google Research;

(8) Vighnesh Birodkar, Google Research;

(9) Jimmy Yan, Google Research;

(10) Krishna Somandepalli, Google Research;

(11) Hassan Akbari, Google Research;

(12) Yair Alon, Google Research;

(13) Yong Cheng, Google DeepMind;

(14) Josh Dillon, Google Research;

(15) Agrim Gupta, Google Research;

(16) Meera Hahn, Google Research;

(17) Anja Hauth, Google Research;

(18) David Hendon, Google Research;

(19) Alonso Martinez, Google Research;

(20) David ...


Copyright of this story belongs solely to hackernoon.com.