Featured tweet threads
- A self-contained Colab notebook for playing with CSCG. No install required!
Hippocampus aficionados, here is a fun Colab notebook to play with cognitive maps from our sequence-learning model! Learn "space" from sequences without any Euclidean assumptions, visualize graphs & place fields ... all in less than 5 minutes! https://t.co/5dln0Q7e69 https://t.co/FeU9L8DA9O pic.twitter.com/KuAW8uRNPW
— Dileep George (@dileeplearning) March 2, 2022
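The notebook above is the authoritative reference, but the core premise can be sketched in a few lines: purely from a sequence of observations produced by a random walk, the adjacency structure of a room can be recovered with no Euclidean assumptions. The toy below is an illustration of that premise only, not the CSCG model itself; all names and sizes are made up for the example.

```python
import numpy as np
from collections import defaultdict

# Toy illustration (not the actual CSCG code): recover the adjacency
# structure of a 3x3 room purely from a sequence of observations
# produced by a random walk -- no coordinates are ever consulted.

rng = np.random.default_rng(0)
W = H = 3
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Simulate a random walk; the "agent" only records which cell label it sees.
pos = (0, 0)
seq = [pos[0] * W + pos[1]]
for _ in range(5000):
    dr, dc = moves[rng.integers(4)]
    r, c = pos[0] + dr, pos[1] + dc
    if 0 <= r < H and 0 <= c < W:
        pos = (r, c)
    seq.append(pos[0] * W + pos[1])

# Count observed transitions; the edges of the recovered graph are exactly
# the spatial adjacencies, learned from sequence statistics alone.
edges = defaultdict(int)
for a, b in zip(seq, seq[1:]):
    if a != b:
        edges[tuple(sorted((a, b)))] += 1

print(sorted(edges))  # pairs of mutually reachable cells = the room's layout
```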
- A 15-minute talk on the clone-structured graph model of the hippocampus
Want to understand how hippocampal place cells interpret space as a sequence, but got no time? See this 15-minute talk I gave at #NAISys2022, organized by @tyrell_turing, @doristsao, and @TonyZador. Includes some teaser slides about schema learning & PFC. https://t.co/cnhQALtVM5
— Dileep George (@dileeplearning) May 31, 2022
- Thread about our paper on learning generative cognitive maps as higher-order graphs
Are you skeptical about successor representations? Want to know how our new model can learn cognitive maps and context-specific representations, do transitive inference, and plan flexibly and hierarchically? #tweeprint...(1) @vicariousai @swaroopgj @rvrikhye https://t.co/4WOiJMPBvU https://t.co/TuUFHr27Iq
— Dileep George (@dileeplearning) December 7, 2019
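For a feel of the "flexible planning" claim in the thread above, here is a minimal sketch: once a cognitive map is represented as a graph over learned states, planning reduces to graph search. The breadth-first search below stands in for the message-passing inference the paper actually uses, and the transition graph is a made-up example.

```python
from collections import deque

# Minimal sketch: with the cognitive map as a graph of learned transitions,
# "planning" is just search. BFS here is a stand-in for the probabilistic
# inference used in the actual model.

def plan(graph, start, goal):
    """Return a shortest state sequence from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# Hypothetical learned transition structure (state -> reachable states).
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(plan(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```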
- Predicted the flow of the AI debate between Yoshua Bengio and Gary Marcus…
Can't wait for the upcoming 'future of AI' debate between @GaryMarcus and Yoshua Bengio at @Montreal_AI? Then read #AGIcomics pre-coverage of the epic event with predictions of punches and counter-punches 🙃...Thread (1/9) https://t.co/SED5AL1FUJ pic.twitter.com/S5JOMi3dM0
— Dileep George (@dileeplearning) November 27, 2019
- Artificial general accomplishments in AI…
#AGI series returns after a summer break! This one tackles the tricky question of accomplishments! ... some older comics in the thread. #AGIcomics pic.twitter.com/H5m1VP7khk
— Dileep George (@dileeplearning) August 31, 2019
- Most deep learning generative models use amortized inference, i.e., their encoders are trained to answer only one particular kind of query. How do we train graphical models to answer arbitrary queries?
We want generative models to be flexible, but models like VAEs answer only the query they were trained for. To get inference networks that answer arbitrary queries, we introduce query-training. See it at the approximate Bayes symposium if you are at #NeurIPS2019 ...(1) https://t.co/F2z1C20PEy
— Dileep George (@dileeplearning) December 8, 2019
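To make the amortized-inference limitation concrete: in a tiny discrete model, exact enumeration can answer any conditional query, which is precisely the flexibility query-training aims to give learned inference networks at scale. The sketch below illustrates the problem statement, not the paper's method; the factorization and numbers are made up.

```python
import itertools
import numpy as np

# Toy contrast (not the query-training method itself): a VAE's amortized
# encoder answers the single query p(z | x) it was trained for, whereas a
# small discrete model can answer ANY conditional query by enumeration.

# Joint over three binary variables: p(a, b, c) = p(a) p(b|a) p(c|b).
p_a = np.array([0.6, 0.4])
p_b_a = np.array([[0.9, 0.1], [0.3, 0.7]])   # rows: a, cols: b
p_c_b = np.array([[0.8, 0.2], [0.5, 0.5]])   # rows: b, cols: c

joint = {}
for a, b, c in itertools.product([0, 1], repeat=3):
    joint[(a, b, c)] = p_a[a] * p_b_a[a, b] * p_c_b[b, c]

def query(target_idx, evidence):
    """p(target | evidence) for arbitrary evidence -- the flexibility
    that query-training tries to recover in learned inference networks."""
    probs = np.zeros(2)
    for assignment, p in joint.items():
        if all(assignment[i] == v for i, v in evidence.items()):
            probs[assignment[target_idx]] += p
    return probs / probs.sum()

# The same model answers whichever conditional we ask for:
print(query(0, {2: 1}))        # p(a | c=1)
print(query(2, {0: 0, 1: 1}))  # p(c | a=0, b=1)
```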