Generative Flow Networks (GFlowNets)

Avyav Singh presented a collection of papers on Generative Flow Networks: Hu et al. (2023), Bengio et al. (2023), Madan et al. (2023), and Malkin et al. (2022).

Abstract

This talk explored the development of Generative Flow Networks (GFlowNets). We discussed how amortized Bayesian inference can be used to sample from intractable posteriors, achieved by fine-tuning LLMs with diversity-prioritised reinforcement learning objectives, and how this enables data-efficient adaptation of LLMs.
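
A central idea across these papers is the trajectory balance objective of Malkin et al. (2022), which Hu et al. (2023) build on to fine-tune LLMs as amortized posterior samplers. Below is a minimal PyTorch-style sketch of that objective, not the papers' actual implementation: the function name, the toy tensors, and the stand-in reward values are illustrative placeholders for quantities that a task-specific sampler would compute.

import torch

def trajectory_balance_loss(log_Z: torch.Tensor,
                            log_pf: torch.Tensor,
                            log_pb: torch.Tensor,
                            log_reward: torch.Tensor) -> torch.Tensor:
    """Trajectory balance: for a sampled trajectory tau = (s_0 -> ... -> x),
    train so that Z * prod_t P_F(s_{t+1}|s_t) matches R(x) * prod_t P_B(s_t|s_{t+1}).

    log_Z: learned scalar estimate of the log-partition function.
    log_pf, log_pb: (batch, T) per-step forward/backward log-probabilities.
    log_reward: (batch,) log R(x) for each terminal object x.
    """
    discrepancy = log_Z + log_pf.sum(dim=1) - log_reward - log_pb.sum(dim=1)
    return discrepancy.pow(2).mean()

# Toy usage. For autoregressive LLM sampling, each sequence has a unique
# generation order, so the backward policy is the identity (log_pb = 0) and
# log_reward is the unnormalised log-posterior of the sampled continuation.
log_Z = torch.zeros(1, requires_grad=True)
log_pf = torch.randn(4, 10)   # stand-in for policy log-probs over 10 steps
log_pb = torch.zeros(4, 10)   # identity backward policy
log_reward = torch.randn(4)   # stand-in for log R(x)
loss = trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward)

Minimising this squared log-ratio over sampled trajectories drives the policy towards sampling terminal objects with probability proportional to their reward, which is what makes GFlowNet fine-tuning behave like amortized Bayesian inference rather than reward maximisation.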

References

  1. Amortizing intractable inference in large language models
    Edward J Hu, Moksh Jain, Eric Elmoznino, and 4 more authors
    arXiv preprint arXiv:2310.04363, 2023
  2. GFlowNet foundations
    Yoshua Bengio, Salem Lahlou, Tristan Deleu, and 3 more authors
    Journal of Machine Learning Research, 2023
  3. Learning GFlowNets from partial episodes for improved convergence and stability
    Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, and 6 more authors
    In International Conference on Machine Learning, 2023
  4. Trajectory balance: Improved credit assignment in GFlowNets
    Nikolay Malkin, Moksh Jain, Emmanuel Bengio, and 2 more authors
    Advances in Neural Information Processing Systems, 2022


