PULSE: Our community science stream

  • ENDORSEMENT · April 21, 2024, 9:44 p.m.
    To SMOTE, or not to SMOTE?

    In imbalanced binary classification problems, the objective metric is often non-symmetric, associating a higher penalty with misclassified minority samples. On the other hand, the …
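A minimal sketch of the SMOTE idea the post debates: each synthetic minority sample is interpolated between a real minority point and one of its nearest minority-class neighbors. The function name and plain-NumPy implementation are illustrative, not from any particular library.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """SMOTE sketch: each synthetic point lies on the segment between a
    minority sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class only
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # exclude self-matches
    nbrs = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per sample
    idx = rng.integers(0, len(X_min), n_new)      # base sample for each new point
    nbr = nbrs[idx, rng.integers(0, k, n_new)]    # one random neighbor each
    lam = rng.random((n_new, 1))                  # interpolation weights in [0, 1]
    return X_min[idx] + lam * (X_min[nbr] - X_min[idx])

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_oversample(X_min, n_new=10, rng=0)
```

Because each synthetic point is a convex combination of two real minority points, the new samples never leave the minority class's local neighborhood, which is both SMOTE's appeal and, for overlapping classes, its criticism.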

  • BOOKMARK · April 19, 2024, 1:38 a.m.
    Long-form factuality in large language models

    Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality …

  • ENDORSEMENT · April 19, 2024, 12:03 a.m.
    Neural Spline Flows

    A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms …
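The change-of-variables rule behind any normalizing flow, in a runnable toy: if z = f(x) is invertible, then log p_x(x) = log p_z(f(x)) + log |det df/dx|. Here f is an elementwise affine map standing in for the paper's spline transforms; the function names are illustrative.

```python
import numpy as np

def log_standard_normal(z):
    # log density of the simple base distribution N(0, 1)
    return -0.5 * (z**2 + np.log(2 * np.pi))

def flow_log_prob(x, scale, shift):
    """Log density under a one-step flow with an affine transform.
    Spline flows replace this transform with a monotonic spline."""
    z = scale * x + shift                   # invertible elementwise transform
    log_det = np.log(np.abs(scale))         # log |Jacobian| of the affine map
    return log_standard_normal(z) + log_det

lp = flow_log_prob(0.7, scale=2.0, shift=0.1)
```

The point of spline (vs. affine) transforms is expressiveness: a monotonic spline can bend the density arbitrarily while keeping the inverse and the Jacobian determinant cheap to evaluate.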

  • ENDORSEMENT · April 17, 2024, 11:25 p.m.
    Differentiable DAG Sampling

    We propose a new differentiable probabilistic model over DAGs (DP-DAG). DP-DAG allows fast and differentiable DAG sampling suited to continuous optimization. To this end, DP-DAG …

  • ENDORSEMENT · April 17, 2024, 1:24 a.m.
    Causal Bandits without Graph Learning

    We study the causal bandit problem when the causal graph is unknown and develop an efficient algorithm for finding the parent node of the reward …

  • CODE · April 16, 2024, 2:40 a.m.
    JAX

    JAX is a Python library for accelerator-oriented array computation and program transformation, designed for high-performance numerical computing and large-scale machine learning.
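The "program transformation" part in one small example: JAX's composable transforms `jax.grad` (differentiation), `jax.jit` (XLA compilation), and `jax.vmap` (auto-vectorization) applied to an ordinary pure function.

```python
import jax
import jax.numpy as jnp

def loss(w):
    # a pure function of an array; JAX transforms require purity
    return jnp.sum(w ** 2)

grad_loss = jax.jit(jax.grad(loss))   # compiled gradient: dL/dw = 2w
batched_loss = jax.vmap(loss)         # loss applied independently per row

w = jnp.array([1.0, 2.0, 3.0])
g = grad_loss(w)                      # → [2., 4., 6.]
```

Because the transforms compose, `jax.jit(jax.vmap(jax.grad(loss)))` is equally valid, which is the core design choice that distinguishes JAX from frameworks with a fixed training loop.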

  • CODE · April 14, 2024, 10:14 a.m.
    RAGFlow

    RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining …

  • BOOKMARK · April 13, 2024, 9:52 a.m.
    Retrieval Augmentation Reduces Hallucination in Conversation

    Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue models often suffer from factual incorrectness and hallucination of knowledge (Roller et al., 2020). In this work …

  • ENDORSEMENT · April 12, 2024, 5:20 a.m.
    More Agents Is All You Need

    We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method …
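The sampling-and-voting method, sketched: draw several independent answers from a stochastic model and keep the majority. `noisy_model` below is a hypothetical stand-in for an LLM call, not anything from the paper.

```python
import random
from collections import Counter

def majority_vote(sampler, n_agents):
    """Query the sampler n_agents times and return the most common answer."""
    votes = Counter(sampler() for _ in range(n_agents))
    return votes.most_common(1)[0][0]

def noisy_model(rng):
    # hypothetical model that answers correctly 60% of the time
    return "42" if rng.random() < 0.6 else rng.choice(["17", "99"])

rng = random.Random(0)
single = noisy_model(rng)                                # one agent: unreliable
ensemble = majority_vote(lambda: noisy_model(rng), 25)   # 25 agents: usually "42"
```

The scaling behavior the paper reports follows the same intuition as majority voting over independent noisy votes: as long as the modal answer is correct more often than any single wrong answer, adding agents concentrates the vote on it.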