[FEEDBACK] Daily Papers
Note that this is not a post for adding new papers; it's for feedback on the Daily Papers community update feature.
How to submit a paper to the Daily Papers, like @akhaliq (AK)?
- Submitting is available to paper authors
- Only recent papers (less than 7 days old) can be featured on the Daily
- Drop the arXiv ID in the form at https://huggingface.co./papers/submit
- Add media (images, videos) to the paper when relevant
- You can start a discussion to engage with the community
Please check out the documentation.
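The 7-day eligibility rule above is enforced by the site itself, but as a quick illustration it boils down to a single date comparison (the function name and signature here are my own, purely for demonstration):

```python
from datetime import datetime, timedelta, timezone

def is_eligible(published, now=None):
    # Eligibility rule from the post: only papers published less
    # than 7 days ago can be featured on the Daily.
    now = now or datetime.now(timezone.utc)
    return now - published < timedelta(days=7)
```

For example, a paper published 3 days before submission passes, while one published 10 days before does not.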
We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".
Paper: https://arxiv.org/abs/2405.20797
Github: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co./AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co./datasets/AIDC-AI/Ovis-dataset
@Yiwen-ntu For now, we support only videos as paper covers in the Daily.
We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644
We're thrilled to share our latest work, "Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects"
Paper: https://arxiv.org/abs/2409.02653
We're thrilled to share our latest work, "Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models". Ferret is the first first-order FL method with shared randomness; it significantly improves the scalability of existing federated full-parameter tuning approaches through high computational efficiency, reduced communication overhead, and fast convergence, all while maintaining competitive model accuracy.
Paper: https://arxiv.org/abs/2409.06277
Github: https://github.com/allen4747/Ferret
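Ferret's actual algorithm is in the paper; purely as an illustration of how shared randomness cuts communication in federated tuning, here is a minimal numpy sketch (the function names and this simple random-projection scheme are my own, not taken from the repo). Because client and server derive the same random directions from a shared seed, only the projection coefficients need to cross the network:

```python
import numpy as np

def shared_directions(seed, k, d):
    # Client and server regenerate identical random directions from
    # a shared seed, so the directions themselves are never sent.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((k, d)) / np.sqrt(d)

def client_compress(grad, seed, k):
    # The client transmits only k projection coefficients
    # instead of all d gradient entries.
    dirs = shared_directions(seed, k, grad.size)
    return dirs @ grad

def server_reconstruct(coeffs, seed, k, d):
    # The server rebuilds an approximate update using the same seed.
    dirs = shared_directions(seed, k, d)
    return dirs.T @ coeffs
```

With d = 1000 model parameters and k = 16 shared directions, each round communicates 16 floats instead of 1000, which is the kind of communication saving the post refers to.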
Hi, I'd like to share our paper "beeFormer: Bridging the Gap Between Semantic and Interaction Similarity in Recommender Systems"
Paper: https://arxiv.org/pdf/2409.10309
Github: https://github.com/recombee/beeformer
Excited to share our latest preprint: "CodonTransformer: a multispecies codon optimizer using context-aware neural networks"!
CodonTransformer is a groundbreaking deep learning model that optimizes DNA sequences for heterologous protein expression across 164 species.
By leveraging the Transformer architecture and a novel training strategy named STREAM, it generates host-specific DNA sequences with natural-like codon patterns, minimizing negative regulatory elements.
🔥 Website
https://adibvafa.github.io/CodonTransformer/
⭐ GitHub (Please give us a :star:!)
https://github.com/Adibvafa/CodonTransformer
🤗 Colab Notebook (Try it out!)
https://adibvafa.github.io/CodonTransformer/GoogleColab
🪼 Model
https://huggingface.co./adibvafa/CodonTransformer
Paper
https://www.biorxiv.org/content/10.1101/2024.09.13.612903
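As background to the CodonTransformer post: codon-usage bias, the host-specific pattern the model learns to match, is itself easy to compute. A toy example (this is not the model or its API, just an illustration of the underlying concept):

```python
from collections import Counter

def codon_frequencies(dna):
    # Split a coding sequence into non-overlapping codons and
    # normalize the counts. Host-specific codon optimization aims
    # to produce sequences whose codon frequencies resemble those
    # of the target organism.
    usable = len(dna) - len(dna) % 3
    codons = [dna[i:i + 3] for i in range(0, usable, 3)]
    total = len(codons)
    return {c: n / total for c, n in Counter(codons).items()}
```

For instance, `codon_frequencies("ATGGCTGCT")` reports that GCT (alanine) makes up two thirds of the codons in that short sequence.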
No Saved Kaleidosope: an 100% Jitted Neural Network Coding Language with Pythonic Syntax
https://arxiv.org/abs/2409.11600