
Welcome!

I'm a research fellow in the Cognitive Vision Group at CVSSP, University of Surrey.

News

  • I've joined CVSSP as a Research Fellow in Computer Vision.
  • 📰 PLOT-TAL: Optimal Transport for Few-Shot Temporal Action Localization pre-print available here
  • I've joined King's College London as a research associate in Responsible AI.
  • πŸ‘¨πŸ»β€πŸ”§ Code for our Interspeech paper on zero-shot personalized ASR quantization is released at Samsung Labs: myQASR Code
  • πŸ“š My work is featured in the Samsung AI Research blog here
  • πŸ“° "Multi-resolution audio-visual feature fusion for temporal action localization" is accepted at NeurIPS 23 ML for Audio Workshop (Oct 23).
  • πŸ“š I secured EU Horizon funding to extend my research in efficient multi-modal video understanding until Feb 24
  • πŸ“° "A model for every user and budget - data-free mixed-precision ASR quantization" is accepted for INTERSPEECH 23 (Aug 23)
  • πŸ“š I am back at The University of Surrey completing my PhD
  • πŸ‘¨πŸ»β€πŸ”§ I'm currently working as a research intern at Samsung Research UK (Completed Feb 22 - Feb 23)
  • πŸ“° "Two-stream transformer architecture for long form video understanding" is accepted for BMVC 2022 (Mar 22)
  • πŸ“° "Re-thinking genre classification with fine-grained semantic experts" is accepted for ICIP 2021 (Nov 21)

Pinned Loading

  1. myQASR (Public)

    Forked from SamsungLabs/myQASR

    "A Model for Every User and Budget: Label-Free and Personalized Mixed-Precision Quantization", accepted at INTERSPEECH 2023.

    Jupyter Notebook

  2. semantic-video-visualiser (Public)

    An interactive visualisation of semantically similar movie trailers.

    JavaScript

  3. spatio-temporal-contrastive-film (Public)

    Unsupervised Film Genre Classification using Spatio-Temporal Contrastive Learning

    Python