Research Radar is a research-discovery prototype for music information retrieval and audio ML papers. I built it to make recommendations easier to understand: you can browse ranked feeds, open a paper, explore trends, and compare the output against simpler baselines.
The live app stays separate from this portfolio, and the experimental parts are labeled honestly. Bridge recommendations are still exploratory, and the evaluation view is there to show how the system behaves, not to claim the ranking problem is solved.
Once the prototype's public URL is pinned, this page can link out to it directly.
Visitors can browse emerging papers, undercited papers, and an experimental bridge view. Each list is generated ahead of time and includes enough detail to show why a paper was recommended.
Each paper page shows metadata plus a related-papers section, so someone can move from one useful paper to the next without starting over from search every time.
The trends page gives a quick read on which topics are gaining momentum in the current dataset, without pretending to summarize the whole field.
The evaluation page compares the ranking against simpler baselines like citation count and recency. That makes it easier to inspect how the system behaves without pretending there is one perfect relevance score.
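To make the baseline comparison concrete, here is a minimal sketch of the kind of check the evaluation page performs. The paper fields, scores, and the overlap metric are illustrative assumptions, not the prototype's actual schema or metric.

```python
def overlap_at_k(system_ids, baseline_ids, k=10):
    """Fraction of the top-k system picks that also appear in the baseline's top-k."""
    return len(set(system_ids[:k]) & set(baseline_ids[:k])) / k

# Toy papers with hypothetical signals; real records would come from the corpus.
papers = [
    {"id": "W1", "citations": 120, "year": 2021, "score": 0.91},
    {"id": "W2", "citations": 15,  "year": 2024, "score": 0.88},
    {"id": "W3", "citations": 60,  "year": 2023, "score": 0.52},
]

# The system ranking versus two simple baselines: raw citations and recency.
system   = [p["id"] for p in sorted(papers, key=lambda p: p["score"], reverse=True)]
by_cites = [p["id"] for p in sorted(papers, key=lambda p: p["citations"], reverse=True)]
by_year  = [p["id"] for p in sorted(papers, key=lambda p: p["year"], reverse=True)]

print(overlap_at_k(system, by_cites, k=2))  # 0.5: partial agreement with citations
print(overlap_at_k(system, by_year, k=2))   # 0.5: partial agreement with recency
```

A low overlap is not automatically good or bad; it just makes visible where the ranking departs from the obvious baselines, which is the point of the page.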
The app keeps a saved record of how each ranked list was built and which signals fed into it, like showing your work instead of just giving a final answer. That makes the system easier to debug, easier to compare across versions, and easier to explain to another person.
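The saved record might look something like the sketch below. The field names and signal weights are hypothetical; only the two version strings are real pins from this walkthrough.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RankingRun:
    """Hedged sketch of a 'show your work' record stored alongside a ranked list."""
    family: str               # e.g. "emerging"
    ranking_version: str
    embedding_version: str
    signals: dict             # signal name -> weight used in this run (illustrative)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

run = RankingRun(
    family="emerging",
    ranking_version="ml2-5a-qual-r2-k6-20260405",            # real pin from this case study
    embedding_version="v1-title-abstract-1536-cleantext-r2",  # real pin from this case study
    signals={"recency": 0.4, "citation_velocity": 0.35, "topic_match": 0.25},  # made-up weights
)
record = asdict(run)  # serialize and store next to the ranked list it explains
```

Keeping a record like this per run is what makes "compare across versions" cheap: two lists can be diffed by their pinned versions and signal weights instead of by eyeballing output.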
This page is the case study, while the prototype runs as its own app. Under the hood there is a Next.js frontend, a FastAPI backend, Postgres with pgvector for storage and similarity search, and Python jobs for ingest, ranking, and clustering. Keeping those pieces separate makes it easier to update the ranking workflow without turning every change into a full-site deploy.
Some ideas are still exploratory, especially bridge-style recommendations that try to connect papers across areas. I left them visible because they are real product work, but they are not positioned as finished or used as the default experience.
These captures were taken from a recent run of the prototype against the current API. I pinned the ranking and embedding versions so this walkthrough reflects a real, reproducible state rather than a mocked-up demo: ranking version ml2-5a-qual-r2-k6-20260405 and embedding version v1-title-abstract-1536-cleantext-r2. The app stays deployed separately from the portfolio; this section shows what the live prototype looked like at that pin.

/recommended?family=emerging
The clearest introduction to the product: a ranked list with visible recommendation signals.

/evaluation?family=emerging
Shows the project's honesty. The ranking is compared against simpler baselines instead of being presented as unquestionable.

/trends
Shows that the prototype is not only a recommendation feed; it also gives a quick view of topic momentum in the dataset.

/papers/https%3A%2F%2Fopenalex.org%2FW3093121331
Shows how someone can move from one paper into a useful cluster of related work.

/recommended?family=bridge
An honest look at the experimental bridge view, which is still exploratory rather than a finished recommendation mode.
The interface is built in Next.js, the API is built in FastAPI, and the data lives in Postgres with pgvector. A separate Python pipeline handles ingest, cleanup, embeddings, ranking, and clustering experiments behind the scenes.
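The related-papers lookup is the natural fit for pgvector. Below is a hedged sketch: the table and column names are assumptions rather than the prototype's real schema, and the pure-Python function just mirrors what pgvector's `<=>` cosine-distance operator computes.

```python
import math

# Shape of the nearest-neighbour query a related-papers section could run.
# Table name (papers), columns (id, title, embedding), and parameter style
# are assumptions; `<=>` is pgvector's cosine-distance operator.
RELATED_SQL = """
SELECT id, title, embedding <=> %(query)s::vector AS distance
FROM papers
WHERE id <> %(paper_id)s
ORDER BY embedding <=> %(query)s::vector
LIMIT %(k)s;
"""

def cosine_distance(a, b):
    """Pure-Python mirror of pgvector's <=> operator, for illustration."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0: identical direction
```

Doing the similarity search in SQL keeps the FastAPI layer thin: the endpoint passes a paper's stored embedding as the query vector and returns the rows as-is.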
Building in explanations and versioning early made the project easier to trust and easier to improve.
It is better to label an idea as experimental than to oversell a result that is not there yet.
Evaluation became more useful once I treated it as a way to inspect behavior, not as a claim that the ranking problem was solved.
Status: In Progress
The strongest stable claim today is that the prototype makes its ranking behavior visible and understandable over a curated set of MIR and audio ML papers.
Known limits: Bridge recommendations are still experimental, semantic similarity is not part of the default ranking, and the corpus is still narrower than the long-term plan.