
19/03/2026 8:58 AM - NECV workshop

⬅️ [19/02/2026 1:50 PM](<./19_02_2026 1_50 PM.md>) | ⬆️ [Lab Meetings](<./README.md>)

Chenliang Xu
Also, which specific notes are they playing?

How many views do they need to render sounds?

Do they propose new metrics for audio reconstruction from novel views?

What was the supervision signal on slide 9?

Do they record the same sound every time? Does the environment shift over time?

Have they tried it in complex environments that cause echoes, like inside a tunnel?

Would there be any benefit to using the learned explicit representations directly for physics simulation? Or is it fine to keep converting to a Poisson surface?

Someone had an idea to use a NeRF to predict where a robot is based on the click sounds it makes: echolocation.

Tin Stribor
We didn't gain lidar, not because it was disadvantageous for sensing, but because it cannot evolve.

Max Dillitzer
How can we reduce the token count?

Bernadette
State estimation improvements from vision? What were those?

Temperature sampling seems to be about rebalancing the dataset across different episodes?
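If that reading is right, here is a minimal sketch of the usual temperature-sampling recipe (my assumption about what was meant, not the speaker's code): per-episode counts n_i are reweighted as p_i proportional to n_i^(1/T), so T > 1 flattens the sampling distribution toward balance and T = 1 keeps raw frequencies.

```python
def temperature_probs(counts, T):
    """Sampling probabilities p_i proportional to counts_i ** (1 / T)."""
    weights = [c ** (1.0 / T) for c in counts]
    total = sum(weights)
    return [w / total for w in weights]

counts = [1000, 100, 10]               # heavily imbalanced episode counts
print(temperature_probs(counts, 1.0))  # raw frequencies, dominated by episode 0
print(temperature_probs(counts, 5.0))  # much flatter, closer to uniform
```

With T = 1 the first episode gets ~90% of samples; with T = 5 its share drops to roughly half.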

Brent Griffin
How do we check if a VLM was correct?

Play a game where we try to trick networks and learn to detect that.

Maximize F1, which effectively means increasing precision while maintaining recall.
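A quick sanity check on that claim (toy numbers of my own, not from the talk): holding recall fixed, F1 is monotonic in precision, so pushing F1 up while recall stays flat forces precision up.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

recall = 0.90
low  = f1(0.50, recall)   # lower precision at the same recall
high = f1(0.80, recall)   # higher precision at the same recall
print(low, high)          # the higher-precision point has the higher F1
```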

Does this help downstream models?

Game 1: Which are pseudo-labels? If they are distinguishable, then they will be pruned. Only if we can't tell the difference will they not be pruned?

Game 2: Judge which are in the set and which are out of the set.

What are the inputs to the Turing-test network?

Have they done ablations (adding a given number of incorrect labels to the in-set)?

Are there biases in what it prunes? Which examples did it fail to prune correctly?

Why not apply the method to itself when finding the in-set? Sample random subsets and count how many times any given label is rejected?

Did you consider other ways to generate the reference set? What about self-agreement within the class? Randomly sample a million times and check which get rejected the most?
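The resampling idea floated above could be sketched like this (entirely my own illustration: the `judge` callback is a hypothetical stand-in for whatever in/out-of-set test the method actually uses). Repeatedly draw random reference subsets and count how often each example gets rejected against them; consistently high rejection counts flag likely bad labels.

```python
import random

def rejection_counts(examples, judge, trials=1000, subset_frac=0.5, seed=0):
    """Count how often each example is rejected against random reference subsets."""
    rng = random.Random(seed)
    counts = {i: 0 for i in range(len(examples))}
    k = max(1, int(len(examples) * subset_frac))
    for _ in range(trials):
        ref_idx = rng.sample(range(len(examples)), k)
        ref = [examples[i] for i in ref_idx]
        for i, ex in enumerate(examples):
            if i not in ref_idx and judge(ex, ref):
                counts[i] += 1
    return counts

# Toy judge: flag values far from the reference subset's mean.
examples = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08, 0.92, 9.0]
judge = lambda ex, ref: abs(ex - sum(ref) / len(ref)) > 2.0
counts = rejection_counts(examples, judge)
print(counts)  # the outlier label (9.0) is rejected far more often than the rest
```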

Vikas Dhiman
Uncertainty estimation!

How much do we trust the uncertainty from uncertainty estimation techniques?

Distance aware/not distance aware

Generally, we want uncertainty to increase as you move away from the data.
But what do you mean by distance? Are we in an embedding space?

Gaussian processes (1996) -> Bayesian NNs (2015) -> Deep Ensembles (2017)
Deep ensembles are now the best for getting real uncertainty.
Better than just softmax trained with cross entropy.

Only Gaussian processes are guaranteed to be distance aware.
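A minimal deep-ensemble sketch (my own toy setup, not the speaker's: each "member" is a tiny model with random fixed tanh features and a trained linear head). Members agree near the training data and disagree far from it, which is the distance-aware behavior discussed above, though unlike a Gaussian process nothing guarantees it.

```python
import math
import random

def make_member(xs, ys, seed, hidden=16, steps=1500, lr=0.05):
    """One ensemble member: random fixed tanh features + SGD-trained linear head."""
    rng = random.Random(seed)
    W = [(rng.gauss(0, 2), rng.gauss(0, 2)) for _ in range(hidden)]  # (weight, bias)
    feats = lambda x: [math.tanh(w * x + b) for w, b in W]
    v = [0.0] * hidden  # head weights
    c = 0.0             # head bias
    for _ in range(steps):
        for x, y in zip(xs, ys):
            h = feats(x)
            err = sum(vi * hi for vi, hi in zip(v, h)) + c - y
            v = [vi - lr * err * hi for vi, hi in zip(v, h)]
            c -= lr * err
    return lambda x: sum(vi * hi for vi, hi in zip(v, feats(x))) + c

# Train on [-1, 1]; query in-distribution (x = 0) and far away (x = 5).
xs = [i / 10 - 1 for i in range(21)]
ys = [math.sin(3 * x) for x in xs]
ensemble = [make_member(xs, ys, seed) for seed in range(5)]

def ensemble_std(x):
    """Predictive uncertainty = std-dev of member predictions."""
    preds = [m(x) for m in ensemble]
    mu = sum(preds) / len(preds)
    return math.sqrt(sum((p - mu) ** 2 for p in preds) / len(preds))

print(ensemble_std(0.0), ensemble_std(5.0))  # disagreement grows far from the data
```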

Newton's polynomial error bound.

Wait, but how do you assign the knots?
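One standard answer (my own illustration, not from the talk): Newton's error bound is |f(x) - p_n(x)| <= max|f^(n+1)| / (n+1)! * prod|x - x_i|, and knot placement only controls the product term. Chebyshev knots minimize the worst case of that product on [-1, 1], which tames the Runge phenomenon that equispaced knots suffer from.

```python
import math

def divided_differences(xs, ys):
    """Newton divided-difference coefficients (in place over a copy of ys)."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolant at x via Horner's scheme."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

def max_error(knots, f, grid):
    coef = divided_differences(knots, [f(x) for x in knots])
    return max(abs(f(x) - newton_eval(knots, coef, x)) for x in grid)

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)  # Runge's function
n = 11
grid = [-1.0 + 2.0 * k / 400 for k in range(401)]
equi = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]                 # equispaced knots
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]  # Chebyshev knots
print(max_error(equi, f, grid), max_error(cheb, f, grid))  # Chebyshev is far smaller
```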

