---
layout: post
comments: true
title: Post Template
author: UCLAdeepvision
date: 2022-04-10
---
> This block is a brief introduction of your project. You can put your abstract here or any headers you want the readers to know.
{: class="table-of-content"}
* TOC
{:toc}
## Main Content
Your article starts here. You can refer to the [source code](https://github.com/lilianweng/lil-log/tree/master/_posts) of [lil's blogs](https://lilianweng.github.io/lil-log/) for article structure ideas or Markdown syntax. We've provided a [sample post](https://ucladeepvision.github.io/CS188-Projects-2022Winter/2017/06/21/an-overview-of-deep-learning.html) from Lilian Weng, and you can find its source code [here](https://raw.githubusercontent.com/UCLAdeepvision/CS188-Projects-2022Winter/main/_posts/2017-06-21-an-overview-of-deep-learning.md).
## Basic Syntax
### Image
Please create a folder named after your team ID under `/assets/images/`, put all your images in that folder, and reference them in your main content.
You can add an image to your survey like this:
![YOLO]({{ '/assets/images/UCLAdeepvision/object_detection.png' | relative_url }})
{: style="width: 400px; max-width: 100%;"}
*Fig 1. YOLO: An object detection method in computer vision* [1].
Please cite the image if it is taken from other people's work.
### Table
Here is an example for creating tables, including alignment syntax.
| | column 1 | column 2 |
| :--- | :----: | ---: |
| row1 | Text | Text |
| row2 | Text | Text |
### Code Block
```python
# This is a sample code block
import torch
print(torch.__version__)
```
### Formula
Please use LaTeX to generate formulas, for example:
$$
\tilde{\mathbf{z}}^{(t)}_i = \frac{\alpha \tilde{\mathbf{z}}^{(t-1)}_i + (1-\alpha) \mathbf{z}_i}{1-\alpha^t}
$$
or you can write in-text formula $$y = wx + b$$.
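As a worked example of the update above, here is a minimal Python sketch of a bias-corrected exponential moving average (the standard form this recurrence implements, as used in temporal ensembling and Adam); `z_raw` is an assumed name for the uncorrected running accumulator:

```python
def ema_update(z_raw, z, alpha, t):
    """One step of an exponential moving average with bias correction.

    z_raw : uncorrected running average from the previous step
    z     : new observation z_i at step t
    alpha : momentum term in [0, 1)
    t     : 1-based step counter
    Returns (new_raw_average, bias_corrected_average).
    """
    z_raw = alpha * z_raw + (1 - alpha) * z   # accumulate
    z_tilde = z_raw / (1 - alpha ** t)        # divide out the startup bias
    return z_raw, z_tilde
```

At $$t = 1$$ the raw average is shrunk toward zero by the factor $$(1-\alpha)$$; dividing by $$1-\alpha^t$$ undoes exactly that bias.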
### More Markdown Syntax
You can find more Markdown syntax at [this page](https://www.markdownguide.org/basic-syntax/).
## Reference
Please make sure to cite properly in your work, for example:
[1] Dwibedi, Debidatta, et al. "Counting out time: Class agnostic video repetition counting in the wild." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
---
## Data Rich and Physics Certain
| Experiment | Parameters | Results | Comments |
| :--- | :----: | :---: | ---: |
| **DL + Data** | | | |
| Predicting only velocity | Dataset size: 10000<br>Network: 2->5->5->1<br>Activation: ReLU | ~100% accurate | Generalizes well over various initial velocities |
| Predicting only displacement | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | Reasonable | Better prediction for $$u_0 \in$$ dataset, average prediction outside |
| Predicting both $$v_t, s_t$$ | Dataset size: 10000<br>Network: 2->16->16->2<br>Activation: tanh | Reasonable | Better prediction for $$u_0 \in$$ dataset, poor prediction outside |
| **DL + Physics** | | | |
| Predicting both $$v_t, s_t$$ using loss $$L_{physics} = \lvert v_{predicted}^2-u_{initial}^2-2*g*s_{predicted}\rvert$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | ~0% accuracy | Expected result, as no supervision of any kind is provided |
| Predicting both $$v_t, s_t$$ using loss $$L_{velocity+phy} = (v_{predicted}-v_{actual})^2+\gamma*(v_{predicted}^2-u_{initial}^2-2*g*s_{predicted})^2$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | Reasonable | Prediction of $$v_t$$ is good; learned $$s_t$$ reasonably well without direct supervision |
| Predicting both $$v_t, s_t$$ using loss $$L_{supervised+phy} = (v_{predicted}-v_{actual})^2+(s_{predicted}-s_{actual})^2+\gamma*(v_{predicted}^2-u_{initial}^2-2*g*s_{predicted})^2$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | Reasonable | No better than direct supervision alone |
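The physics term $$L_{physics}$$ penalizes violations of the kinematic identity $$v^2 = u^2 + 2gs$$. A minimal plain-Python sketch (function and argument names are assumed; with PyTorch one would compute the same residual on tensors so it stays differentiable):

```python
def physics_residual(v_pred, s_pred, u_init, g=9.8):
    """Residual of the kinematic identity v^2 = u^2 + 2*g*s.

    Zero when the predicted (velocity, displacement) pair is physically
    consistent with the initial velocity u_init under constant gravity g.
    """
    return v_pred ** 2 - u_init ** 2 - 2 * g * s_pred


def physics_loss(batch, g=9.8):
    """L_physics: mean absolute residual over a batch of
    (v_pred, s_pred, u_init) triples."""
    return sum(abs(physics_residual(v, s, u, g)) for v, s, u in batch) / len(batch)
```

Because this loss only ties $$v$$ and $$s$$ to each other, not to ground truth, minimizing it alone admits many degenerate solutions, which is consistent with the ~0% accuracy reported above.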
**Observations:**
- The physics equations are certain in this case and are the best option to use.
- The DL and hybrid (DL + Physics) methods perform equivalently (the exact accuracy/loss varies with training and random dataset generation).

Re-running the above experiments with a dataset size of 200 (data starvation) yielded the following observations:
- DL performance is comparable to the 10000-sample case when trained for many more epochs (5x).
- The hybrid (DL + Physics) method without direct supervision on $$s_t$$ achieves comparable or better closeness than the DL-only method for limited-epoch ($$\sim$$300) training.
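The data-generation procedure is not shown in the tables; a plausible sketch, assuming free fall from initial velocity $$u_0$$ for time $$t$$ (function name and sampling ranges are assumptions), maps the 2-dimensional input $$(u_0, t)$$ to the targets $$(v_t, s_t)$$, matching the 2->...->2 networks above:

```python
import random

def make_dataset(n, g=9.8, seed=0):
    """Generate (u0, t) -> (v_t, s_t) pairs from the exact kinematics
    v_t = u0 + g*t and s_t = u0*t + 0.5*g*t**2 (downward positive)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u0 = rng.uniform(0.0, 20.0)   # initial velocity (assumed range)
        t = rng.uniform(0.0, 5.0)     # elapsed time (assumed range)
        v = u0 + g * t
        s = u0 * t + 0.5 * g * t ** 2
        data.append(((u0, t), (v, s)))
    return data
```

Every generated pair satisfies $$v^2 = u_0^2 + 2gs$$ exactly, so the physics residual is zero on the training distribution by construction.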
## Data Rich and Physics Uncertain
| Experiment | Parameters | Results | Comments |
| :--- | :----: | :---: | ---: |
| **DL + Data** | | | |
| Predicting both $$v_t, s_t$$ | Dataset size: 10000<br>Network: 2->16->16->2<br>Activation: tanh | Reasonable | Better prediction for $$u_0 \in$$ dataset, poor prediction outside |
| **DL + Physics** | | | |
| Predicting both $$v_t, s_t$$ using loss $$L_{physics} = \lvert v_{predicted}^2-u_{initial}^2-2*g*s_{predicted}\rvert$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | ~0% accuracy | Expected result, as no supervision of any kind is provided |
| Predicting both $$v_t, s_t$$ using loss $$L_{velocity+phy} = (v_{predicted}-v_{actual})^2+\gamma*(v_{predicted}^2-u_{initial}^2-2*g*s_{predicted})^2$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | Reasonable | Prediction of $$v_t$$ is good; learned $$s_t$$ reasonably well without direct supervision |
| Predicting both $$v_t, s_t$$ using loss $$L_{supervised+phy} = (v_{predicted}-v_{actual})^2+(s_{predicted}-s_{actual})^2+\gamma*(v_{predicted}^2-u_{initial}^2-2*g*s_{predicted})^2$$ | Dataset size: 10000<br>Network: 2->16->16->1<br>Activation: ReLU | Reasonable | No better than direct supervision, but better than DL alone when $$u_0$$ is outside the dataset |
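The hybrid objective $$L_{velocity+phy}$$ combines direct supervision on velocity with the kinematic penalty, so displacement is constrained only indirectly. A minimal per-sample sketch (names and the default $$\gamma$$ are assumptions, not values from the experiments):

```python
def hybrid_loss(v_pred, s_pred, v_actual, u_init, gamma=0.1, g=9.8):
    """L_velocity+phy: supervised velocity error plus a physics penalty.

    Only v is directly supervised; s is pushed toward the physically
    consistent value through the squared kinematic residual.
    """
    residual = v_pred ** 2 - u_init ** 2 - 2 * g * s_pred
    return (v_pred - v_actual) ** 2 + gamma * residual ** 2
```

Once $$v_{pred}$$ is accurate, the residual term is minimized by the unique $$s_{pred}$$ satisfying $$v^2 = u_0^2 + 2gs$$, which explains why $$s_t$$ can be learned reasonably well without direct supervision.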
**Observations:**
- The DL and hybrid (DL + Physics) methods perform similarly; the hybrid method is better when $$u_0$$ is outside the dataset, while DL alone is better when $$u_0$$ is in the dataset.
- The physics equations are uncertain in this case, so the above methods are preferable to using physics alone.
## Data Starvation and Physics Uncertain
- Observations are similar to those in the data-rich case.