<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Wanshui Gan</title>
<meta name="author" content="Wanshui Gan">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/icon.png">
</head>
<body>
<table style="width:100%;max-width:850px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:60%;vertical-align:middle">
<p style="text-align:center">
<name>Wanshui Gan </name>
</p>
<p>
I am a third-year Ph.D. student at the <a href="https://www.ms.k.u-tokyo.ac.jp/members.html">Sugiyama-Yokoya-Ishida Lab at the University of Tokyo</a>, Department of Complexity Science and Engineering, advised by Prof. <a href="https://naotoyokoya.com/">Naoto Yokoya</a>. I am also a Junior Research Associate in the <a href="https://geoinformatics2018.com/">Geoinformatics Team</a> at the RIKEN Center for Advanced Intelligence Project (AIP).
</p>
<p>
Before that, I received my B.S. degree from Guangdong University of Technology, China, in 2018 and my M.S. degree from the University of Macau, China, in 2021. My research interests lie in 3D vision, large-scale scene parsing, and reconstruction. You are welcome to contact me by email if you are interested in my work or a potential collaboration.
</p>
<p style="text-align:center">
<a href="mailto:wanshuigan@gmail.com">Email</a>  / 
<a href="https://scholar.google.com.hk/citations?user=O6LP9zQAAAAJ&hl=zh-CN"> Google Scholar</a>  / 
<a href="https://github.com/GANWANSHUI"> Github </a>  / 
<a href="https://twitter.com/WOSON12">Twitter</a>
</p>
</td>
<td style="padding:2.5%;width:30%;max-width:30%">
<img style="width:100%;max-width:100%" alt="profile photo" src="images/profile.jpg">
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Experiences</heading>
<p>
<li style="margin: 5px;">
<b>2022-04 --> Present:</b> RIKEN AIP as Junior Research Associate. Topics: NeRF, 3D occupancy estimation.
</li>
<li style="margin: 5px;">
<b>2024-02 --> 2024-04:</b> CyberAgent AI Lab as Research Intern. Topic: 4D Gaussian splatting.
</li>
<li style="margin: 5px;">
<b>2021-04 --> 2021-07:</b> Tencent AI Lab as Research Intern. Topic: facial landmark detection.
</li>
<li style="margin: 5px;">
<b>2020-06 --> 2022-02:</b> Shenzhen Institute of Advanced Technology (SIAT) as Visiting Student. Topics: 6D pose estimation, stereo matching, NeRF.
</li>
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading> Selected Publications </heading>
<p>
* indicates equal contribution
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody></tbody>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/simpleocc.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>A Comprehensive Framework for 3D Occupancy Estimation in Autonomous Driving</papertitle>
<br>
<strong>Wanshui Gan</strong>, Ningkai Mo, Hongbin Xu, Naoto Yokoya
<br>
IEEE Transactions on Intelligent Vehicles, 2024
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/10535213">[Paper]</a> <a href="https://github.com/GANWANSHUI/SimpleOccupancy">[Code]</a> <a href="https://arxiv.org/abs/2303.10076">[arXiv]</a>
<br>
<p> We introduce a comprehensive framework for surrounding-view 3D occupancy estimation, 3D reconstruction, and depth estimation via volume rendering, covering network design, loss design, and an evaluation metric based on discrete point-level sampling. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/v4d.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>V4d: Voxel for 4d novel view synthesis</papertitle>
<br>
<strong>Wanshui Gan</strong>, Hongbin Xu, Yi Huang, Shifeng Chen, Naoto Yokoya
<br>
IEEE Transactions on Visualization and Computer Graphics, 2023
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/10239492">[Paper]</a> <a href="https://github.com/GANWANSHUI/V4D">[Code]</a> <a href="https://arxiv.org/abs/2205.14332">[arXiv]</a>
<br>
<p> We propose V4D, a simple yet effective and efficient framework for 4D novel view synthesis based on 3D voxels, which directly models the 4D neural radiance field without the need for a canonical space. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/es6d.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>ES6D: A Computation Efficient and Symmetry-Aware 6D Pose Regression Framework</papertitle>
<br>
Ningkai Mo*, <strong>Wanshui Gan*</strong>, Naoto Yokoya, Shifeng Chen
<br>
IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2022
<br>
<a href="https://ieeexplore.ieee.org/abstract/document/10239492">[Paper]</a> <a href="https://github.com/GANWANSHUI/ES6D">[Code]</a> <a href="https://arxiv.org/abs/2204.01080">[arXiv]</a>
<br>
<p> We introduce ES6D, a novel 6D pose estimation framework built on XYZNet and the A(M)GPD loss. XYZNet is a fully convolutional architecture designed for feature extraction from RGB-D data, achieving an excellent trade-off between efficiency and effectiveness, while the A(M)GPD loss is proposed to handle symmetric objects and performs better than the ADD(S) loss. </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:30%;max-width:30%" align="center">
<img style="width:100%;max-width:100%" src="images/LWANet.gif" alt="dise">
</td>
<td width="75%" valign="center">
<papertitle>Light-weight network for real-time adaptive stereo depth estimation</papertitle>
<br>
<strong>Wanshui Gan</strong>, Pak Kin Wong, Guokuan Yu, Rongchen Zhao, Chi Man Vong
<br>
Neurocomputing, 2021
<br>
<a href="https://www.sciencedirect.com/science/article/abs/pii/S0925231221002599">[Paper]</a> <a href="https://github.com/GANWANSHUI/LWANet">[Code]</a>
<br>
<p> We propose a lightweight adaptive network (LWANet) for real-time stereo depth estimation that achieves competitive performance compared with MADNet and StereoNet, with the advantages of low computational cost and low GPU memory usage. </p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Honors and Awards</heading>
<p>
<li style="margin: 5px;"> <a href="https://www.youtube.com/watch?v=5949pht_dkM"> The First Prize in Formula Student China (FSAE 2017) </a> </li>
<li style="margin: 5px;"> TIER IV Student scholarship (2022, 2023) </li>
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Academic Services</heading>
<p>
<li style="margin: 5px;">
<b>Conference Reviewer:</b> CVPR
</li>
<li style="margin: 5px;">
<b>Journal Reviewer:</b> IEEE TVCG, IEEE TIV, IEEE TCSVT
</li>
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;font-size:small;">
<a href="https://jonbarron.info/">Website Template</a>
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</table>
<center>
<div id="clustrmaps-widget" style="width:5%">
<script type="text/javascript" id="clstr_globe" src="//clustrmaps.com/globe.js?d=L9EQlZkj5iCdWBMkJkgi98zY_ACS8WXMtmi-BflmwK8"></script>
</div>
<br>
<p>© Wanshui | Last updated: June 21, 2024</p>
</center>
</body>
</html>