porting cxxnet data science bowl 1 example to mxnet #1143
Hi @Gelu74, I will probably make an example from the original sample today and post it here. After you successfully run it, I recommend trying the example here: https://github.com/auroraxie/Kaggle-NDSB
Thanks @antinucleon
@Gelu74
Forgot the link: https://github.com/dmlc/mxnet/tree/master/example/kaggle-ndsb1
Thanks a lot @antinucleon for taking the time to write the example. These are my results for 5 consecutive runs:
INFO:root:Epoch[34] Train-accuracy=0.539437
INFO:root:Epoch[34] Train-accuracy=0.065096
INFO:root:Epoch[34] Train-accuracy=0.570038
INFO:root:Epoch[34] Train-accuracy=0.065096
INFO:root:Epoch[34] Train-accuracy=0.601637
That's strange; there should not be such a large gap. Are you using cuDNN? Which card are you using?
Yes, cuDNN.
lspci | grep -i nvidia
nvcc --version
cuDNN may produce different output, but not such a large gap. Could you try adding this line at the beginning of the file?
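(The suggested line is not quoted in the thread; presumably it fixes the random seed at the top of the script so runs are repeatable, e.g. mx.random.seed(...) in MXNet. A minimal sketch of the idea with numpy, assuming seeding is what was being suggested:)

```python
# Hypothetical reconstruction: fix the RNG seed so that consecutive runs
# draw the same random numbers (weight init, shuffling, etc.).
import numpy as np

np.random.seed(2016)
first_run = np.random.rand(3)

np.random.seed(2016)   # re-seed before a "second run"
second_run = np.random.rand(3)

print((first_run == second_run).all())  # -> True
```

Note this only removes numpy-level randomness; any non-determinism inside cuDNN kernels is unaffected by it.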
With that random seed, I did not get the low-accuracy results in four runs, although I am still getting quite a bit of variability:
INFO:root:Epoch[34] Train-accuracy=0.550869
INFO:root:Epoch[34] Train-accuracy=0.510184
INFO:root:Epoch[34] Train-accuracy=0.460114
INFO:root:Epoch[34] Train-accuracy=0.538588
I thought that by setting the random seed there wouldn't be any variability... where does the randomness come from?
Try building without cuDNN; cuDNN doesn't guarantee reproducible results.
Sorry, there must have been an error with my cuDNN installation, and I guess mxnet was not using cuDNN.
No, it is not your problem; for cuDNN's fastest mode it is known that the result is not deterministic.
Hmm, I am now not sure whether I was running with cuDNN at all... I have reinstalled my system and I am not able to compile with cuDNN support now; see #1207
I am trying to port the cxxnet example code for the Kaggle Data Science Bowl (1) to mxnet: https://github.com/dmlc/cxxnet/tree/master/example/kaggle_bowl
I have most things working, but I do not understand something in the bowl.conf for cxxnet (excuse my ignorance, but I am new to deep learning; I am a marine biologist with an interest in image classification).
Although the images are scaled to 48x48, the bowl.conf file has "input_shape = 3,40,40". Why is that?
In mxnet, I have rescaled all images to 48x48, but if I use
data_shape = (3, 48, 48)
when I define the ImageRecordIter, I get train/val accuracies of 0.44/0.59, whereas with data_shape = (3, 40, 40) I get 0.58/0.59.
I guess it has something to do with the crop sizes; could someone explain how to set the correct data_shape?
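(For what it's worth, my understanding, not something stated in the cxxnet docs quoted here: with images stored at 48x48 and input_shape = 3,40,40, the iterator takes random 40x40 crops of each image as train-time augmentation, so data_shape should match the crop size the network sees, not the stored image size. A rough numpy sketch of a random crop on a CHW array, using the sizes from this thread:)

```python
import numpy as np

def random_crop(img, crop_h, crop_w):
    """Return a random crop_h x crop_w crop of a CHW image,
    a common train-time augmentation."""
    c, h, w = img.shape
    top = np.random.randint(0, h - crop_h + 1)
    left = np.random.randint(0, w - crop_w + 1)
    return img[:, top:top + crop_h, left:left + crop_w]

img = np.zeros((3, 48, 48))        # image rescaled to 48x48 on disk
patch = random_crop(img, 40, 40)   # the network's input is 3x40x40
print(patch.shape)                 # -> (3, 40, 40)
```

Each epoch then sees a slightly different 40x40 view of every 48x48 image, which is presumably why the smaller data_shape trains better here.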
Thanks,
Angel
This is the network structure I am using (translated from bowl.conf):