Hi Canjie-Luo
I referred to issue #44.
I am training the model on my own dataset with the following alphabet list:
alphabet='0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ()-.,>:# /'
Example labels from the training data:
ELGRENZVYULJI :2))7 ,91 < 9oR1#t4h80Njw
Kgpyhzyszobme
YBOBTOVJL 6<<'830 :)-4<9 .17
I have the following doubts:
My images have a maximum width of up to 565 pixels. Do I need to change the targetW or imgW parameter?
Does it work for long text predictions?
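For context, the run could be re-invoked with wider inputs, keeping roughly the 2:1 ratio between imgW and targetW seen in the defaults (200/100). This is only a hypothetical sketch, not a tested recommendation; the flag names come from the Namespace printout below, and the dataset paths are placeholders:

```shell
# Hypothetical sketch: widen both the input and the rectified target so
# ~565 px crops are not squashed; 600/300 mirrors the default 200/100 ratio.
python main.py \
    --imgW 600 --targetW 300 \
    --imgH 64 --targetH 32 \
    --train_nips /path/to/train_lmdb \
    --valroot /path/to/val_lmdb
```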
Other than this, I got the following error while running main.py. To work around it, I tried the solution given on Stack Overflow, but it did not work for me:
import sys
sys.setrecursionlimit(3000)
Namespace(BidirDecoder=True, MORAN='', adadelta=True, adam=False, alphabet='0;1;2;3;4;5;6;7;8;9;a;b;c;d;e;f;g;h;i;j;k;l;m;n;o;p;q;r;s;t;u;v;w;x;y;z;A;B;C;D;E;F;G;H;I;J;K;L;M;N;O;P;Q;R;S;T;U;V;W;X;Y;Z;(;);-;.;,;>;:;#; ;/;$', batchSize=32, beta1=0.5, cuda=True, displayInterval=100, experiment='output/', imgH=64, imgW=200, lr=1.0, n_test_disp=10, ngpu=1, nh=256, niter=100, saveInterval=40000, sep=';', sgd=False, targetH=32, targetW=100, train_nips='/home/payal/FRSLABS/MORAN_v2-master/demo/mdbiit5k/', valInterval=100, valroot='/home/payal/FRSLABS/MORAN_v2-master/demo/mdbiit5k/', workers=1)
mkdir: cannot create directory ‘output/’: File exists
Random Seed: 1818
Start val
Traceback (most recent call last):
File "main.py", line 245, in <module>
acc_tmp = val(test_dataset, criterion)
File "main.py", line 159, in val
t, l = converter.encode(cpu_texts, scanned=True)
File "/home/payal/FRSLABS/MORAN_v2-master/tools/utils.py", line 86, in encode
text, _ = self.encode(text)
File "/home/payal/FRSLABS/MORAN_v2-master/tools/utils.py", line 86, in encode
text, _ = self.encode(text)
File "/home/payal/FRSLABS/MORAN_v2-master/tools/utils.py", line 86, in encode
text, _ = self.encode(text)
[Previous line repeated 985 more times]
File "/home/payal/FRSLABS/MORAN_v2-master/tools/utils.py", line 82, in encode
elif isinstance(text, collections.Iterable):
File "/home/payal/anaconda3/lib/python3.6/abc.py", line 184, in __instancecheck__
if subclass in cls._abc_cache:
File "/home/payal/anaconda3/lib/python3.6/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7fb9dc3e04e0>>
Traceback (most recent call last):
File "/home/payal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 333, in __del__
self._shutdown_workers()
File "/home/payal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 319, in _shutdown_workers
self.data_queue.get()
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "/home/payal/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
fd = df.detach()
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/connection.py", line 487, in Client
c = SocketClient(address)
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
s.connect(address)
ConnectionRefusedError: [Errno 111] Connection refused
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7fb9dc3b83c8>>
Traceback (most recent call last):
File "/home/payal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 333, in __del__
File "/home/payal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 319, in _shutdown_workers
File "/home/payal/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
ImportError: sys.meta_path is None, Python is likely shutting down
Please help!
Yes, you should enlarge the image width, both the input width (imgW) and the target width (targetW). Please also make sure the data is correctly prepared for the network; the recursion starts here:
File "/home/payal/FRSLABS/MORAN_v2-master/tools/utils.py", line 82, in encode — elif isinstance(text, collections.Iterable)
Actually, the attention mechanism works well on short text. I am not sure it is suitable for your task.
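The traceback is consistent with a type mismatch rather than genuinely deep data: if a label reaches encode() as something that never matches the str branch (for example bytes read from an LMDB dataset under Python 3), the collections.Iterable branch keeps re-calling encode() until the recursion limit is hit, so raising sys.setrecursionlimit cannot help. The sketch below is an assumption about the failure mode, not the actual utils.py code; the char-to-index lookup is stood in by ord():

```python
# Hedged sketch of the suspected bug: bytes labels fall through to the
# iterable branch and recurse forever. Normalizing bytes -> str first
# breaks the cycle. ord() stands in for the real char->index dictionary.

def encode(text):
    if isinstance(text, bytes):
        # Assumed fix: decode LMDB byte strings before encoding.
        text = text.decode('utf-8')
    if isinstance(text, str):
        # Base case: map each character to an index (stand-in lookup).
        return [ord(c) for c in text], len(text)
    # A list/tuple of labels: encode each item; this is the branch that
    # loops forever when an item never becomes a str.
    return [encode(t) for t in text], len(text)
```

Equivalently, decoding the labels once in the dataset or in val() (e.g. `cpu_texts = [t.decode('utf-8') for t in cpu_texts]`) would keep encode() unchanged.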