Hi, it looks like your ipynb code has a few bugs. First, an adjacency matrix of shape [100, 100] is created, but after train_test_split, X_train no longer has any dimension of size 100 (tested on IP and PaviaU). Even earlier, the array returned from segmentation has already lost its channel dimension.
This doesn't make sense to me: if you pass a 2D image (after superpixel segmentation) into the adjacency-matrix construction, no channel information ever reaches the model or the matrix. Could you please elaborate on that?
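For context, here is a minimal sketch of what I would expect the pipeline to do so that channel information survives segmentation. All shapes and variable names here are my own assumptions for illustration, not taken from the notebook:

```python
import numpy as np

# Assumed toy shapes: a hyperspectral cube H x W x C
# (real datasets would be e.g. IP: 145x145x200)
H, W, C = 8, 8, 5
cube = np.random.rand(H, W, C)

# Hypothetical segmentation map: each pixel assigned to one of n superpixels
n = 4
segments = (np.arange(H * W) % n).reshape(H, W)

# Keep the channel dim: mean spectrum per superpixel -> node features [n, C]
node_features = np.stack(
    [cube[segments == s].mean(axis=0) for s in range(n)]
)

# Adjacency over superpixels is [n, n]; a simple Gaussian feature-similarity
# sketch (the notebook may build it differently, e.g. from spatial neighbors)
diff = node_features[:, None, :] - node_features[None, :, :]
adjacency = np.exp(-np.linalg.norm(diff, axis=-1) ** 2)

print(node_features.shape)  # (n, C) -- channels preserved
print(adjacency.shape)      # (n, n)
```

If instead only the 2D label map `segments` is passed downstream, the spectral axis is gone by construction, which matches the behavior I'm seeing.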