Unsupervised Heterogeneous Graph Learning #3189

Merged
merged 26 commits on Mar 27, 2022
Changes from 1 commit
Commits
26 commits
10bbdea
added unsupervised hetero method
Yonggie Sep 21, 2021
4058254
used 2 meta paths
Yonggie Sep 22, 2021
33c7fee
used 2 meta paths
Yonggie Sep 22, 2021
6ddf312
used 2 meta paths
Yonggie Sep 22, 2021
e8e8145
used 2 meta paths
Yonggie Sep 22, 2021
b469c5a
used 2 meta paths
Yonggie Sep 22, 2021
e79a830
Merge branch 'master' into master
rusty1s Feb 4, 2022
5d9ca86
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 4, 2022
4388a4e
Merge branch 'master' into master
Yonggie Mar 10, 2022
8804228
added unsupervised hetero method dmgi, which seems not working
Yonggie Mar 10, 2022
0599874
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 10, 2022
6f3c508
re-implemented DMGI from AAAI, seems not working
Yonggie Mar 10, 2022
3c85311
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 10, 2022
229cb4d
Merge branch 'master' into master
Yonggie Mar 11, 2022
da6e6eb
changed dmgi encoder from single to multiple.
Yonggie Mar 11, 2022
e93bca5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 11, 2022
a43b74e
make it flake8 style.
Yonggie Mar 11, 2022
62775d5
make it flake8 style
Yonggie Mar 11, 2022
4980f99
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 11, 2022
d39120e
Merge branch 'master' into master
rusty1s Mar 11, 2022
dd16651
Merge branch 'master' into master
Yonggie Mar 12, 2022
15ebaeb
Merge branch 'master' into master
rusty1s Mar 14, 2022
de50f85
update
rusty1s Mar 27, 2022
e9d614f
typo
rusty1s Mar 27, 2022
16e6315
typo
rusty1s Mar 27, 2022
690f14b
Merge branch 'master' into master
rusty1s Mar 27, 2022
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed Feb 4, 2022
commit 5d9ca86f62bb2fee957e06cd7ed211de862b5c08
35 changes: 15 additions & 20 deletions examples/hetero/hetero_unsupervised_dblp.py
@@ -1,13 +1,15 @@
-import numpy as np
-from torch.nn import Parameter
-from sklearn.model_selection import train_test_split
-from sklearn.linear_model import LogisticRegression
-import os.path as osp
 import math
+import os.path as osp
+
+import numpy as np
 import torch
 import torch.nn.functional as F
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+from torch.nn import Parameter
+
[rusty1s marked this conversation as resolved.]
 from torch_geometric.datasets import DBLP
-from torch_geometric.nn import SAGEConv, HeteroConv, GATConv
+from torch_geometric.nn import GATConv, HeteroConv, SAGEConv
 
 EPS = 1e-15
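The `EPS = 1e-15` constant guards the logarithms in the loss below: a discriminator output of exactly 0 (or 1, for the negative term) would make `torch.log` return `-inf` and poison gradients with NaNs. A minimal illustration of the failure mode it prevents:

```python
import torch

EPS = 1e-15  # same constant as in the example script

p = torch.tensor([0.0, 0.5, 1.0])
print(torch.log(p))        # -inf at p == 0
print(torch.log(p + EPS))  # finite everywhere
```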

@@ -151,33 +153,26 @@ def loss(self, pos_embeds, neg_embeds, summaries):
         zip(pos_embeds, neg_embeds, summaries):
 
             pos_loss = -torch.log(
-                self.discriminate(pos_embed, summary, sigmoid=True) + EPS
-            ).mean()
-            neg_loss = -torch.log(
-                1 - self.discriminate(neg_embed, summary, sigmoid=True) + EPS
-            ).mean()
+                self.discriminate(pos_embed, summary, sigmoid=True) +
+                EPS).mean()
+            neg_loss = -torch.log(1 - self.discriminate(
+                neg_embed, summary, sigmoid=True) + EPS).mean()
             total_loss += (pos_loss + neg_loss)
 
         return total_loss
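The `loss` method above implements a DGI-style contrastive objective: a discriminator scores node embeddings against a summary vector, pushing positive pairs toward 1 and corrupted (negative) pairs toward 0. A standalone sketch under assumed shapes — the bilinear `weight` matrix and the toy tensors are illustrative, not taken from the PR:

```python
import torch

EPS = 1e-15


def discriminate(z, summary, weight, sigmoid=True):
    # Bilinear score z^T W s between node embeddings and a summary vector.
    value = torch.matmul(z, torch.matmul(weight, summary))
    return torch.sigmoid(value) if sigmoid else value


def contrastive_loss(pos_z, neg_z, summary, weight):
    # Positive embeddings should score near 1, corrupted ones near 0.
    pos_loss = -torch.log(discriminate(pos_z, summary, weight) + EPS).mean()
    neg_loss = -torch.log(1 - discriminate(neg_z, summary, weight) + EPS).mean()
    return pos_loss + neg_loss


# Toy usage with random embeddings (hypothetical sizes):
torch.manual_seed(0)
w = torch.eye(8)  # hypothetical discriminator weight
pos, neg = torch.randn(16, 8), torch.randn(16, 8)
s = torch.randn(8)
print(contrastive_loss(pos, neg, s, w))  # a finite scalar loss
```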


-model = HeteroUnsupervised(
-    data.metadata(),
-    out_channels=64,
-    hidden_channels=64,
-    num_layers=2
-)
+model = HeteroUnsupervised(data.metadata(), out_channels=64,
+                           hidden_channels=64, num_layers=2)
 
 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
 
 data, model = data.to(device), model.to(device)
 
 with torch.no_grad():  # Initialize lazy modules.
     out = model(data.x_dict, data.edge_index_dict)
 
-optimizer = torch.optim.Adam(model.parameters(),
-                             lr=0.005, weight_decay=0.001)
+optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.001)
 
 
 def train():
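The `with torch.no_grad():` forward pass in the diff above exists because lazily-initialized modules only materialize their weights on the first call, and the optimizer must be created afterwards so it sees real parameter tensors. A minimal sketch of the same pattern using plain `torch.nn.LazyLinear` — the layer choice here is illustrative, not the PR's model:

```python
import torch

model = torch.nn.LazyLinear(out_features=4)  # in_features inferred lazily

with torch.no_grad():          # initialize lazy parameters with a dry run
    model(torch.randn(2, 16))  # in_features=16 is materialized here

# Only now do parameters have concrete shapes, so the optimizer
# registers real tensors instead of uninitialized stubs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.001)
print(model.weight.shape)  # torch.Size([4, 16])
```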