[Code Review] Implementing a Sentiment Classification Model for Elderly-Speaker Conversations (2): RNN
Sentiment Classification Model Implementation Series (1) | CNN 👩‍🏫 Model class class RNN(nn.Module): def __init__(self, vocab_size, embed_dim, hidden_dim, n_layers, dropout, num_class, device): super(RNN, self).__init__() self.device = device self.n_layers = n_layers self.hidden_dim = hidden_dim self.embed = nn.Embedding(vocab_size, embed_dim) self.dropout = nn.Dropout(p=dropout) self.gru = nn.GRU(embed_dim, self.hidden_dim, self.n_laye..
2022. 12. 21.
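The excerpt above cuts off inside `__init__`, so here is a minimal runnable sketch of a GRU-based classifier with the same constructor signature. Everything past the truncation point is a reconstruction: `batch_first=True`, the `nn.Linear` output head, and the use of the last time step's hidden state are my assumptions, not confirmed details of the original post.

```python
import torch
import torch.nn as nn

class RNN(nn.Module):
    # Constructor follows the excerpt; forward() and the fc head are
    # assumptions reconstructing a standard GRU text classifier.
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_layers,
                 dropout, num_class, device):
        super(RNN, self).__init__()
        self.device = device
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.dropout = nn.Dropout(p=dropout)
        self.gru = nn.GRU(embed_dim, self.hidden_dim, self.n_layers,
                          batch_first=True, dropout=dropout)
        self.fc = nn.Linear(self.hidden_dim, num_class)

    def forward(self, x):
        # x: (batch, seq_len) integer token ids
        h0 = torch.zeros(self.n_layers, x.size(0), self.hidden_dim).to(self.device)
        out, _ = self.gru(self.dropout(self.embed(x)), h0)
        # classify from the hidden state at the last time step
        return self.fc(out[:, -1, :])

model = RNN(vocab_size=1000, embed_dim=32, hidden_dim=64, n_layers=2,
            dropout=0.3, num_class=3, device="cpu")
logits = model(torch.randint(0, 1000, (4, 12)))
print(logits.shape)  # torch.Size([4, 3])
```

Taking `out[:, -1, :]` assumes unpadded (or right-padded, length-sorted) batches; with padding, `nn.utils.rnn.pack_padded_sequence` would be the more careful choice.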
[Code Review] Implementing a Sentiment Classification Model for Elderly-Speaker Conversations (1): CNN
Now that the annotation project from this semester has wrapped up, I'm writing this post to review the code we used. There are three models in total, which I plan to cover in order: CNN, RNN, and Transformer. 👩‍🏫 Model class class CNN(nn.Module): def __init__(self, vocab_size, embed_dim, n_filters, filter_size, dropout, num_class): super(CNN, self).__init__() self.embedding = nn.Embedding(vocab_size, embed_dim) self.conv1d_layers = nn.ModuleList([nn.Conv1d(in_channels=embed_dim, out_channels=n_filters[i], ke..
2022. 12. 13.
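This excerpt also truncates mid-`ModuleList`, so here is a runnable sketch of the TextCNN pattern it starts: parallel `Conv1d` branches over the embedding, max-pooled over time and concatenated. The constructor matches the visible signature; `kernel_size=filter_size[i]`, the ReLU + max-pool forward pass, and the `nn.Linear(sum(n_filters), num_class)` head are my assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    # Constructor and conv1d_layers follow the excerpt; forward() and the
    # classifier head are assumptions reconstructing a standard TextCNN.
    def __init__(self, vocab_size, embed_dim, n_filters, filter_size,
                 dropout, num_class):
        super(CNN, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv1d_layers = nn.ModuleList([
            nn.Conv1d(in_channels=embed_dim, out_channels=n_filters[i],
                      kernel_size=filter_size[i])
            for i in range(len(filter_size))
        ])
        self.dropout = nn.Dropout(p=dropout)
        self.fc = nn.Linear(sum(n_filters), num_class)

    def forward(self, x):
        # (batch, seq_len) -> (batch, embed_dim, seq_len): Conv1d expects
        # channels (here, embedding dimensions) on axis 1
        emb = self.embedding(x).permute(0, 2, 1)
        # each branch: convolve, ReLU, then max-pool over the time axis
        pooled = [F.relu(conv(emb)).max(dim=2).values
                  for conv in self.conv1d_layers]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))

model = CNN(vocab_size=1000, embed_dim=32, n_filters=[16, 16, 16],
            filter_size=[3, 4, 5], dropout=0.3, num_class=3)
logits = model(torch.randint(0, 1000, (4, 20)))
print(logits.shape)  # torch.Size([4, 3])
```

The different kernel sizes act as n-gram detectors of different widths; note the input sequence must be at least as long as the largest kernel (5 here), or the convolution fails.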