Final Report
• Epochs: The model mostly learns in the first 25 to 30 epochs; after that it only overfits.

• For bi-LSTM, Statement combined with information about the subject of the sentence produced better results than Statement combined with any of the other metadata fields, i.e. subject information gives the model useful context about the statement, hence improving the prediction accuracy.

• For bi-LSTM, Statement + metadata and Statement + metadata + POS gave almost the same results; including POS did not affect the performance much.

• For bi-LSTM, Statement + metadata + DEP produced the best result of 28.41% accuracy. This is an improvement over Wang (2017), which quotes 27.40% accuracy for its best model. Dependency relations were indeed important features that helped the model predict more accurately.

• For CNN, the same set of features, Statement + metadata + DEP, decreased the performance. CNNs are not able to exploit the temporal information in the DEP features.

3.6 Code

The code can be found here: Github Repo. The readme file in the repo contains details about the code run parameters, and comments in the code explain each step.

4 Conclusions

Our bi-LSTM model with rich input features (statement + metadata + POS + DEP) outperforms the best results from Wang (2017). POS tags and DEP parse information improve the performance of the baseline models. LSTMs are better than CNNs for textual data. The takeaway from this project is an understanding of which kind of problem requires which model architecture, based on the dataset spread and metadata.

5 Sample Predictions by our model