
5.1.4 Iterative With Retraining

In the approach described above, the only thing that changes after each iteration is the input data to the extended parsers and the input to the aligner.

In the experiments with the training data (Sections 3.3.2 and 4.3.2) we saw the importance of the training data matching the test data. If we train on gold-standard data and then use non-gold-standard data at test time, we will not get good results. This also points to a possible problem with the iterative approach as described above. The models are static and therefore reflect the quality of the data used for training them. This data was created using jackknifing of a standard parser. As the quality of the input data at test time hopefully increases as a result of the iterative approach, a gap will open between the quality of the training data used for the models and the quality of the input data at test time. We will now describe an approach that tries to deal with this problem.

We want to make the quality of the training data match the quality of the test data. To do this we need to retrain the models in each iteration.

We cannot use the trained model that we use when parsing the test data to parse the training data and then use this as input when training the extended parser in the other language, because the quality of the parses on the data used for training would be too high. To deal with this, we can use jackknifing in the same way we used it for creating the original training data. This means splitting the training data for the extended parser into n parts, training n parsers on n−1 parts each, and using each of these to parse the held-out part. We choose a similar approach that leads to a little less retraining. In each iteration we choose n−1 parts for training and 1 part as a left-out part. We then train the parsers on the n−1 parts and parse the left-out part. If the new parses of the left-out part are better than the previous parses of this part, we replace the old parses with the new ones. This means that we update the training data for the extended parser on the target language if the parses on the source language get better, and vice versa. In each iteration we also train a model on all the parts and use it to parse the test data. The idea is as described above: in each iteration the quality of the left-out part should increase, which leads to an increase in the quality of the training data for the extended parser in the next iteration.
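As a rough illustration, the split and the accept-if-better rotation for one language could look like the following sketch. The helper names (train_extended_parser, parse_with, uas) are placeholders and do not correspond to any specific toolkit used in this work:

from typing import Callable, List, Optional, Sequence

def split_into_parts(data: Sequence, n: int) -> List[list]:
    """Split the extended training data into n roughly equal, contiguous parts."""
    size = (len(data) + n - 1) // n
    return [list(data[i:i + size]) for i in range(0, len(data), size)]

def retrain_left_out(parts: List[list], left_out: int, gold_parts: List[list],
                     best_uas: List[float], train_extended_parser: Callable,
                     parse_with: Callable, uas: Callable) -> Optional[list]:
    """Train on the n-1 remaining parts, parse the left-out part, and accept the
    new parses only if their UAS beats the best seen so far for that part."""
    training = [sent for i, part in enumerate(parts) if i != left_out for sent in part]
    model = train_extended_parser(training)           # placeholder trainer
    new_parses = parse_with(model, parts[left_out])   # placeholder parser
    score = uas(new_parses, gold_parts[left_out])     # placeholder metric
    if score > best_uas[left_out]:
        best_uas[left_out] = score
        return new_parses   # caller copies these into the other language's extended training data
    return None             # no improvement: keep the previous parses

The returned parses, if any, replace the corresponding part of the other language's extended training data, which is what keeps the accuracy of that data from ever decreasing.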

Hopefully, this retraining of the model also leads to a better correspondence between the model and the quality of the input to the extended parser at test time.

This method has the validation approach described above built in. After each iteration we only update the parses of the left-out part if they are better than the best parses so far. We also only update the test data input if the left-out part improves. Algorithm 4 describes the procedure. We have left out the alignment part of the algorithm for clarity.

Algorithm 4: Iterative - with retraining

Data: trainA, trainB, trainAB, extTrainA, extTrainB, testA, testB, testAB, testAparsed, testBparsed
Result: test data parsed and aligned

split trainA, trainB, extTrainA, extTrainB into n parts;
for i ← 1 to maxIter do
    for leftOut ← 1 to n do
        leftOutA = leftOut part of trainA;
        leftOutB = leftOut part of trainB;
        train extended parser on n−1 parts of extTrainA → modAi;
        train extended parser on n−1 parts of extTrainB → modBi;
        train extended parser on extTrainA → modTAi;
        train extended parser on extTrainB → modTBi;
        apply modAi on leftOutA → parsedAi;
        apply modBi on leftOutB → parsedBi;
        apply modTAi on testA with testBparsed as input → parsedTAi;
        apply modTBi on testB with testAparsed as input → parsedTBi;
        if parsedAi better than leftOut part of extTrainB then
            update extTrainB with parsedAi;
            testAparsed = parsedTAi;
        end
        if parsedBi better than leftOut part of extTrainA then
            update extTrainA with parsedBi;
            testBparsed = parsedTBi;
        end
    end
end
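Read procedurally, the loop body of Algorithm 4 amounts to the following sketch (alignment omitted, as in the algorithm). All callables are assumed placeholders: train builds an extended parser, parse applies a model and optionally takes the other language's parses as feature input, uas scores against gold trees, and attach adds the accepted source-language parses as features to the other language's training part.

from typing import Callable, List

def iterative_with_retraining(
    parts_A: List[list], parts_B: List[list],        # extTrainA / extTrainB, already split into n parts
    gold_A: List[list], gold_B: List[list],          # gold trees for the corresponding parts
    test_A, test_B, test_A_parsed, test_B_parsed,    # test sentences and their current parses
    train: Callable, parse: Callable, uas: Callable, attach: Callable,
    max_iter: int = 30,
):
    """Sketch of Algorithm 4: jackknife-style retraining with cross-lingual updates."""
    n = len(parts_A)
    best_A = [float("-inf")] * n   # best UAS seen for each left-out part of language A
    best_B = [float("-inf")] * n
    for _ in range(max_iter):
        for k in range(n):
            # train on the n-1 remaining parts, and on all parts for the test models
            mod_A = train([s for i, p in enumerate(parts_A) if i != k for s in p])
            mod_B = train([s for i, p in enumerate(parts_B) if i != k for s in p])
            mod_TA = train([s for p in parts_A for s in p])
            mod_TB = train([s for p in parts_B for s in p])
            # parse the left-out parts and the test data
            parsed_A = parse(mod_A, parts_A[k])
            parsed_B = parse(mod_B, parts_B[k])
            parsed_TA = parse(mod_TA, test_A, other=test_B_parsed)
            parsed_TB = parse(mod_TB, test_B, other=test_A_parsed)
            # accept only improvements; A's parses become features for B and vice versa
            score_A = uas(parsed_A, gold_A[k])
            if score_A > best_A[k]:
                best_A[k] = score_A
                parts_B[k] = attach(parts_B[k], parsed_A)
                test_A_parsed = parsed_TA
            score_B = uas(parsed_B, gold_B[k])
            if score_B > best_B[k]:
                best_B[k] = score_B
                parts_A[k] = attach(parts_A[k], parsed_B)
                test_B_parsed = parsed_TB
    return test_A_parsed, test_B_parsed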

Figures 5.4 and 5.5 show, for each iteration, both the accuracy of the data used as input to the extended parsers and the accuracy of their output. We see that the accuracy on the training data increases consistently. The biggest increase is in the first 10 iterations, where baseline parser output is replaced with extended parser output, but we also see improvements after iteration 10. The accuracy is monotone because we only update the training data if the accuracy on the left-out part increases.

Unfortunately, the correspondence between the accuracy of the training data and the output from the extended parsers is difficult to see. We do not see any consistent increase in accuracy on the test data. The best results over all iterations (Danish 89.26 and English 86.41) are actually better than the best results from the other iterative approaches (Danish 89.15 and English 86.17), but it seems very difficult to predict in advance which iteration will yield the best results.

Figure 5.4: Accuracy (UAS per iteration) of the input and output of the Danish extended parser in the iterative-with-retraining approach.

Figure 5.5: Accuracy (UAS per iteration) of the input and output of the English extended parser in the iterative-with-retraining approach.