Unlike other word embedding models, which learn vectors for a collection of words sequentially, this paper proposes a non-sequential refinement approach that improves the vectors of particular words, using a string matching algorithm to speed up the process. The key idea is to change the training order in the embedding learning model, forcing it to learn the vector of a particular word completely before moving on to other target words. The learned vector of the given word and its context vectors are then used to train the remaining target words, so that later words are trained against more accurate vectors. In this study, the effect of training order in the Skip-gram model is investigated, and the learned vectors are compared quantitatively and qualitatively on the word similarity task. To speed up the process, a GPU-based string matching algorithm is used to find the occurrences of the given word in the training corpus. To the best of our knowledge, this is the first work in the literature to incorporate a GPU-based string matching algorithm into the Skip-gram model to refine particular word vectors. Additionally, we provide an in-depth analysis of GPU parallelization and identify string matching algorithms that are suitable for integration into word embedding models.
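The refinement idea above can be sketched in a few lines. This is a minimal, hedged illustration, not the paper's implementation: the function names (`find_occurrences`, `refine_word`), the hyperparameters, and the use of a plain linear scan in place of the paper's GPU string matching kernel are all assumptions for the sake of a runnable example. It shows the non-sequential training order: locate every occurrence of one target word, then run Skip-gram negative-sampling updates on that word to completion before any other word is trained.

```python
import numpy as np

def find_occurrences(tokens, target):
    # Stand-in for the string matching step: return every position of
    # `target` in the token stream. (The paper offloads this search to a
    # GPU string matching algorithm; a linear scan is used here.)
    return [i for i, tok in enumerate(tokens) if tok == target]

def refine_word(tokens, target, vecs, ctx_vecs, vocab,
                window=2, lr=0.1, epochs=20, k=2, rng=None):
    # Skip-gram with negative sampling restricted to a single target word:
    # the target's vector is trained completely before moving on, which is
    # the changed training order described in the abstract.
    rng = rng or np.random.default_rng(0)
    positions = find_occurrences(tokens, target)
    t = vocab[target]
    for _ in range(epochs):
        for pos in positions:
            lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
            for j in range(lo, hi):
                if j == pos:
                    continue
                pairs = [(vocab[tokens[j]], 1.0)]          # true context word
                pairs += [(int(rng.integers(len(vocab))), 0.0)
                          for _ in range(k)]               # k negative samples
                for cid, label in pairs:
                    score = 1.0 / (1.0 + np.exp(-vecs[t] @ ctx_vecs[cid]))
                    g = lr * (label - score)
                    dt = g * ctx_vecs[cid].copy()          # grad w.r.t. target
                    ctx_vecs[cid] += g * vecs[t]           # update context vec
                    vecs[t] += dt                          # update target vec
    return vecs[t]
```

Later target words would then be refined against the already-converged vectors, mirroring the paper's claim that words trained later benefit from more accurate neighbors.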