Making a slight mistake during a live music performance can easily be spotted by an astute listener, even if the performance is an improvisation or an unfamiliar piece. An example might be a highly dissonant chord played by mistake in a classical-era sonata, or a sudden off-key note in a recurring motif. The problem of identifying and correcting such errors can be approached with artificial intelligence -- if a trained human can easily do it, perhaps a computer can be trained to spot the errors just as quickly and accurately. The ability to identify and auto-correct errors in real time would be not only extremely useful to performing musicians, but also a valuable asset for producers, greatly reducing the number of overdubs and re-recorded takes needed to fix small imperfections. This paper examines state-of-the-art solutions to related problems and explores novel solutions for music error detection and correction, focusing on their real-time applicability. The explored approaches consider error detection through musical context and theory, as well as supervised learning models with no predefined musical information or rules, trained on appropriate datasets. Focusing purely on correcting musical errors, the presented solutions operate on a high-level representation of the audio (MIDI) rather than the raw audio domain, taking input from an electronic instrument (a MIDI keyboard/piano) and altering it when needed before it is sent to the sampler. This work proposes multiple general recurrent neural network designs for real-time error correction and performance aid for MIDI instruments, and discusses the results, limitations, and possible future improvements. It also emphasizes making the research results easily accessible to end users -- music enthusiasts, producers, and performers -- by using the latest artificial intelligence platforms and tools.