The entire system is implemented as a series of modules that form several conceptual "layers", outlined below. Emacspeak talks to whatever speech output device you use via an appropriate Tcl script. This script is the "driver" for your particular output device, and it provides a simplified interface to that device. The next layer of Emacspeak provides the interface between elisp and this driver; all layers above this point are implemented in elisp. The layer above the driver interface provides core speech output functionality, e.g. functions of the form emacspeak-speak-line, emacspeak-speak-word and so on. There is a module providing "voices" and "personalities", Emacspeak's aural analogue to fonts and faces. There is also an "auditory icons" module that produces short snippets of digitized audio to enhance the user interaction.
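To make the layering concrete, here is a minimal sketch of calling into each layer directly from elisp. Only emacspeak-speak-line and emacspeak-speak-word are named above; dtk-speak (as the elisp entry point into the driver interface) and the icon name save-object are assumptions made here for illustration:

    ;; Sketch: exercising each layer directly from elisp.
    ;; dtk-speak and 'save-object are assumed names, not taken
    ;; from the text above.
    (dtk-speak "Hello from the driver interface") ; driver interface: hand text to the speech device
    (emacspeak-speak-line)                        ; core layer: speak the current line
    (emacspeak-auditory-icon 'save-object)        ; auditory icons: play a short audio cue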
The rest of Emacspeak is implemented using advice. For example, next-line is advised to speak the line you move to:

    (defadvice next-line (after emacspeak pre act)
      "Speak the line moved to when called interactively."
      (when (interactive-p)
        (emacspeak-speak-line)))

This core advice, i.e. all the functions you need to advise to get a base Emacs talking, makes up the base system. You also need to do some hairy things to work around functions in Emacs that are called directly from the C level and therefore defeat a simple advice-based approach. To date, I've managed to find workarounds in the elisp world, so Emacspeak contains no C code, nor does it require modifications to the GNU Emacs sources, either the C or the elisp codebase.
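The same pattern extends to other interactive commands. As a sketch (not necessarily the exact advice Emacspeak installs), forward-word could be advised to speak the word moved over, using emacspeak-speak-word from the core layer:

    ;; Sketch: the same advice pattern applied to another movement
    ;; command. The exact advice in Emacspeak may differ.
    (defadvice forward-word (after emacspeak pre act)
      "Speak the word moved over when called interactively."
      (when (interactive-p)
        (emacspeak-speak-word)))

Because the advice body is guarded by (interactive-p), programs that call forward-word internally are unaffected; only the user's own keystrokes produce speech.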