There is a big difference between a sound-synthesis program that runs in real time and one that simply produces output samples faster than the sound device consumes them. For a program to be a real-time synthesiser, it must respond apparently instantaneously to a change in an input parameter. For example, the CSound application mentioned previously is not real-time: it reads the specification of a score and an orchestra at initialization, and then produces audio output. It is not possible to influence the sound as the program produces it (in fact, some real-time extensions have since become available, but I am choosing to ignore them for the sake of the example). Running CSound on a powerful workstation usually means it produces samples faster than actual speed, but this alone does not qualify it as real-time.
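The distinction can be sketched in code. The following is a minimal, hypothetical illustration (the names `render_offline` and `RealtimeVoice` are my own, not from CSound or any real library): an offline renderer fixes its parameters before producing any samples, while a real-time-style loop re-reads its control parameters on every block, so a change made by the user takes effect almost immediately.

```python
import math

SAMPLE_RATE = 44100
BLOCK_SIZE = 64

def render_offline(freq, n_samples):
    """CSound-style batch rendering: the frequency is fixed at the start,
    and nothing can influence the sound once rendering has begun."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n_samples)]

class RealtimeVoice:
    """A block-based synthesis loop that re-reads `freq` every block,
    so a parameter change is audible from the next block onwards."""
    def __init__(self, freq):
        self.freq = freq      # may be changed by the UI at any time
        self.phase = 0.0

    def next_block(self):
        block = []
        for _ in range(BLOCK_SIZE):
            block.append(math.sin(self.phase))
            # the current value of self.freq is consulted here,
            # not the value it had when the voice was created
            self.phase += 2 * math.pi * self.freq / SAMPLE_RATE
        return block
```

A caller could create a `RealtimeVoice(440.0)`, pull a block, set `voice.freq = 880.0`, and hear the pitch double from the very next block; with `render_offline` the same change would require rendering the whole output again.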
When designing a real-time program, one of the most important considerations is the user interface, which is in turn strongly shaped by the effects the user wants to achieve. The next stage in the design process must therefore be to consider the kinds of manipulation required of such an application.