I was working on second-gen knowledge-based systems, and I was particularly interested in modelling uncertainty. I didn't feel any of the main methods around then (Bayesian, Certainty Factors, fuzzy logic etc) really replicated human reasoning about uncertainty, so I came up with a more qualitative approach that used words to express uncertainty rather than some numeric method. (PS: genetic algorithms were around in the early 90s, by the way.)
The idea was that you could state facts with a level of uncertainty and combine them with other facts in a chain of reasoning; the longer the chain got, and depending on which qualitative certainties it contained, the lower the certainty of the conclusion. The idea was that a chain is only as strong as its weakest link.
A very simple example: A means you're fairly certain of B, and B means you're not very certain of C, so A means you're not very certain of C.
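The weakest-link rule can be sketched in a few lines, assuming an ordered set of qualitative certainty levels (the level names and their ordering here are my invention for illustration, not the original system's vocabulary):

```python
# Ordered qualitative certainty levels, weakest first.
# These names are illustrative assumptions, not the original system's.
LEVELS = ["not very certain", "fairly certain",
          "reasonably certain", "strongly certain"]

def chain_certainty(links):
    """Certainty of a chain of reasoning: its weakest link."""
    return min(links, key=LEVELS.index)

# A -> B is 'fairly certain', B -> C is 'not very certain',
# so the chain A -> C is only 'not very certain'.
print(chain_certainty(["fairly certain", "not very certain"]))
# -> not very certain
```

Because the levels are words rather than numbers, "combining" is just picking the weakest word in the chain, which is what makes the approach qualitative.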
In addition, there could be a number of different chains of reasoning leading to the same point, and multiple chains with strong certainties could combine to make you more certain of the end point. Another example:
X makes you reasonably certain of A
Y makes you reasonably certain of A
Z makes you reasonably certain of A
X+Y+Z makes you feel A is strongly certain.
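The X+Y+Z example above could be sketched with a simple promotion rule. To be clear, the specific rule here (bump the best chain's level by one for each additional strong supporting chain) is my assumption for illustration; the original system used its own qualitative combination rules:

```python
# Ordered qualitative certainty levels, weakest first (illustrative names).
LEVELS = ["not very certain", "fairly certain",
          "reasonably certain", "strongly certain"]

def combine_chains(chain_levels):
    """Combine independent chains supporting the same conclusion
    ('knowledge renascence'). Promotion scheme is an assumption."""
    best = max(LEVELS.index(level) for level in chain_levels)
    # Count chains that are at least 'reasonably certain'.
    strong = sum(1 for level in chain_levels
                 if LEVELS.index(level) >= LEVELS.index("reasonably certain"))
    # Each extra strong chain beyond the first promotes certainty one level,
    # capped at the top of the scale.
    promoted = min(best + max(0, strong - 1), len(LEVELS) - 1)
    return LEVELS[promoted]

# X, Y and Z each make you reasonably certain of A;
# together they make you strongly certain of A.
print(combine_chains(["reasonably certain"] * 3))
# -> strongly certain
```

A single chain passes through unchanged, so the promotion only kicks in when genuinely independent lines of reasoning agree.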
I've explained it in simple terms, and I hope I've got it across.
I called the reduction in certainty as an individual chain of reasoning got longer 'knowledge senescence', and the increase in certainty as individual chains of reasoning combined 'knowledge renascence'.
There were other benefits found in looking at the machine's output. It matched a heuristics-based first-gen KBS in correctness against a human expert (both agreed with the human experts' opinions about 80% of the time, though they differed considerably in which cases they got right!), but also in self-explanation: the machine could explain in plain English how it reached its conclusions by laying out the individual facts in order, and I found this proved educationally comparable to an explanation of the reasoning hand-drafted by a human expert.
(I got very peeved with the slow speed of academia on completion, and moved into industry, btw)