In poker terms, I’ll meet your 20 years and raise it to 60. When I was a freshman, Harvard had optional non-credit seminars to introduce us to some advanced work. The one I took was on computer science before that was a phrase, much less a department. My project was with Anthony Oettinger and Susumu Kuno on automatic language translation. This was when they were discovering just how hard it is. Some examples I remember: “Couples applying for marriage licenses wearing pedal pushers will be denied licenses” has about 120 grammatical interpretations, and “Time flies” has several (noun ‘time’ with verb ‘flies’, or imperative verb ‘time’ with insect subject ‘flies’).
They were still excited about the usefulness of a software technique they called a ‘pushdown store’ for parsing sentences. Today we call it a stack. Chomsky’s transformational grammar work was still very new then.
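(For readers who haven’t met the idea: a pushdown store is just a last-in, first-out list. Below is a minimal sketch in Python; the two-word lexicon, the toy grammar, and the names in it are invented for illustration and are nothing like the actual Oettinger/Kuno system. It uses the stack to shift-reduce every part-of-speech assignment of “Time flies” and prints the assignments that parse.)

    from itertools import product

    # Toy lexicon: each word can carry more than one part of speech,
    # which is exactly where the ambiguity comes from.
    LEXICON = {
        "time": {"N", "V"},   # noun "time" or imperative verb "time"
        "flies": {"N", "V"},  # verb "flies" or the insect noun "flies"
    }

    # Toy grammar: a pair of stack symbols reduces to the category on the right.
    RULES = {
        ("N", "V"): "S",  # declarative reading: time/N flies/V
        ("V", "N"): "S",  # imperative reading:  time/V flies/N
    }

    def recognize(tags):
        """Shift-reduce recognition with a pushdown store (a Python list
        used as a stack): shift each tag, then reduce while the two
        symbols on top of the stack match a grammar rule."""
        stack = []
        for tag in tags:
            stack.append(tag)                      # shift
            while len(stack) >= 2 and tuple(stack[-2:]) in RULES:
                rhs = (stack[-2], stack[-1])
                del stack[-2:]
                stack.append(RULES[rhs])           # reduce
        return stack == ["S"]

    sentence = "time flies".split()
    # Try every part-of-speech assignment and report the ones that parse.
    for tags in product(*(sorted(LEXICON[w]) for w in sentence)):
        if recognize(tags):
            print(" ".join(f"{w}/{t}" for w, t in zip(sentence, tags)))

Running it prints both readings, time/N flies/V and time/V flies/N. The marriage-license sentence, with its 120-odd readings, is the same brute-force search blown up.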
I’m not surprised that the methods that eventually succeeded in language translation seem to be an end run around those early approaches.
—Barry
On 25 Feb 2021, at 10:39, Jochen Fromm wrote:
I remember posting on Usenet about 15 or 20 years ago (I think it was about neural networks, on comp.ai or similar), and suddenly Marvin Minsky himself replied, "Look, I have done that already in 1960 or 1970." I was impressed to get a response from him; after all, he was at MIT and had written "The Society of Mind", etc.
I am still impressed by the progress Google has made. If you look at Google Translate, it is just amazing how good the translations already are. This was unthinkable 20 years ago. I believe the success comes from the amount of data they use in a smart way. Halevy, Norvig, and Pereira called it "The Unreasonable Effectiveness of Data":
<https://research.google/pubs/pub35179>