The rules for the race: Both contenders waited for Denny’s, the diner company, to come out with an earnings report. Once that was released, the stopwatch started. Both wrote a short radio story and were graded on speed and style.
Unlike John Henry’s race against the steam drill, this contest wasn’t just a question of whether Scott Horsley could write a news story faster than Wordsmith. The stories were compared side by side, and the results are instructive.
Before getting to my conclusions, I encourage you to pop over to the NPR site to read these two short articles yourself to decide which one you like best and see if you can tell which one was written by a machine.
Click here to read the stories first if you don’t want to be influenced by my conclusions.
Could you tell which story was written by the experienced human journalist?
The story written by Wordsmith was factual and dense enough that I felt it required a high level of concentration to read. Horsley’s was equally informative, but written in a way that I found comparatively easy to digest. Additionally, Horsley’s story had at least two things that Wordsmith’s didn’t. First, it had context, briefly reminding the reader what business Denny’s is in; second, it had analysis, helping the reader understand why the earnings results were better than expected.
In other words, compared to the human reporter’s story, Wordsmith’s lacked insight. For me, it was no contest. The machine’s story lacked that ineffable quality of personality. Scott Horsley’s story was far better, even if it took him 7 minutes to write compared to Wordsmith’s 2 minutes. According to the results of the poll, the majority of readers thought Horsley’s story was better too.
Smith ended by pointing out that Wordsmith’s style can be tuned to produce a snappier tone. Automated Insights’ machine learning algorithms could also be used to improve the machine’s prose. Likewise, the algorithms could certainly be tweaked to present an analytical treatment in addition to the factual data. The much harder problem will be for a machine to make meaningful connections through out-of-context associations and the shared cultural knowledge and assumptions that a human writer would bring.
If this article had been written by a machine, would it likely have included a reference to John Henry? Would the machine have understood the meaning that reference carries for most readers?
I don’t know the answer to that question, but I suspect not yet.
Thanks to Mike Cane for reminding me on Twitter to think about John Henry.