In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects (Nick Mokey/VentureBeat)

Nick Mokey / VentureBeat:
In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects  —  Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans.



from Techmeme https://ift.tt/UtVn3BL