====== Multiple-choice question ======

{{rss>https://pubmed.ncbi.nlm.nih.gov/rss/search/1NksTF9hJaynugspbr8UICXuM4A0S7ri5cYmixhSNFr5lg_sUs/?limit=15&utm_campaign=pubmed-2&fc=20250209094352}}

----

[[ChatGPT]]-4o demonstrates the potential to generate MCQs efficiently but lacks the depth needed for complex [[assessment]]s. Human [[review]] therefore remains essential to ensure [[quality]]. Combining AI [[efficiency]] with [[expert]] oversight could optimize [[question]] creation for high-stakes exams, offering a scalable model for [[medical education]] that balances time efficiency and content quality ((Law AK, So J, Lui CT, Choi YF, Cheung KH, Kei-Ching Hung K, Graham CA. AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination. BMC Med Educ. 2025 Feb 8;25(1):208. doi: 10.1186/s12909-025-06796-6. PMID: 39923067.))