====== Internet ======

{{rss>https://pubmed.ncbi.nlm.nih.gov/rss/search/1DaoU-ci3R9h_IL6ZZZRvL9YZOavVq0xwT9AySGkPuCdYa9sgv/?limit=15&utm_campaign=pubmed-2&fc=20250410161728}}

The [[Internet]] is the global system of interconnected computer [[network]]s that use the Internet protocol suite (TCP/IP) to link billions of [[device]]s worldwide. It is a [[network]] of networks consisting of millions of private, public, academic, business, and government networks of local to global scope, [[link]]ed by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of [[information]] [[resource]]s and [[service]]s, such as the inter-linked [[hypertext]] [[document]]s and applications of the [[World Wide Web]] (WWW), [[electronic mail]], telephony, and [[peer-to-peer]] [[network]]s for [[file]] sharing.

The upsurge in [[big data]] [[application]]s is a direct consequence of the rapid growth of [[information technology]] and of the increasing number of internet-connected devices, known in healthcare as the [[Internet of Things]].

----

The Internet has become a primary source of health [[information]], leading patients to seek answers online before consulting health care providers. This study aims to evaluate the implementation of Chat Generative Pre-Trained Transformer ([[ChatGPT]]) in neurosurgery by assessing the accuracy and helpfulness of artificial intelligence (AI)-generated responses to common postsurgical questions. A list of 60 commonly asked questions regarding neurosurgical procedures was developed. Responses from ChatGPT-3.0, ChatGPT-3.5, and ChatGPT-4.0 to these questions were recorded and graded by multiple practitioners for accuracy and helpfulness.
The understandability and actionability of the answers were assessed using the Patient Education Materials Assessment Tool, and readability was analyzed using established scales. A total of 1080 responses were evaluated, equally divided among ChatGPT-3.0, 3.5, and 4.0, each contributing 360 responses. The mean helpfulness score across the 3 subsections was 3.511 ± 0.647, while the mean accuracy score was 4.165 ± 0.567. The Patient Education Materials Assessment Tool analysis revealed that the AI-generated responses had higher actionability scores than understandability, indicating that the answers provided practical guidance and recommendations that patients could apply effectively. However, the mean Flesch Reading Ease score was 33.5, suggesting that the readability level of the responses was relatively complex. The Raygor Readability Estimate scores fell within the graduate level, with an average of the 15th grade. The artificial intelligence [[chatbot]]'s responses, although factually accurate, were not rated as highly beneficial, with only marginal differences in perceived helpfulness and accuracy between ChatGPT-3.0 and ChatGPT-3.5. Despite this, the responses from ChatGPT-4.0 showed a notable improvement in understandability, indicating enhanced readability over earlier versions ((Gajjar AA, Kumar RP, Paliwoda ED, Kuo CC, Adida S, Legarreta AD, Deng H, Anand SK, Hamilton DK, Buell TJ, Agarwal N, Gerszten PC, Hudson JS. Usefulness and Accuracy of Artificial Intelligence Chatbot Responses to Patient Questions for Neurosurgical Procedures. Neurosurgery. 2024 Feb 14. doi: 10.1227/neu.0000000000002856. Epub ahead of print. PMID: 38353558.)).

===== Resources =====

see [[Internet resources]].
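The Flesch Reading Ease score cited in the study above follows a standard published formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words); higher scores mean easier text, and values around 30 correspond to college-level prose. A minimal sketch of the computation is shown below — the syllable counter is a rough vowel-group heuristic of our own, not the dictionary-based counting a validated readability tool would use, so scores are approximate.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping a silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

With this sketch, short monosyllabic sentences score well above 90 ("very easy"), while dense polysyllabic medical prose drops far below the study's reported mean of 33.5, illustrating why AI-generated answers at that level may be hard for patients to read.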