By age 40, more than half of Americans have high blood pressure, but many are unaware of it, since most people have no symptoms. Hypertension is a leading cause of heart disease and can also increase the risk of kidney disease, dementia and cognitive decline. New recommendations from the American Heart Association aim for early treatment, including lifestyle changes and medications, as NPR's Allison Aubrey reports.
For people with systolic blood pressure in the 130s, the recommendation is to start with diet and lifestyle changes. That means adopting a low-salt diet, exercising, limiting alcohol consumption and reducing stress through practices like meditation, yoga or deep breathing. If blood pressure doesn't improve, the recommendations say to begin medication.
For people with systolic blood pressure over 140, medications are recommended. Medications for high blood pressure include diuretics; ACE inhibitors, which block the production of a hormone that narrows blood vessels, helping them relax; and calcium channel blockers, which slow the movement of calcium into the cells of the heart and blood vessels.
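For readers who want the decision rule at a glance, here's a minimal sketch in Python of the thresholds described above. The function and its wording are our own illustration, not part of the guidelines; real treatment decisions also weigh diastolic pressure, overall cardiovascular risk and a clinician's judgment.

```python
def aha_first_step(systolic: int) -> str:
    """Illustrative sketch only: maps a systolic reading (mm Hg) to the
    first-line approach described above. Not medical advice; real care
    also considers diastolic pressure and individual risk factors."""
    if systolic >= 140:
        return "medication recommended"
    elif systolic >= 130:
        # Start with diet and lifestyle changes; begin medication
        # if blood pressure doesn't improve.
        return "lifestyle changes first, medication if no improvement"
    else:
        return "below the thresholds discussed in the new recommendations"

print(aha_first_step(134))  # -> lifestyle changes first, ...
```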
For the many people with hypertension whose blood pressure doesn't come down enough on current medications, there may be hope in a new class of medications, not yet on the market, that targets a different hormone.
Here’s more about how to control high blood pressure.
Plus: A misplaced arm position can skew blood pressure readings
Ever wished you could be a fly on the wall in the meetings and conversations that are shaping the future of the U.S. and the world? Now you can, with NPR's newest podcast, Sources & Methods.
Every Thursday, host Mary Louise Kelly sits down with a team of NPR correspondents to discuss the biggest national security stories of the week, taking you inside the Pentagon, State Department, and CIA with military officials, intelligence experts and diplomatic leaders.
Listen to Sources & Methods on the NPR App, or wherever you get your podcasts.
Earlier this year, my doctor ordered an MRI to investigate pelvic pain that had persisted for three years after a hysterectomy. When she reviewed the results with me, she said the MRI revealed the likely sources of the pain: damage and swelling in a muscle that helps rotate my left hip and in the tendons that connect my hamstrings to my pelvic bone. She recommended physical therapy.
Then I asked Google's Gemini AI for its take on the MRI report. Gemini went line by line, translating medical terminology into plain language. Its conclusions generally echoed the doctor's. One stood out to me, however: "Hysterectomy scar tissue ruled out." That wasn't in the report, so I asked the AI where it saw that. It replied that the report didn't explicitly rule out scar tissue from the surgery, but scar tissue didn't show up on the MRI, and the radiologist identified other specific causes for my symptoms. Not bad, Dr. Gemini.
Many patients are using large language models, or LLMs, like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini to interpret their records, as Kate Ruder reports for KFF Health News. Physicians and patient advocates warn that AI chatbots can produce wrong answers and that sensitive medical information might not remain private. At the same time, some physicians are using AI assistants to transcribe conversations with patients and draft interpretations of clinical tests and lab results.
"Ultimately, it's just the need for caution overall with LLMs. With the latest models, these concerns are continuing to get less and less of an issue but have not been entirely resolved," says Justin Honce, a neuroradiologist at UCHealth in Colorado.
Here are a few things to keep in mind about using LLMs to generate medical notes and interpret test results.
How you ask matters
Researchers at Beth Israel Deaconess Medical Center in Massachusetts analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients' questions about a clinical note. All three performed well, but how patients framed their questions mattered. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.
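To make that framing tip concrete, here's a minimal sketch of the pattern in Python. It assumes OpenAI's official Python client and an API key in the environment; the persona text, model name and questions are our own placeholders, not the study's actual protocol.

```python
# Sketch of the prompting pattern described above: give the chatbot a
# clinician persona, then ask one question per request.
# Assumes OpenAI's Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

clinical_note = "...text of the clinical note, pasted by the patient..."

questions = [
    "What diagnosis does this note describe, in plain language?",
    "What follow-up steps does the note recommend?",
]

for question in questions:  # one question at a time, not all at once
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # Persona framing: ask the model to answer as a clinician.
            {"role": "system",
             "content": "You are a physician explaining a clinical note "
                        "to a patient in plain language."},
            {"role": "user",
             "content": f"Clinical note:\n{clinical_note}\n\n{question}"},
        ],
    )
    print(response.choices[0].message.content)
```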
Be wary of hallucinations
A hallucination is when an AI spits out a response that may appear sensible but is inaccurate. For example, OpenAI's Whisper, an AI-assisted transcription tool used in hospitals, introduced an imaginary medical treatment into a transcript, according to a report by The Associated Press.
Protect your privacy
Be sure to remove personal information, like your name or Social Security number, from all prompts. Data goes directly to the tech companies that developed the AI models, says Adam Rodman, an internist who chairs a steering group on generative AI at Harvard Medical School. Rodman says that he is not aware of any LLMs that comply with federal privacy law or consider patient safety.
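As a rough illustration of that advice, here's a minimal Python sketch that masks a couple of obvious identifiers before text is pasted into a chatbot. The patterns and names are our own assumptions; real de-identification is much harder than this, so treat it as a starting point, not a guarantee of privacy.

```python
import re

def redact(text: str, known_names: list[str]) -> str:
    """Naive illustration: mask SSN-shaped numbers and any names the
    user lists before sending text to a chatbot. This is NOT real
    de-identification; plenty of identifying detail can remain."""
    # Mask Social Security-style numbers (e.g. 123-45-6789).
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # Mask names the user explicitly supplies.
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

note = "Patient Jane Doe (SSN 123-45-6789) reports pelvic pain."
print(redact(note, ["Jane Doe"]))
# -> Patient [NAME] (SSN [SSN]) reports pelvic pain.
```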
Find out more about artificial intelligence and test results.
ICYMI: Research suggests doctors might quickly become dependent on AI
These people want COVID shots to protect their health or family, but they can't get them
Concerned about federal vaccine policies, some states are crafting their own
You're more likely to reach for a sugary drink when it's hot outside, study finds
Health care costs are soaring. Blame insurers, drug companies — and your employer
We hope you enjoyed these stories. Find more of NPR's health journalism online.
All our best,
Andrea Muraskin and your NPR Health editors
Listen to your local NPR station.
Visit NPR.org to find your local station stream.