RemedySpot.com

How flimsy research gets inferior drugs to market




Recommended Posts

Guest guest

http://m.guardian.co.uk/commentisfree/2011/may/07/ben-goldacre-bad-science-new-drugs?cat=commentisfree&type=article

How flimsy research gets inferior drugs to market

Bad evidence on whether drugs really work can arise simply because nobody asked the right research question

Ben Goldacre

The Guardian, Sat 7 May 2011 00.08 BST

Comment

Some of the biggest problems in medicine don't get written about, because they don't concern eye-catching things such as one patient's valiant struggle: they're protected from public scrutiny by a wall of tediousness.

Here is one problem that affects millions of people. What if we had rubbish evidence on whether hundreds of common treatments really work, simply because nobody asked the right research question? A paper published this week looks at how much evidence there was for every one of the new drugs approved by the FDA between 2000 and 2010, at the time they were approved.

You might think drugs only get on the market if they've been shown to be useful. But "useful" can mean many different things: for FDA approval, for example, you only need trials to show your drug is better than a placebo. That's nice, but with most medical problems, we've already got some kind of treatment. We're not interested in whether your drug is better than nothing. We're interested in whether it's better than the best currently available option.

So it turns out that, out of all the 197 new drugs approved in the past decade, only 70% had data to show they were better than other treatments (and that's after you ignore drugs for conditions where there was no current treatment).

But the problems go beyond just using the wrong comparator: most of the trials we rely on to make real-world decisions also study drugs on highly unrepresentative, freakishly ideal patients. These patients are younger, with perfect single diagnoses, fewer other health problems, and so on.

This can stretch to absurd extremes. Earlier this year, some researchers from Finland took every patient who'd ever had a hip fracture and worked out if they would have been eligible for the trials that have been done on fracture-preventing bisphosphonate drugs, which are in wide use.

Starting with all 7,411 fractures, 2,134 patients get excluded straight off, because they're men, and the trials have been done on women. Then, from the 5,277 remaining, 3,596 get excluded again, because they're the wrong age: patients in trials had to be between 65 and 79. Then, finally, 609 more fracture patients get excluded, because they've not got osteoporosis.

This leaves 1,072 patients. So the data from the trials on these fracture-preventing drugs are only strictly applicable to about one of every seven patients with a fracture: they might still work in those who've been excluded, though that's not a judgment call you should have to make; and one problem, in particular, is that the size of the benefit might be different in different people.
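
Since the arithmetic of that cascade is easy to lose in prose, here is a minimal sketch in Python that simply replays the exclusion counts quoted above (the counts are the study's; the script itself is only an illustration):

    # Exclusion cascade from the Finnish hip-fracture study (counts as quoted above)
    total_fractures = 7411
    remaining = total_fractures

    remaining -= 2134   # men excluded: the trials enrolled women only
    remaining -= 3596   # wrong age: trial patients had to be between 65 and 79
    remaining -= 609    # no osteoporosis diagnosis

    print(f"Trial-eligible: {remaining} of {total_fractures}")        # 1072 of 7411
    print(f"That is roughly 1 in {total_fractures / remaining:.1f}")  # 1 in 6.9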

To understand why this matters, finally, we need to go through one more study (written by people I work with, though I don't know if that's transparency or a boast). The new "coxib" painkiller drugs are sold on the basis that they cause fewer gastrointestinal bleeds than cheap old painkillers such as ibuprofen: and coxibs do seem to do this.

But the trials were conducted in ideal patients, who were at much higher risk of having a GI bleed, and this causes problems when you do a cost-benefit analysis. Nice (the National Institute for Health and Clinical Excellence) estimated the cost of preventing one bleed, if you use a coxib instead of an older drug, at $20,000. But that's a huge underestimate, and here's why: they estimated the number of avoided bleeds from the figures in the trials, where patients were at high risk of bleeds.

If, instead, you look at the real data on people prescribed coxibs, in a database of GP records, the overall number of bleeds among people getting painkillers is much smaller: so the number of bleeds avoided is also smaller, and so the cost of each avoided bleed is higher: $104,000, in fact.
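
The mechanism here is just division: cost per avoided bleed = extra drug cost divided by bleeds avoided. A short Python sketch makes the point; the risk and cost figures below are hypothetical, chosen only to show how a lower baseline bleed risk inflates the cost of each bleed avoided:

    # All figures below are hypothetical illustrations, not Nice's actual inputs.
    def cost_per_avoided_bleed(baseline_risk, relative_risk, extra_cost):
        # bleeds avoided per patient-year = baseline risk * risk reduction
        bleeds_avoided = baseline_risk * (1 - relative_risk)
        return extra_cost / bleeds_avoided

    extra_cost = 200   # hypothetical extra cost of a coxib per patient-year
    rr = 0.5           # hypothetical: the coxib halves the bleed risk

    # High-risk trial population (2% annual bleed risk): $20,000 per bleed avoided
    print(cost_per_avoided_bleed(0.02, rr, extra_cost))    # 20000.0
    # Lower-risk real-world population (0.4% risk): $100,000 per bleed avoided
    print(cost_per_avoided_bleed(0.004, rr, extra_cost))   # 100000.0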

This explanation might make your eyes glaze over. You assume someone else is dealing with it. And that's why problems like these don't get fixed.



