Mortality Spike was .....Swedish Study

Mike,

In writing this piece I want to acknowledge that I am no statistician. All I know about statistics comes from many years of working in the insurance industry, where I gained a clear insight into how inaccurate statistics can be because of basic errors or assumptions, and how easily statistics can be manipulated by the adjustment of assumptions. There is an old joke about the insurance manager looking for an actuary. All applicants were presented with base data and asked to make a series of projections. All were rejected until at last an applicant asked the manager what result he wanted to see.

The figures that demonstrate the significant rise in prostate cancer deaths after the introduction of PSA testing come from the SEER (Surveillance, Epidemiology, and End Results) Program statistics - http://seer.cancer.gov/ - which are accessible to all of us. They are said to be the most accurate figures available, but of course there are, as is the case for all statistics, some issues in the collection and compilation of the data which can affect the reported outcomes and estimates. Here are some of the basic points from my limited point of view:

1. It is said that the SEER data represents about 10% of the US population. National statistics are extrapolated from this data. Clearly an error in the base data will be significantly compounded when extrapolated to this extent.
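
To put rough numbers on that concern, here is a small Python sketch. Every figure in it is hypothetical - it is only meant to show how a counting error inside a roughly 10% sample is carried through, head for head, when the sample is scaled up to a national estimate.

# Toy illustration (all numbers hypothetical): how an error in the base
# data of a ~10% sample is magnified when scaled up to a national figure.

sample_population = 30_000_000        # ~10% of a 300M population (illustrative)
national_population = 300_000_000

true_cases_in_sample = 20_000         # what was actually there
miscounted_cases_in_sample = 20_400   # a 2% over-count in the registries

scale = national_population / sample_population   # extrapolation factor of ~10

true_national_estimate = true_cases_in_sample * scale
reported_national_estimate = miscounted_cases_in_sample * scale

print(f"True estimate:     {true_national_estimate:,.0f}")
print(f"Reported estimate: {reported_national_estimate:,.0f}")
print(f"Absolute error after extrapolation: "
      f"{reported_national_estimate - true_national_estimate:,.0f} cases")
# A 400-case counting error in the sample becomes a 4,000-case error in the
# national figure; the percentage error stays at 2%, but the head-count error
# grows with the extrapolation factor.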

2. The base data is collected from a limited number of institutions and not from all institutions in the US. An NCI press release in 2002 said in part <snip> Ten population-based registries report this information to the NCI's Surveillance, Epidemiology, and End Results (SEER) Program. <snip> Clearly they must be very large institutions if these ten represent 10% of the population. My personal experiences tend to make me think that large institutions often have built-in error rates that may be higher than smaller institutions.

3. Calculations for incidence rates and the like are made using the data from the Census. Just which Census is used can make a significant difference in projected, extrapolated data. For many years data from an old Census was used (I think it was 1976?). This was then changed to a more modern Census with a warning that it might make some figures not directly comparable.
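
A crude incidence rate is simply cases divided by the census population, scaled to 100,000. The little Python sketch below (all the numbers are invented for illustration) shows how the reported rate moves when nothing changes except which census supplies the denominator.

# Toy illustration (hypothetical numbers): the same case count gives
# different incidence rates depending on which census denominator is used.

cases = 18_000                      # cancers reported by the registries in one year

population_old_census = 27_500_000  # denominator from an older census
population_new_census = 30_000_000  # denominator from a later census

def incidence_per_100k(cases, population):
    """Crude incidence rate per 100,000 population."""
    return cases / population * 100_000

print(f"Rate with old census: {incidence_per_100k(cases, population_old_census):.1f} per 100,000")
print(f"Rate with new census: {incidence_per_100k(cases, population_new_census):.1f} per 100,000")
# Nothing about the disease changed, yet the reported rate moves by several
# points per 100,000 purely because the denominator was updated.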

It is important, when looking at points 2 and 3, to realize that because of racially and/or economically differing incidence and mortality rates, there may be significant discrepancies that develop if, for example, there have been large population movements, or even if the reporting institutions change or move their location. There is a good example of the kind of distortion that can be introduced, mentioned in a study in Britain. The incidence of prostate cancer was very much higher in an institution in inner London, where the inhabitants might be described as economically disadvantaged, compared with the more affluent area served by an institution away from the city in what might be termed the 'broker belt'. The study suggested that diet, living conditions, ability to seek good advice, levels of service etc. accounted for the difference. The point of this is that if the inner city data were extrapolated into a national statistic, there would be a very different set of rates than if the second institution's data was used.
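
To see how much that kind of local difference can matter, here is a toy Python calculation - the two local rates are invented, not taken from the British study - showing how the choice of which registry is treated as 'typical' swings a national projection.

# Toy illustration (hypothetical numbers): which areas happen to host the
# reporting registries changes the 'national' rate you extrapolate.

# Local crude incidence rates per 100,000, loosely in the spirit of the
# inner-city vs. commuter-belt contrast described above.
rate_inner_city = 140.0
rate_affluent_suburb = 90.0

national_population = 300_000_000

for label, local_rate in [("inner-city registry", rate_inner_city),
                          ("suburban registry", rate_affluent_suburb)]:
    projected_cases = local_rate / 100_000 * national_population
    print(f"Extrapolating from the {label}: "
          f"{projected_cases:,.0f} projected cases nationally")
# The projected national burden differs by tens of thousands of cases,
# driven entirely by which registry's local rate was treated as 'typical'.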

4. There is a time lag in reporting data which, incredibly, seems to have worsened despite the advent of sophisticated computer-based systems. The 2002 press release referred to above (15-Oct-2002, Journal of the National Cancer Institute) was issued because there was some puzzlement about the apparent decline in prostate cancer incidence rates, which curiously enough occurred in the same time frame as the reduction in prostate cancer mortality rates. This report stated in part <snip> The standard time between a cancer diagnosis and its initial inclusion in cancer incidence statistics is about 2 years... They found that it would take 4 to 17 years for 99% or more of the cancer cases to be reported. <snip> This raised the question of accuracy in my mind – bearing in mind my views on the way in which large institutions may operate.
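
SEER does publish 'delay-adjusted' rates to compensate for this lag. The Python sketch below uses made-up completeness fractions (not SEER's own factors) simply to show why the most recent diagnosis years look artificially low until the late reports arrive, and how dividing by an estimated completeness factor tries to undo that.

# Toy illustration (hypothetical completeness figures): why the most recent
# years in a registry can look artificially low when cases take years to be
# reported, and how a delay adjustment tries to correct for that.

true_cases_each_year = 20_000   # assume the real count is flat year after year

# Assumed fraction of a year's cases that has reached the registry after
# 1, 2, 3, 4 years (purely illustrative).
completeness_by_age_of_data = [0.85, 0.94, 0.97, 0.99]

print("Years since diagnosis | reported | delay-adjusted")
for years_since, completeness in enumerate(completeness_by_age_of_data, start=1):
    reported = true_cases_each_year * completeness
    adjusted = reported / completeness          # divide back out the assumed shortfall
    print(f"{years_since:>21} | {reported:8,.0f} | {adjusted:14,.0f}")
# Without the adjustment, the newest diagnosis years appear to show a decline
# that is really just paperwork that has not yet arrived.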

I have tried on many occasions to get an informed discussion going on the lock-step rises and falls of mortality and incidence rates following the introduction of PSA testing in 1986. We know why the incidence rate rose – because more men were biopsied, and the more men you biopsy the more prostate cancer you will find. As the 2004 study demonstrated so clearly, if you biopsy men with a PSA under 4.0 ng/ml you will find more cells currently defined as prostate cancer than if you only biopsy men with a PSA higher than 4.0 ng/ml. But why did the mortality rate rise? And why did the incidence rate AND the mortality rate start to fall after 1991, five years later?
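
The incidence half of that question at least has a mechanical explanation, and a toy Python sketch makes it visible. The PSA distribution and detection rates below are invented for illustration only; the point is simply that widening the biopsy net raises recorded incidence without any change in the underlying disease. The mortality half of the question is another matter.

# Toy illustration (hypothetical numbers): detected 'incidence' depends on
# who gets biopsied, even when the underlying disease does not change.

men_screened = 100_000

# Assumed distribution of PSA results and assumed cancer-detection rate on
# biopsy within each band (illustrative only).
psa_bands = [
    # (label, fraction of screened men, detection rate if biopsied)
    ("PSA <= 4.0 ng/ml", 0.85, 0.15),
    ("PSA >  4.0 ng/ml", 0.15, 0.30),
]

def detected_cases(biopsy_low_band: bool) -> float:
    """Cancers found under a given biopsy policy."""
    total = 0.0
    for label, fraction, detection_rate in psa_bands:
        if label.startswith("PSA <=") and not biopsy_low_band:
            continue                      # below-threshold men are never biopsied
        total += men_screened * fraction * detection_rate
    return total

print(f"Biopsy only above 4.0 ng/ml: {detected_cases(False):,.0f} cancers found")
print(f"Biopsy everyone:             {detected_cases(True):,.0f} cancers found")
# Widening the biopsy net raises the number of cancers 'found' and hence the
# recorded incidence, with no change at all in how much cancer actually exists.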

No one has ever come up with a definitive answer. One of the leading activists has always dismissed my desire for such a discussion, saying that the two items have nothing to do with each other, and doggedly repeats his mantra that the mortality rate has fallen, as indeed it has, depending on where you measure it from. He no longer responds to my postings. My point has always been that we need an explanation as to why both measures changed. We have to understand that if we are to account for the observed fall in mortality rates from the all-time high in 1991. From my limited viewpoint, significant changes like these in a population statistical base can come from a change in definition. Just what is a prostate cancer death? How is it defined? Did all reporting institutions use the same definitions? Did any definitions change? Those are the questions that no one can answer for me. There have been some glimmers of light shone on the subject:

a. No less a person than Dr Walsh (who was apparently at some conference discussing the importance of screening in reducing mortality) was quoted last year as being questioned as to why the mortality rates in Britain had fallen when there was not the same emphasis on, or amount of, screening in that country. His response was that they had changed the definition of prostate cancer related death to exclude pneumonia, and this one factor had led to a reduction in prostate cancer related deaths. On raising this point in a forum I was told that this could not happen in the US. End of discussion.

b. Soon after I read of Dr Walsh's remarks, I came on a piece describing how the large European study (ERSPC) had agreed to 'standardize' their definition of prostate cancer to only include needle biopsy results that result in a Gleason Score of 6 or higher – in other words, anyone who had a diagnosis of GS 5 would not be included in their data. This single change may well have produced the discrepancy in the ERSPC reported results of 'lives saved' when compared with the US study that showed 'no lives saved'. Again I tried to discuss this change in definition and wondered if this was a contributing factor in the decline in reported deaths – after all, if you are not diagnosed with PCa, you cannot die of the disease. My thoughts on the subject were ignored or denied. Spurred on by this item I then found that, in practical terms, leading pathologists in the US had been applying this criterion since about 2005, and in January this year the proposal was formally adopted – see http://www.yananow.org/StrangePlace/forest.html#gleason for a summary of the changes. This has led to a significant change in the profile of diagnosed prostate cancer and the so-called 'migration' of Gleason Grades and Scores – see http://tinyurl.com/2jnpbu
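
As a final illustration of how much a definition can matter, here is a Python sketch of that Gleason cut-off. All of the counts and death fractions are invented; the only point is the bookkeeping: once GS 5 diagnoses stop counting as prostate cancer, both the incidence figure and the pool of deaths that can ever be attributed to the disease shrink together.

# Toy illustration (hypothetical counts): tightening the case definition to
# 'Gleason Score 6 or higher' shrinks both the diagnosed pool and the deaths
# that can ever be attributed to prostate cancer.

# Diagnoses in one year under the old definition, by Gleason Score.
diagnoses_by_gleason = {5: 8_000, 6: 30_000, 7: 20_000, 8: 6_000, 9: 3_000, 10: 1_000}

# Assumed fraction of each group whose eventual death gets recorded as a
# prostate cancer death (illustrative only).
attributed_death_fraction = {5: 0.01, 6: 0.03, 7: 0.08, 8: 0.20, 9: 0.35, 10: 0.50}

def totals(minimum_gleason: int):
    """Case and attributed-death totals under a given Gleason cut-off."""
    cases = sum(n for gs, n in diagnoses_by_gleason.items() if gs >= minimum_gleason)
    deaths = sum(n * attributed_death_fraction[gs]
                 for gs, n in diagnoses_by_gleason.items() if gs >= minimum_gleason)
    return cases, deaths

old_cases, old_deaths = totals(5)   # old definition: GS 5 still counts
new_cases, new_deaths = totals(6)   # new definition: GS 6 or higher only

print(f"Old definition: {old_cases:,} cases, {old_deaths:,.0f} attributed deaths")
print(f"New definition: {new_cases:,} cases, {new_deaths:,.0f} attributed deaths")
# Men who are never labelled as having prostate cancer can never be labelled
# as dying of it, so a definition change alone moves both curves.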

In summary, after what many will have considered a long and boring post, in answer to your question <snip> Your thoughts on the 30% increase and what it means? <snip> I think that a number of changes in diagnosis, definition, formulae and focus may have all resulted in the changes observed in the population data.

All the best

Prostate men need enlightening, not frightening

Terry Herbert - diagnosed in 1996 and still going strong

Read A Strange Place for unbiased information at http://www.yananow.org/StrangePlace/index.html
