Medical ethics are very complex. Demographics are often used to justify screening a particular group, but that can exclude other demographics that are seen as low risk, usually to save on screening costs. There are not infinite funds to spend on medical care, yet in other areas excessive amounts are being spent on keeping people alive who have a very low quality of life, prolonged suffering, or an extremely low chance of survival.
This is true, and I agree with you here. I not infrequently see patients in the ICU who have a near zero chance of survival, and if they did survive, would be permanently disabled or brain damaged so severely that the resulting quality of life would be unacceptable to them or their families. Fortunately, in most of these cases the family agrees with the medical team and the focus moves to palliation and end-of-life care. There are a few cases, however, where the family refuses to accept the prognosis, and we end up providing very costly treatment to keep alive a patient who has no chance of recovery, while we apply to the courts (also very costly) to allow withdrawal of care. The usual cases are young people with either a progressive degenerative disease like Duchenne Muscular Dystrophy or an irrecoverable traumatic brain injury.
Screening isn't just about cost, however. We also need to pay attention to the test's sensitivity and specificity. Unfortunately, no test is 100% accurate - there will always be false positives and false negatives. A test that is sensitive is unlikely to miss the disease when it is present, and so has a low rate of false negatives. A test that is specific is unlikely to come back positive unless the disease is present, and so has a low rate of false positives.
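To make the two terms concrete, here is a minimal sketch of how sensitivity and specificity fall out of the four possible test outcomes. The counts are made up purely for illustration, not taken from any real study:

```python
# Counts from a hypothetical validation study of a screening test
# (illustrative numbers only).
true_positives = 196    # diseased, test positive
false_negatives = 4     # diseased, test negative (missed)
true_negatives = 9604   # healthy, test negative
false_positives = 196   # healthy, test positive (false alarm)

# Sensitivity: of the people who HAVE the disease, what fraction does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of the people who DON'T have the disease, what fraction does the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 98%
print(f"specificity = {specificity:.0%}")  # 98%
```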
Let's say we have a test that is 98% sensitive and 98% specific, and we are using it to screen a population of 1 million people with a disease prevalence of 2% (some cancers in over-50s, for example). Of those 1 million people, 20,000 will have this type of cancer. Our screening test will correctly pick up 98% of those 20,000 people (19,600), and the remaining 2% (400) will be labelled as false negatives. Of the 980,000 people that do not have cancer, 98% (960,400) will be correctly identified as not having cancer, but crucially 2% of them (19,600) will be labelled as false positives. So we now have 39,200 people with a positive test result, and only 50% of them will actually have the disease, despite our test being "98% accurate".
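The arithmetic above is just the positive predictive value worked out on counts; a quick sketch to check the numbers (population size and rates taken straight from the example):

```python
# Reproduce the worked example: 1 million people, 2% prevalence,
# 98% sensitivity, 98% specificity.
population = 1_000_000
prevalence = 0.02
sensitivity = 0.98
specificity = 0.98

diseased = population * prevalence              # 20,000
healthy = population - diseased                 # 980,000

true_positives = diseased * sensitivity         # 19,600
false_negatives = diseased - true_positives     # 400
false_positives = healthy * (1 - specificity)   # 19,600

# Positive predictive value: of everyone who tests positive,
# what fraction actually has the disease?
ppv = true_positives / (true_positives + false_positives)
print(f"positive tests: {true_positives + false_positives:,.0f}")  # 39,200
print(f"chance a positive result is real: {ppv:.0%}")              # 50%
```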
Now let's take that same test and apply it to a group that is lower risk (the same cancer in under-30s, for example), where the disease might only have a prevalence of 0.01%. If we run those numbers again, again in a population of 1 million people, we have 100 people with the disease, 98 of whom are picked up by the screening test. We also have 999,900 people without the disease, 19,998 of whom are identified as false positives. So if you get a positive test result, you actually only have a 0.49% chance of having the disease, despite our test being "98% accurate".
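Running the same sketch with only the prevalence changed shows how dramatically the meaning of a positive result shifts:

```python
# Same test, same population, but prevalence drops from 2% to 0.01%.
population = 1_000_000
prevalence = 0.0001
sensitivity = 0.98
specificity = 0.98

diseased = population * prevalence              # 100
healthy = population - diseased                 # 999,900

true_positives = diseased * sensitivity         # 98
false_positives = healthy * (1 - specificity)   # 19,998

ppv = true_positives / (true_positives + false_positives)
print(f"chance a positive result is real: {ppv:.2%}")  # about 0.49%
```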
This becomes an issue because the follow-up to a positive screen carries its own risk. For example, if you test positive on the bowel cancer screening test (which looks for blood in a stool sample), you will then go on to have a colonoscopy. Colonoscopy carries a 0.5% risk of bowel perforation. In our example population of 1 million young people, we would cause more bowel perforations in people with false positive tests than we would pick up actual cancers in people with true positive tests.
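Putting the last two numbers together makes the harm-versus-benefit point explicit (using the 0.5% perforation figure quoted above):

```python
# Harm vs benefit in the low-prevalence (under-30s) scenario.
false_positives = 19_998
true_positives = 98
perforation_risk = 0.005  # 0.5% risk per colonoscopy, as quoted above

perforations_in_false_positives = false_positives * perforation_risk
print(f"perforations caused in false positives: ~{perforations_in_false_positives:.0f}")  # ~100
print(f"cancers actually detected:              {true_positives}")
```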
My point is that tests which are effective in a high-risk category of people can cause more harm than good if applied to a low-risk category of people. Although cost-effectiveness certainly plays a part, there are other factors to consider.