History of changes (Paper instructions):
[May 11, 2014 at 8:53 AM] – text modifications.
Not looking for massive rewrites, just working this somewhat rambling set of paragraphs into a coherent paper.
Having a clear, coherent structure is the most important thing, and it's the part I'm struggling with. I want to make sure readers can follow the argument (I'd also just appreciate comments on anything you don't understand).
Tone: Conversational is good. Rigorous, but not excessively academic.
If anybody can do Bluebook citations I would be SO grateful, but I can of course just do that myself. I just haven't slept for, like, 3 days and am running out of steam.
THE PAPER (Basic gist of what I’m trying to say):
Right after Snowden, the government released (and everybody fixated on) this "54 attacks prevented" figure. While it's initially compelling, it's really the wrong way to measure the value of intelligence.
Months later, the Privacy and Civil Liberties Oversight Board released its report on the Section 215 metadata program. Their goal was to determine effectiveness and assess privacy implications. And even though they noted that "attacks prevented" is a bad way to measure the value of intelligence, their analysis basically falls into the same trap.
Everybody seems to want a better way to gauge value, but nobody really knows what that is. (All of this is the intro.)
(then I want a paragraph explaining what the paper aims to do — the current one is not usable as it doesn’t actually describe what the paper goes on to say!)
Foundational question: why is it hard to find good metrics for intelligence?
Basically because it's predictive (so the goal is non-events), collaborative (hard to assess any particular program in isolation), and any possible outcomes are really remote from the creation of the intelligence. (Plenty of good intelligence, in other words, doesn't prevent attacks.)
With that in mind, is it even POSSIBLE to measure intelligence using outcome metrics (attacks prevented, early warning)?
– Lots of people in the intelligence community seem to think it's not.
– Outcome metrics are also seen as counterproductive in similar areas like scientific research and industrial R&D departments. Like intelligence, these things aren't closely connected to the final outcomes they hope to produce.
If we shouldn’t measure intelligence based on outcomes (whether it prevents an attack), what kinds of things SHOULD we measure? What qualities would a good metric have?
– probably should be really closely tied to process
– responsive to the specific goals of a program (metadata collection, for instance, aimed to increase speed)