Note: It is important to also point out that CURSOR_SHARING=FORCE will not fix SQL injection bugs. The binding comes
after the query was rewritten by your end user; the SQL injection has already happened. CURSOR_SHARING=FORCE makes
you no more secure than you were before. Only by using bind variables themselves can a developer implement a SQL
injection-proof application.
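To make the distinction concrete, here is a minimal sketch in Python using sqlite3 as a stand-in database (the same principle applies with Oracle bind variables). The table, data, and payload are invented for illustration; the point is that the parameterized version treats the attacker's input as a value, never as SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

evil = "x' OR '1'='1"  # classic injection payload supplied by the "end user"

# Unsafe: the user's input is concatenated into the SQL text itself,
# so it rewrites the query before any binding could occur.
unsafe_sql = "SELECT secret FROM users WHERE username = '" + evil + "'"
leaked = conn.execute(unsafe_sql).fetchall()   # returns every row!

# Safe: a bind variable (? placeholder) keeps the input as pure data.
safe = conn.execute(
    "SELECT secret FROM users WHERE username = ?", (evil,)
).fetchall()                                   # returns no rows

print(len(leaked), len(safe))  # 2 0
```

No server-side setting can repair the unsafe version, because by the time the database sees the statement, the injected predicate is already part of its text.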
Basically, it is important to keep in mind that simply turning on CURSOR_SHARING = FORCE will not necessarily
fix your problems. It may very well introduce new ones. CURSOR_SHARING is, in some cases, a very useful tool, but it
is not a silver bullet. A well-developed application would never need it. In the long term, using bind variables where
appropriate, and constants when needed, is the correct approach.
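The trade-off can be sketched with a toy model of a shared SQL cache keyed by statement text (a deliberate simplification of the Oracle shared pool; the names and the literal-rewriting regex are illustrative only). Literals make every statement unique, so nothing is shared; CURSOR_SHARING=FORCE rewrites literals into placeholders before the lookup, so one cached statement serves them all.

```python
import re

cache = {}        # statement text -> "parsed plan" (stand-in for the shared pool)
hard_parses = 0

def execute(sql, cursor_sharing="EXACT"):
    """Look up sql in the cache; a miss simulates a costly hard parse."""
    global hard_parses
    if cursor_sharing == "FORCE":
        # Crudely replace number and string literals with a bind placeholder,
        # as the database would before the shared-pool lookup.
        sql = re.sub(r"\d+|'[^']*'", ":b", sql)
    if sql not in cache:
        hard_parses += 1          # hard parse: expensive and serialized
        cache[sql] = "plan for " + sql
    return cache[sql]

# With literals, 100 executions are 100 distinct statements: 100 hard parses.
for i in range(100):
    execute("SELECT * FROM t WHERE id = %d" % i)
literal_parses = hard_parses

# With CURSOR_SHARING=FORCE, the rewritten text is shared: 1 hard parse.
hard_parses, cache = 0, {}
for i in range(100):
    execute("SELECT * FROM t WHERE id = %d" % i, cursor_sharing="FORCE")
forced_parses = hard_parses

print(literal_parses, forced_parses)  # 100 1
```

The same sketch shows why FORCE is not a silver bullet: every execution now shares one plan, even in the cases where a deliberate constant would have let the optimizer build a better plan for that specific value.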
Note: There are no silver bullets, none. If there were, they would be the default behavior and you would never hear
about them.
Even if there are some switches that can be thrown at the database level, and they are truly few and far between,
problems relating to concurrency issues and poorly executing queries (due to poorly written queries or poorly
structured data) can't be fixed with a switch. These situations require rewrites (and frequently a re-architecture).
Moving data files around, adjusting parameters, and other database-level switches frequently have a minor impact
on the overall performance of an application. Definitely not anywhere near the two, three, ... n times increase in
performance you need to achieve to make the application acceptable. How many times has your application been
10 percent too slow? 10 percent too slow, no one complains about. Five times too slow, people get upset. I repeat: you
will not get a five times increase in performance by moving data files around. You will only achieve large increments
in performance by fixing the application, perhaps by making it do significantly less I/O.
Note: This is just to note how things change over time. I've often written that you will not get a five-times increase
in performance by moving data files around. With the advent of hardware solutions such as Oracle Exadata (a storage
area network device designed as an extension to the database), you can, in fact, get a five-times, ten-times, fifty-times, or
more decrease in response time (the time it takes to return data) by simply moving data files around. But that is more of a
“we completely changed our hardware architecture” story than a “we reorganized some of our storage” one. Also, getting an
application running only five or ten times faster on Exadata would be disappointing to me—I'd want it to be fifty times or
more “faster”—and that would require a rethinking of how the application is implemented.
Performance is something you have to design for, build to, and test for continuously throughout the development
phase. It should never be something to be considered after the fact. I am amazed at how often people wait until
the application has been shipped to the customer, put in place, and is actually running before they even start to
tune it. I've seen implementations where applications are shipped with nothing more than primary keys—no other
indexes whatsoever. The queries have never been tuned or stress-tested. The application has never been tried out
with more than a handful of users. Tuning is considered to be part of the installation of the product. To me, that
is an unacceptable approach. Your end users should be presented with a responsive, fully tuned system from day
one. There will be enough “product issues” to deal with without having poor performance be the first thing users
experience. Users expect a few bugs from a new application, but at least don't make the users wait a painfully long
time for those bugs to appear on screen.