Organizations often depend on data stores outside their own walls, such as
web repositories. You can use Data Explorer to add these remote sources to
the unified search environment you've built for your internal sources. Data
Explorer doesn't index these remote sites; instead, it interfaces with each
site's search engine, passing queries through, receiving and interpreting the
result sets, and presenting them to end users alongside data from local
sources.
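This federated pattern can be sketched in a few lines. The class and method names below are illustrative assumptions, not Data Explorer's actual API: a remote source is treated as a black box that answers queries, and its hits are merged with local index hits at presentation time.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    source: str

class RemoteSource:
    """A remote repository reached through its own search engine.

    Rather than indexing the remote content, we forward the query
    and interpret whatever result set the remote engine returns.
    """
    def __init__(self, name, engine):
        self.name = name
        self.engine = engine  # callable: query -> list of raw hits

    def search(self, query):
        raw_hits = self.engine(query)  # pass the query through unindexed
        return [Result(title=h, source=self.name) for h in raw_hits]

def unified_search(query, local_index, remote_sources):
    """Merge hits from the local index with federated remote results."""
    results = [Result(title=t, source="local")
               for t in local_index.get(query, [])]
    for source in remote_sources:
        results.extend(source.search(query))
    return results

# Example: one local index and one simulated remote engine.
local_index = {"earnings": ["Q3 earnings deck"]}
remote = RemoteSource("web-repo", lambda q: [f"{q} (remote hit)"])
hits = unified_search("earnings", local_index, [remote])
```

The key design point the prose makes is preserved here: nothing from the remote site is ever copied into the local index; the remote engine does its own searching.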
A sophisticated security model enables Data Explorer to map the access
permissions of each indexed data element according to the permissions
maintained in the repository where it's managed, and to enforce these per-
missions when users access the data. This security model extends to the field
level of individual documents, so that passages or fields within a document
can be protected with their own permissions and updated without having to
re-index the full document. As such, users only see data that would be visible
to them if they were directly signed in to the target repository. For example,
if a content management system's field-level security governs access to an
Estimated Earnings report, it might grant a specific user access to the Execu-
tive Summary section, but not to the financial details such as pre-tax income
(PTI), and so on. Quite simply, if you can't see the data without Data Explorer,
you won't be able to see the data with Data Explorer.
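One way to picture field-level enforcement is as a query-time filter in which each field carries its own access-control list. This is a minimal sketch under that assumption; the data structures and role check are hypothetical stand-ins, not Data Explorer's implementation:

```python
# A document whose fields each carry their own permission set.
# (Illustrative structure only; the real internal representation differs.)
report = {
    "title": "Estimated Earnings",
    "fields": {
        "executive_summary": {"text": "Revenue grew 8%.",
                              "allowed": {"analyst", "executive"}},
        "pretax_income":     {"text": "PTI: $1.2M",
                              "allowed": {"executive"}},
    },
}

def visible_fields(document, user_roles):
    """Return only the fields this user could see in the source repository."""
    return {
        name: field["text"]
        for name, field in document["fields"].items()
        if field["allowed"] & user_roles  # any matching role grants access
    }

# An analyst sees the Executive Summary but not the financial details.
analyst_view = visible_fields(report, {"analyst"})
```

Because permissions live on the individual fields, changing who may see the pre-tax income line means updating one field's ACL, not reprocessing the whole document, which is the point the passage makes about avoiding full re-indexing.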
Data Explorer connectors detect when data in the target data source is added
or changed. Through these connectors, the Connector Framework ensures that
the indexes reflect an up-to-date view of information in target systems.
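The incremental behavior described here can be sketched as a connector emitting per-document change events that the framework applies to the index, so only touched documents are reprocessed. The event tuple shape and index structure are assumptions for illustration:

```python
# Change events a connector might emit when it detects source changes.
# Event shapes and the dict-as-index are illustrative, not a real API.
def apply_events(index, events):
    """Keep the index current by applying add/update/delete events."""
    for action, doc_id, content in events:
        if action in ("add", "update"):
            index[doc_id] = content   # (re)index just this one document
        elif action == "delete":
            index.pop(doc_id, None)   # drop documents removed at the source
    return index

index = {"doc1": "old text"}
events = [("update", "doc1", "new text"),
          ("add", "doc2", "fresh"),
          ("delete", "doc1", None)]
apply_events(index, events)
```

After these three events the index holds only `doc2`: `doc1` was updated and then deleted at the source, and the index tracked both changes without a full crawl.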
The Data Explorer Processing Layer
The Data Explorer Processing Layer serves two purposes, each reflecting a
distinct stage: indexing content as it becomes available and processing search
queries from users and applications. At the beginning of this workflow, the
Connector Framework makes data from each repository available to be
crawled. As the data is parsed, it is transformed and processed using a
number of different analytic tools, including entity extraction, tagging, and
extraction of metadata for faceted navigation. Throughout this data-crunching
stage, the processing layer maintains an index for the content from
connected data sources. If your enterprise has existing information that
describes your data sets, such as taxonomies, ontologies, and other knowl-
edge representation standards, this information can also be factored into the
index that Data Explorer builds.
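The indexing stage just described behaves like a pipeline of analytic steps run over each crawled document. The sketch below uses deliberately naive stand-ins, a tiny taxonomy lookup in place of real entity extraction, to show how existing knowledge representations can feed the facet metadata; the actual processing layer uses far richer analytics:

```python
# Toy taxonomy standing in for an enterprise knowledge representation.
TAXONOMY = {"invoice": "Finance", "contract": "Legal"}

def process_document(doc_id, text):
    """Parse one crawled document, tag entities, and derive facet metadata."""
    tokens = text.lower().split()
    # "Entity extraction": keep tokens present in the taxonomy (naive stand-in).
    entities = [t for t in tokens if t in TAXONOMY]
    # Facet values for navigation, derived from the taxonomy categories.
    facets = sorted({TAXONOMY[e] for e in entities})
    return {"id": doc_id, "tokens": tokens,
            "entities": entities, "facets": facets}

# Maintain the index as documents arrive from the Connector Framework.
index = {}
for doc_id, text in [("d1", "Signed contract and invoice attached")]:
    index[doc_id] = process_document(doc_id, text)
```

The design choice worth noting is that the taxonomy is consulted during indexing, so facets are precomputed per document rather than derived at query time.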