Search Results

Search found 32757 results on 1311 pages for 'database cursor'.

Page 114/1311 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Clipping the user's cursor around the X button

    - by Nowayz
    This should be simple, and I was hoping to do it in Delphi. The purpose is just a joke: in a Windows forms application, I don't want the user to be able to click the X button on the main form. I want the cursor to either clip around the X button or simply have its position set elsewhere.

    Read the article

  • JSplitPane giving wrong resize cursor with GTK LAF?

    - by carneades
    Running the official JSplitPane demo http://java.sun.com/docs/books/tutorial/uiswing/components/splitpane.html on my Ubuntu system gives a resize cursor like -- over the divider, while native split panes give <-- resize cursors. Does this qualify as a bug that should be reported to Sun (eh, Oracle)? Is there a workaround?

    Read the article

  • Want to get cursor position within textInput

    - by Anshuman Jaiswal
    I am using a textInput within a grid using an item renderer. I populate a suggestion box just below the text input field based on the typed characters and the index of the text input. The problem is that if I shrink the grid column, the suggestion box is no longer positioned in the right place, so I want the global position of the cursor in the text input field.

    Read the article

  • Setting a busy cursor for an HTML page with iframes containing Flex applications

    - by Prashant Dubey
    Hi friends, I have an HTML page with 4 iframes; of these 4, one is a static HTML page and the other 3 are HTML generated by Flex. I have a button and a list in one of the Flex applications, and the list is populated on click of the button. What I want is a custom busy cursor to appear on top of the whole HTML page until the list gets populated. Please tell me if it's possible, with an example. Thanks in advance, Prashant Dubey

    Read the article

  • hide cursor on remote terminal

    - by Tyler
    I have an open socket to a remote terminal. Using this SO answer I was able to put that terminal into character mode. My question is, how do I hide the cursor in the remote terminal using this method? Thanks!

    Read the article

  • Pango Cursor Rendering and Highlighting

    - by user367255
    How do you render the cursor and create highlighting for selected text in Pango? Also, can I use PangoCairo to render parts of a layout at a time, so I can draw it in different ways (such as with an outline)? Finally, how did you get your understanding of Pango? There don't seem to be very many tutorials, and the technical documentation only describes the functions. Thanks!

    Read the article

  • Oracle Cursor and JDBC ODBC

    - by BeginnerAmongBeginners
    I have some procedures to execute in the database that have an OUT REFCURSOR parameter. When I connect with the JDBC thin client, everything works fine. However, if I change the connection string to refer to the ODBC data source pointing to the same database, those procedures fail. The procedures that do not use an OUT REFCURSOR still work fine, though. Is this the result of an error on my part, or is it expected?

    Read the article

  • MongoDB with OR and Range Indexes

    - by LMH
    I have a query: {"$query"=>{"user_id"=>"512f7960534dcda22b000491", "$or"=>[{"when_tz"=>{"$gte"=>2010-06-24 04:00:00 UTC, "$lt"=>2010-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2011-06-24 04:00:00 UTC, "$lt"=>2011-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2012-06-24 04:00:00 UTC, "$lt"=>2012-06-25 04:00:00 UTC}}], "_type"=>{"$in"=>["FacebookImageItem", "FoursquareImageItem", "InstagramItem", "TwitterImageItem", "Image"]}}, "$explain"=>true, "$orderby"=>{"when_tz"=>1}} And an index: { user_id: 1, _type: 1, when_tz: 1 } Explain: {"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "isMultiKey"=false, "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "nscannedObjectsAllPlans"=181246, "nscannedAllPlans"=241553, "scanAndOrder"=true, "indexOnly"=false, "nYields"=12, "nChunkSkips"=0, "millis"=2869, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}, "allPlans"=[{"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_twitter_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "twitter_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_instagram_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "instagram_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_foursquare_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "foursquare_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_phash_1", "n"=21, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "phash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_aperature_1_shutter_speed_1_when_tz_1", "n"=25, "nscannedObjects"=35, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "aperature"=[[{"$minElement"=1}, {"$maxElement"=1}]], "shutter_speed"=[[{"$minElement"=1}, {"$maxElement"=1}]], 
"when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_image_hash_1", "n"=22, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "image_hash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_-1", "n"=23, "nscannedObjects"=32, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_1", "n"=24, "nscannedObjects"=33, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_-1", "n"=23, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_original_shared_item_id_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "original_shared_item_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_s3_tmp_file_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "s3_tmp_file"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_processed_-1_uploaded_-1_image_device_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "processed"=[[{"$maxElement"=1}, {"$minElement"=1}]], "uploaded"=[[{"$maxElement"=1}, {"$minElement"=1}]], "image_device"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_when_tz_1 multi", "n"=28, "nscannedObjects"=28, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BasicCursor", "n"=0, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={}}], "server"=""} Any idea how to get it to hit the indexes?

    Read the article

  • getting emacs to move cursor by words on a Mac

    - by hatorade
    It's supposed to be M + cursor keys, but any shortcut in Emacs using M (Escape) on my Mac sucks, because every time I need to use it, I have to release M (the Escape key) and then press it again. Is there a better shortcut for moving along words in Emacs (kind of like Ctrl + arrow in Windows)?

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, driven by the relentless growth of information. Even if you have storage appliances and plenty of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store the data in the database, and Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. With a huge volume, however, the challenge passes to the DBA to keep the WebCenter Content database up and running.

    One solution is to use database partitions for your content repository, but what are the implications, and can it fit your business requirements? Well, yes; it is up to you how you manage it, you just need a good plan. During your "storage brainstorm plan" keep in mind what you need: do you have to store petabytes of documents? Does everything need to be online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to mind is to use the creation date of the document, but remember that a document can receive many revisions, so you may prefer the revision creation date. Your plan can also have more complex rules, such as per document type, per a custom metadata field like department, or a hybrid of date, document type and a specific virtual folder. Extrapolating further, you can have your repository distributed across different servers, different disks and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements.

    If you decide to separate by revision, the simple way is to use dId, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date column of the FileStorage table that records when the content was stored in the table using SecureFiles. For the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use dId, or a join with other tables for other metadata such as dDocType, security, etc.).

    The test scenario below covers:
    - Previously existing data on the JDBC Storage migrated to the new partitioned JDBC Storage
    - Partition by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using the JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage, enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I cannot forget to mention that this is just a test, and you should take a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant what is needed:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;

    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content into Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year or any rule you choose.

    Tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go.

    Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as the interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS )
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet.

    Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname        => 'TESTS_OCS'
       ,orig_table   => 'FileStorage'
       ,int_table    => 'FileStorage_PART'
       ,col_mapping  => NULL
       ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
      );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname            => 'TESTS_OCS'
       ,orig_table       => 'FileStorage'
       ,int_table        => 'FileStorage_PART'
       ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
       ,copy_triggers    => TRUE
       ,copy_constraints => TRUE
       ,copy_privileges  => TRUE
       ,ignore_errors    => TRUE
       ,num_errors       => redefinition_errors
       ,copy_statistics  => FALSE
       ,copy_mvlog       => FALSE
      );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that this will show 2 lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the defined range partitions, you should see new partitions created automatically, so no error is generated if you "forgot" to create all the future ranges.

    You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and is partitioned, use the command:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes you have a new FileStorage table, storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (originals and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file is created and you can open it; if so, everything is working. The redefinition process can be repeated many times, which lets you test which layout works better, over and over again.

    Now some interesting maintenance actions related to the partitions.

    Make a tablespace read only. There are no issues viewing content, since WebCenter Content does not alter the revisions. When you try to delete content that lives in a read-only tablespace, however, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if the document is in a "read only" repository, deletes only the metadata and marks the document to be physically removed on the next database maintenance, like a new redefinition.
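    As a minimal illustration of the read-only case (a sketch reusing the tablespace names from the test scenario above, not part of any component described here), the commands look like this; verify the impact on deletes and updates in your own environment first:

    ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ ONLY;

    -- Revert to read write if you later need to delete or add content in that partition
    ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ WRITE;

    While the tablespace is read only, queries (and therefore WebCenter Content downloads) keep working, but any DML touching rows or LOB segments stored there will fail, which is exactly the delete error described above.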
    Take a tablespace offline, for archiving purposes or any other reason. When you try to open a document stored in that tablespace you will receive an error saying it was unable to retrieve the content, but the other, online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline, and this can be extended to give the user an option to request that it be put online again.

    Moving some legacy content to an offline repository (table), using the exchange option to move the content from one partition to an empty non-partitioned table like FileStorage_LEGACY. Note that this option removes the rows from FileStorage, so WebCenter Content will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind.
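    A minimal sketch of that exchange, assuming an empty FILESTORAGE_LEGACY archive table created with the same column layout as FileStorage (the table and partition names follow the test scenario above; index and constraint handling must be adapted to your environment):

    -- Empty archive table with the same structure as FileStorage
    CREATE TABLE FILESTORAGE_LEGACY AS
      SELECT * FROM FileStorage WHERE 1 = 0;

    -- Swap the legacy partition's segment with the empty archive table
    ALTER TABLE FileStorage
      EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FILESTORAGE_LEGACY
      EXCLUDING INDEXES
      WITHOUT VALIDATION;

    After the exchange the legacy rows live in FILESTORAGE_LEGACY and the corresponding partition of FileStorage is empty, which is why WebCenter Content can no longer open that content; any indexes on FileStorage may need to be rebuilt afterwards.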
    A redefinition separating the original content (vault) from the renditions, and separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files in storage with lower performance. The process is the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article

  • SQL SERVER – History of SQL Server Database Encryption

    - by pinaldave
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving its encryption technology, and the same discussion led to the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption and explicitly gave me permission to reproduce the relevant part of that history here.

    Encryption in SQL Server 2000

    Built-in cryptographic encryption functionality was nonexistent in SQL Server 2000 and prior versions. In order to get server-side encryption in SQL Server you had to resort to purchasing or creating your own SQL Server XPs. Creating your own cryptographic XPs could be a daunting task owing to the fact that XPs had to be compiled as native DLLs (using a language like C or C++) and the XP application programming interface (API) was poorly documented. In addition there were always concerns around creating well-behaved XPs that "played nicely" with the SQL Server process.

    Encryption in SQL Server 2005

    Prior to the release of SQL Server 2005 there was a flurry of regulatory activity in response to accounting scandals and attacks on repositories of confidential consumer data. Much of this regulation centered on the need for protecting and controlling access to sensitive financial and consumer information. With the release of SQL Server 2005 Microsoft responded to the increasing demand for built-in encryption by providing the necessary tools to encrypt data at the column level. This functionality prominently featured the following:
    - Support for column-level encryption of data using symmetric keys or passphrases.
    - Built-in access to a variety of symmetric and asymmetric encryption algorithms, including AES, DES, Triple DES, RC2, RC4, and RSA.
    - Capability to create and manage symmetric keys.
    - Key creation and management.
    - Ability to generate asymmetric keys and self-signed certificates, or to install external asymmetric keys and certificates.
    - Implementation of a hierarchical model for encryption key management, similar to the ANSI X9.17 standard model.
    - SQL functions to generate one-way hash codes and digital signatures, including SHA-1 and MD5 hashes.
    - Additional SQL functions to encrypt and decrypt data.
    - Extensions to the SQL language to support creation, use, and administration of encryption keys and certificates.
    - SQL CLR extensions that provide access to .NET-based encryption functionality.

    Encryption in SQL Server 2008

    Encryption demands have increased over the past few years. For instance, there has been a demand for the ability to store encryption keys "off-the-box," physically separate from the database and the data it contains. Also there is a recognized requirement for legacy databases and applications to take advantage of encryption without changing the existing code base. To address these needs SQL Server 2008 adds the following features to its encryption arsenal:
    - Transparent Data Encryption (TDE): Allows you to encrypt an entire database, including log files and the tempdb database, in such a way that it is transparent to client applications.
    - Extensible Key Management (EKM): Allows you to store and manage your encryption keys on an external device known as a hardware security module (HSM).
    - Cryptographic random number generation functionality.
    - Additional cryptography-related catalog views and dynamic management views.
    - SQL language extensions to support the new encryption functionality.
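    As a rough, hand-written sketch (not taken from the book) of what the SQL Server 2008 TDE feature looks like in practice, assuming a hypothetical database SalesDb and a certificate named TDECert:

    -- Server-level master key and certificate that protect the database encryption key
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

    -- Database encryption key plus switching encryption on; transparent to client applications
    USE SalesDb;
    CREATE DATABASE ENCRYPTION KEY
      WITH ALGORITHM = AES_256
      ENCRYPTION BY SERVER CERTIFICATE TDECert;
    ALTER DATABASE SalesDb SET ENCRYPTION ON;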
    The encryption book covers all of these tools across its various chapters in one simple story. If you are interested in how encryption evolved and reached the stage where it is today, this book is a must for everyone. You can read my earlier review of the book over here.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology Tagged: Encryption, SQL Server Encryption, SQLPASS

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >