Search Results

Search found 589 results on 24 pages for 'jean mi'.


  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve, and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their per-request breakdown, but the statistics mentioned in that posting have some limitations: they are stored in memory instead of being persisted. To avoid memory overflow, the data are kept in a buffer of limited size; when the entries exceed the limit, old data are flushed out to make way for new statistics. The buffer therefore only keeps the last X entries, and the statistics from five hours ago may no longer be there. Also, the BPEL engine performance statistics only include latencies; they do not provide throughput. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is coarser grained than the in-memory BPEL statistics, but being persisted is a strength of its own. Here I would like to offer some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them as soon as you modify the date range to match your actual system.

    1. Incoming rate of asynchronous/one-way messages. The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:

    select composite_name composite, state, count(*) Count
    from dlv_message
    where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    2. Throughput of BPEL process instances. The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:

    select state, count(*) Count, composite_name composite, component_name, componenttype
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype
    order by count(*) desc;

    3. Throughput and latencies of BPEL process instances. This query builds on the previous one and provides more comprehensive information: not only the throughput but also the maximum, minimum, and average elapsed time of BPEL process instances.
    select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
      trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) MaxTime,
      trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) MinTime,
      trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) AvgTime
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;

    4. Combining it all. Now let's combine these three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying the database:

    accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
    accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

    Prompt "==== Rejected Messages ====";
    REM 2012-10-24 21:00:00
    REM 2012-10-24 21:59:59
    select count(*), composite_dn
    from rejected_message
    where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
      and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_dn;

    Prompt " ";
    Prompt "==== Throughput of one-way/asynchronous messages ====";
    select state, count(*) Count, composite_name composite
    from dlv_message
    where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
      and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    Prompt " ";
    Prompt "==== Throughput and latency of BPEL process instances ===="
    select state, count(*) Count,
      trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) MaxTime,
      trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) MinTime,
      trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
        + extract(hour from (modify_date-creation_date))*60*60
        + extract(minute from (modify_date-creation_date))*60
        + extract(second from (modify_date-creation_date))),4) AvgTime,
      composite_name Composite, component_name Process, componenttype
    from cube_instance
    where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
      and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;
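    As a side note, a shorter way to compute the same elapsed seconds is to cast the timestamps to DATE and subtract: date subtraction yields days, which scale to seconds. This is only a sketch against the same cube_instance columns, and note the caveat in the comment:

    select composite_name composite, component_name, state,
           -- DATE subtraction returns days; multiply by 86400 to get seconds
           -- (casting TIMESTAMP to DATE drops fractional seconds)
           trunc(avg((cast(modify_date as date) - cast(creation_date as date)) * 86400), 4) AvgTimeSec
    from cube_instance
    group by composite_name, component_name, state;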

    Read the article

  • BI development: what is the solution when you can't see the aggregated data in the pivot table?

    - by Fekete Zoltán
    It can happen to anyone: while building BI analyses and reports, you realize that you can see the data in the table, but not in the pivot table. It's empty... What is the solution? This: in the BI Administration tool you have to set the aggregation rule on the appropriate data point, and then the data will also appear at the higher levels of the hierarchy. Of course, aggregations can also be handled by enabling summary tables and views, or by using Oracle OLAP or Essbase cubes.

    Read the article

  • Office 15: first beta expected mid-January; the next version of Microsoft Office will support touch and introduce "Moorea"

    First beta of Office 15 expected mid-January. The next version of Microsoft Office will support touch and introduce "Moorea". It is still only a rumor, but it should soon be confirmed. Or denied. The first beta of Office 15 (the code name of the next version of Microsoft's office suite) will reportedly be made public this coming January; at least, that is what anonymous internal sources are said to have hinted. This version, currently being reworked, follows on from the Metro interface that Windows 8 will bring to tablets and touch screens. ...

    Read the article

  • Windows 8 used by 30,000 Microsoft employees as of mid-July; the company shows its confidence in its future OS by using it itself

    Windows 8 used by 30,000 Microsoft employees as of mid-July; the company shows its confidence in its future OS by using it itself. Windows 8 is close, and Microsoft must offer reassurance about the quality of its completely redesigned operating system, which some say is not yet fully finalized. The firm has just published a blog post describing the testing model it uses internally for all of its products, in particular Windows 8, which makes its own employees their first testers. A product's development cycle includes a period of dogfooding ...

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of ever-growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and Oracle Database allows you to leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is to use database partitions for your content repository, but what are the implications of this? Can it fit your business requirements? Well, yes; it's up to you how you manage it, you just need a good plan. During your "storage brainstorm plan," keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so you may want to consider the revision creation date instead. Your plan can also include complex rules, such as per document type, per a custom metadata field like department, or a hybrid of date, DocType, and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy and easily meeting your compliance requirements. If you plan to split by revision, the simple way is to use dID, the sequential unique id for every content item created in WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you can use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:

    - previously existing data in the JDBC storage, to be migrated to the new partitioned JDBC storage
    - partitioning by date
    - automatic generation of new partitions based on a predefined interval (available only with Oracle Database 11g+)
    - deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (which may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using the JDBC storage to serve as the "legacy" data. If you have not done this before, just create a storage rule pointing to the JDBC storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. You can run this test case as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to say that this is just a test, and you should take a proper backup of your environment. When you use the schema owner, you need some privileges; grant them using the DBA user:

    REM Grant privileges required for online redefinition.
    GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
    GRANT ALTER ANY TABLE TO TESTS_OCS;
    GRANT DROP ANY TABLE TO TESTS_OCS;
    GRANT LOCK ANY TABLE TO TESTS_OCS;
    GRANT CREATE ANY TABLE TO TESTS_OCS;
    GRANT SELECT ANY TABLE TO TESTS_OCS;
    REM Privileges required to perform cloning of dependent objects.
    GRANT CREATE ANY TRIGGER TO TESTS_OCS;
    GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3, and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any other rule you choose. The tablespaces for the test scenario:

    CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the current FileStorage table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether the redefinition process can be executed:

    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go: create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as the interim table:

    CREATE TABLE FILESTORAGE_Part (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
    )
    PARTITION BY RANGE (DLASTMODIFIED)
    INTERVAL (NUMTODSINTERVAL(1,'DAY'))
    STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
    (
      PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_LEGACY
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH),
      PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY1
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS),
      PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY2
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS),
      PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
        TABLESPACE TESTS_OCS_PART_DAY3
        LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS)
    );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions exist yet. Start the redefinition process:

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,col_mapping => NULL
        ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
      );
    END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

    SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

    DECLARE
      redefinition_errors PLS_INTEGER := 0;
    BEGIN
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname => 'TESTS_OCS'
        ,orig_table => 'FileStorage'
        ,int_table => 'FileStorage_PART'
        ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS
        ,copy_triggers => TRUE
        ,copy_constraints => TRUE
        ,copy_privileges => TRUE
        ,ignore_errors => TRUE
        ,num_errors => redefinition_errors
        ,copy_statistics => FALSE
        ,copy_mvlog => FALSE
      );
      IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
      END IF;
    END;

    With the DBA user, verify that there are no errors:

    SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Note that it will show two lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    Gather statistics on the new table:

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

    BEGIN
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table => 'FileStorage_PART');
    END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new partitions created automatically; no error is generated if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

    DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use:

    SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

    SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands to check the partitions (run them as a DBA user):

    SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
    SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (originals and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file was created and opens. If it does, everything is working. The redefinition process can be repeated many times, which lets you test different layouts over and over again. Now some interesting maintenance actions related to the partitions:

    - Making a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter the revisions. However, when you try to delete content that lives in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent these errors today is a custom component that checks the partitions and, for a document in a read-only repository, deletes the metadata and marks the document to be deleted at the next database maintenance, such as a new redefinition.
    - Taking a tablespace offline, for archiving purposes or any other reason.
      When you try to open a document stored in an offline tablespace, you will receive an error that the content could not be retrieved, but the other, online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if a document is "out of range," the component can show a message that the repository for that document is offline, and this can be extended with an option for the user to request that it be brought back online.
    - Moving some legacy content to an offline repository (table) using the exchange option, which moves the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY (a sketch of the command follows this entry's code). Note that this removes the rows from FileStorage, so the stored content can no longer be opened; you always need to keep the indexes and constraints in mind.
    - A redefinition that separates the original content (vault) from the renditions and partitions by date at the same time. This could be an option for DAM environments that want a special place for the renditions and prefer to put the original files on lower-performance storage. The process is the same; you just need to change the interim table script to use composite partitioning. It will be something like:

    CREATE TABLE FILESTORAGE_RenditionPart (
      DID NUMBER(*,0) NOT NULL ENABLE,
      DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
      DLASTMODIFIED TIMESTAMP (6),
      DFILESIZE NUMBER(*,0),
      DISDELETED VARCHAR2(1 CHAR),
      BFILEDATA BLOB
    )
    LOB (BFILEDATA) STORE AS SECUREFILE (
      ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
    )
    PARTITION BY LIST (DRENDITIONID)
    SUBPARTITION BY RANGE (DLASTMODIFIED)
    (
      PARTITION Vault VALUES ('primaryFile')
      (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION WebLayout VALUES ('webViewableFile')
      (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
      ),
      PARTITION Special VALUES ('Special')
      (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
      )
    ) ENABLE ROW MOVEMENT;

    The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or leave a comment if you have a use case not listed here. Cross-posted on blog.ContentrA.com
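    Returning to the exchange option referenced above: here is a minimal sketch of the command for moving legacy content out of the partitioned table. It assumes FileStorage_LEGACY is an existing empty table with exactly the same column and index structure as FileStorage:

    -- swap the legacy partition's segment with the standalone table;
    -- after this, the rows live in FileStorage_LEGACY, not in FileStorage
    ALTER TABLE FileStorage
      EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
      WITH TABLE FileStorage_LEGACY
      INCLUDING INDEXES
      WITHOUT VALIDATION;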

    Read the article

  • How can I get the SonicMaster subwoofer working on my Asus N46VB with Ubuntu 12.04?

    - by Leonidas Franco Carranza Oliva
    I'm new to Ubuntu, and I recently bought an Asus N46VB with Ubuntu 12.04 that comes with a SonicMaster subwoofer, but it seems it is not configured in the system and it is not recognized when I plug it in. I did some reading and tried this: No sound from external subwoofer "Sonic Master" on an Asus N76VM, but it didn't help, since there is a file that doesn't exist, which I think must be because of the model. Anyway, if you can help me I'll be very grateful.

    Read the article

  • C# brainteasers: an article by Jon Skeet on the language's pitfalls, translated by Jean-Michel Ormes

    This discussion is intended to collect your comments on the article Casse-têtes en C# (a translation of Jon Skeet's article C# Brainteasers). Quote: Every so often I come across an interesting situation in C# that produces surprising results. This page contains a list of examples. In the examples consisting of a single snippet of code, we will assume that the snippet is in the m...

    Read the article

  • JSP et Servlets efficaces : production de sites dynamiques en Java by Jean-Luc Déléage, reviewed by Benwit

    On the occasion of my review of the book JSP et Servlets efficaces : Production de sites dynamiques en Java, I would like to ask you how you learned to code websites in Java. Quote: This book is aimed at developers who use Java to build sites and at those who want to discover the server side of the web. It will also provide hands-on learning of these technologies to computer science students at the end of their bachelor's degree and in mas...

    Read the article

  • Programmation Flex 4: rich internet applications with Flash ActionScript 3, Spark, MXML and Flash Builder, reviewed by Jean-Marie Macé

    Programmation Flex 4: rich internet applications with Flash ActionScript 3, Spark, MXML and Flash Builder. Eyrolles brings us a book written by Aurélien Vannieuwenhuyze, in collaboration with Fabien Nicollet, on version 4 of the Flex framework. Here is my review of the book: HERE. For me, it is the best French-language book on the subject. And you, have you skimmed or read this book? What did you think of it? ...

    Read the article

  • Office 2013, Office Web Apps and Office 365 reach RTM; the office suite will be available to developers in mid-November via MSDN

    Office goes touch-friendly and gets a new logo. The Customer Preview of Microsoft's next office suite is available. Microsoft is not going on vacation, at least not yet. Last week was rich in announcements for several of its flagship products: Windows Server, Windows 8, its cloud. But the "killer app," the one that strategy analysts call the "golden cow," remains, by Steve Ballmer's own admission, Microsoft Office. The CEO hadn't...

    Read the article

  • The worldwide launch of Project Natal will take place in October 2010, and its official presentation by Mi

    Update of May 11, 2010: the worldwide launch of Project Natal will take place in October. Microsoft's motion-recognition technology will be officially presented on June 13, 2010. Microsoft's marketing director in Saudi Arabia has just announced that Project Natal, Redmond's motion-recognition technology, will launch in October, in a simultaneous worldwide release. Recall that Project Natal,

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round down to the nearest 10-minute interval in a query (see the example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

    WITH test_data AS (
      SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
      SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
      SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
      SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
      SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
    ) -- #end of test-data
    SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
    FROM test_data

    And here is the result:

    01.01.2010 10:00:00    01.01.2010 10:00:00
    01.01.2010 10:05:00    01.01.2010 10:00:00
    01.01.2010 10:09:59    01.01.2010 10:00:00
    01.01.2010 10:10:00    01.01.2010 10:10:00
    01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

    DECLARE
      t TIMESTAMP := SYSTIMESTAMP;
    BEGIN
      FOR i IN (
        WITH test_data AS (
          SELECT SYSDATE + ROWNUM / 5000 d FROM dual
          CONNECT BY ROWNUM <= 500000
        )
        SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data
      ) LOOP
        NULL;
      END LOOP;
      dbms_output.put_line( SYSTIMESTAMP - t );
    END;

    This approach took 03.24 s.
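    One alternative worth benchmarking (just a sketch, using the same test_data as above) is to truncate to the hour and re-add whole 10-minute blocks, where 10/1440 is ten minutes expressed as a fraction of a day:

    SELECT d,
           -- TRUNC(d,'HH') drops minutes and seconds;
           -- FLOOR(minutes/10)*10 re-adds the completed 10-minute blocks
           TRUNC(d, 'HH') + FLOOR(TO_NUMBER(TO_CHAR(d, 'MI')) / 10) * 10 / 1440 AS d_rounded
    FROM test_data;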

    Read the article

  • DBD::Oracle and utf8 issue

    - by goe
    Hi all, I have a problem where my Perl code, using the latest DBD::Oracle on perl v5.8.8, throws an exception when I try to insert characters like 'ñ'. Exception:

    DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare)

    My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8'. These are the DB parameters based on "SELECT * FROM NLS_DATABASE_PARAMETERS":

    1 NLS_LANGUAGE AMERICAN
    2 NLS_TERRITORY AMERICA
    3 NLS_CURRENCY $
    4 NLS_ISO_CURRENCY AMERICA
    5 NLS_NUMERIC_CHARACTERS .,
    6 NLS_CHARACTERSET AL32UTF8
    7 NLS_CALENDAR GREGORIAN
    8 NLS_DATE_FORMAT DD-MON-RR
    9 NLS_DATE_LANGUAGE AMERICAN
    10 NLS_SORT BINARY
    11 NLS_TIME_FORMAT HH.MI.SSXFF AM
    12 NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    13 NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    14 NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    15 NLS_DUAL_CURRENCY $
    16 NLS_COMP BINARY
    17 NLS_LENGTH_SEMANTICS BYTE

    These are the Perl parameters based on $db->ora_nls_parameters():

    $VAR1 = {
      'NLS_LANGUAGE' => 'AMERICAN',
      'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
      'NLS_SORT' => 'BINARY',
      'NLS_NUMERIC_CHARACTERS' => '.,',
      'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
      'NLS_ISO_CURRENCY' => 'AMERICA',
      'NLS_COMP' => 'BINARY',
      'NLS_CALENDAR' => 'GREGORIAN',
      'NLS_DATE_FORMAT' => 'DD-MON-RR',
      'NLS_DATE_LANGUAGE' => 'AMERICAN',
      'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
      'NLS_TERRITORY' => 'AMERICA',
      'NLS_LENGTH_SEMANTICS' => 'BYTE',
      'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
      'NLS_DUAL_CURRENCY' => '$',
      'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
      'NLS_NCHAR_CONV_EXCP' => 'FALSE',
      'NLS_CHARACTERSET' => 'AL32UTF8',
      'NLS_CURRENCY' => '$'
    };

    Here are some other strange facts:

    - If I set NLS_LANG to 'AMERICAN_AMERICA.UTF8', the insert executes fine with the 'ñ' character.
    - If I leave NLS_LANG as 'AMERICAN_AMERICA.AL32UTF8' but use 'Ñ', the insert also runs fine.
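    When chasing this kind of issue, it can help to look at the bytes Oracle actually stored, independent of any client-side conversion. A diagnostic sketch (the table and column names here are placeholders):

    -- format 1016 prints the column's character set plus the stored hex bytes;
    -- a correctly stored UTF-8 'ñ' shows up as c3,b1
    SELECT DUMP(some_text_column, 1016) AS stored_bytes
    FROM some_table
    WHERE ROWNUM <= 10;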

    Read the article

  • bluez 5.19 and a PS4 controller

    - by Athanase
    I currently have a problem pairing my computer with a PS4 controller. On my Ubuntu 14.04 I removed everything related to bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs:

    jean@system ~ hcitool
    hcitool - HCI Tool ver 5.19
    jean@system ~ hcitool dev
    Devices:
        hci0 00:15:83:4C:0C:BB
    jean@system ~ bluetoothctl
    [bluetooth]# version
    Version 5.19
    jean@system ~ bluetoothctl
    [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default]
    jean@system ~ lsusb
    Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)

    So here is what happens: when I hard-pair the controller with the computer by holding the Share and PS buttons for a while, everything works as expected and the pairing is done properly. After a hard pairing, if I try to pair by pressing the PS button only, nothing happens. To dig into it, I first power up the bluetooth dongle:

    jean@system ~ sudo hciconfig hciX up

    and then I run the bluetooth daemon bluetoothd:

    jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n
    bluetoothd[11270]: Bluetooth daemon 5.19
    bluetoothd[11270]: src/main.c:parse_config() parsing main.conf
    bluetoothd[11270]: src/main.c:parse_config() discovto=0
    bluetoothd[11270]: src/main.c:parse_config() pairto=0
    bluetoothd[11270]: src/main.c:parse_config() auto_to=60
    bluetoothd[11270]: src/main.c:parse_config() name=%h-%d
    bluetoothd[11270]: src/main.c:parse_config() class=0x000100
    bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID'
    bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server
    bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command
    bluetoothd[11270]: Starting SDP server
    bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513
    ...
    bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
    bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
    bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0
    bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0
    bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00
    bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A
    bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A
    bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A
    bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A
    bluetoothd[11270]: src/device.c:device_set_bonded()
    bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A
    bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A
    bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A
    bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A
    bluetoothd[11270]: src/device.c:device_set_bonded()
    bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0
    bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0
    bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0
    bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
    bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
    bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513
    bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered
    bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0
    bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0
    bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
    bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
    bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled
    bluetoothd[11270]: src/adapter.c:trigger_passive_scanning()
    bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system
    bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname:
    bluetoothd[11270]: plugins/hostname.c:update_name() name: system
    bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system
    bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop
    bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01
    bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0
    bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0
    bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0

    And then I press the PS button of the PS4 controller:

    bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5
    bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17
    bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17
    bluetoothd[11270]: Refusing input device connect: No such file or directory (2)
    bluetoothd[11270]: profiles/input/server.c:confirm_event_cb()
    bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device
    bluetoothd[11270]: src/adapter.c:dev_disconnected() Device A4:15:66:C1:0D:2A disconnected, reason 3
    bluetoothd[11270]: src/adapter.c:adapter_remove_connection()
    bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3
    bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe
    bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e
    bluetoothd[11270]: src/device.c:device_bonding_failed() status 14
    bluetoothd[11270]: src/adapter.c:resume_discovery()

    So I don't know what is happening here; a bit of help would be appreciated.

    Read the article

  • Oracle PL/SQL question for my homework in Oracle 11g class [migrated]

    - by Bjolds
    I am new to Oracle 11g programming and I have run into a tough situation with PL/SQL functions and automation. I am unsure how to create the functions to automate a college registration system. Here is what I want to do: I want to automate registration so that it automatically registers students, and then I want a procedure to automate the grading system. I have included the code I have written, which makes most of this assignment work, but I am unsure how to incorporate automated PL/SQL functions for the registration and grading systems, so any help or ideas would be greatly appreciated (see the sketch after the code below).

    set Linesize 250
    set pagesize 150
    drop table student;
    drop table faculty;
    drop table Course;
    drop table Section;
    drop table location;
    DROP TABLE courseInstructor;
    DROP TABLE Registration;
    DROP TABLE grade;
    create table student( studentid number(10), Lastname varchar2(20), Firstname Varchar2(20), MI Char(1), address Varchar2(20), city Varchar2(20), state Char(2), zip Varchar2(10), HomePhone Varchar2(10), Workphone Varchar2(10), DOB Date, Pin VARCHAR2(10), Status Char(1));
    ALTER TABLE Student Add Constraint Student_StudentID_pk Primary Key (studentID);
    Insert into student values (1,'xxxxxxxx','xxxxxxxxxx','x','xxxxxxxxxxxxxxx','Columbus','oh','44159','xxx-xxx-xxxx','xxx-xxx-xxxx','06-Mar-1957','1211','c');
    create table faculty( FacultyID Number(10), FirstName Varchar2(20), Lastname Varchar2(20), MI Char(1), workphone Varchar2(10), CellPhone Varchar2(10), Rank Varchar2(20), Experience Varchar2(10), Status Char(1));
    ALTER TABLE Faculty ADD Constraint Faculty_facultyId_PK PRIMARY KEY (FacultyID);
    insert into faculty values (1,'xxx','xxxxxxxxxxxx',xxx-xxx-xxxx','xxx-xxx-xxxx','professor','20','f');
    create table Course( CourseId number(10), CourseNumber Varchar2(20), CourseName Varchar(20), Description Varchar(20), CreditHours Number(4), Status Char(1));
    ALTER TABLE Course ADD Constraint Course_CourseID_pk PRIMARY KEY(CourseID);
    insert into course values (1,'cit 100','computer concepts','introduction to PCs','3.0','o');
    insert into course values (2,'cit 101','Database Program','Database Programming','4.0','o');
    insert into course values (3,'Math 101','Algebra I','Algebra I Concepts','5.0','o');
    insert into course values (4,'cit 102a','Pc applications','Aplications 1','3.0','o');
    insert into course values (5,'cit 102b','pc applications','applications 2','3.0','o');
    insert into course values (6,'cit 102c','pc applications','applications 3','3.0','o');
    insert into course values (7,'cit 103','computer concepts','introduction systems','3.0','c');
    insert into course values (8,'cit 110','Unified language','UML design','3.0','o');
    insert into course values (9,'cit 165','cobol','cobol programming','3.0','o');
    insert into course values (10,'cit 167','C++ Programming 1','c++ programming','4.0','o');
    insert into course values (11,'cit 231','Expert Excel','spreadsheet apps','3.0','o');
    insert into course values (12,'cit 233','expert Access','database devel.','3.0','o');
    insert into course values (13,'cit 169','Java Programming I','Java Programming I','3.0','o');
    insert into course values (14,'cit 263','Visual Basic','Visual Basic Prog','3.0','o');
    insert into course values (15,'cit 275','system analysis 2','System Analysis 2','3.0','o');
    create table Section( SectionID Number(10), CourseId Number(10), SectionNumber VarChar2(10), Days Varchar2(10), StartTime Date, EndTime Date, LocationID Number(10), SeatAvailable Number(3), Status Char(1));
    ALTER TABLE Section
ADD Constraint Section_SectionID_PK PRIMARY KEY(SectionID); insert into section values (1,1,'18977','r','21-Sep-2011','10-Dec-2011','1','89','o'); create table Location( LocationId Number(10), Building Varchar2(20), Room Varchar2(5), Capacity Number(5), Satus Char(1)); ALTER TABLE Location ADD Constraint Location_LocationID_pk PRIMARY KEY (LocationID); insert into Location values (1,'Clevleand Hall','cl209','35','o'); insert into Location values (2,'Toledo Circle','tc211','45','o'); insert into Location values (3,'Akron Square','as154','65','o'); insert into Location values (4,'Cincy Hall','ch100','45','o'); insert into Location values (5,'Springfield Dome','SD','35','o'); insert into Location values (6,'Dayton Dorm','dd225','25','o'); insert into Location values (7,'Columbus Hall','CB354','15','o'); insert into Location values (8,'Cleveland Hall','cl204','85','o'); insert into Location values (9,'Toledo Circle','tc103','75','o'); insert into Location values (10,'Akron Square','as201','46','o'); insert into Location values (11,'Cincy Hall','ch301','73','o'); insert into Location values (12,'Dayton Dorm','dd245','57','o'); insert into Location values (13,'Springfield Dome','SD','65','o'); insert into Location values (14,'Cleveland Hall','cl241','10','o'); insert into Location values (15,'Toledo Circle','tc211','27','o'); insert into Location values (16,'Akron Square','as311','28','o'); insert into Location values (17,'Cincy Hall','ch415','73','o'); insert into Location values (18,'Toledo Circle','tc111','67','o'); insert into Location values (19,'Springfield Dome','SD','69','o'); insert into Location values (20,'Dayton Dorm','dd211','45','o'); Alter Table Student Add Constraint student_Zip_CK Check(Rtrim (Zip,'1234567890-') is null); Alter Table Student ADD Constraint Student_Status_CK Check(Status In('c','t')); Alter Table Student ADD Constraint Student_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz')is Null); Alter Table Student Modify pin not Null; Alter table Faculty Add Constraint Faculty_Status_CK Check(Status In('f','a','i')); Alter table Faculty ADD Constraint Faculty_Rank_CK Check(Rank In ('professor','doctor','instructor','assistant','tenure')); Alter table Faculty ADD Constraint Faculty_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz')is Null); Update Section Set Starttime = To_date('09-21-2011 6:00 PM', 'mm-dd-yyyy hh:mi pm'); Update Section Set Endtime = To_date('12-10-2011 9:50 PM', 'mm-dd-yyyy hh:mi pm'); alter table Section Add Constraint StartTime_Status_CK Check (starttime < Endtime); Alter Table Section Add Constraint Section_StartTime_ck check (StartTime < EndTime); Alter Table Section ADD Constraint Section_CourseId_FK FOREIGN KEY (CourseID) References Course(CourseId); Alter Table Section ADD Constraint Section_LocationID_FK FOREIGN KEY (LocationID) References Location (LocationId); Alter Table Section ADD Constraint Section_Days_CK Check(RTRIM(Days,'mtwrfsu')IS Null); update section set seatavailable = '99'; Alter Table Section ADD Constraint Section_SeatsAvailable_CK Check (SeatAvailable < 100); Alter Table Course Add Constraint Course_CreditHours_ck check(CreditHours < = 6.0); update location set capacity = '99'; Alter Table Location Add Constraint Location_Capacity_CK Check(Capacity < 100); Create Table Registration ( StudentID Number(10), SectionID Number(10), Constraint Registration_pk Primary key (studentId, Sectionid)); Insert into registration values (1, 2); Insert into Registration values (2, 3); Insert into registration values (3, 4); Insert into 
registration values (4, 5); Insert into registration values (5, 6); Insert into registration values (6, 7); Insert into registration values (7, 8); Insert into registration values (8, 9); insert into registration values (9, 10); insert into registration values (10, 11); insert into registration values (9, 12); insert into registration values (8, 13); insert into registration values (7, 14); insert into registration values (6, 15); insert into registration values (5, 17); insert into registration values (4, 18); insert into registration values (3, 19); insert into registration values (2, 20); insert into registration values (1, 21); insert into registration values (2, 22); insert into registration values (3, 23); insert into registration values (4, 24); insert into registration values (5, 25); Insert into registration values (6, 24); insert into registration values (7, 23); insert into registration values (8, 22); insert into registration values (9, 21); insert into registration values (10, 20); insert into registration values (9, 19); insert into registration values (8, 17); Create Table courseInstructor( FacultyID Number(10), SectionID Number(10), Constraint CourseInstructor_pk Primary key (FacultyId, SectionID)); insert into courseInstructor values (1, 1); insert into courseInstructor values (2, 2); insert into courseInstructor values (3, 3); insert into courseInstructor values (4, 4); insert into courseInstructor values (5, 5); insert into courseInstructor values (5, 6); insert into courseInstructor values (4, 7); insert into courseInstructor values (3, 8); insert into courseInstructor values (2, 9); insert into courseInstructor values (1, 10); insert into courseInstructor values (5, 11); insert into courseInstructor values (4, 12); insert into courseInstructor values (3, 13); insert into courseInstructor values (2, 14); insert into courseInstructor values (1, 15); Create table grade( StudentID Number(10), SectionID Number(10), Grade Varchar2(1), Constraint grade_pk Primary key (StudentID, SectionID)); CREATE OR REPLACE TRIGGER TR_CreateGrade AFTER INSERT ON Registration FOR EACH ROW BEGIN INSERT INTO grade (SectionID,StudentID,Grade) VALUES(:New.SectionID,:New.StudentID,NULL); END TR_createGrade; / CREATE OR REPLACE FORCE VIEW V_reg_student_course AS SELECT Registration.StudentID, student.LastName, student.FirstName, course.CourseName, Registration.SectionID, course.CreditHours, section.Days, TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate, TO_CHAR(StartTime, 'HH:MI PM') AS StartTime, TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate, TO_CHAR(EndTime, 'HH:MI PM') AS EndTime, location.Building, location.Room FROM registration, student, section, course, location WHERE registration.StudentID = student.StudentID AND registration.SectionID = section.SectionID AND section.LocationID = location.LocationID AND section.CourseID = course.CourseID; CREATE OR REPLACE FORCE VIEW V_teacher_to_course AS SELECT courseInstructor.FacultyID, faculty.FirstName, faculty.LastName, courseInstructor.SectionID, section.Days, TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate, TO_CHAR(StartTime, 'HH:MI PM') AS StartTime, TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate, TO_CHAR(EndTime, 'HH:MI PM') AS EndTime, location.Building, location.Room FROM courseInstructor, faculty, section, course, location WHERE courseInstructor.FacultyID = faculty.FacultyID AND courseInstructor.SectionID = section.SectionID AND section.LocationID = location.LocationID AND section.CourseID = course.CourseID; SELECT * FROM V_reg_student_course; SELECT * 
FROM V_teacher_to_course;
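    For what it's worth, here is a minimal sketch of the kind of registration automation the assignment seems to call for, written against the tables above. The seat-count update is an assumption about the intended business rule, and the existing TR_CreateGrade trigger will create the matching grade row:

    CREATE OR REPLACE PROCEDURE auto_register (
      p_studentid IN registration.studentid%TYPE,
      p_sectionid IN registration.sectionid%TYPE
    ) AS
    BEGIN
      -- enroll the student; TR_CreateGrade fires and creates an empty grade row
      INSERT INTO registration (studentid, sectionid)
      VALUES (p_studentid, p_sectionid);
      -- hypothetical rule: consume one available seat in the section
      UPDATE section
         SET seatavailable = seatavailable - 1
       WHERE sectionid = p_sectionid;
    END auto_register;
    /

    -- example call (hypothetical ids):
    EXEC auto_register(10, 1);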

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two (primarily) at the class level, and the last is a simple count. The first question any reasonable person should ask is, "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it is abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or deep inheritance is no big deal, or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created for a specific purpose and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll the numbers up to the solution level? Since VS does roll the MI up to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score an MI of 0) and one empty constructor (which will score an MI of 100) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the lines of code per method. For our example above, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
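    Written out generally, the LOC-weighted roll-up described above is just a weighted average; in LaTeX form:

    $$\mathrm{MI}_{\text{weighted}} = \frac{\sum_i \mathrm{LOC}_i \cdot \mathrm{MI}_i}{\sum_i \mathrm{LOC}_i}$$

    where i ranges over the methods of a class (or, one level up, over the classes of a namespace, and so on up the hierarchy).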

    Read the article

  • How do I loop over two arrays that each hold different data?

    - by Suriani Salleh
    I have this XML file and I need to convert it from XML to MySQL. If it had only one array, I would know how to do it. My question is how to extract these two arrays, where each array carries different data values: for example, the first array has pmIntervalTxEthMaxUtilization data (0,74,0,0,48) and the second array has pmIntervalRxPowerLevel data (-79,-68,-52) and pmIntervalTxPowerLevel data (13,11,-55). Can someone guide me in writing PHP code to extract this XML file into MySQL?

    <mi>
      <mts>20130618020000</mts>
      <gp>900</gp>
      <mt>pmIntervalRxUndersizedFrames</mt> [this is the first array]
      <mt>pmIntervalRxUnicastFrames</mt>
      <mt>pmIntervalTxUnicastFrames</mt>
      <mt>pmIntervalRxEthMaxUtilization</mt>
      <mt>pmIntervalTxEthMaxUtilization</mt>
      <mv>
        <moid>port:1:3:23-24</moid>
        <sf>FALSE</sf>
        <r>0</r> [the data for the 1st array that I want to insert into the DB]
        <r>0</r>
        <r>0</r>
        <r>5</r>
        <r>0</r>
      </mv>
    </mi>
    <mi>
      <mts>20130618020000</mts>
      <gp>900</gp>
      <mt>pmIntervalRxSES</mt> [this is the second array]
      <mt>pmIntervalRxPowerLevel</mt>
      <mt>pmIntervalTxPowerLevel</mt>
      <mv>
        <moid>client:1:3:23-24</moid>
        <sf>FALSE</sf>
        <r>0</r> [the data for the 2nd array that I want to insert into the DB]
        <r>-79</r>
        <r>13</r>
      </mv>
    </mi>

    This is the code I wrote for one array; I don't know how to write code for two arrays, because the fields appear twice and have different data values in each array:

    // Loop through the specified xpath
    foreach($xml->mi->mv as $subchild) {
        $port_no = $subchild->moid;
        $rx_ses = $subchild->r[0];
        $rx_es = $subchild->r[1];
        $tx_power = $subchild->r[10];
        // dump into database;
        ...........................

    I have done a little research on it; this is the outcome...

    $i = 0;
    while( $i < 5) {
        // Loop through the specified xpath
        foreach($xml->md->mi->mv as $subchild) {
            $port_no = $subchild->moid;
            $rx_uni = $subchild->r[10];
            $tx_uni = $subchild->r[11];
            $rx_eth = $subchild->r[16];
            $tx_eth = $subchild->r[17];
            // dump into database;
            ..............................
            $i++;
            if( $i == 5 )break;
        }
    }
    // Loop through the specified xpath
    foreach($xml->mi->mv as $subchild) {
        $port_no = $subchild->moid;
        $rx_ses = $subchild->r[0];
        $rx_es = $subchild->r[1];
        $tx_power = $subchild->r[10];
        // dump into database;
        .......................

    Read the article

  • Reading user input (a character or string of letters) into a ggplot command inside a switch statement or a nested ifelse (with functions in it)

    - by statisticalbeginner
    I have code like this:

        AA <- as.integer(readline("Select any number"))
        switch(AA,
               "1" = {
                 num <- as.integer(readline("Select any one of the options \n"))
                 print('You have selected option 1')
                 # reading user data
                 var <- readline("enter the variable name \n")
                 # aggregating the data based on the required condition
                 gg1 <- aggregate(cbind(get(var)) ~ Mi + hours, a, FUN = mean)
                 # plotting
                 ggplot(gg1, aes(x = hours, y = get(var), group = Mi, fill = Mi, color = Mi)) +
                   geom_point() +
                   geom_smooth(stat = "smooth", alpha = I(0.01))
               },
               "2" = {
                 print('bar')
               },
               {
                 print('default')
               })

    The dataset is [dataset][1]. I have loaded it into the object a:

        a <- read.table(file.choose(), header = FALSE,
                        col.names = c("Ei", "Mi", "hours", "Nphy", "Cphy", "CHLphy", "Nhet",
                                      "Chet", "Ndet", "Cdet", "DON", "DOC", "DIN", "DIC", "AT",
                                      "dCCHO", "TEPC", "Ncocco", "Ccocco", "CHLcocco", "PICcocco",
                                      "par", "Temp", "Sal", "co2atm", "u10", "dicfl", "co2ppm",
                                      "co2mol", "pH"))

    I am getting an error like this:

        > source("switch_statement_check.R")
        Select any one of the options
        1
        [1] "You have selected option 1"
        enter the variable name
        Nphy
        Error in eval(expr, envir, enclos) :
          (list) object cannot be coerced to type 'double'

    gg1 is getting the data; that part is fine. I don't know how to make the variable
    entered by the user work in the ggplot command. Please suggest a solution. The dput
    output:

        structure(list(Ei = c(1L, 1L, 1L, 1L, 1L, 1L),
                       Mi = c(1L, 1L, 1L, 1L, 1L, 1L),
                       hours = 1:6,
                       Nphy = c(0.1023488, 0.104524, 0.1064772, 0.1081702, 0.1095905, 0.110759),
                       Cphy = c(0.6534707, 0.6448216, 0.6369597, 0.6299084, 0.6239005, 0.6191941),
                       CHLphy = c(0.1053458, 0.110325, 0.1148174, 0.1187672, 0.122146, 0.1249877),
                       Nhet = c(0.04994161, 0.04988347, 0.04982555, 0.04976784, 0.04971029, 0.04965285),
                       Chet = c(0.3308593, 0.3304699, 0.3300819, 0.3296952, 0.3293089, 0.3289243),
                       Ndet = c(0.04991916, 0.04984045, 0.04976363, 0.0496884, 0.04961446, 0.04954156),
                       Cdet = c(0.3307085, 0.3301691, 0.3296314, 0.3290949, 0.3285598, 0.3280252),
                       DON = c(0.05042275, 0.05085697, 0.05130091, 0.05175249, 0.05220978, 0.05267118),
                       DOC = c(49.76304, 49.52745, 49.29323, 49.06034, 48.82878, 48.59851),
                       DIN = c(14.9933, 14.98729, 14.98221, 14.9781, 14.97485, 14.97225),
                       DIC = c(2050.132, 2050.264, 2050.396, 2050.524, 2050.641, 2050.758),
                       AT = c(2150.007, 2150.007, 2150.007, 2150.007, 2150.007, 2150.007),
                       dCCHO = c(0.964222, 0.930869, 0.8997098, 0.870544, 0.843196, 0.8175117),
                       TEPC = c(0.1339044, 0.1652179, 0.1941872, 0.2210289, 0.2459341, 0.2690721),
                       Ncocco = c(0.1040715, 0.1076058, 0.1104229, 0.1125141, 0.1140222, 0.1151228),
                       Ccocco = c(0.6500288, 0.6386706, 0.6291149, 0.6213265, 0.6152447, 0.6108502),
                       CHLcocco = c(0.1087667, 0.1164099, 0.1225822, 0.1273103, 0.1308843, 0.1336465),
                       PICcocco = c(0.1000664, 0.1001396, 0.1007908, 0.101836, 0.1034179, 0.1055634),
                       par = c(0, 0, 0.8695131, 1.551317, 2.777707, 4.814341),
                       Temp = c(9.9, 9.9, 9.9, 9.9, 9.9, 9.9),
                       Sal = c(31.31, 31.31, 31.31, 31.31, 31.31, 31.31),
                       co2atm = c(370, 370, 370, 370, 370, 370),
                       u10 = c(0.01, 0.01, 0.01, 0.01, 0.01, 0.01),
                       dicfl = c(-2.963256, -2.971632, -2.980446, -2.989259, -2.997877, -3.005702),
                       co2ppm = c(565.1855, 565.7373, 566.3179, 566.8983, 567.466, 567.9814),
                       co2mol = c(0.02562326, 0.02564828, 0.0256746, 0.02570091, 0.02572665, 0.02575002),
                       pH = c(7.879427, 7.879042, 7.878636, 7.878231, 7.877835, 7.877475)),
                  .Names = c("Ei", "Mi", "hours", "Nphy", "Cphy", "CHLphy", "Nhet", "Chet",
                             "Ndet", "Cdet", "DON", "DOC", "DIN", "DIC", "AT", "dCCHO", "TEPC",
                             "Ncocco", "Ccocco", "CHLcocco", "PICcocco", "par", "Temp", "Sal",
                             "co2atm", "u10", "dicfl", "co2ppm", "co2mol", "pH"),
                  row.names = c(NA, 6L), class = "data.frame")

    As per the suggestions below I have tried a lot, but it is not working. Summarizing:

        var <- readline("enter a variable name")

    I can use get(var) inside other commands, but not inside ggplot; there it won't
    work. gg1$var doesn't work either, even after changing the column names. Does this
    have a solution, or should I just import from an Excel sheet instead? Would that be
    better?

    Tried with if/else and functions:

        fun1 <- function() {
          print('You have selected option 1')
          my <- as.character(readline("enter the variable name \n"))
          gg1 <- aggregate(cbind(get(my)) ~ Mi + hours, a, FUN = mean)
          names(gg1)[3] <- my
          # print(names(gg1))
          ggplot(gg1, aes_string(x = "hours", y = my, group = "Mi", color = "Mi")) +
            geom_point()
        }

        my <- as.integer(readline("enter a number"))
        ifelse(my == 1, fun1(), "")
        ifelse(my == 2, print("its 2"), "")
        ifelse(my == 3, print("its 3"), "")
        ifelse(my != (1 || 2 || 3), print("wrong number"), "")

    Not working either. :(

    Read the article

  • How can I perform a web query in C# similar to the Data > Import External Data > New Web Query in Microsoft Excel?

    - by TNT
    I need to pull data from a table on a website. I can do this easily in VBA using a
    Web Query, but I need to do it in C#, and I'm having trouble converting the code
    properly. I have something close, but it returns HTML along with the data, and I
    just want the data. Any help would be great. The block of VBA code giving me issues
    is below:

        Private Function GetData(ByVal theURL As String, ByVal theRow As Integer, _
                                 ByVal thePosition As Integer, ByVal theColumn As String, _
                                 ByVal theTable As Integer)
            With ActiveSheet.QueryTables.Add(Connection:="URL;" + theURL, _
                    Destination:=Sheet2.Range(theColumn & theRow + 1))
                .Name = "op?s=" + Mid(theURL, thePosition + 1) + "_1"
                .PreserveFormatting = True
                .AdjustColumnWidth = True
                .WebTables = theTable
                .Refresh BackgroundQuery:=False
            End With
        End Function
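    A sketch of one common C# approach (hedged: this assumes the third-party
    HtmlAgilityPack NuGet package, which is not part of the question's code, plus a
    placeholder URL): download the page, select a table by index much like the VBA
    WebTables property does, and keep only the cell text rather than the raw HTML.

        // Sketch: extract one <table> from a web page with HtmlAgilityPack
        using System;
        using System.Collections.Generic;
        using HtmlAgilityPack;

        class WebTableScraper
        {
            // Returns the rows of the (tableIndex)th table as lists of cell strings
            static List<List<string>> GetTable(string url, int tableIndex)
            {
                var doc = new HtmlWeb().Load(url);            // fetch and parse the page
                var table = doc.DocumentNode.SelectNodes("//table")[tableIndex];
                var rows = new List<List<string>>();
                foreach (var tr in table.SelectNodes(".//tr"))
                {
                    var cellNodes = tr.SelectNodes("th|td");
                    if (cellNodes == null) continue;          // SelectNodes yields null on no match
                    var cells = new List<string>();
                    foreach (var cell in cellNodes)
                        cells.Add(HtmlEntity.DeEntitize(cell.InnerText).Trim());
                    rows.Add(cells);
                }
                return rows;
            }

            static void Main()
            {
                // example.com is a placeholder; substitute the real page and table index
                foreach (var row in GetTable("http://example.com/quotes", 0))
                    Console.WriteLine(string.Join(" | ", row));
            }
        }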

    Read the article

  • DBD::Oracle and utf8

    - by golemwashere
    Hello, I'm having trouble inserting a UTF-8 string into an Oracle 10 database on
    Solaris, using the latest DBD::Oracle on Perl v5.8.4. These are my DB settings
    (SELECT * FROM NLS_DATABASE_PARAMETERS):

        NLS_NCHAR_CHARACTERSET    AL16UTF16
        NLS_LANGUAGE              AMERICAN
        NLS_TERRITORY             AMERICA
        NLS_CURRENCY              $
        NLS_ISO_CURRENCY          AMERICA
        NLS_NUMERIC_CHARACTERS    .,
        NLS_CHARACTERSET          UTF8
        NLS_CALENDAR              GREGORIAN
        NLS_DATE_FORMAT           DD-MON-RR
        NLS_DATE_LANGUAGE         AMERICAN
        NLS_SORT                  BINARY
        NLS_TIME_FORMAT           HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT        HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT   DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY         $
        NLS_COMP                  BINARY
        NLS_LENGTH_SEMANTICS      CHAR
        NLS_NCHAR_CONV_EXCP       FALSE
        NLS_RDBMS_VERSION         10.2.0.4.0

    And these are my Perl-side $dbh->ora_nls_parameters():

        $VAR1 = {
          'NLS_LANGUAGE'            => 'AMERICAN',
          'NLS_TIME_TZ_FORMAT'      => 'HH.MI.SSXFF AM TZR',
          'NLS_SORT'                => 'BINARY',
          'NLS_NUMERIC_CHARACTERS'  => '.,',
          'NLS_TIME_FORMAT'         => 'HH.MI.SSXFF AM',
          'NLS_ISO_CURRENCY'        => 'AMERICA',
          'NLS_COMP'                => 'BINARY',
          'NLS_CALENDAR'            => 'GREGORIAN',
          'NLS_DATE_FORMAT'         => 'DD-MON-RR',
          'NLS_DATE_LANGUAGE'       => 'AMERICAN',
          'NLS_TIMESTAMP_FORMAT'    => 'DD-MON-RR HH.MI.SSXFF AM',
          'NLS_TERRITORY'           => 'AMERICA',
          'NLS_LENGTH_SEMANTICS'    => 'CHAR',
          'NLS_NCHAR_CHARACTERSET'  => 'AL16UTF16',
          'NLS_DUAL_CURRENCY'       => '$',
          'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
          'NLS_NCHAR_CONV_EXCP'     => 'FALSE',
          'NLS_CHARACTERSET'        => 'UTF8',
          'NLS_CURRENCY'            => '$'
        };

    In my script I have:

        use utf8;
        $ENV{NLS_LANG} = 'AMERICAN_AMERICA.UTF8';
        ...
        $sth->bind_param(5, $myclobfield,
                         { ora_type => ORA_CLOB, ora_csform => SQLCS_NCHAR });
        ...

    print Encode::is_utf8($myclobfield) prints 1, but characters like òàè are not
    inserted into the DB correctly. (I tested with a UTF-8 compliant client that can
    correctly insert and read them.) Can anyone suggest the best way to do this? Thanks
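    One approach that often helps with this combination (a sketch under assumptions,
    not a verified answer for this system): since the database character set is already
    UTF8, bind the CLOB with ora_csform => SQLCS_IMPLICIT so the data goes through the
    database character set rather than the AL16UTF16 national set; SQLCS_NCHAR is meant
    for NCLOB columns. The table and column names below are hypothetical.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use utf8;                                  # this source file itself is UTF-8
        use DBI;
        use DBD::Oracle qw(:ora_types SQLCS_IMPLICIT);

        # NLS_LANG must be set before the connection is made
        $ENV{NLS_LANG} = 'AMERICAN_AMERICA.UTF8';

        my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password',
                               { RaiseError => 1, AutoCommit => 0 });

        my $myclobfield = 'òàè';                   # carries Perl's internal UTF-8 flag

        # my_table/body are placeholder names; the column is assumed to be a CLOB
        my $sth = $dbh->prepare('INSERT INTO my_table (id, body) VALUES (?, ?)');
        $sth->bind_param(1, 1);
        # SQLCS_IMPLICIT = database character set (UTF8 here), not the NCHAR set
        $sth->bind_param(2, $myclobfield,
                         { ora_type => ORA_CLOB, ora_csform => SQLCS_IMPLICIT });
        $sth->execute;
        $dbh->commit;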

    Read the article

  • mysql true row merge... not just a union

    - by panofish
    What is the MySQL query I need to achieve the result below, given these 2 tables?

        table1:
        +----+------+
        | id | name |
        +----+------+
        |  1 | alan |
        |  2 | bob  |
        |  3 | dave |
        +----+------+

        table2:
        +----+-------+
        | id | state |
        +----+-------+
        |  2 | MI    |
        |  3 | WV    |
        |  4 | FL    |
        +----+-------+

    I want to create a temporary view that looks like this desired result:

        +----+------+-------+
        | id | name | state |
        +----+------+-------+
        |  1 | alan |       |
        |  2 | bob  | MI    |
        |  3 | dave | WV    |
        |  4 |      | FL    |
        +----+------+-------+

    I tried a MySQL union, but the following result is not what I want:

        create view table3 as
          (select id, name, "" as state from table1)
          union
          (select id, "" as name, state from table2);

    table3 union result:

        +----+------+-------+
        | id | name | state |
        +----+------+-------+
        |  1 | alan |       |
        |  2 | bob  |       |
        |  3 | dave |       |
        |  2 |      | MI    |
        |  3 |      | WV    |
        |  4 |      | FL    |
        +----+------+-------+

    First suggestion's results:

        SELECT * FROM table1 LEFT OUTER JOIN table2 USING (id)
        UNION
        SELECT * FROM table1 RIGHT OUTER JOIN table2 USING (id);

        +----+------+-------+
        | id | name | state |
        +----+------+-------+
        |  1 | alan |       |
        |  2 | bob  | MI    |
        |  3 | dave | WV    |
        |  2 | MI   | bob   |
        |  3 | WV   | dave  |
        |  4 | FL   |       |
        +----+------+-------+
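    The standard way to emulate a FULL OUTER JOIN in MySQL (a sketch against the two
    tables above): UNION a LEFT JOIN with a RIGHT JOIN, listing the columns explicitly
    so both halves line up in the same order, which is what the SELECT * version above
    got wrong.

        -- Emulate FULL OUTER JOIN: an explicit column list keeps both halves aligned
        CREATE VIEW table3 AS
        SELECT COALESCE(t1.id, t2.id) AS id, t1.name, t2.state
        FROM table1 t1
        LEFT JOIN table2 t2 ON t1.id = t2.id
        UNION
        SELECT COALESCE(t1.id, t2.id) AS id, t1.name, t2.state
        FROM table1 t1
        RIGHT JOIN table2 t2 ON t1.id = t2.id;

    UNION (as opposed to UNION ALL) removes the duplicate rows produced for ids present
    in both tables, leaving exactly the desired result.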

    Read the article
