Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.


  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)
    Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

    In this Document: Purpose | Last Review Date | Instructions for the Reader | Troubleshooting Details (1. Scope and Application; 2. Definitions and Classifications; 3. How to Use This Guide; 4. Basic AQ Propagation Troubleshooting; 5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages; 6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment; 7. Performance Issues) | References

    Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2. Information in this document applies to any platform.

    Purpose

    This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other, more specific notes on Oracle Streams propagation and Advanced Queuing propagation issues.

    Last Review Date: December 20, 2010

    Instructions for the Reader: A Troubleshooting Guide is provided to assist in debugging a specific issue. Where possible, diagnostic tools are included in the document to assist in troubleshooting.

    Troubleshooting Details

    1. Scope and Application

    This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution. It can also be used as a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area.

    This guide is divided into six parts:
    Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide)
    Section 3: How to Use This Guide (to be used as a starting point for determining the scope of the problem and which sections to consult)
    Section 4: Basic AQ propagation troubleshooting (applies to AQ propagation of user-enqueued and dequeued messages as well as Oracle Streams propagations)
    Section 5: Additional troubleshooting steps for AQ propagation of user-enqueued and dequeued messages
    Section 6: Additional troubleshooting steps for Oracle Streams propagation
    Section 7: Performance issues

    2. Definitions and Classifications

    Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered.

    2.1. What Type of Propagation is Being Used?

    2.1.1. Buffered Messaging

    For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent.
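    For example (a minimal sketch; the owner and queue names are placeholders matching the examples used later in this note), the queue is buffered if a row comes back, and persistent if nothing is returned:

        select QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
        from GV$BUFFERED_QUEUES
        where QUEUE_SCHEMA = 'AQADM' and QUEUE_NAME = 'MULTIPLEQ';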
    2.1.2. Propagation Mode - queue-to-dblink vs. queue-to-queue

    As of 10.2, an AQ propagation can be defined as either queue-to-dblink or queue-to-queue:

    queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. A single propagation schedule is used to propagate messages to all subscribing queues, so any changes made to this schedule will affect message delivery to all of them. This mode does not support multiple propagations from the same source queue to the same target database.

    queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This gives the user fine-grained control over the propagation schedule for message delivery. This propagation mode also supports transparent failover when propagating to a destination Oracle RAC system: with queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different.

    The default is queue-to-dblink. To verify whether queue-to-queue propagation is being used in non-Streams environments, query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked. See the following note for a method to switch between the two modes:
    Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink
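    As a concrete illustration of those two checks (a sketch; run each on the source database):

        -- Non-Streams: a DESTINATION of the form "OWNER"."QUEUE"@DBLINK means queue-to-queue
        select QNAME, DESTINATION from DBA_QUEUE_SCHEDULES;

        -- Streams: TRUE means queue-to-queue, FALSE means queue-to-dblink
        select PROPAGATION_NAME, QUEUE_TO_QUEUE from DBA_PROPAGATION;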
    2.1.3. Combined Capture and Apply (CCA) for Streams

    In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use:

        COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
        COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10
        SELECT CAPTURE_NAME, DECODE(OPTIMIZATION, 0, 'No', 'Yes') OPTIMIZATION
        FROM V$STREAMS_CAPTURE;

    Also, see the following note:
    Document 463820.1 Streams Combined Capture and Apply in 11g

    2.2. Queue Table Compatibility

    There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility:

    8.0 - earliest version, deprecated from 10.2 onwards
    8.1 - support added for RAC, asynchronous notification, secure queues, queue-level access control, rule-based subscribers, and separate storage of history information
    10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0

    2.3. Single vs. Multiple Consumer Queue Tables

    If more than one recipient can dequeue a message from a queue, then its queue table is a multiple-consumer queue table. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible.

    3. How to Use This Guide

    3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow?

    Run the following query on the source database for the propagation (assuming that it is running):

        select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME = '<source_queue_name>';

    If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7.

    3.2. Propagation Between Persistent User-Created Queues

    See Sections 4 and 5 (and optionally Section 7 if performance is an issue).

    3.3. Propagation Between Buffered User-Created Queues

    See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

    3.4. Propagation Between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)

    See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

    3.5. Propagation Between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)

    Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

    3.6. Messaging Gateway Propagations

    This note does not apply to Messaging Gateway propagations.

    4. Basic AQ Propagation Troubleshooting

    4.1. Double-check Your Code

    Make sure that you are consistent in your usage of the database link names, queue names, etc. It may be useful to draw a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

    4.2. Verify that Job Queue Processes are Running

    4.2.1. Versions 10.2 and Lower - DBMS_JOB Package

    For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package, and the propagation is performed by job queue background processes. We therefore need to verify that there are sufficient processes available for the propagation process. There should be at least 4 job queue processes running, and preferably more, depending on the number of other jobs running in the database. Note that for AQ-specific work, AQ will only ever use half of the available job queue processes. An issue caused by an inadequate job queue processes parameter setting is described in the following note:
    Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

    4.2.1.1. Job Queue Processes in Initialization Parameter File

    The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via:

        connect / as sysdba
        alter system set JOB_QUEUE_PROCESSES=10;

    4.2.1.2. Job Queue Processes in Memory

    The following command will show how many job queue processes are currently in use by this instance (this may be different from what is in the init.ora/spfile):

        connect / as sysdba
        show parameter job;

    4.2.1.3. OS PIDs Corresponding to Job Queue Processes

    Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via:

        select p.SPID, p.PROGRAM
        from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
        where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
        and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

    These SPIDs can then be used to check at the operating system level that the processes exist. In 8i a job queue process will have a name similar to ora_snp1_<instance_name>. From 9i onwards you will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.
    4.2.2. Version 11.1 and Above - Oracle Scheduler

    In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created. If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:
    Document 1083608.1 11g Streams and Oracle Scheduler

    4.2.2.1. Job Queue Processes in Initialization Parameter File

    The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably left at its default value. The value can be changed dynamically via:

        connect / as sysdba
        alter system set JOB_QUEUE_PROCESSES=10;

    To set the JOB_QUEUE_PROCESSES parameter back to its default value, run:

        connect / as sysdba
        alter system reset JOB_QUEUE_PROCESSES;

    and then bounce the instance.

    4.2.2.2. Job Queue Processes in Memory

    The following command will show how many job queue processes are currently in use by this instance (this may be different from what is in the init.ora/spfile):

        connect / as sysdba
        show parameter job;

    4.2.2.3. OS PIDs Corresponding to Job Queue Processes

    Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via:

        col PROGRAM for a30
        select p.SPID, p.PROGRAM, j.JOB_NAME
        from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
        where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
        and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';

    These SPIDs can then be used to check at the operating system level that the processes exist. You will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.
    4.3. Check the Alert Log and Any Associated Trace Files

    The first place to check for propagation failures is the alert log at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails, it will always write an error stack to the alert log. This error stack is also written to a job queue process trace file, which goes to the BACKGROUND_DUMP_DEST location for 10.2 and below, and to the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing; the problem could therefore be with the setup of the schedule. In the following example, the ORA-02068 shows that the failure was at the remote site, and further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem. (The remote object name below is reconstructed from the surrounding examples; the scrape had mangled it.)

        Thu Feb 14 10:40:05 2002
        Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
        ORA-04052: error occurred when looking up Remote object AQADM.INQ@SHANE816.WORLD
        ORA-00604: error occurred at recursive SQL level 4
        ORA-02068: following severe error from SHANE816
        ORA-03114: not connected to ORACLE
        ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
        ORA-06512: at "SYS.DBMS_AQADM", line 548
        ORA-06512: at line 1

    Other potential errors that may be written to the alert log can be found in the following notes:
    Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
    Document 846297.1 AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [Kghsstream_copyn] (10.2, 11.1)
    Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
    Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
    Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
    Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
    Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
    Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
    Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
    Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
    Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
    Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
    Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
    Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
    Document 118884.1 How to unschedule a propagation schedule stuck in pending state
    Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
    Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
    Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgrade From 8i or 9i to 10g or 11g
    Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

    4.3.1. Errors Related to Incorrect Network Configuration

    The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:

    ORA-12154: TNS:could not resolve service name
    ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
    ORA-12514: TNS:listener could not resolve SERVICE_NAME
    ORA-12541: TNS:no listener
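    Before digging further, it can be worth ruling out the network layer with a quick manual test of the alias used by the database link (a sketch; ORCL102B.WORLD is the alias from the examples in the next section, and aq_user1 is a placeholder user):

        tnsping ORCL102B.WORLD
        sqlplus aq_user1@ORCL102B.WORLD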
    4.4. Check that the Database Links Exist and are Functioning Correctly

    For schedules to remote databases, confirm that the database link exists:

        SQL> col DBLINK for a45
        SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
          2  from DBA_QUEUE_SCHEDULES
          3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

        QNAME                          DBLINK
        ------------------------------ ---------------------------------------------
        MY_QUEUE                       ORCL102B.WORLD

    Connect as the owner of the link and select across it to verify that it works and connects to the database we expect, i.e.:

        select * from ALL_QUEUES@ORCL102B.WORLD;

    You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

    4.5. Has Propagation Been Correctly Scheduled?

    Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check that the destination is as intended and spelled correctly.

        SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

        SCHEMA  QNAME      DESTINATION        S PROCESS
        ------- ---------- ------------------ - -----------
        AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

    AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation were to a remote (different) database, a database link would appear in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used instead).

    The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

        SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

        SCHEMA QNAME     LAST_RUN_DATE           NEXT_RUN_DATE
        ------ --------- ----------------------- -----------------------
        AQADM  MULTIPLEQ 13-FEB-2002 13:18:57    13-FEB-2002 13:20:30

    In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call. In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
    The columns are more or less the same as in DBA_QUEUE_SCHEDULES, so you just need to recognize the job and verify that it exists.

        SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

        JOB  WHAT
        ---- -----------------
        720  next_date := sys.dbms_aqadm.aq$_propaq(job);

    For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead:

        SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';

        JOB_NAME
        ------------------------------
        AQ_JOB$_41

    If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment, unless the schedule has been set up to run indefinitely.

    4.6. Is the Schedule Executing but Failing to Complete?

    Run the following query:

        SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;

        FAILURES LAST_ERROR_MSG
        -------- -----------------------
        1        ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueing
                 ORA-02063: preceding line from SHANE816

    The FAILURES column shows how many times execution of the schedule has been attempted and has failed. Oracle will attempt to execute the schedule 16 times, after which the job will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled: the column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back-off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation becomes disabled on the 17th attempt.

    Only the last execution failure is reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully, the error message entries are deleted; Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors are retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear the errors from the DBA_PROPAGATION view:
    Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?

    If a schedule is active and no errors are being reported, then the source queue may not have any messages to be propagated.
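    As an illustration (a sketch; the queue, destination, and propagation names are the ones used in the examples above), re-enabling a schedule that was disabled by repeated failures would look like:

        -- Basic AQ (non-Streams):
        begin
          DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
            queue_name  => 'AQADM.MULTIPLEQ',
            destination => 'SHANE816.WORLD');
        end;
        /

        -- 10.2+ Oracle Streams:
        begin
          DBMS_PROPAGATION_ADM.START_PROPAGATION('STRMADMIN_PROPAGATE');
        end;
        /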
    4.7. Do the Propagation Notification Queue Table and Queue Exist?

    Check that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped, and the corresponding queue table should never be dropped.

    10g and below

    The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

        SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
          2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

        QUEUE_TABLE
        ------------------------------
        AQ$_PROP_TABLE_1

        SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
          2  from DBA_QUEUES where OWNER = 'SYS'
          3  and QUEUE_TABLE like '%PROP_TABLE%';

        NAME                           ENQUEUE DEQUEUE
        ------------------------------ ------- -------
        AQ$_PROP_NOTIFY_1              YES     YES
        AQ$_AQ$_PROP_TABLE_1_E         NO      NO

    If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

    11g and above

    The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

        SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
          2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

        QUEUE_TABLE
        ------------------------------
        AQ_PROP_TABLE

        SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
          2  from DBA_QUEUES where OWNER = 'SYS'
          3  and QUEUE_TABLE like '%PROP_TABLE%';

        NAME                           ENQUEUE DEQUEUE
        ------------------------------ ------- -------
        AQ_PROP_NOTIFY                 YES     YES
        AQ$_AQ_PROP_TABLE_E            NO      NO

    If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

    4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?

    Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

        SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

        DESTINATION
        -----------------------------------------------------------------------------
        "AQADM"."INQ"@M2V102.ES

        SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from ALL_QUEUES@M2V102.ES;

        OWNER    NAME   ENQUEUE     DEQUEUE
        -------- ------ ----------- -----------
        AQADM    INQ    YES         YES

    4.9. Do the Target and Source Database Charactersets Differ?

    If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether it is only a particular combination of charactersets which causes a problem.
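    A simple way to compare the two (a sketch; the database link name is a placeholder, and NLS_LENGTH_SEMANTICS is included because it matters in the next section) is:

        select PARAMETER, VALUE from NLS_DATABASE_PARAMETERS
        where PARAMETER in ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');

        select PARAMETER, VALUE from NLS_DATABASE_PARAMETERS@ORCL102B.WORLD
        where PARAMETER in ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');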
    4.10. Check the Queue Table Type Agreement

    Propagation is not possible between queue tables whose types differ in some respect. One way to determine whether this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on. If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'. For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work unless the queues are Oracle Streams ANYDATA queues. See the following notes for issues caused by lack of type agreement:
    Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
    Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
    Document 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT

    4.11. Enable Propagation Tracing

    4.11.1. System Level

    This is set in the init.ora/spfile as follows, followed by an instance restart:

        event="24040 trace name context forever, level 10"

    This event cannot be set dynamically with an alter system command until version 10.2:

        SQL> alter system set events '24040 trace name context forever, level 10';

    To unset the event:

        SQL> alter system set events '24040 trace name context off';

    Debugging information will be logged to the job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created.

    4.11.2. Attaching to a Specific Process

    We can also attach to an existing job queue process that is running a propagation schedule and trace it individually using the oradebug utility, as follows.

    10.2 and below:

        connect / as sysdba
        select p.SPID, p.PROGRAM
        from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
        where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
        and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';
        -- For the process id (SPID), attach to it via oradebug and generate the following trace
        oradebug setospid <SPID>
        oradebug unlimit
        oradebug Event 10046 trace name context forever, level 12
        oradebug Event 24040 trace name context forever, level 10
        -- Trace the process for 5 minutes
        oradebug Event 10046 trace name context off
        oradebug Event 24040 trace name context off
        -- The following command returns the pathname/filename of the file being written to
        oradebug tracefile_name

    11g:

        connect / as sysdba
        col PROGRAM for a30
        select p.SPID, p.PROGRAM, j.JOB_NAME
        from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
        where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
        and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';
        -- For the process id (SPID), attach to it via oradebug and generate the following trace
        oradebug setospid <SPID>
        oradebug unlimit
        oradebug Event 10046 trace name context forever, level 12
        oradebug Event 24040 trace name context forever, level 10
        -- Trace the process for 5 minutes
        oradebug Event 10046 trace name context off
        oradebug Event 24040 trace name context off
        -- The following command returns the pathname/filename of the file being written to
        oradebug tracefile_name
    4.11.3. Further Tracing

    The previous tracing steps only trace the job queue process executing the propagation on the source. At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database, which is associated with the job queue process on the source database. The following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information.

    In order to identify the propagation receiver process, you need to execute the query as a user with privileges to access the V$ views in both the local and remote databases, so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link. The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.

    10.2 and below - Windows:

        select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
        from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
        where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
        and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
        and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

    10.2 and below - Unix:

        select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
        from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
        where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
        and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
        and pl.SPID=sr.PROCESS;

    11g - Windows:

        select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
        from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
        where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
        and j.JOB_NAME like '%AQ_JOB$_%'
        and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

    11g - Unix:

        select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
        from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
        where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
        and j.JOB_NAME like '%AQ_JOB$_%'
        and pl.SPID=sr.PROCESS;

    5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages

    5.1. Check the Privileges of All Users Involved

    Ensure that the owner of the database link has the necessary privileges on the AQ packages:

        SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;

        TABLE_NAME                     PRIVILEGE
        ------------------------------ ----------------------------------------
        DBMS_LOCK                      EXECUTE
        DBMS_AQ                        EXECUTE
        DBMS_AQADM                     EXECUTE
        DBMS_AQ_BQVIEW                 EXECUTE
        QT52814_BUFFER                 SELECT

    Note that when a queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table.

        SQL> select * from USER_ROLE_PRIVS;

        USERNAME                       GRANTED_ROLE                   ADM DEF OS_
        ------------------------------ ------------------------------ --- --- ---
        AQ_USER1                       AQ_ADMINISTRATOR_ROLE          NO  YES NO
        AQ_USER1                       CONNECT                        NO  YES NO
        AQ_USER1                       RESOURCE                       NO  YES NO

    It is good practice to configure a central AQ administrative user; all admin and processing jobs are then created, executed and administered as this user.
    This configuration is not mandatory, however, and the database link can be owned by any existing queue user. If the latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved.

    Privileges for an AQ administrative user:
    - Execute on DBMS_AQADM
    - Execute on DBMS_AQ
    - Grant of the AQ_ADMINISTRATOR_ROLE

    Privileges for an AQ user:
    - Execute on DBMS_AQ
    - Execute on the message payload
    - Enqueue privileges on the remote queue
    - Dequeue privileges on the originating queue

    Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to log in to the destination through the database link has been granted privileges to use AQ.
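    For illustration, granting this minimal set to a hypothetical AQ user (the user, queue, and payload type names below are placeholders, not objects from this note) might look like:

        grant execute on DBMS_AQ to aq_user1;
        grant execute on aqadm.message_payload_type to aq_user1;
        begin
          DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
            privilege  => 'ENQUEUE',
            queue_name => 'AQADM.INQ',
            grantee    => 'AQ_USER1');
          DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
            privilege  => 'DEQUEUE',
            queue_name => 'AQADM.OUTQ',
            grantee    => 'AQ_USER1');
        end;
        /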
    5.2. Verify Queue Payload Types

    AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify whether the source and destination payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking are stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier (OID) of the source queue and the address (database link) of the destination queue, i.e. [schema.]queue_name[@destination].

    Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle 9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call, made on the source database while connected as the AQ user (with serveroutput on), can verify whether we can propagate between the source and the destination queue tables:

        DECLARE
          rc_value number;
        BEGIN
          DBMS_AQADM.VERIFY_QUEUE_TYPES(
            src_queue_name  => 'AQ_USER1.Q_1',
            dest_queue_name => 'AQ_USER2.Q_2',
            destination     => 'dbl_aq_user2.es',
            rc              => rc_value);
          dbms_output.put_line('rc_value code is '||rc_value);
        END;
        /

    If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible, and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types, the following SQL can be used to extract the DDL for a specific type (with '%' changed appropriately) on both the source and the target, so the two can be compared:

        SET LONG 20000
        set pagesize 50
        EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
        SELECT DBMS_METADATA.GET_DDL('TYPE', t.type_name) from USER_TYPES t WHERE t.type_name like '%';
        EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');

    5.3. Check Message State and Destination

    The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the metadata associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table):

        SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

        QUEUE_TABLE
        --------------------
        MULTIPLEQTABLE

    For a small number of messages in a multiple-consumer queue table, the following query can be run (the first address below is reconstructed from the surrounding text; the scrape had mangled it):

        SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

        MSG_STATE      CONSUMER_NAME           ADDRESS
        -------------- ----------------------- -------------
        READY          AQUSER2                 AQADM.INQ@M2V102.ES
        READY          AQUSER1
        READY          AQUSER3                 AQADM.INQ

    In this example we see two messages ready to be propagated to remote queues and one that is not. If the ADDRESS column is blank, the message is not scheduled for propagation and can only be dequeued from the queue on which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the ADDRESS column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES), which shows that the message should be propagated to a queue at a remote database. The third row does not include a database link, so the message will be propagated to a queue that resides in the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying AQ$<queue_table_name>, a view that is available on top of every queue table.

    A more realistic query in an environment where the queue table contains thousands of messages is:

    8.0.3-compatible multiple-consumer queue tables and all-compatibility single-consumer queue tables:

        select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>
        group by MSG_STATE, QUEUE;

    8.1.3- and 10.0-compatible queue tables:

        select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>
        group by MSG_STATE, QUEUE, CONSUMER_NAME;

    For multiple-consumer queue tables, if you do not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify that the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure, and also those enqueued using a recipient list.

        SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

        QUEUE      NAME        ADDRESS
        ---------- ----------- -------------
        MULTIPLEQ  AQUSER2     AQADM.INQ@M2V102.ES
        MULTIPLEQ  AQUSER1

    In this example we have two subscribers registered with the queue: a local subscriber AQUSER1, and a remote subscriber AQUSER2 on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

    For 8.1-style and above multiple-consumer queue tables, you can also check the following information at the target:

        select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
        from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

    For 8.0-style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

        select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
        from AQ$<queue_table_name> t, table(t.history) h
        where t.Q_NAME = '<QUEUE_NAME>';

    A non-NULL TRANSACTION_ID indicates that the message was successfully propagated.
    Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination.

    6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment

    6.1. Is the Propagation Enabled?

    For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this:

        SELECT p.PROPAGATION_NAME,
               DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
               s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSG
        FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
        WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION)
        AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
        AND s.QNAME = p.SOURCE_QUEUE_NAME
        AND MESSAGE_DELIVERY_MODE = 'PERSISTENT'
        order by PROPAGATION_NAME;

    At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt can be made to disable the propagation and then re-enable it. In the examples below, for the propagation named STRMADMIN_PROPAGATE, where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:

    10.2 and above:

        exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
        exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

    If the above does not fix the problem, stop the propagation specifying the force parameter (the second parameter of stop_propagation) as TRUE:

        exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
        exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

    The statistics for the propagation, as well as any old error messages, are cleared when the force parameter is set to TRUE. Therefore, if the propagation schedule is stopped with FORCE set to TRUE and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.

    9.2 or 10.1:

        exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE', 'ORCL2.WORLD');
        exec dbms_aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE', 'ORCL2.WORLD');

    If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation:

        exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE', 'ORCL2.WORLD');
        exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE', 'ORCL2.WORLD');

    Typically, if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed.

    6.2. Check Propagation Rule Sets and Transformations

    Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated.
    Finally, inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.

    The following query shows which rule sets are assigned to a propagation:

        select PROPAGATION_NAME,
               RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",
               NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"
        from DBA_PROPAGATION;

    The next two queries list the propagation rules and their conditions; the first is for the positive rule set, the second for the negative rule set:

        set long 4000
        select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
               rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
               r.RULE_CONDITION CONDITION
        from DBA_RULE_SET_RULES rsr, DBA_RULES r
        where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
        and RULE_SET_NAME in (select RULE_SET_NAME from DBA_PROPAGATION)
        order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

        set long 4000
        select c.PROPAGATION_NAME,
               rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
               rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
               r.RULE_CONDITION CONDITION
        from DBA_RULE_SET_RULES rsr, DBA_RULES r, DBA_PROPAGATION c
        where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
        and rsr.RULE_SET_OWNER = c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME = c.NEGATIVE_RULE_SET_NAME
        and rsr.RULE_SET_NAME in (select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION)
        order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

    6.3. Determining the Total Number of Messages and Bytes Propagated

    As in Section 3.1, determining whether messages are flowing can indicate whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2) but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations, two views can assist with this by showing the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine whether messages are being delivered to the target. Look for the statistics to increase.

    Source:

        select QUEUE_SCHEMA, QUEUE_NAME, DBLINK, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTES
        from V$PROPAGATION_SENDER;

    Target:

        select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME,
               HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS
        from V$PROPAGATION_RECEIVER;

    6.4. Check Buffered Subscribers

    The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site being propagated to is listed as a subscriber address for the site being propagated from:

        select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS;

    The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues in the same database).

    6.5. Common Streams Propagation Errors

    6.5.1. ORA-02082: a loopback database link must have a connection qualifier

    This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.
    6.5.2. ORA-25307: Enqueue rate too high. Enable flow control

    DBA_QUEUE_SCHEDULES will display this informational message for a propagation when automatic flow control (a 10g feature of Streams) has been invoked. Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control'. This is an informative message indicating that flow control has been automatically enabled to reduce the rate at which messages are being enqueued into the target queue. This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA; propagation and the capture process will resume automatically when the target site is able to accept more messages. The following document contains more information:
    Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
    See the following document for one potential cause of this situation:
    Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State

    6.5.3. ORA-25315: unsupported configuration for propagation of buffered messages

    This error typically occurs when the target database is RAC, and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages.

    6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

    For causes and fixes refer to:
    Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

    6.5.5. Stopping or Dropping a Streams Propagation Hangs

    See the following note:
    Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
    6.6. Streams Propagation-Related Notes for Common Issues

    Document 437838.1 Streams Specific Patches
    Document 749181.1 How to Recover Streams After Dropping Propagation
    Document 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
    Document 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
    Document 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
    Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]
    Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
    Document 333068.1 ORA-23603: Streams Enqueue Aborted Due To Low SGA
    Document 363496.1 Ora-25315 Propagating on RAC Streams
    Document 368237.1 Unable to Unschedule Propagation. Streams Queue is Invalid
    Document 436332.1 dbms_propagation_adm.stop_propagation hangs
    Document 727389.1 Propagation Fails With ORA-12528
    Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
    Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
    Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
    Document 1059029.1 Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
    Document 556309.1 Changing Propagation / queue_to_queue: false -> true does not work; no LCRs propagated
    Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
    Document 311021.1 Streams Propagation Process: Ora-12154 After Reboot with Transparent Application Failover (TAF) configured
    Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
    Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION Fails With ORA-1747

    7. Performance Issues

    A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target than to a slow propagation. Propagation can be inferred to be slow if the message statistics are changing, the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:
    Document 335516.1 Master Note for Streams Performance Recommendations
    Document 730036.1 Overview for Troubleshooting Streams Performance Issues
    Document 780733.1 Streams Propagation Tuning with Network Parameters
    White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
    White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf (see Appendix A: Using Streams Configurations Over a Network)

    For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
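    A quick way to run that check (a sketch combining the views named above; run on the source database):

        -- Capture state: PAUSED FOR FLOW CONTROL suggests a downstream bottleneck
        select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;

        -- If no ORA-25307 warning appears here while capture is paused,
        -- the propagation itself is the likely bottleneck
        select QNAME, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;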
    References

    NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
    NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
    NOTE:1059029.1 - Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
    NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
    NOTE:1083608.1 - 11g Streams and Oracle Scheduler
    NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
    NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
    NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION Fails With ORA-1747
    NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
    NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
    NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
    NOTE:1203544.1 - AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade
    NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgrade From 8i or 9i to 10g or 11g
    NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
    NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
    NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
    NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
    NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
    NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
    NOTE:311021.1 - Streams Propagation Process: Ora-12154 After Reboot with Transparent Application Failover (TAF) configured
    NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
    NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
    NOTE:335516.1 - Master Note for Streams Performance Recommendations
    NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink>
    NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT
    NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
    NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
    NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
    NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
    NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
    NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
    NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
    NOTE:437838.1 - Streams Specific Patches
    NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
    NOTE:463820.1 - Streams Combined Capture and Apply in 11g
    NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
    NOTE:556309.1 - Changing Propagation / queue_to_queue: false -> true does not work; no LCRs propagated
    NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
    NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
    NOTE:727389.1 - Propagation Fails With ORA-12528
    NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
    NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
    NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
    NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
    NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
    NOTE:749181.1 - How to Recover Streams After Dropping Propagation
    NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
    NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
    NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
    NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
    NOTE:827473.1 - How to alter propagation from queue-to-queue to queue-to-dblink
    NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
    NOTE:846297.1 - AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [Kghsstream_copyn]
    NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]


  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an AppDomain, there are many different locations that Fusion (the .NET assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .NET runtime (the Assembly._nLoad internal method seems to be the entry point when reflect-loading - and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths.

    My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup - but the benefits we get from this component totally outweigh this.

    The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later):

        void PreLoad(IEnumerable<string> paths)
        {
            foreach (string p in paths)
            {
                PreLoad(p);
            }
        }

        void PreLoad(string p)
        {
            // all try/catch blocks are elided for brevity
            string[] files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);
            AssemblyName a = null;
            foreach (var s in files)
            {
                a = AssemblyName.GetAssemblyName(s);
                if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                    assembly => AssemblyName.ReferenceMatchesDefinition(
                        assembly.GetName(), a)))
                    Assembly.LoadFrom(s);
            }
        }

    LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for an assembly, it doesn't find one loaded from where it expects to find it.

    So, with this in place, all I now have to do is get a list, in precedence order (highest to lowest), of the search paths that Fusion is going to use when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app.

    My first iteration of this simply used AppDomain.BaseDirectory. This works for services, forms apps and console apps. It doesn't work for an ASP.NET website, however, since there are at least two main locations: the AppDomain.DynamicDirectory (where ASP.NET places its dynamically generated page classes and any assemblies that the .aspx page code references), and then the site's Bin folder, which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property.

    So I now have working code for the most basic types of apps (SQL Server-hosted AppDomains are another story, since the filesystem is virtualised) - but I came across an interesting issue a couple of days ago where this code simply doesn't work: the NUnit test runner. This uses both shadow copying (so my algorithm would need to be discovering and loading the assemblies from the shadow-copy drop folder, not from the bin folder), and it sets up the PrivateBinPath as relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered, but which must be valid, because otherwise Fusion would choke on loading the assemblies.
I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up - what I want is, given an AppDomain and its setup information, the ability to produce the list of folders that I should scan in order to pick up all the DLLs that are going to be loaded; regardless of how the AppDomain is set up. If Fusion can treat them all the same, then so should my code. Of course, I might have to alter the algorithm if .Net changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge-cases that remain unsupported for now. Any ideas!?

    Read the article

  • ruby / rails / mysql performance degraded on Snow Leopard

    - by adamaig
I've burned a bunch of hours on this. I'm not having problems getting things to build, but I am seeing that my test suite runs about 2x slower than when I was on OS X 10.5.x. I've spent a lot of time playing around with different optimization settings (learning to avoid homebrew's llvm-gcc compilation). I've just learned that I needed to tweak /Library/Preferences/SystemConfiguration/com.apple.Boot.plist in order to get the kernel to boot in 64-bit mode. However, my rails app is still running a bit slower than before, even after warming up the mysql server. So what performance tweaks might I need to look into? Right now the stock ruby 1.8.7 runs faster than 1.9.1 for some things, and I'd really like to know if there is anything I should be looking for. All my dev software has been compiled for x86_64, mysql with -O2 optimization, using regular gcc (not llvm-gcc).

    Read the article

  • Clickonce downloading the deploy files via HTTP and not HTTPS

    - by Scott Manning
I am working on a project to deploy an application via ClickOnce. The website where these files are housed will only accept HTTPS traffic, and if you attempt to connect via HTTP, our SiteMinder agent will redirect you to an HTTPS login form. We cannot disable the SiteMinder agent or enable HTTP for security reasons. In the application file, I have a codebase that references an absolute path to the manifest, and it is via HTTPS: <dependency> <dependentAssembly dependencyType="install" codebase="https://psaportal.ilab.test.com/testprinting/Application_Files/testprint_1_0_0_1/testprint.exe.manifest" size="10147"> <assemblyIdentity name="testprint.exe" version="1.0.0.1" publicKeyToken="9a078649ee05e0e7" language="neutral" processorArchitecture="msil" type="win32" /> <hash> <dsig:Transforms> <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" /> </dsig:Transforms> <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" /> <dsig:DigestValue>2nch1T0SmlAycmePobtg9F1qF7c=</dsig:DigestValue> </hash> </dependentAssembly> </dependency> In running Wireshark and decoding the SSL traffic (I am using the server's private key in Wireshark to decrypt the SSL traffic), I see that the request to the application's manifest file is via HTTPS (this is a good thing). But when ClickOnce tries to download testprint.exe.deploy and the other respective files, it is always via HTTP, and SiteMinder jumps in and redirects the requests, which kills the ClickOnce install with errors. I have tried to specify an absolute codebase reference in the manifest file, but then I start getting entry-point errors when the manifest is downloaded by the ClickOnce installer. The current dependency section from the manifest file looks like the following: <dependency> <dependentAssembly dependencyType="install" allowDelayedBinding="true" codebase="testprint.exe" size="107008"> <assemblyIdentity name="testprint" version="1.0.0.1" language="neutral" processorArchitecture="msil" /> <hash> <dsig:Transforms> <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" /> </dsig:Transforms> <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" /> <dsig:DigestValue>dm2nJsu/5UyaEXSDmnISwfnE9MM=</dsig:DigestValue> </hash> </dependentAssembly> </dependency> I have verified that the application, manifest and deploy files are all housed under the same URL and that the SSL certificate is valid. We have tried about every combination of generating application and manifest files we could dream up and are looking for other solutions. The application is using .NET 3.5, and we have tried building the application and manifest files via VS2008, VS2010 and mage with no success. Does anyone know how to get all of the deploy files to always download via HTTPS?

    Read the article

  • use of tcp_delack_min on redhat linux (kernel 2.6.18)

    - by user41466
Hello, we're moving from Solaris to Red Hat Linux and trying to duplicate our low-latency setup, which, on Solaris, includes the ndd settings related to TCP_NODELAY and the Nagle algorithm. I got the impression that those parameters are not all configurable system-wide, but still found some info. We have configured our applications to run with the Nagle algorithm disabled, but that is not sufficient. We have found an interesting RH article presenting the tcp_delack_min parameter; however, when browsing /proc/sys/net/ipv4/, I can't find it there. Would it be safe to assume that simply "adding" the parameter as described in the doc would be enough, or rather that the option is not supported by this version (which would be strange, as RH specifies that it "can be performed on a standard Red Hat Enterprise Linux installation")? Any other idea / recommendation to improve latency further? Thanks
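For the application side of this, disabling Nagle is done per socket with the TCP_NODELAY option; a minimal sketch in Python (the poster's applications are not necessarily Python, and the host/port are placeholders):

    import socket

    # Minimal sketch: disable Nagle's algorithm on one socket. Note this is
    # independent of delayed ACKs (tcp_delack_min), which are a kernel-side
    # setting on the receiving end.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect(("192.0.2.10", 9000))  # placeholder host/port
    sock.sendall(b"tick")               # written out immediately, not coalesced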

    Read the article

  • FFMPEG based Theora Video Decoder performance??

    - by goldenmean
Hi, I am in the process of porting and optimizing the Theora video decoder in the ffmpeg-0.5 package to an ARM Cortex-A8 (NEON) processor @ 667 MHz. I am looking for a target estimate of the frames per second the decoder library alone should achieve after full optimization (C level and NEON assembly / intrinsics) for 720x480 progressive content in a 2 Mbps stream. I have a Real Video 9 decoder on Cortex-A8 which gives around 40 fps for the same kind of stream (720x480, 2 Mbps). How can I extrapolate this data based on the relative complexities of RV9 and Theora and get an fps estimate for a Theora decoder on Cortex-A8? I am aware that the performance depends upon the cache configuration of the h/w, etc., but any pointers will help. Thanks, -AD
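For what it's worth, once a relative per-frame cost ratio between the two codecs is in hand (e.g. from profiling both decoders on the same hardware and stream), the extrapolation itself is one line; a sketch with an assumed, made-up ratio:

    # Back-of-the-envelope sketch: scale the measured RV9 frame rate by the
    # relative per-frame decode cost. The 1.5x ratio is a placeholder, not a
    # measured number; it would have to come from profiling both decoders.
    rv9_fps = 40.0
    theora_cost_vs_rv9 = 1.5
    theora_fps_estimate = rv9_fps / theora_cost_vs_rv9
    print("Estimated Theora fps: %.1f" % theora_fps_estimate)  # ~26.7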

    Read the article

  • Unable to find valid certification path to requested target while CAS authentication

    - by Dmitriy Sukharev
I'm trying to configure CAS authentication. It requires both CAS and the client application to use the HTTPS protocol. Unfortunately we have to use a self-signed certificate (with a CN that doesn't have anything in common with our server). Also, the server is behind a firewall and we have only two ports (ssh and https) visible. Since there are several applications that should be visible externally, we use Apache for AJP reverse proxying of requests to these applications. Secure connections are managed by Apache, and the Tomcat instances are not configured to work with SSL. But I got an exception during authentication, and therefore decided to set the keystore in CATALINA_OPTS: export CATALINA_OPTS="-Djavax.net.ssl.keyStore=/path/to/tomcat/ssl/cert.pfx -Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.keyAlias=alias -Djavax.net.debug=ssl" cert.pfx was obtained from the certificate and key that are used by the Apache HTTP Server: $ openssl pkcs12 -export -out /path/to/tomcat/ssl/cert.pfx -inkey /path/to/apache2/ssl/server-key.pem -in /path/to/apache2/ssl/server-cert.pem When I try to authenticate a user I obtain the following exception: Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:174) ~[na:1.6.0_32] at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238) ~[na:1.6.0_32] at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:318) ~[na:1.6.0_32] Meanwhile I can see in catalina.out that Tomcat sees the certificate in cert.pfx and that it's the same as the one used during authentication: 09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Constructing validation url: https://external-ip/cas/proxyValidate?pgtUrl=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_proxyreceptor&ticket=ST-17-PN26WtdsZqNmpUBS59RC-cas&service=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_check 09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server. keyStore is : /path/to/tomcat/ssl/cert.pfx keyStore type is : PKCS12 keyStore provider is : init keystore init keymanager of type SunX509 *** found key for : 1 chain [0] = [ [ Version: V1 Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5 Key: Sun RSA public key, 1024 bits modulus: 13??a lot of digits here??19 public exponent: ????7 Validity: [From: Tue Apr 24 16:32:18 CEST 2012, To: Wed Apr 24 16:32:18 CEST 2013] Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country SerialNumber: [ d??????? ????????] ] Algorithm: [SHA1withRSA] Signature: 0000: 65 Signature is here 0070: 96 . ] *** trustStore is: /jdk-home-folder/jre/lib/security/cacerts Here is a lot of trusted CAs. Here is nothing related to our certificate or our (not trusted) CA. ... 09:11:39.731 [http-bio-8080-exec-4] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server. 
Allow unsafe renegotiation: false Allow legacy hello messages: true Is initial handshake: true Is secure renegotiation: false %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1347433643 bytes = { 63, 239, 180, 32, 103, 140, 83, 7, 109, 149, 177, 80, 223, 79, 243, 244, 60, 191, 124, 139, 108, 5, 122, 238, 146, 1, 54, 218 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV] Compression Methods: { 0 } *** http-bio-8080-exec-4, WRITE: TLSv1 Handshake, length = 75 http-bio-8080-exec-4, WRITE: SSLv2 client hello message, length = 101 http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 81 *** ServerHello, TLSv1 RandomCookie: GMT: 1347433643 bytes = { 145, 237, 232, 63, 240, 104, 234, 201, 148, 235, 12, 222, 60, 75, 174, 0, 103, 38, 196, 181, 27, 226, 243, 61, 34, 7, 107, 72 } Session ID: {79, 202, 117, 79, 130, 216, 168, 38, 68, 29, 182, 82, 16, 25, 251, 66, 93, 108, 49, 133, 92, 108, 198, 23, 120, 120, 135, 151, 15, 13, 199, 87} Cipher Suite: SSL_RSA_WITH_RC4_128_SHA Compression Method: 0 Extension renegotiation_info, renegotiated_connection: <empty> *** %% Created: [Session-2, SSL_RSA_WITH_RC4_128_SHA] ** SSL_RSA_WITH_RC4_128_SHA http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 609 *** Certificate chain chain [0] = [ [ Version: V1 Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5 Key: Sun RSA public key, 1024 bits modulus: 13??a lot of digits here??19 public exponent: ????7 Validity: [From: Tue Apr 24 16:32:18 CEST 2012, To: Wed Apr 24 16:32:18 CEST 2013] Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country SerialNumber: [ d??????? ????????] ] Algorithm: [SHA1withRSA] Signature: 0000: 65 Signature is here 0070: 96 . ] *** http-bio-8080-exec-4, SEND TLSv1 ALERT: fatal, description = certificate_unknown http-bio-8080-exec-4, WRITE: TLSv1 Alert, length = 2 http-bio-8080-exec-4, called closeSocket() http-bio-8080-exec-4, handling exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target I tried to convert our pem certificate to der format and imported it into the trusted keystore (cacerts) (without the private key), but it didn't change anything. I'm not confident that I did it right, though. Also I must inform you that I don't know the passphrase for our server-key.pem file, and it probably differs from the password for the keystore I created. OS: CentOS 6.2 Architecture: x64 Tomcat version: 7 Apache HTTP Server version: 2.4 Is there any way to make Tomcat accept our certificate?

    Read the article

  • Does an Intent's extras still get flattened into a Parcel even if the new activity is being started

    - by Neil Traft
    I was wondering... So if you start a new activity via an intent, the intent has to be serialized and deserialized because you may have to send the intent to a separate VM instance via IPC. But what if the PackageManager knows that your new activity will be created on the current task? It seems like a reasonably Googly optimization would be not to serialize the intent at all, since it's all happening inside the same VM. But then again, you can't just allow the new activity to use the same instance of each parcelable, because any changes made by the new activity would show up in the old activity and the programmer might not be expecting this. So, is this optimization being done? Or do the extras always get marshalled and unmarshalled, no matter what?

    Read the article

  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. This has been running flawlessly for over a year but I was recently forced to strip, move and rebuild the machine. When I had it all back together and ran up Ubuntu, I had some problems with disks not being detected. A couple of reboots later and I'd solved that issue. The problem now is that the 3-disk array is showing up as degraded every time I boot up. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this: root@uberserver:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md_d1 : inactive sdf1[2](S) 1953511936 blocks md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0] 2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] I have two RAID arrays and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array so I've tried first stopping the existing array that ubuntu started: root@uberserver:~# mdadm --stop /dev/md_d1 mdadm: stopped /dev/md_d1 ...and then tried to tell ubuntu to pick the disks up again: root@uberserver:~# mdadm --assemble --scan mdadm: /dev/md/1 has been started with 2 drives (out of 3). So that's started md1 again but it's not picking up the disk from md_d1: root@uberserver:~# cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md1 : active raid5 sde1[1] sdf1[2] 3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] md_d1 : inactive sdd1[0](S) 1953511936 blocks md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0] 2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again? [Edit] - After adding the md1 to mdadm.conf it now tries to mount the array on startup but it's still missing the disk. If I tell it to try and assemble automatically I get the impression it know it needs sdd but can't use it: root@uberserver:~# mdadm --assemble --scan /dev/md1: File exists mdadm: /dev/md/1 already active, cannot restart it! mdadm: /dev/md/1 needed for /dev/sdd1... What am I missing?

    Read the article

  • Google App Engine Database Index

    - by fjsj
I need to store an undirected graph in a Google App Engine database. For optimization purposes, I am thinking of using database indexes. Using Google App Engine, is there any way to define the columns of a database table to create its index? I will need some optimization, since my app uses this stored undirected graph in content-based filtering for item recommendation. Also, the recommender algorithm updates the weights of some of the graph's edges. If it is not possible to use database indexes, please suggest another method to reduce query time for the graph table. I believe my algorithm does more data-retrieval operations on the graph table than write operations. PS: I am using Python.
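For reference, in the Python runtime there is no table/column DDL as such: you define a model class, and the datastore maintains single-property indexes automatically (composite indexes are declared in index.yaml). A sketch of one possible edge model - the names here are illustrative, not from the post:

    from google.appengine.ext import db

    # Hypothetical model for undirected edges; properties are indexed by the
    # datastore automatically unless indexed=False is set.
    class Edge(db.Model):
        node_a = db.StringProperty(required=True)
        node_b = db.StringProperty(required=True)
        weight = db.FloatProperty(default=1.0)

    def neighbors(node_id):
        # An undirected edge has to be looked up from both ends.
        a_side = Edge.all().filter('node_a =', node_id).fetch(1000)
        b_side = Edge.all().filter('node_b =', node_id).fetch(1000)
        return [e.node_b for e in a_side] + [e.node_a for e in b_side]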

    Read the article

  • GCC problem with raw double type comparisons

    - by Monomer
I have the following bit of code, however when compiling it with GCC 4.4 with various optimization flags I get some unexpected results when it's run. #include <iostream> int main() { const unsigned int cnt = 10; double lst[cnt] = { 0.0 }; const double v[4] = { 131.313, 737.373, 979.797, 731.137 }; for(unsigned int i = 0; i < cnt; ++i) { lst[i] = v[i % 4] * i; } for(unsigned int i = 0; i < cnt; ++i) { double d = v[i % 4] * i; if(lst[i] != d) { std::cout << "error @ : " << i << std::endl; return 1; } } return 0; } when compiled with: "g++ -pedantic -Wall -Werror -O1 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O2 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O3 -o test test.cpp" I get no errors when compiled with: "g++ -pedantic -Wall -Werror -o test test.cpp" I get no errors I do not believe this to be an issue related to rounding, or an epsilon difference in the comparison. I've tried this with Intel v10 and MSVC 9.0 and they all seem to work as expected. I believe this should be nothing more than a bitwise compare. If I replace the if-statement with the following: if (static_cast<long long int>(lst[i]) != static_cast<long long int>(d)), and add "-Wno-long-long" I get no errors in any of the optimization modes when run. If I add std::cout << d << std::endl; before the "return 1", I get no errors in any of the optimization modes when run. Is this a bug in my code, or is there something wrong with GCC and the way it handles the double type?

    Read the article

  • Help with refactoring PHP code

    - by Richard Knop
I had some trouble implementing Lawler's algorithm but thanks to SO and a bounty of 200 reputation I finally managed to write a working implementation: http://stackoverflow.com/questions/2466928/lawlers-algorithm-implementation-assistance I feel like I'm using too many variables and loops there, though, so I am trying to refactor the code. It should be simpler and shorter yet remain readable. Does it make sense to make a class for this? Any advice or even help with refactoring this piece of code is welcome: <?php /* * @name Lawler's algorithm PHP implementation * @desc This algorithm calculates an optimal schedule of jobs to be * processed on a single machine (in reversed order) while taking * into consideration any precedence constraints. * @author Richard Knop * */ $jobs = array(1 => array('processingTime' => 2, 'dueDate' => 3), 2 => array('processingTime' => 3, 'dueDate' => 15), 3 => array('processingTime' => 4, 'dueDate' => 9), 4 => array('processingTime' => 3, 'dueDate' => 16), 5 => array('processingTime' => 5, 'dueDate' => 12), 6 => array('processingTime' => 7, 'dueDate' => 20), 7 => array('processingTime' => 5, 'dueDate' => 27), 8 => array('processingTime' => 6, 'dueDate' => 40), 9 => array('processingTime' => 3, 'dueDate' => 10)); // precedence constraints, i.e. job 2 must be completed before job 5 etc $successors = array(2=>5, 7=>9); $n = count($jobs); $optimalSchedule = array(); for ($i = $n; $i >= 1; $i--) { // jobs not required to precede any other job $arr = array(); foreach ($jobs as $k => $v) { if (false === array_key_exists($k, $successors)) { $arr[] = $k; } } // calculate total processing time $totalProcessingTime = 0; foreach ($jobs as $k => $v) { if (true === array_key_exists($k, $arr)) { $totalProcessingTime += $v['processingTime']; } } // find the job that will go to the end of the optimal schedule array $min = null; $x = 0; $lastKey = null; foreach($arr as $k) { $x = $totalProcessingTime - $jobs[$k]['dueDate']; if (null === $min || $x < $min) { $min = $x; $lastKey = $k; } } // add the job to the optimal schedule array $optimalSchedule[$lastKey] = $jobs[$lastKey]; // remove job from the jobs array unset($jobs[$lastKey]); // remove precedence constraint from the successors array if needed if (true === in_array($lastKey, $successors)) { foreach ($successors as $k => $v) { if ($lastKey === $v) { unset($successors[$k]); } } } } // reverse the optimal schedule array and preserve keys $optimalSchedule = array_reverse($optimalSchedule, true); // add tardiness to the array $i = 0; foreach ($optimalSchedule as $k => $v) { $optimalSchedule[$k]['tardiness'] = 0; $j = 0; foreach ($optimalSchedule as $k2 => $v2) { if ($j <= $i) { $optimalSchedule[$k]['tardiness'] += $v2['processingTime']; } $j++; } $i++; } echo '<pre>'; print_r($optimalSchedule); echo '</pre>';
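For comparison only (not a drop-in replacement for the PHP above), the same reverse-scheduling loop fits in a short Python sketch, assuming jobs maps id to (processingTime, dueDate) and successors maps a to b when job a must finish before job b:

    def lawler(jobs, successors):
        # Schedule in reverse: repeatedly place, at the end, the candidate
        # job (one not required to precede any remaining job) whose lateness
        # when finishing last is minimal.
        jobs = dict(jobs)
        succ = dict(successors)
        schedule = []
        while jobs:
            total = sum(p for p, _ in jobs.values())
            candidates = [j for j in jobs if j not in succ]
            last = min(candidates, key=lambda j: total - jobs[j][1])
            schedule.append(last)
            del jobs[last]
            # constraints pointing at the job just placed are now satisfied
            succ = {a: b for a, b in succ.items() if b != last}
        schedule.reverse()
        return schedule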

    Read the article

  • Most Efficient way to set Register to 1 or (-1)

    - by Bob
I am taking an assembly course now, and the guy who checks our home assignments is a very pedantic old-school optimization freak. For example he deducts 10% if he sees: mov ax, 0 instead of: xor ax,ax even if it's only used once. I am not a complete beginner in assembly programming, but I'm not an optimization expert, so I need your help in something (might be a very stupid question but I'll ask anyway): if I need to set a register value to 1 or (-1), is it better to use: mov ax, 1 or do something like: xor ax,ax inc ax I really need a good grade, so I'm trying to get it as optimized as possible. (I need to optimize both time and code size)

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
I have a .jar file that represents a server which receives an image over TCP in bytes (of size at most 500 kb) and writes it to a file. It then runs a Sobel filter on this image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found out it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2 2- Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3- My laptop is dual core only...Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • Can I execute a "variable statements" within a function and without defines.

    - by René Nyffenegger
I am facing a problem that I cannot see how to solve without #defines or incurring a performance impact, although I am sure that someone can point me to a solution. I have an algorithm that sort of produces a (large) series of values. For simplicity's sake, in the following I pretend it's a for loop in a for loop, although in my code it's more complex than that. In the core of the loop I need to do calculations with the values being produced. Although the algorithm for the values stays the same, the calculations vary. So basically, what I have is: void normal() { // "Algorithm" producing numbers (x and y): for (int x=0 ; x<1000 ; x++) { for (int y=0 ; y<1000 ; y++) { // Calculation with numbers being produced: if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } // end of calculation }} } So, the only part I need to change is if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } So, in order to solve that, I could construct an abstract base class: class inner_0 { public: virtual void call(int x, int y) = 0; }; and derive a "callable" class from it: class inner : public inner_0 { public: virtual void call(int x, int y) { if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } } }; I can then pass an instance of the class to the "algorithm" like so: void O(inner i) { for (int x=0 ; x<1000 ; x++) { for (int y=0 ; y<1000 ; y++) { i.call(x,y); }} } // somewhere else.... inner I; O(I); In my case, I incur a performance hit because there is an indirect call via the virtual function table. So I was thinking about a way around it. It's possible with two #defines: #define OUTER \ for (int x=0 ; x<1000 ; x++) { \ for (int y=0 ; y<1000 ; y++) { \ INNER \ }} // later... #define INNER \ if (x + y == 800 && y > 790) \ std::cout << x << ", " << y << std::endl; OUTER While this certainly works, I am not 100% happy with it because I don't necessarily like #defines. So, my question: is there a better way to achieve what I want?

    Read the article

  • Best way to search a point across several polygons

    - by user1474341
I have a requirement whereby I need to match a given point (lat,lon) against several polygons to decide if there is a match. The easiest way would be to iterate over each polygon and apply the point-in-polygon check algorithm, but that is prohibitively expensive. The next optimization that I did was to define a bounding rectangle for each polygon (upper bound, lower bound) and iteratively check the point against the bounding box (fewer comparisons than checking all the points of the polygon). Is there any other optimization possible? Would a spatial index on the bounding rectangle points or a geohash help? Any guidance would be greatly appreciated. Thanks!
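For reference, the two-stage check described above looks roughly like this in Python, assuming each polygon is a list of (x, y) vertices with a precomputed (minx, miny, maxx, maxy) box:

    def point_in_polygon(pt, polygon):
        # Standard ray-casting test: cast a ray to the right and count how
        # many edges it crosses; an odd count means the point is inside.
        x, y = pt
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def matching_polygons(pt, indexed_polygons):
        x, y = pt
        for polygon, (minx, miny, maxx, maxy) in indexed_polygons:
            if minx <= x <= maxx and miny <= y <= maxy:  # cheap reject first
                if point_in_polygon(pt, polygon):
                    yield polygon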

    Read the article

  • displaying python's autodoc to the user (python 3.3)

    - by Plotinus
I'm writing a simple command line math game, and I'm using python's autodoc for my math algorithms to help me remember, for example, what a Proth number is while I'm writing the algorithm, but later on I'll want to tell that information to the user as well, so they'll know what the answer was. So, for example I have: def is_proth(): """Proth numbers are numbers that fit the formula k×2^n + 1, where k are odd positive integers, and 2^n > k.""" [snip] return proths and then I tried to make a dictionary, like so: definitions = {"proths" : help(is_proth)} But it doesn't work. It prints this when I start the program, one for each item in the dictionary, and then it errors out on one of them that returns a set. And anyway, I don't want it displayed to the user until after they've played the game. Help on function is_proth in module __main__: is_proth() Proth numbers are numbers that fit the formula k×2^n + 1, where k are odd positive integers, and 2^n > k. (END) I understand the purpose of autodoc is more for helping programmers who are calling a function than for generating user documentation, but it seems inefficient to have to type out the definition of what a Proth number is twice: once in a comment to help me remember what an algorithm does, and once to tell the user the answer to the game they were playing after they've won or lost.
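The immediate issue here is that help() writes its page to the screen right away and returns None, so the dictionary ends up holding None at construction time. Storing the docstring itself (the __doc__ attribute) defers the display until the game is over; a sketch using the same function:

    def is_proth():
        """Proth numbers are numbers that fit the formula k*2**n + 1, where
        k are odd positive integers, and 2**n > k."""
        ...

    # help() prints immediately and returns None; __doc__ is the string itself.
    definitions = {"proths": is_proth.__doc__}

    # later, after the player has won or lost:
    print(definitions["proths"])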

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer.  Let’s look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, and element 2 then sets the min variable, we’ll detect a min value of 2 instead of 1.  This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we’re using, would require adding locking.  We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.  
This leads to two observations I’ll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, “merge” together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.  
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation. Now that I’ve explained how this works, let’s look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObject) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
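The article's code is C#, but the local-state-then-merge shape is language-neutral; as an illustrative sketch only, here is the same pattern in Python, where each worker reduces its own chunk with no locking and the merge touches one value per worker rather than one per element:

    from concurrent.futures import ThreadPoolExecutor

    def parallel_min(values, workers=4):
        # Split into per-worker chunks; each worker computes a local minimum
        # serially (no locks), then the partial results are merged once.
        chunks = [values[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(min, (c for c in chunks if c)))
        return min(partials)

    print(parallel_min([5.0, 3.2, 9.9, 1.7, 4.4]))  # 1.7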

    Read the article

  • A Nondeterministic Engine written in VB.NET 2010

    - by neil chen
While reading SICP (Structure and Interpretation of Computer Programs) recently, I became very interested in the concept of a "Nondeterministic Algorithm". According to Wikipedia:  In computer science, a nondeterministic algorithm is an algorithm with one or more choice points where multiple different continuations are possible, without any specification of which one will be taken. For example, here is a puzzle from SICP: Baker, Cooper, Fletcher, Miller, and Smith live on different floors of an apartment house that contains only five floors. Baker does not live on the top floor. Cooper does not live on the bottom floor. Fletcher does not live on either the top or the bottom floor. Miller lives on a higher floor than does Cooper. Smith does not live on a floor adjacent to Fletcher's. Fletcher does not live on a floor adjacent to Cooper's. Where does everyone live? After reading this I decided to build a simple nondeterministic calculation engine with .NET. The rough idea is that we can use an iterator to track each set of possible values of the parameters, and then we implement some logic inside the engine to automate the state machine, so that we can try one combination of the values, test it, and then move to the next. We also use a backtracking algorithm to go back when we run out of choices at some point. Following is the core code of the engine itself: Public Class NonDeterministicEngine Private _paramDict As New List(Of Tuple(Of String, IEnumerator)) 'Private _predicateDict As New List(Of Tuple(Of Func(Of Object, Boolean), IEnumerable(Of String))) Private _predicateDict As New List(Of Tuple(Of Object, IList(Of String))) Public Sub AddParam(ByVal name As String, ByVal values As IEnumerable) _paramDict.Add(New Tuple(Of String, IEnumerator)(name, values.GetEnumerator())) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(1, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(2, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(3, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(4, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(5, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(6, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of 
String)) CheckParamCount(7, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Public Sub AddRequire(ByVal predicate As Func(Of Object, Object, Object, Object, Object, Object, Object, Object, Boolean), ByVal paramNames As IList(Of String)) CheckParamCount(8, paramNames) _predicateDict.Add(New Tuple(Of Object, IList(Of String))(predicate, paramNames)) End Sub Sub CheckParamCount(ByVal count As Integer, ByVal paramNames As IList(Of String)) If paramNames.Count <> count Then Throw New Exception("Parameter count does not match.") End If End Sub Public Property IterationOver As Boolean Private _firstTime As Boolean = True Public ReadOnly Property Current As Dictionary(Of String, Object) Get If IterationOver Then Return Nothing Else Dim _nextResult = New Dictionary(Of String, Object) For Each item In _paramDict Dim iter = item.Item2 _nextResult.Add(item.Item1, iter.Current) Next Return _nextResult End If End Get End Property Function MoveNext() As Boolean If IterationOver Then Return False End If If _firstTime Then For Each item In _paramDict Dim iter = item.Item2 iter.MoveNext() Next _firstTime = False Return True Else Dim canMoveNext = False Dim iterIndex = _paramDict.Count - 1 canMoveNext = _paramDict(iterIndex).Item2.MoveNext If canMoveNext Then Return True End If Do While Not canMoveNext iterIndex = iterIndex - 1 If iterIndex = -1 Then Return False IterationOver = True End If canMoveNext = _paramDict(iterIndex).Item2.MoveNext If canMoveNext Then For i = iterIndex + 1 To _paramDict.Count - 1 Dim iter = _paramDict(i).Item2 iter.Reset() iter.MoveNext() Next Return True End If Loop End If End Function Function GetNextResult() As Dictionary(Of String, Object) While MoveNext() Dim result = Current If Satisfy(result) Then Return result End If End While Return Nothing End Function Function Satisfy(ByVal result As Dictionary(Of String, Object)) As Boolean For Each item In _predicateDict Dim pred = item.Item1 Select Case item.Item2.Count Case 1 Dim p1 = DirectCast(pred, Func(Of Object, Boolean)) Dim v1 = result(item.Item2(0)) If Not p1(v1) Then Return False End If Case 2 Dim p2 = DirectCast(pred, Func(Of Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) If Not p2(v1, v2) Then Return False End If Case 3 Dim p3 = DirectCast(pred, Func(Of Object, Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) If Not p3(v1, v2, v3) Then Return False End If Case 4 Dim p4 = DirectCast(pred, Func(Of Object, Object, Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) Dim v4 = result(item.Item2(3)) If Not p4(v1, v2, v3, v4) Then Return False End If Case 5 Dim p5 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) Dim v4 = result(item.Item2(3)) Dim v5 = result(item.Item2(4)) If Not p5(v1, v2, v3, v4, v5) Then Return False End If Case 6 Dim p6 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) Dim v4 = result(item.Item2(3)) Dim v5 = result(item.Item2(4)) Dim v6 = result(item.Item2(5)) If Not p6(v1, v2, v3, v4, v5, v6) Then Return False End If Case 7 Dim p7 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Object, Boolean)) Dim v1 = 
result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) Dim v4 = result(item.Item2(3)) Dim v5 = result(item.Item2(4)) Dim v6 = result(item.Item2(5)) Dim v7 = result(item.Item2(6)) If Not p7(v1, v2, v3, v4, v5, v6, v7) Then Return False End If Case 8 Dim p8 = DirectCast(pred, Func(Of Object, Object, Object, Object, Object, Object, Object, Object, Boolean)) Dim v1 = result(item.Item2(0)) Dim v2 = result(item.Item2(1)) Dim v3 = result(item.Item2(2)) Dim v4 = result(item.Item2(3)) Dim v5 = result(item.Item2(4)) Dim v6 = result(item.Item2(5)) Dim v7 = result(item.Item2(6)) Dim v8 = result(item.Item2(7)) If Not p8(v1, v2, v3, v4, v5, v6, v7, v8) Then Return False End If Case Else Throw New NotSupportedException End Select Next Return True End Function End Class    And now we can use the engine to solve the problem we mentioned above: Sub Test2() Dim engine = New NonDeterministicEngine() engine.AddParam("baker", {1, 2, 3, 4, 5}) engine.AddParam("cooper", {1, 2, 3, 4, 5}) engine.AddParam("fletcher", {1, 2, 3, 4, 5}) engine.AddParam("miller", {1, 2, 3, 4, 5}) engine.AddParam("smith", {1, 2, 3, 4, 5}) engine.AddRequire(Function(baker) As Boolean Return baker <> 5 End Function, {"baker"}) engine.AddRequire(Function(cooper) As Boolean Return cooper <> 1 End Function, {"cooper"}) engine.AddRequire(Function(fletcher) As Boolean Return fletcher <> 1 And fletcher <> 5 End Function, {"fletcher"}) engine.AddRequire(Function(miller, cooper) As Boolean 'Return miller = cooper + 1 Return miller > cooper End Function, {"miller", "cooper"}) engine.AddRequire(Function(smith, fletcher) As Boolean Return smith <> fletcher + 1 And smith <> fletcher - 1 End Function, {"smith", "fletcher"}) engine.AddRequire(Function(fletcher, cooper) As Boolean Return fletcher <> cooper + 1 And fletcher <> cooper - 1 End Function, {"fletcher", "cooper"}) engine.AddRequire(Function(a, b, c, d, e) As Boolean Return a <> b And a <> c And a <> d And a <> e And b <> c And b <> d And b <> e And c <> d And c <> e And d <> e End Function, {"baker", "cooper", "fletcher", "miller", "smith"}) Dim result = engine.GetNextResult() While Not result Is Nothing Console.WriteLine(String.Format("baker: {0}, cooper: {1}, fletcher: {2}, miller: {3}, smith: {4}", result("baker"), result("cooper"), result("fletcher"), result("miller"), result("smith"))) result = engine.GetNextResult() End While Console.WriteLine("Calculation ended.") End Sub   Also, this engine can solve the classic 8 queens puzzle and find out all 92 results for me. Sub Test3() ' The 8-Queens problem. Dim engine = New NonDeterministicEngine() ' Let's assume that a - h represents the queens in row 1 to 8, then we just need to find out the column number for each of them. 
engine.AddParam("a", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("b", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("c", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("d", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("e", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("f", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("g", {1, 2, 3, 4, 5, 6, 7, 8}) engine.AddParam("h", {1, 2, 3, 4, 5, 6, 7, 8}) Dim NotInTheSameDiagonalLine = Function(cols As IList) As Boolean For i = 0 To cols.Count - 2 For j = i + 1 To cols.Count - 1 If j - i = Math.Abs(cols(j) - cols(i)) Then Return False End If Next Next Return True End Function engine.AddRequire(Function(a, b, c, d, e, f, g, h) As Boolean Return a <> b AndAlso a <> c AndAlso a <> d AndAlso a <> e AndAlso a <> f AndAlso a <> g AndAlso a <> h AndAlso b <> c AndAlso b <> d AndAlso b <> e AndAlso b <> f AndAlso b <> g AndAlso b <> h AndAlso c <> d AndAlso c <> e AndAlso c <> f AndAlso c <> g AndAlso c <> h AndAlso d <> e AndAlso d <> f AndAlso d <> g AndAlso d <> h AndAlso e <> f AndAlso e <> g AndAlso e <> h AndAlso f <> g AndAlso f <> h AndAlso g <> h AndAlso NotInTheSameDiagonalLine({a, b, c, d, e, f, g, h}) End Function, {"a", "b", "c", "d", "e", "f", "g", "h"}) Dim result = engine.GetNextResult() While Not result Is Nothing Console.WriteLine("(1,{0}), (2,{1}), (3,{2}), (4,{3}), (5,{4}), (6,{5}), (7,{6}), (8,{7})", result("a"), result("b"), result("c"), result("d"), result("e"), result("f"), result("g"), result("h")) result = engine.GetNextResult() End While Console.WriteLine("Calculation ended.")End Sub (Chinese version of the post: http://www.cnblogs.com/RChen/archive/2010/05/17/1737587.html) Cheers,  
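As an aside, and purely as a sketch (this is not the author's engine), the same generate-and-test idea can be written with Python generators; itertools.product plays the role of the iterator state machine, although it enumerates exhaustively rather than backtracking:

    from itertools import product

    def solve(domains, constraints):
        # Lazily enumerate the cartesian product of the parameter domains
        # and yield each assignment that satisfies every constraint.
        names = list(domains)
        for combo in product(*(domains[n] for n in names)):
            env = dict(zip(names, combo))
            if all(pred(env) for pred in constraints):
                yield env

    floors = range(1, 6)
    people = ("baker", "cooper", "fletcher", "miller", "smith")
    solutions = solve(
        {p: floors for p in people},
        [lambda e: len(set(e.values())) == 5,
         lambda e: e["baker"] != 5,
         lambda e: e["cooper"] != 1,
         lambda e: e["fletcher"] not in (1, 5),
         lambda e: e["miller"] > e["cooper"],
         lambda e: abs(e["smith"] - e["fletcher"]) != 1,
         lambda e: abs(e["fletcher"] - e["cooper"]) != 1])
    # the classic answer: baker 3, cooper 2, fletcher 4, miller 5, smith 1
    print(list(solutions))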

    Read the article

  • Does a site's bounce rate influence Google rankings?

    - by Joel Spolsky
Does Google consider bounce rate or something similar in ranking sites? Background: here at Stack Exchange we noticed that the latest Google algorithm changes resulted in about a 20% dip in traffic to Server Fault (and a much smaller dip in traffic to Super User). Stack Overflow traffic was not affected. There was an article on WebProNews which hypothesized that bounce rate might be a ranking signal in Google's latest Panda update. According to Google Analytics, these are our bounce rates over the last month:

Site            Bounce Rate   Avg Time on Site
--------------  -----------   ----------------
SuperUser       84.67%        01:16
ServerFault     83.76%        00:53
Stack Overflow  63.63%        04:12

Now, technically, Google has no way to know the bounce rate. If you go to Google, search for something, and click on the first result, Google can't tell the difference between:

- a user who turns off their computer
- a user who goes to a completely different web site
- a user who spends hours clicking around on the website they landed on

What Google does know is how long it takes the user to come back to Google and do another search. According to the book In The Plex (page 47), Google distinguishes between what they call "short clicks" and "long clicks":

- A short click is a search where the user quickly comes back to Google and does another search. Google interprets this as a signal that the first search results were unsatisfactory.
- A long click is a search where the user doesn't search again for a long time.

The book says that Google uses this information internally, to judge the quality of their own algorithms. It also said that short click data in which someone retypes a slight variation of the search is used to fuel the "Did you mean...?" spell checking algorithm. So, my hypothesis is that Google has recently decided to use long click rates as a signal of a high quality site. Does anyone have any evidence of this? Have you seen any high-bounce-rate sites which lost traffic (or vice-versa)?

    Read the article

  • Need some advice regarding collision detection with the sprite changing its width and height

    - by Frank Scott
So I'm messing around with collision detection in my tile-based game and everything works fine and dandy using this method. However, now I am trying to implement sprite sheets so my character can have a walking and jumping animation. For one, I'd like to be able to have each frame of variable size, I think. I want collision detection to be accurate, and during a jumping animation the sprite's height will be shorter (because of the calves meeting the hamstrings). Again, this also works fine at the moment. I can get the character to animate properly each frame and cycle through animations. The problems arise when the width and height of the character change. Often its position will be corrected by the collision detection system and the character will be rubber-banded to random parts of the map or even go outside the map bounds. For some reason, when the width or height of the sprite is changed on the fly, the linked collision detection algorithm breaks down entirely. The solution I have found so far is to have a single width and height for the sprite that remains constant, and only adjust the source rectangle for drawing. However, I'm not sure exactly what to set as the sprite's constant bounding box because it varies so much with the different animations. So now I'm not sure what to do. I'm toying with the idea of pixel-perfect collision detection but I'm not sure if it would really be worth it. Does anyone know how Braid does its collision detection? My game is also a 2D sidescroller and I was quite impressed with how it was handled in that game. Thanks for reading.
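One common shape for that constant-hitbox approach, sketched here in Python/pygame terms purely for illustration (the sizes and feet-anchoring are assumptions, not taken from the post): physics uses one fixed rectangle, and animation frames only change the source rectangle used for drawing.

    import pygame

    class Player:
        HITBOX = (24, 44)  # assumed constant physics size, tuned by hand

        def __init__(self, x, y):
            self.feet = pygame.math.Vector2(x, y)  # world anchor point
            self.src = pygame.Rect(0, 0, 32, 48)   # current frame in the sheet

        def hitbox(self):
            # The collision box never changes size; jumping/crouching frames
            # only alter self.src, so the physics stays stable.
            w, h = self.HITBOX
            return pygame.Rect(int(self.feet.x - w / 2),
                               int(self.feet.y - h), w, h)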

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera, and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the finished shadow areas for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempt to set a depth bias when capturing the light's view depth produced no results, or unsatisfying ones. So here is my question: the MSDN article has a convoluted explanation of the slope-scale: bias = (m × SlopeScaleDepthBias) + DepthBias Where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ) Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
Preface This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox VM running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
    Try creating a simple pool and observe the message:

      root@solaris:~# zpool create mypool c3t2d0 c3t3d0
      'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

      root@solaris:~# zpool destroy mypool
      root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
      root@solaris:~# zpool status mypool
        pool: mypool
       state: ONLINE
        scan: none requested
      config:
        NAME          STATE   READ WRITE CKSUM
        mypool        ONLINE     0     0     0
          mirror-0    ONLINE     0     0     0
            c3t2d0    ONLINE     0     0     0
            c3t3d0    ONLINE     0     0     0
      errors: No known data errors

    Exercise Z.2: ZFS File Systems
    Task: You have to create file systems for later exercises.
    You can see that when a pool is created, a file system of the same name is created:

      root@solaris:~# zfs list
      NAME    USED   AVAIL  REFER  MOUNTPOINT
      mypool  86.5K  2.94G  31K    /mypool

    Create your file systems and mount points as follows:

      root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

      root@solaris:~# zfs list mypool/mydata1
      NAME            USED  AVAIL  REFER  MOUNTPOINT
      mypool/mydata1  31K   2.94G  31K    /data1

    Exercise Z.3: ZFS Compression
    Task: Try out the different forms of compression available in ZFS.
    Lab: Create a second file system with compression, fill both file systems with the same data, and observe the results.
    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

      compression=on | off | lzjb | gzip | gzip-N | zle
        Controls the compression algorithm used for this dataset.
        The lzjb compression algorithm is optimized for performance
        while providing decent data compression. Setting compression
        to on uses the lzjb compression algorithm. The gzip
        compression algorithm uses the same compression as the
        gzip(1) command. You can specify the gzip level by using the
        value gzip-N, where N is an integer from 1 (fastest) to 9
        (best compression ratio). Currently, gzip is equivalent to
        gzip-6 (which is also the default for gzip(1)).

    (A small stand-alone demonstration of the level-1 versus level-9 trade-off appears after this exercise.) Create a second file system with compression turned on. Note how you set and get your values separately:

      root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
      root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
      root@solaris:~# zfs get compression mypool/mydata1
      NAME            PROPERTY     VALUE  SOURCE
      mypool/mydata1  compression  off    default
      root@solaris:~# zfs get compression mypool/mydata2
      NAME            PROPERTY     VALUE   SOURCE
      mypool/mydata2  compression  gzip-9  local

    Now you can copy the contents of /usr/lib into both your normal and compressing file systems and observe the results. Don't forget the dot or period (".") in the find(1) command below:

      root@solaris:~# cd /usr/lib
      root@solaris:/usr/lib# find . -print | cpio -pdv /data1
      root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer - it has to perform the compression - but the results show the effect:

      root@solaris:/usr/lib# zfs list
      NAME            USED   AVAIL  REFER  MOUNTPOINT
      mypool          1.35G  1.59G  31K    /mypool
      mypool/mydata1  1.01G  1.59G  1.01G  /data1
      mypool/mydata2  341M   1.59G  341M   /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide.
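    As an aside (not part of the original lab): the level-1 versus level-9 trade-off mentioned above can be demonstrated with zlib, which implements the same DEFLATE algorithm gzip(1) uses. A minimal C sketch, assuming zlib and its headers are installed (link with -lz); the input is made artificially repetitive so it compresses well:

      #include <stdio.h>
      #include <zlib.h>

      int main(void)
      {
          static unsigned char src[65536];
          static unsigned char dst[72000];   /* > compressBound(65536) */

          /* Repetitive input: compresses heavily at any level. */
          for (size_t i = 0; i < sizeof(src); i++)
              src[i] = "solaris zfs "[i % 12];

          /* Compare the fastest and the best-ratio levels. */
          for (int level = 1; level <= 9; level += 8) {
              uLongf dlen = sizeof(dst);
              if (compress2(dst, &dlen, src, sizeof(src), level) != Z_OK)
                  return 1;
              printf("level %d: %u -> %lu bytes\n",
                     level, (unsigned)sizeof(src), (unsigned long)dlen);
          }
          return 0;
      }

    On real data such as /usr/lib the gap between levels - in both time and ratio - is far more visible; that time/ratio balance is exactly what the gzip-N property selects.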
    Exercise Z.4: ZFS Deduplication
    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously; the result is that only unique data is stored and common components are shared. (A conceptual sketch of block-level deduplication appears after this exercise.)
    Task: See how to implement deduplication and observe its effects.
    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

      root@solaris:/usr/lib# zfs destroy mypool/mydata2
      root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
      root@solaris:/usr/lib# rm -rf /data1/*
      root@solaris:/usr/lib# mkdir /data1/2nd-copy
      root@solaris:/usr/lib# zfs list
      NAME            USED   AVAIL  REFER  MOUNTPOINT
      mypool          1.02M  2.94G  31K    /mypool
      mypool/mydata1  43K    2.94G  43K    /data1
      root@solaris:/usr/lib# find . -print | cpio -pd /data1
      2142768 blocks
      root@solaris:/usr/lib# zfs list
      NAME            USED   AVAIL  REFER  MOUNTPOINT
      mypool          1.02G  1.99G  31K    /mypool
      mypool/mydata1  1.01G  1.99G  1.01G  /data1
      root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
      2142768 blocks
      root@solaris:/usr/lib# zfs list
      NAME            USED   AVAIL  REFER  MOUNTPOINT
      mypool          1.99G  1.96G  31K    /mypool
      mypool/mydata1  1.98G  1.96G  1.98G  /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property, dedupratio:

      root@solaris:/usr/lib# zpool get dedupratio mypool
      NAME    PROPERTY    VALUE  SOURCE
      mypool  dedupratio  4.30x  -

    Deduplication can also be checked using "zpool list":

      root@solaris:/usr/lib# zpool list
      NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
      mypool  2.98G  1001M  2.01G  32%  4.30x  ONLINE  -
      rpool   15.9G  6.66G  9.21G  41%  1.00x  ONLINE  -

    Before moving on to the next topic, destroy that dataset and free up some space:

      root@solaris:~# zfs destroy mypool/mydata1
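    To make the dedup mechanism concrete, here is the promised conceptual sketch in C. It is purely illustrative: ZFS keys its deduplication table off the blocks' checksums (such as SHA-256), not the toy FNV-1a hash used here, and it handles hash collisions properly, which this sketch does not.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define BLOCK_SIZE 128
      #define TABLE_SIZE 1024

      /* FNV-1a: a stand-in for the real per-block checksum. */
      static uint64_t fnv1a(const unsigned char *p, size_t n)
      {
          uint64_t h = 1469598103934665603ULL;
          while (n--) { h ^= *p++; h *= 1099511628211ULL; }
          return h;
      }

      int main(void)
      {
          unsigned char data[4 * BLOCK_SIZE];
          uint64_t seen[TABLE_SIZE] = { 0 };  /* toy dedup table */
          size_t stored = 0, total = 0;

          /* Two distinct blocks, each written twice. */
          memset(data,                  'A', 2 * BLOCK_SIZE);
          memset(data + 2 * BLOCK_SIZE, 'B', 2 * BLOCK_SIZE);

          for (size_t off = 0; off < sizeof(data); off += BLOCK_SIZE) {
              uint64_t h = fnv1a(data + off, BLOCK_SIZE);
              total++;
              if (seen[h % TABLE_SIZE] != h) {  /* first occurrence:  */
                  seen[h % TABLE_SIZE] = h;     /* store the block    */
                  stored++;
              }                                 /* else: reference it */
          }
          printf("blocks written: %zu, stored: %zu (dedup ratio %.2fx)\n",
                 total, stored, (double)total / stored);
          return 0;
      }

    Running it prints a 2.00x ratio - four blocks written, two stored - which is the same accounting the pool-wide dedupratio property reports.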
    Exercise Z.5: ZFS Encryption
    Task: Encrypt sensitive data.
    Lab: Explore basic ZFS encryption.
    This lab only covers the basics of ZFS encryption. In particular it does not cover the various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

      root@solaris:~# zfs create -o encryption=on mypool/data2
      Enter passphrase for 'mypool/data2': ********
      Enter again: ********
      root@solaris:~#

    Creation of a descendent dataset shows that encryption is inherited from the parent:

      root@solaris:~# zfs create mypool/data2/data3
      root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
      NAME                PROPERTY    VALUE              SOURCE
      mypool/data2        encryption  on                 local
      mypool/data2        keysource   passphrase,prompt  local
      mypool/data2        keystatus   available          -
      mypool/data2        checksum    sha256-mac         local
      mypool/data2/data3  encryption  on                 inherited from mypool/data2
      mypool/data2/data3  keysource   passphrase,prompt  inherited from mypool/data2
      mypool/data2/data3  keystatus   available          -
      mypool/data2/data3  checksum    sha256-mac         inherited from mypool/data2

    You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore changing a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration
    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. (A toy sketch of the copy-on-first-access idea appears after the lab text.)
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.
    Lab: Create the infrastructure for shadow migration and transfer one file system into another.
    First create the file system you want to migrate:

      root@solaris:~# zpool create oldstuff c3t4d0
      root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

      root@solaris:~# cd /var/adm
      root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

      root@solaris:~# pkg install shadow-migration
      Packages to install: 1
      Create boot environment: No
      Create backup boot environment: No
      Services to change: 1
      DOWNLOAD     PKGS  FILES  XFER (MB)
      Completed    1/1   14/14  0.2/0.2
      PHASE                       ACTIONS
      Install Phase               39/39
      PHASE                       ITEMS
      Package State Update Phase  1/1
      Image State Update Phase    2/2

    You then enable the shadowd service:

      root@solaris:~# svcadm enable shadowd
      root@solaris:~# svcs shadowd
      STATE   STIME    FMRI
      online  7:16:09  svc:/system/filesystem/shadowd:default

    Set the file system to be migrated to read-only:

      root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new zfs file system with the shadow property set to the file system to be migrated:

      root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

      root@solaris:~# shadowstat
                                        EST
                         BYTES   BYTES         ELAPSED
      DATASET            XFRD    LEFT   ERRORS TIME
      mypool/remembered  92.5M   -      -      00:00:59
      mypool/remembered  99.1M   302M   -      00:01:09
      mypool/remembered  109M    260M   -      00:01:19
      mypool/remembered  133M    304M   -      00:01:29
      mypool/remembered  149M    339M   -      00:01:39
      mypool/remembered  156M    86.4M  -      00:01:49
      mypool/remembered  156M    8E     29     (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. The same applies to compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual.

    That concludes this lab session.
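    Finally, the promised toy sketch of the copy-on-first-access idea behind shadow migration. This is conceptual only - the real mechanism lives in the kernel and the shadowd service - and every identifier in it is made up:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define NFILES 3

      /* Old (shadowed, read-only) file system and its new target. */
      static const char *old_fs[NFILES] = { "alpha", "beta", "gamma" };
      static char *new_fs[NFILES];        /* NULL = not migrated yet */

      /* The new file system serves every read itself, pulling a file
       * across from the old one the first time it is accessed. */
      static const char *read_file(int i)
      {
          if (new_fs[i] == NULL) {
              new_fs[i] = strdup(old_fs[i]);   /* copy on first access */
              printf("(migrated file %d from the old fs)\n", i);
          }
          return new_fs[i];                    /* now served locally */
      }

      int main(void)
      {
          printf("%s\n", read_file(1));   /* triggers the migration */
          printf("%s\n", read_file(1));   /* already on the new fs  */
          return 0;
      }

    The point of the pattern is visible even in the toy: a read never waits for the whole migration, only for the file actually being accessed, while shadowd works through the remainder in the background.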

    Read the article

  • Why are SW engineering interviews disproportionately difficult?

    - by stackoverflowuser2010
    First, some background on me. I have a PhD in CS and have had jobs both as a software engineer and as an R&D research scientist, both at Very Large Corporations You Know Very Well. I recently changed jobs and interviewed for both types of jobs (as I have done in the past). My observation: SW engineer job interviews are way, way disproportionately more difficult than CS researcher job interviews, yet the researcher job is higher paying, more competitive, more rewarding, more interesting, and has a higher upside.
    Here's a typical interview loop for a researcher:
    - Phone interview to see if my research is in alignment with the lab's research.
    - In person, give a presentation on my recent research for one hour (which represents maybe nine months' worth of work) and answer questions.
    - In person, one-on-one interviews with about five researchers, where they ask me very reasonable questions on my work/publications/patents, including: technical questions, where my work fits into related work, and how I can extend my work to new areas.
    Here's a typical interview loop for a SW engineer:
    - Phone interview where I'm asked algorithm questions and maybe do some coding. Pretty standard.
    - In-person interviews at the whiteboard where they drill the F*** out of you on esoteric C++ minutiae (e.g. how does a polymorphic virtual function call work), algorithms (make an all-pairs-shortest-path algorithm work for 1B vertices), system design (design a database load balancer), etc. This goes on for six or seven interviews. Ridiculous.
    Why would anyone be willing to put up with this? What is the point of asking about C++ trivia or writing code to prove yourself? Why not make the SE interview more like the researcher interview, where you give a talk about what you've done? How are technical job interviews in other fields, like physics, chemistry, civil engineering, mechanical engineering?

    Read the article
