Search Results


  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood that is application-dependent: I run a game server, and an attacker is flooding it with "getstatus" queries. The server answers every query, which pushes up to 30 Mb/s of output toward the attacker's IP and lags the server.

    Packet details: each packet starts with four 0xff bytes followed by getstatus, so theoretically the packet looks like "\xff\xff\xff\xffgetstatus ".

    I have tried a lot of iptables variations, including state matching and rate limiting, but they did not work. Rate limiting works well, but only while the game server is not started; as soon as the server starts, no iptables rule seems to block the flood. Does anyone have other solutions? Someone asked me to contact the provider and get it done at the network/router level, but that looks unlikely and I believe they would not do it, since it would also affect other clients.

    Responding to the answers so far:

    Firstly, it is a VPS, so the provider cannot do it for me.

    Secondly, I do not care that something is coming in; since the flood response is generated by the application, there has to be an OS-level way to block the outgoing packets. At the very least the outgoing packets must be stopped.

    Thirdly, it is not a DDoS: just 400 kb/s of input generates 30 Mb/s of output from my game server, which never happens in a DDoS. Asking for a provider/hardware-level solution would make sense in that case, but this one is different.

    And yes, banning the attacker's IP stops the flood of outgoing packets, but he has many more IP addresses, as he spoofs his original one, so I need something that blocks him automatically. I have also tried a lot of firewalls, but as you know they are just front-ends to iptables, so if something does not work in iptables, what would a firewall do?

    These were the rules I tried:

        iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource
        iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP

    They work for attacks on unused ports, but when the server is listening and responding to the attacker's incoming queries, they never work.

    Okay Tom.H, your rules were working when I modified them somehow like this:

        iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource
        iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP

    They worked very well for about 3 days: the string "xxxxxxxxxx" was rate-limited, blocked if someone flooded, and clients were not affected. But just today I tried updating the chain to remove a previously blocked IP, which meant flushing the chain and restoring the rules (iptables -F and iptables -X) while some clients, including me, were already connected to the servers. After restoring the rules, the matched string is now blocked completely for some clients while others are not affected. So does this mean I need to restart the server, or why else would this happen, given that the last time the rules were working no one was connected?
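    One approach that might be worth testing, rather than a definitive fix: rate-limit the "getstatus" query per source address before the game server ever answers it, and optionally cap the reply volume on the way out so a spoofed flood cannot turn the server into an amplifier. This is only a sketch: $GAME_PORT is a placeholder for the actual game port, the rate values are illustrative, and it assumes an iptables build that provides the string and hashlimit matches.

        # Drop query packets from any single source that sends more than 5 "getstatus"
        # requests per second; the hex pattern below is "\xff\xff\xff\xffgetstatus".
        iptables -A INPUT -p udp --dport $GAME_PORT \
          -m string --algo bm --hex-string '|ff ff ff ff 67 65 74 73 74 61 74 75 73|' \
          -m hashlimit --hashlimit-name getstatus --hashlimit-mode srcip \
          --hashlimit-above 5/sec --hashlimit-burst 10 -j DROP

        # Optional: cap outgoing replies per destination as well, since the source
        # addresses are spoofed and an INPUT-side per-source limit alone may not help.
        iptables -A OUTPUT -p udp --sport $GAME_PORT \
          -m hashlimit --hashlimit-name statusreply --hashlimit-mode dstip \
          --hashlimit-above 20/sec --hashlimit-burst 40 -j DROP

    Because the attacker spoofs source addresses, the OUTPUT-side cap is the part that actually bounds the 30 Mb/s amplification; the INPUT rule mainly spares the game server the work of replying.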


  • How Hacker Can Access VPS CentOS 6 content?

    - by user2118559
    Just want to understand. Please correct my mistakes and give advice. As I see it, a hacker can get into a VPS in these ways:

    1. Through a console terminal, for example using PuTTY. To get access, the hacker needs to know the port number, the username and the password. The port number he can find by scanning open ports and then trying to log in; as I understand it, the only way in is to know the username and password. To block port scanning (or at least make it more difficult), iptables needs to be configured in /etc/sysconfig/iptables. I followed this tutorial https://www.digitalocean.com/community/articles/how-to-setup-a-basic-ip-tables-configuration-on-centos-6 and ended up with:

        *nat
        :PREROUTING ACCEPT [87:4524]
        :POSTROUTING ACCEPT [77:4713]
        :OUTPUT ACCEPT [77:4713]
        COMMIT
        *mangle
        :PREROUTING ACCEPT [2358:200388]
        :INPUT ACCEPT [2358:200388]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [2638:477779]
        :POSTROUTING ACCEPT [2638:477779]
        COMMIT
        *filter
        :INPUT DROP [1:40]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [339:56132]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
        -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
        -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 21 -j ACCEPT
        COMMIT

    Regarding the ports that need to be open: if the site does not use SSL, it seems port 80 must stay open for the website, plus ssh (default 22) and ftp (default 21), restricted to the IP address that is allowed to connect. So if the hacker connects from another IP address, he cannot get in even if he knows the username and password?

    Regarding e-mail I am not sure. If I send mail using Gmail ("Send mail as: use Gmail to send from your other email addresses"), then port 25 is not necessary. For incoming mail I use Email Forwarding at dynadot.com. Does that mean the messages "never arrive at the VPS" (before reaching the VPS they are forwarded, for example to Gmail)? If mail never arrives at the VPS, then it seems port 110 is also not necessary. If the site uses only SSL, port 443 must be opened and port 80 closed.

    I do not understand port 3306. In PuTTY, /bin/netstat -lnp shows:

        Proto Recv-Q Send-Q Local Address       Foreign Address     State   PID/Program name
        tcp        0      0 0.0.0.0:3306        0.0.0.0:*           LISTEN  992/mysqld

    As I understand it, that is MySQL, but I do not remember opening such a port (maybe it was opened automatically when MySQL was installed?). MySQL is installed on the same server as all the other content. I need to understand what to do about port 3306.

    2. A hacker may also be able to reach the console terminal through the VPS hosting provider's control panel (serial console emergency access). As I understand it, only the console terminal (PuTTY, etc.) can make "global" changes (changes that cannot be made over ftp).

    3. A hacker can get into my VPS by exploiting some hole in my PHP code and uploading, for example, a trojan. Unfortunately, I have already faced the situation where the VPS was hacked; as I understand, it was because I used ZPanel. On the VPS (in /etc/zpanel/panel/bin) I found one PHP file that was identified as a trojan by some virus scanners (at virustotal.com). I experimented with the file on a local computer (WAMP), and it appears the hacker can see all the content of the VPS, rename, delete, upload and so on.

    In my opinion, if I run a command like chattr +i /etc/php.ini in PuTTY, then the hacker would not be able to modify php.ini. Is there any other way to get into the VPS?
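    On the port 3306 question specifically: with the ruleset above, the INPUT policy is DROP and there is no ACCEPT rule for 3306, so connections to MySQL from outside should already be refused. Still, if nothing outside the VPS needs to reach MySQL, it is common to also bind it to the loopback address. A rough sketch, assuming a stock CentOS 6 MySQL install (config file /etc/my.cnf, service name mysqld):

        # 1) In /etc/my.cnf, under the [mysqld] section, add:
        #        bind-address = 127.0.0.1
        #    then restart MySQL:
        service mysqld restart

        # 2) Verify it now listens only on localhost (127.0.0.1:3306, not 0.0.0.0:3306):
        /bin/netstat -lnp | grep 3306

    MySQL listens on port 3306 by default when installed; nothing in the firewall was "opened" for it, which is why it shows up in netstat even though there is no ACCEPT rule for it.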


  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1) Copyright (c) 2010, Oracle Corporation. All Rights Reserved. In this Document  Purpose  Last Review Date  Instructions for the Reader  Troubleshooting Details     1. Scope and Application      2. Definitions and Classifications     3. How to Use This Guide     4. Basic AQ Propagation Troubleshooting     5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages     6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment     7. Performance Issues  References Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2Information in this document applies to any platform. Purpose This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues. Last Review Date December 20, 2010 Instructions for the Reader A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting. Troubleshooting Details 1. Scope and Application This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution.It can also be used a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into five parts: Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide) Section 3: How to Use this Guide (to be used as a start part for determining the scope of the problem and what sections to consult) Section 4. Basic AQ propagation troubleshooting (applies to both AQ propagation of user enqueued and dequeued messages as well as Oracle Streams propagations) Section 5. Additional troubleshooting steps for AQ propagation of user enqueued and dequeued messages Section 6. Additional troubleshooting steps for Oracle Streams propagation Section 7. Performance issues 2. Definitions and Classifications Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered. 2.1. What Type of Propagation is Being Used? 2.1.1. Buffered Messaging For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent. 2.1.2. Propagation mode - queue-to-dblink vs queue-to-queue As of 10.2, an AQ propagation can also be defined as queue-to-dblink, or queue-to-queue: queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. 
A single propagation schedule is used to propagate messages to all subscribing queues. Hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database. queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control on the propagation schedule for message delivery. This new propagation mode also supports transparent failover when propagating to a destination Oracle RAC system. With queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different. The default is queue-to-dblink. To verify if queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked.See the following note for a method to switch between the two modes:Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink 2.1.3. Combined Capture and Apply (CCA) for Streams In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use: COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10SELECT CAPTURE_NAME, DECODE(OPTIMIZATION,0, 'No','Yes') OPTIMIZATIONFROM V$STREAMS_CAPTURE; Also, see the following note:Document 463820.1 Streams Combined Capture and Apply in 11g 2.2. Queue Table Compatibility There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility: 8.0 - earliest version, deprecated in 10.2 onwards 8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information 10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0 2.3. Single vs Multiple Consumer Queue Tables If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible. 3. How to Use This Guide 3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow? Run the following query on the source database for the propagation (assuming that it is running): select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>'; If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7. 3.2. Propagation Between Persistent User-Created Queues See Sections 4 and 5 (and optionally Section 6 if performance is an issue). 3.3. 
Propagation Between Buffered User-Created Queues

See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

3.4. Propagation between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)

See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

3.5. Propagation between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)

Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

3.6. Messaging Gateway Propagations

This note does not apply to Messaging Gateway propagations.

4. Basic AQ Propagation Troubleshooting

4.1. Double-check Your Code

Make sure that you are consistent in your usage of database link names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

4.2. Verify that Job Queue Processes are Running

4.2.1. Versions 10.2 and Lower - DBA_JOBS Package

For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package. The propagation is performed by job queue background processes, so verify that there are sufficient processes available for the propagation. There should be at least 4 job queue processes running, and preferably more, depending on the number of other jobs running in the database. Note that for AQ-specific work, AQ will only ever use half of the job queue processes available. An issue caused by an inadequate job queue processes parameter setting is described in the following note:

Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

4.2.1.1. Job Queue Processes in Initialization Parameter File

The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

4.2.1.2. Job Queue Processes in Memory

The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.1.3. OS PIDs Corresponding to Job Queue Processes

Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

and these SPIDs can be used to check at the operating system level that they exist. In 8i a job queue process will have a name similar to ora_snp1_<instance_name>. In 9i onwards you will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.2.2. Version 11.1 and Above - Oracle Scheduler

In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created. If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:

Document 1083608.1 11g Streams and Oracle Scheduler

4.2.2.1. Job Queue Processes in Initialization Parameter File

The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

To set the JOB_QUEUE_PROCESSES parameter to its default value, run:

connect / as sysdba
alter system reset JOB_QUEUE_PROCESSES;

and then bounce the instance.

4.2.2.2. Job Queue Processes in Memory

The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.2.3. OS PIDs Corresponding to Job Queue Processes

Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';

and these SPIDs can be used to check at the operating system level that they exist. You will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.3. Check the Alert Log and Any Associated Trace Files

The first place to check for propagation failures is the alert logs at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails, it will always write an error stack to the alert log. This error stack will also be written to a job queue process trace file, which is written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and to the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing, which means that the problem could be with the setup of the schedule. In the example below, the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem.
Thu Feb 14 10:40:05 2002 Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:ORA-04052: error occurred when looking up Remote object [email protected]: error occurred at recursive SQL level 4ORA-02068: following severe error from SHANE816ORA-03114: not connected to ORACLEORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770ORA-06512: at "SYS.DBMS_AQADM", line 548ORA-06512: at line 1 Other potential errors that may be written to the alert log can be found in the following notes:Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)Document 846297.1 AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn] (10.2, 11.1)Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)Document 118884.1 How to unschedule a propagation schedule stuck in pending stateDocument 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueDocument 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757 4.3.1. Errors Related to Incorrect Network Configuration The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by tnsnames.ora file or database links being configured incorrectly: - ORA-12154: TNS:could not resolve service name- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor- ORA-12514: TNS:listener could not resolve SERVICE_NAME - ORA-12541: TNS-12541 TNS:no listener 4.4. Check the Database Links Exist and are Functioning Correctly For schedules to remote databases confirm the database link exists via. 
SQL> col DBLINK for a45
SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
  2  from DBA_QUEUE_SCHEDULES
  3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

QNAME                          DBLINK
------------------------------ ---------------------------------------------
MY_QUEUE                       ORCL102B.WORLD

Connect as the owner of the link and select across it to verify that it works and connects to the database we expect, i.e.

select * from ALL_QUEUES@ORCL102B.WORLD;

You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

4.5. Has Propagation Been Correctly Scheduled?

Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check that the destination is as intended and spelled correctly.

SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      DESTINATION        S PROCESS
------- ---------- ------------------ - -----------
AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation were to a remote (different) database, a database link would appear in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

SCHEMA QNAME     LAST_RUN_DATE        NEXT_RUN_DATE
------ --------- -------------------- --------------------
AQADM  MULTIPLEQ 13-FEB-2002 13:18:57 13-FEB-2002 13:20:30

In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call (a rough sketch of such a call appears just below). In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
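As an aside to the NEXT_TIME discussion above, a minimal sketch of the ALTER_PROPAGATION_SCHEDULE call referenced there; the queue name, destination link and parameter values are illustrative only and should be replaced with your own:

begin
  dbms_aqadm.alter_propagation_schedule(
    queue_name  => 'AQADM.MULTIPLEQ',   -- source queue (example name from above)
    destination => 'ORCL102B.WORLD',    -- database link to the target (illustrative)
    duration    => NULL,                -- NULL = the propagation window never closes
    next_time   => NULL,                -- not needed when the window is infinite
    latency     => 5);                  -- seconds to wait once the queue is empty
end;
/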
The columns are more or less the same as DBA_QUEUE_SCHEDULES so you just need to recognize the job and verify that it exists. SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';JOB WHAT---- ----------------- 720 next_date := sys.dbms_aqadm.aq$_propaq(job); For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead: SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';JOB_NAME------------------------------AQ_JOB$_41 If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely. 4.6. Is the Schedule Executing but Failing to Complete? Run the following query: SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;FAILURES LAST_ERROR_MSG------------ -----------------------1 ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueingORA-02063: preceding line from SHANE816 The failures column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?If a schedule is active and no errors are being reported then the source queue may not have any messages to be propagated. 4.7. Do the Propagation Notification Queue Table and Queue Exist? Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped and the corresponding queue table never be dropped. 
10g and below

The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ$_PROP_TABLE_1

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ$_PROP_NOTIFY_1              YES     YES
AQ$_AQ$_PROP_TABLE_1_E         NO      NO

If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

11g and above

The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ_PROP_TABLE

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ_PROP_NOTIFY                 YES     YES
AQ$_AQ_PROP_TABLE_E            NO      NO

If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?

Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

DESTINATION
-----------------------------------------------------------------------------
"AQADM"."INQ"@M2V102.ES

SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from ALL_QUEUES@M2V102.ES;

OWNER    NAME   ENQUEUE     DEQUEUE
-------- ------ ----------- -----------
AQADM    INQ    YES         YES

4.9. Do the Target and Source Database Charactersets Differ?

If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether only a particular combination of charactersets causes the problem.

4.10. Check the Queue Table Type Agreement

Propagation is not possible between queue tables whose types differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on (a minimal sketch follows below; Section 5.2 shows a fuller example).
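A minimal sketch of such a check, reusing the queue names and database link from the example in Section 4.8 purely for illustration (see Section 5.2 for the full discussion of this procedure):

set serveroutput on
declare
  rc_value number;
begin
  dbms_aqadm.verify_queue_types(
    src_queue_name  => 'AQADM.OUTQ',
    dest_queue_name => 'AQADM.INQ',
    destination     => 'M2V102.ES',   -- database link to the target database
    rc              => rc_value);
  -- 1 means the queue types agree and propagation is possible; 0 means they do not
  dbms_output.put_line('rc_value is ' || rc_value);
end;
/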
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'.For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work, unless the queues are Oracle Streams ANYDATA queues.See the following notes for issues caused by lack of type agreement:Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueDocument 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Chararacter Length Semantics in the ADT 4.11. Enable Propagation Tracing 4.11.1. System Level This is set it in the init.ora/spfile as follows: event="24040 trace name context forever, level 10" and restart the instanceThis event cannot be set dynamically with an alter system command until version 10.2: SQL> alter system set events '24040 trace name context forever, level 10'; To unset the event: SQL> alter system set events '24040 trace name context off'; Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created. 4.11.2. Attaching to a Specific Process We can also attach to an existing job queue processes that is running a propagation schedule and trace it individually using the oradebug utility, as follows:10.2 and below connect / as sysdbaselect p.SPID, p.PROGRAM from v$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 11g connect / as sysdbacol PROGRAM for a30select p.SPID, p.PROGRAM, j.JOB_NAMEfrom v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 4.11.3. Further Tracing The previous tracing steps only trace the job queue process executing the propagation on the source. 
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database.These following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information.In order to identify the propagation receiver process you need to execute the query as a user with privileges to access the v$ views in both the local and remote databases so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link.The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.10.2 and below - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from v$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 10.2 and below - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=sr.PROCESS; 11g - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 11g - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=sr.PROCESS;   5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages 5.1. Check the Privileges of All Users Involved Ensure that the owner of the database link has the necessary privileges on the aq packages. SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;TABLE_NAME PRIVILEGE------------------------------ ----------------------------------------DBMS_LOCK EXECUTEDBMS_AQ EXECUTEDBMS_AQADM EXECUTEDBMS_AQ_BQVIEW EXECUTEQT52814_BUFFER SELECT Note that when queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table. SQL> select * from USER_ROLE_PRIVS;USERNAME GRANTED_ROLE ADM DEF OS_------------------------------ ------------------------------ ---- ---- ---AQ_USER1 AQ_ADMINISTRATOR_ROLE NO YES NOAQ_USER1 CONNECT NO YES NOAQ_USER1 RESOURCE NO YES NO It is good practice to configure central AQ administrative user. All admin and processing jobs are created, executed and administered as this user. 
This configuration is not mandatory however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved. Privileges for an AQ Administrative user Execute on DBMS_AQADM Execute on DBMS_AQ Granted the AQ_ADMINISTRATOR_ROLE Privileges for an AQ user Execute on DBMS_AQ Execute on the message payload Enqueue privileges on the remote queue Dequeue privileges on the originating queue Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to login to the destination through the database link has been granted privileges to use AQ. 5.2. Verify Queue Payload Types AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify if the source and destination's payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking will be stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier OID of the source queue and the address database link of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call made on the source database can verify whether we can propagate between the source and the destination queue tables. connect aq_user1/[email protected] serverout onDECLARErc_value number;BEGINDBMS_AQADM.VERIFY_QUEUE_TYPES(src_queue_name => 'AQ_USER1.Q_1', dest_queue_name => 'AQ_USER2.Q_2',destination => 'dbl_aq_user2.es',rc => rc_value);dbms_output.put_line('rc_value code is '||rc_value);END;/ If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types the following sql can be used to extract the DDL for a specific type with' %' changed appropriately on the source and target. This can then be compared for the source and target. SET LONG 20000 set pagesize 50 EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE',false); SELECT DBMS_METADATA.GET_DDL('TYPE',t.type_name) from user_types t WHERE t.type_name like '%'; EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT'); 5.3. Check Message State and Destination The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the meta-data associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table). 
SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';QUEUE_TABLE --------------------MULTIPLEQTABLE For a small amount of messages in a multiple-consumer queue table, the following query can be run: SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';MSG_STATE CONSUMER_NAME ADDRESS-------------- ----------------------- -------------READY AQUSER2 [email protected] AQUSER1READY AQUSER3 AQADM.INQ In this example we see 2 messages ready to be propagated to remote queues and 1 that is not. If the address column is blank, the message is not scheduled for propagation and can only be dequeued from the queue upon which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the address column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES). This demonstrates that the message should be propagated to a queue at a remote database. The third row does not include a database link so will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.A more realistic query in an environment where the queue table contains thousands of messages is8.0.3-compatible multiple-consumer queue table and all compatibility single-consumer queue tables select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>  group by MSG_STATE, QUEUE; 8.1.3 and 10.0-compatible queue tables select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>group by MSG_STATE, QUEUE, CONSUMER_NAME; For multiple-consumer queue tables, if you did not see the expected CONSUMER_NAME , check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view. This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure and also those enqueued using a recipient list. SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;QUEUE NAME ADDRESS---------- ----------- -------------MULTIPLEQ AQUSER2 [email protected] AQUSER1 In this example we have 2 subscribers registered with the queue. We have a local subscriber AQUSER1, and a remote subscriber AQUSER2, on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue every message enqueued to this queue will be propagated to INQ at M2V102.ES.For 8.1 style and above multiple consumer queue tables, you can also check the following information at the target: select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>'; For 8.0 style queues, if the queue table supports multiple consumers you can obtain the same information from the history column of the queue table: select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGIDfrom AQ$<queue_table_name> t, table(t.history) h where t.Q_NAME = '<QUEUE_NAME>'; A non-NULL TRANSACTION_ID indicates that the message was successfully propagated. 
Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination. 6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment 6.1. Is the Propagation Enabled? For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this: SELECT p.PROPAGATION_NAME, DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled','N', 'Enabled') SCHEDULE_DISABLED, s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSGFROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION pWHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION) AND s.SCHEMA = p.SOURCE_QUEUE_OWNER AND s.QNAME = p.SOURCE_QUEUE_NAME AND MESSAGE_DELIVERY_MODE = 'PERSISTENT' order by PROPAGATION_NAME; At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt to disable the propagation and then re-enable it can be made. In the examples below, for the propagation named STRMADMIN_PROPAGATE where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:10.2 and above exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE'); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); If the above does not fix the problem, stop the propagation specifying the force parameter (2nd parameter on stop_propagation) as TRUE: exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); The statistics for the propagation as well as any old error messages are cleared when the force parameter is set to TRUE. Therefore if the propagation schedule is stopped with FORCE set to TRUE, and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.9.2 or 10.1 exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms.aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation: exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); Typically if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed. 6.2. Check Propagation Rule Sets and Transformations Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated. 
Finally inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.The following query shows what rule sets are assigned to a propagation: select PROPAGATION_NAME, RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"from DBA_PROPAGATION; The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set: set long 4000select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES rwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER and RULE_SET_NAME in(select RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;   set long 4000select c.PROPAGATION_NAME, rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES r ,DBA_PROPAGATION cwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER andrsr.RULE_SET_OWNER=c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME=c.NEGATIVE_RULE_SET_NAMEand rsr.RULE_SET_NAME in(select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME; 6.3. Determining the Total Number of Messages and Bytes Propagated As in Section 3.1, determining if messages are flowing can be instructive to see whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2), but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations two views are available that can assist with this that can show the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the Source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.Source: select QUEUE_SCHEMA, QUEUE_NAME, DBLINK,HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTESfrom V$PROPAGATION_SENDER; Target: select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS from V$PROPAGATION_RECEIVER; 6.4. Check Buffered Subscribers The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from: select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS; The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database). 6.5. Common Streams Propagation Errors 6.5.1. ORA-02082: A loopback database link must have a connection qualifier. This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database. 6.5.2. ORA-25307: Enqueue rate too high. 
Enable flow control DBA_QUEUE_SCHEDULES will display this informational message for propagation when the automatic flow control (10g feature of Streams) has been invoked.Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control. This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which messages are being enqueued into at target queue.This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will be resumed automatically when the target site is able to accept more messages.The following document contains more information:Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow controlSee the following document for one potential cause of this situation:Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State 6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages This error typically occurs when the target database is RAC and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages. 6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation For cause/fixes refer to:Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation 6.5.5. Stopping or Dropping a Streams Propagation Hangs See the following note:Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang 6.6. Streams Propagation-Related Notes for Common Issues Document 437838.1 Streams Specific PatchesDocument 749181.1 How to Recover Streams After Dropping PropagationDocument 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environmentDocument 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not ResolveDocument 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'Document 333068.1 ORA-23603: Streams Enqueue Aborted Eue To Low SGADocument 363496.1 Ora-25315 Propagating on RAC StreamsDocument 368237.1 Unable to Unschedule Propagation. 
Streams Queue is InvalidDocument 436332.1 dbms_propagation_adm.stop_propagation hangsDocument 727389.1 Propagation Fails With ORA-12528Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop.RulesetDocument 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waitsDocument 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams EnvironmentDocument 1059029.1 Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagationDocument 556309.1 Changing Propagation/ queue_to_queue : false -> true does does not work; no LCRs propagatedDocument 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''Document 311021.1 Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configuredDocument 359971.1 STREAMS propagation to Primary of physical Standby configuation errors with Ora-01033, Ora-02068Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747 7. Performance Issues A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this more usually is due to a slow apply process at the target rather than a slow propagation. Propagation could be inferred to be slow if the message statistics are changing, and the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:Document 335516.1 Master Note for Streams Performance RecommendationsDocument 730036.1 Overview for Troubleshooting Streams Performance IssuesDocument 780733.1 Streams Propagation Tuning with Network ParametersWhite Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdfWhite Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, See APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORKFor basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable. 
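The check described above can be scripted; a rough sketch, run on the source database, using the same views referenced earlier in this note:

-- Is the capture process waiting on flow control?
select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;

-- Is the ORA-25307 flow-control warning present for the propagation schedule?
select QNAME, MESSAGE_DELIVERY_MODE, LAST_ERROR_MSG
from DBA_QUEUE_SCHEDULES
where LAST_ERROR_MSG like '%ORA-25307%';

If the capture state is PAUSED FOR FLOW CONTROL but no ORA-25307 warning appears for the schedule, the propagation itself is the more likely bottleneck; if the warning does appear, the target side (usually the apply process) is the more likely suspect.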
References NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their InterpretationNOTE:102771.1 - Advanced Queueing Propagation using PL/SQLNOTE:1059029.1 - Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagationNOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"NOTE:1083608.1 - 11g Streams and Oracle SchedulerNOTE:1087324.1 - ORA-01405 ORA-01422 reported by Adavanced Queueing Propagation schedules after RAC reconfigurationNOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' StateNOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It HangNOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams EnvironmentNOTE:118884.1 - How to unschedule a propagation schedule stuck in pending stateNOTE:1203544.1 - AQ PROPAGATION ABORTED WITH ORA-600[OCIKSIN: INVALID STATUS] ON SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE AFTER UPGRADENOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueNOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To SelfNOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow controlNOTE:311021.1 - Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configuredNOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up StatspackNOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Eue To Low SGANOTE:335516.1 - Master Note for Streams Performance RecommendationsNOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Chararacter Length Semantics in the ADT.NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuation errors with Ora-01033, Ora-02068NOTE:363496.1 - Ora-25315 Propagating on RAC StreamsNOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed MessageNOTE:368237.1 - Unable to Unschedule Propagation. 
Streams Queue is InvalidNOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environmentNOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams PropagationNOTE:436332.1 - dbms_propagation_adm.stop_propagation hangsNOTE:437838.1 - Streams Specific PatchesNOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waitsNOTE:463820.1 - Streams Combined Capture and Apply in 11gNOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201NOTE:556309.1 - Changing Propagation/ queue_to_queue : false -> true does does not work; no LCRs propagatedNOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not ResolveNOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1NOTE:727389.1 - Propagation Fails With ORA-12528NOTE:730036.1 - Overview for Troubleshooting Streams Performance IssuesNOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop.RulesetNOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tablesNOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTPNOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'NOTE:749181.1 - How to Recover Streams After Dropping PropagationNOTE:780733.1 - Streams Propagation Tuning with Network ParametersNOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view ?NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblinkNOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''NOTE:846297.1 - AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn]NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
    The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It’s not something that’s super obvious at first especially if you don’t work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used but I thought I’d share some of the core techniques I find useful. To start, let’s assume you have the following ViewModel class (this is from my Silverlight Firestarter talk available to watch online here if you’re interested in getting started with WCF RIA Services): public class AdminViewModel : ViewModelBase { BookClubContext _Context = new BookClubContext(); public AdminViewModel() { if (!DesignerProperties.IsInDesignTool) { LoadBooks(); } } private void LoadBooks() { _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null); } private void LoadBooksCallback(LoadOperation<Book> books) { Books = new ObservableCollection<Book>(books.Entities); } } Notice that BookClubContext is being used directly in the ViewModel class. There’s nothing wrong with that of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data and I like to make it more loosely-coupled. To do this I create what I call a “Service Agent” class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data but doesn’t know how data should be stored and isn’t used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I’m using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>: public class ServiceAgent { BookClubContext _Context = new BookClubContext(); public void LoadBooks(Action<ObservableCollection<Book>> callback) { _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback); } public void LoadBooksCallback(LoadOperation<Book> lo) { //Check for errors of course...keeping this brief var books = new ObservableCollection<Book>(lo.Entities); var action = (Action<ObservableCollection<Book>>)lo.UserState; action.Invoke(books); } } This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don’t have a separate callback method and don’t have to worry about passing any user state or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above). 
public class ServiceAgent { BookClubContext _Context = new BookClubContext(); public void LoadBooks(Action<ObservableCollection<Book>> callback) { _Context.Load(_Context.GetBooksQuery(), (lo) => { var books = new ObservableCollection<Book>(lo.Entities); callback.Invoke(books); }, null); } } A ViewModel class can then call into the ServiceAgent to retrieve books yet never know anything about the DomainContext object or even know how data is loaded behind the scenes: public class AdminViewModel : ViewModelBase { ServiceAgent _ServiceAgent = new ServiceAgent(); public AdminViewModel() { if (!DesignerProperties.IsInDesignTool) { LoadBooks(); } } private void LoadBooks() { _ServiceAgent.LoadBooks(LoadBooksCallback); } private void LoadBooksCallback(ObservableCollection<Book> books) { Books = books } } You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code just like I did earlier with the LoadBooks method in the ServiceAgent class.  If you’re into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime. There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
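As a follow-on to the DI remark above, here is a minimal sketch of what that interface extraction could look like. IServiceAgent is a name introduced here purely for illustration (it is not part of WCF RIA Services), and the ViewModel now takes the agent through its constructor so a test can supply a fake that returns canned data without ever touching a DomainContext.

public interface IServiceAgent
{
    // Same signature as the concrete ServiceAgent shown above.
    void LoadBooks(Action<ObservableCollection<Book>> callback);
}

public class ServiceAgent : IServiceAgent
{
    BookClubContext _Context = new BookClubContext();

    public void LoadBooks(Action<ObservableCollection<Book>> callback)
    {
        _Context.Load(_Context.GetBooksQuery(),
            (lo) => callback.Invoke(new ObservableCollection<Book>(lo.Entities)), null);
    }
}

public class AdminViewModel : ViewModelBase
{
    private readonly IServiceAgent _ServiceAgent;

    // The agent is injected, so the ViewModel never needs to know how data is loaded.
    public AdminViewModel(IServiceAgent serviceAgent)
    {
        _ServiceAgent = serviceAgent;
        if (!DesignerProperties.IsInDesignTool)
        {
            _ServiceAgent.LoadBooks(books => Books = books);
        }
    }
}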

    Read the article

  • System.Threading.ThreadAbortException executing WCF service

    - by SURESH GIRIRAJAN
    On one of our production servers we recently ran into an issue after updating the web.config and then trying to browse the service: the service stopped responding and the warning below appeared in the application log. The service is a WCF service, a BizTalk orchestration exposed as a service. We have other production servers where this issue never occurred, so something is different about this one. After going through a lot of forums I came across a Microsoft service pack and hotfix related to FCN (file change notification), but I did not want to apply a patch to this server alone, because we would then need to apply it to all the other servers as well. The solution turned out to be simple: I dropped the existing website, created a new site with a different name and the updated web.config, and browsed the service. Then I dropped that site, recreated the original website, and it worked fine without any issue.
Event Viewer:
Event Type:        Warning
Event Source:      ASP.NET 2.0.50727.0
Event Category:    Web Event
Event ID:          1309
Date:              6/6/2011
Time:              5:41:42 PM
User:              N/A
Computer:          PRODP02
Description:
Event code: 3005
Event message: An unhandled exception has occurred.
Event time: 6/6/2011 5:41:42 PM
Event time (UTC): 6/6/2011 9:41:42 PM
Event ID: a71769f42b304355a58c482bfec267f2
Event sequence: 3
Event occurrence: 1
Event detail code: 0
Application information:
    Application domain: /LM/W3SVC/518296899/ROOT/PortArrivals-2-129518698821558995
    Trust level: Full
    Application Virtual Path: /TESTSVC
    Application Path: D:\inetpub\wwwroot\RFID\TESTSVC\
    Machine name: PRODP02
Process information:
    Process ID: 8752
    Process name: w3wp.exe
    Account name: domain\BizTalk_Svc_Hostlso
Exception information:
    Exception type: ThreadAbortException
    Exception message: Thread was being aborted.
Request information:     Request URL: http://localhost:81/TESTSVC/TESTSVCS.svc     Request path: /TESTSVC/TESTSVCS.svc     User host address: 127.0.0.1     User:      Is authenticated: False     Authentication Type:      Thread account name: domain\BizTalk_Svc_Hostlso  Thread information:     Thread ID: 22     Thread account name: domain\BizTalk_Svc_Hostlso     Is impersonating: False     Stack trace:    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)    at System.Web.HttpApplication.ApplicationStepManager.ResumeSteps(Exception error)  at System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)    at System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr)  <Description>Handling an exception.</Description> <AppDomain>/LM/W3SVC/518296899/ROOT/TESTSVC-6-129518741899334691</AppDomain> <Exception> <ExceptionType>System.Threading.ThreadAbortException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>Thread was being aborted.</Message> <StackTrace> at System.Threading.Monitor.Enter(Object obj) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state) at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) </StackTrace> <ExceptionString>System.Threading.ThreadAbortException: Thread was being aborted.    
at System.Threading.Monitor.Enter(Object obj)    at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)    at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)    at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)</ExceptionString>

    Read the article

  • SQL SERVER – Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5

    - by pinaldave
    August 2011 we ran a contest where every day we give away one book for an entire month. The contest had extreme success. Lots of people participated and lots of give away. I have received lots of questions if we are doing something similar this month. Absolutely, instead of running a contest a month long we are doing something more interesting. We are giving away USD 198 worth gift every day for this week. We are giving away Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day. One copy in India and One in USA. Total 2 of the giveaway (worth USD 198). All the gifts are sponsored from the Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the Question Read the Hints Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India residents only) 2 Winners will be randomly selected announced on August 20th. Question of the Day: Which of the following queries will return dirty data? a) SELECT * FROM Table1 (READUNCOMMITED) b) SELECT * FROM Table1 (NOLOCK) c) SELECT * FROM Table1 (DIRTYREAD) d) SELECT * FROM Table1 (MYLOCK) Query Hints: BIG HINT POST Most SQL people know what a “Dirty Record” is. You might also call that an “Intermediate record”. In case this is new to you here is a very quick explanation. The simplest way to describe the steps of a transaction is to use an example of updating an existing record into a table. When the insert runs, SQL Server gets the data from storage, such as a hard drive, and loads it into memory and your CPU. The data in memory is changed and then saved to the storage device. Finally, a message is sent confirming the rows that were affected. For a very short period of time the update takes the data and puts it into memory (an intermediate state), not a permanent state. For every data change to a table there is a brief moment where the change is made in the intermediate state, but is not committed. During this time, any other DML statement needing that data waits until the lock is released. This is a safety feature so that SQL Server evaluates only official data. For every data change to a table there is a brief moment where the change is made in this intermediate state, but is not committed. During this time, any other DML statement (SELECT, INSERT, DELETE, UPDATE) needing that data must wait until the lock is released. This is a safety feature put in place so that SQL Server evaluates only official data. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 1. SQL Joes 2 Pros Development Series – Dirty Records and Table Hints SQL Joes 2 Pros Development Series – Row Constructors SQL Joes 2 Pros Development Series – Finding un-matching Records SQL Joes 2 Pros Development Series – Efficient Query Writing Strategy SQL Joes 2 Pros Development Series – Finding Apostrophes in String and Text SQL Joes 2 Pros Development Series – Wildcard – Querying Special Characters SQL Joes 2 Pros Development Series – Wildcard Basics Recap Next Step: Answer the Quiz in Contact Form in following format Question Answer Name of the country (The contest is open for USA and India) Bonus Winner Leave a comment with your favorite article from the “additional hints” section and you may be eligible for surprise gift. There is no country restriction for this Bonus Contest. 
Do mention why you liked it any particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
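To make the dirty-read idea from the hint section concrete, here is a small two-session sketch (the table and column names are made up for illustration): an uncommitted UPDATE in one session becomes visible to a read-uncommitted SELECT in another, even though it may later be rolled back.

-- Session 1: change a row inside an open transaction and do not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Table1 SET Amount = 999 WHERE Id = 1;
-- (leave the transaction open)

-- Session 2: a default READ COMMITTED read would block here; reading uncommitted
-- data returns the in-flight value instead, i.e. a dirty record
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Id, Amount FROM dbo.Table1 WHERE Id = 1;

-- Session 1: roll back; session 2 has now seen data that never officially existed
ROLLBACK TRANSACTION;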

    Read the article

  • how to drawing continues line just like in paint [on hold]

    - by hussain shah
    Hi, I want to draw points. The following code works, but when I drag the mouse button it only produces a continuous line if I move slowly; if I move the cursor fast, the points do not join into a continuous line. What is the solution?

#include <iostream>
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdlib.h>

void first()
{
    glPushMatrix();
    glTranslatef(1, 01, 01);
    glScalef(1, 1, 1);
    glColor3f(0, 1, 0);
    glBegin(GL_QUADS);
    glVertex2f(0.8, 0.6);
    glVertex2f(0.6, 0.6);
    glVertex2f(0.6, 0.8);
    glVertex2f(0.8, 0.8);
    glEnd();
    glPopMatrix();
    glFlush();
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT); // store color of each pixel of a frame
    glClearColor(0, 0, 0, 0);     // screen color
    //glFlush();
}

void drag(int x, int y)
{
    {
        y = 500 - y;
        //x = 500 - x;
        glPointSize(5);
        glColor3f(1.0, 1.0, 1.0);
        glBegin(GL_POINTS);
        glVertex2f(x, y + 2);
        glEnd();
        glutSwapBuffers();
        glFlush();
    }
}

void reshape(int w, int h) {}

void init(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(0, 0, 0, 0);
    glViewport(0, 0, 500, 500);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 500.0, 0.0, 500.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void mouse_button(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    {
        drag(x, y);
        first();
    }
    //else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
    //{
    //
    //}
    else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
    {
        exit(0);
    }
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);                     // initialize GLUT
    glutInitDisplayMode(GLUT_SINGLE);          // set up a basic (single-buffered) display buffer
    glutInitWindowSize(500, 500);              // set the width and height of the window
    glutInitWindowPosition(100, 100);          // set the position of the window
    glutCreateWindow("A basic OpenGL Window"); // set the caption for the window
    glutMotionFunc(drag);
    //glutMouseFunc(mouse_button);
    init();
    glutDisplayFunc(display);                  // call the display function to draw our world
    glutMainLoop();                            // start the GLUT event loop
    return 0;
}
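One common way to close the gaps (a sketch, not from the original post) is to remember the previous drag position and draw a line segment from it to the current sample, so fast mouse motion still produces a connected stroke. The mouse_button callback would also need to be registered with glutMouseFunc so the stroke can be ended on button release.

static int lastX = -1, lastY = -1;   // -1 means "no previous point yet"

void drag(int x, int y)
{
    y = 500 - y;                     // same flip as in the original code
    if (lastX >= 0)
    {
        glColor3f(1.0, 1.0, 1.0);
        glLineWidth(5.0);
        glBegin(GL_LINES);           // connect the previous sample to the current one
        glVertex2i(lastX, lastY);
        glVertex2i(x, y);
        glEnd();
        glFlush();
    }
    lastX = x;
    lastY = y;
}

void mouse_button(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_UP)
        lastX = lastY = -1;          // next drag starts a new stroke
}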

    Read the article

  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from Google e-mail alerts that I’ve set up to keep track of key industry trends and competitive news.  In the past few weeks, I’ve been getting a number of news alerts about SAP.  Below are a few recent examples: Warm weather cuts short US maple sugaring season – by Toby Talbot, AP MILWAUKEE – Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half-empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP." Home & Garden: Too-Early Spring Means Sugaring Woes  - by Georgeanne Davis for The Free Press Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen. SAP season comes to a sugary Sunday finale – Kennebec Journal, March 26th, 2012 Rebecca Manthey stood out in the rain at the entrance of Old Fort Western keeping watch over a cast iron kettle of boiling SAP hooked to a tripod over a wood fire.  Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association.  She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of Maple syrup.  "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it to fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations. I hope you enjoy these updates on SAP – Happy April Fool’s Day!

    Read the article

  • Ubuntu Server 12 not spawning a serial ttyS0 when running on Xen

    - by segfaultreloaded
    I have this problem on more than one host, so the specific hardware is not an issue. Bare metal Ubuntu 12 is not creating a login process on the only serial port, in the default configuration. The serial port works correctly with the firmware. It works correctly with Grub2. I have even connected the serial line to 2 different external client boxes, so the problem is neither the hardware nor the remote client. When finally booted, the system fails to create the login process. root@xenpro3:~# ps ax | grep tty 1229 tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4 1233 tty5 Ss+ 0:00 /sbin/getty -8 38400 tty5 1239 tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2 1241 tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3 1245 tty6 Ss+ 0:00 /sbin/getty -8 38400 tty6 1403 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1 1996 pts/0 S+ 0:00 grep --color=auto tty root@xenpro3:~# dmesg | grep tty [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.2.0-30-generic root=/dev/mapper/xenpro3-root ro console=ttyS0,115200n8 [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-30-generic root=/dev/mapper/xenpro3-root ro console=ttyS0,115200n8 [ 0.000000] console [ttyS0] enabled [ 2.160986] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.203396] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.263296] 00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.323102] 00:09: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A root@xenpro3:~# uname -a Linux xenpro3 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux root@xenpro3:~# I have tried putting a ttyS0.conf file in /etc/initab, which solves the problem bare metal but I still cannot get the serial port to work when booting Ubuntu on top of Xen, as domain 0. My serial line output looks like this, when booting Xen /dev/ttyS0 at 0x03f8 (irq = 4) is a 16550A * Exporting directories for NFS kernel daemon... [ OK ] * Starting NFS kernel daemon [ OK ] SSL tunnels disabled, see /etc/default/stunnel4 [ 18.654627] XENBUS: Unable to read cpu state [ 18.659631] XENBUS: Unable to read cpu state [ 18.664398] XENBUS: Unable to read cpu state [ 18.669248] XENBUS: Unable to read cpu state * Starting Xen daemons [ OK ] mountall: Disconnected from Plymouth At this point, the serial line is no longer connected to a process. Xen itself is running just fine. Dmesg gives me a long list of [ 120.236841] init: ttyS0 main process ended, respawning [ 120.239717] ttyS0: LSR safety check engaged! [ 130.240265] init: ttyS0 main process (1631) terminated with status 1 [ 130.240294] init: ttyS0 main process ended, respawning [ 130.242970] ttyS0: LSR safety check engaged! which is no surprise because I see root@xenpro3:~# ls -l /dev/ttyS? crw-rw---- 1 root tty 4, 64 Nov 7 14:04 /dev/ttyS0 crw-rw---- 1 root dialout 4, 65 Nov 7 14:04 /dev/ttyS1 crw-rw---- 1 root dialout 4, 66 Nov 7 14:04 /dev/ttyS2 crw-rw---- 1 root dialout 4, 67 Nov 7 14:04 /dev/ttyS3 crw-rw---- 1 root dialout 4, 68 Nov 7 14:04 /dev/ttyS4 crw-rw---- 1 root dialout 4, 69 Nov 7 14:04 /dev/ttyS5 crw-rw---- 1 root dialout 4, 70 Nov 7 14:04 /dev/ttyS6 crw-rw---- 1 root dialout 4, 71 Nov 7 14:04 /dev/ttyS7 crw-rw---- 1 root dialout 4, 72 Nov 7 14:04 /dev/ttyS8 crw-rw---- 1 root dialout 4, 73 Nov 7 14:04 /dev/ttyS9 If I manually change the group of /dev/ttyS0 to dialout, it gets changed back. I have made no changes to the default udev rules, so I cannot see where this problem is coming from. Sincerely, John
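For reference, the kind of upstart job usually placed at /etc/init/ttyS0.conf on Ubuntu 12.04 to spawn a login on the serial port looks roughly like this; it is only a sketch, and the speed and terminal type should match the console= kernel argument. Note that when the port is handed to Xen as the hypervisor console, dom0's kernel console is typically the virtual hvc0 device, so in that case the getty may need to be spawned on hvc0 rather than ttyS0.

# /etc/init/ttyS0.conf -- spawn a getty on the serial console (sketch)
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/getty -L 115200 ttyS0 vt102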

    Read the article

  • Can't connect to VPN on Ubuntu 12.04

    - by 12rad
    I'm having a lot of trouble connecting to VPN. This used to work on my machine, but i recently did an update and it's stopped working. I'm not sure what the problem is. My question is how do i debug this? I'm not able to narrow it down to a specific problem. This is what i get when i tail the syslogs. Would appreciate any help! Nov 6 23:42:52 meera NetworkManager[1137]: <info> Starting VPN service 'pptp'... Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 6132 Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN service 'pptp' appeared; activating connections Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN plugin state changed: starting (3) Nov 6 23:42:52 meera NetworkManager[1137]: <info> VPN connection 'NAME VPN' (Connect) reply received. Nov 6 23:42:52 meera pppd[6136]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded. Nov 6 23:42:52 meera pppd[6136]: pppd 2.4.5 started by root, uid 0 Nov 6 23:42:52 meera chat[6139]: timeout set to 15 seconds Nov 6 23:42:52 meera chat[6139]: abort on (NO CARRIER) Nov 6 23:42:52 meera chat[6139]: abort on (NO DIALTONE) Nov 6 23:42:52 meera chat[6139]: abort on (ERROR) Nov 6 23:42:52 meera chat[6139]: abort on (NO ANSWER) Nov 6 23:42:52 meera chat[6139]: abort on (BUSY) Nov 6 23:42:52 meera chat[6139]: abort on (Username/Password Incorrect) Nov 6 23:42:52 meera chat[6139]: send (AT^M) Nov 6 23:42:52 meera pptp[6138]: nm-pptp-service-6132 log[main:pptp.c:314]: The synchronous pptp option is NOT activated Nov 6 23:42:52 meera chat[6139]: expect (OK) Nov 6 23:42:52 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request' Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established. Nov 6 23:42:53 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' Nov 6 23:42:54 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. Nov 6 23:42:54 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 13077). Nov 6 23:42:54 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:231]: The ppp mode is synchronous, yet no pptp --sync option is specified! Nov 6 23:43:07 meera chat[6139]: alarm Nov 6 23:43:07 meera chat[6139]: Failed Nov 6 23:43:07 meera pppd[6136]: Script chat -v -f /etc/ppp/chat-ztisp finished (pid 6139), status = 0x3 Nov 6 23:43:07 meera pppd[6136]: Connect script failed Nov 6 23:43:07 meera pppd[6136]: Waiting for 1 child processes... 
Nov 6 23:43:07 meera pppd[6136]: script /usr/sbin/pptp 204.197.218.90 --nolaunchpppd --loglevel 0 --logstring nm-pptp-service-6132, pid 6138 Nov 6 23:43:07 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:204]: short read (-1): Input/output error Nov 6 23:43:07 meera pptp[6138]: nm-pptp-service-6132 warn[decaps_hdlc:pptp_gre.c:216]: pppd may have shutdown, see pppd log Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[callmgr_main:pptp_callmgr.c:234]: Closing connection (unhandled) Nov 6 23:43:07 meera pppd[6136]: Script /usr/sbin/pptp 204.197.218.90 --nolaunchpppd --loglevel 0 --logstring nm-pptp-service-6132 finished (pid 6138), status = 0x0 Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' Nov 6 23:43:07 meera pptp[6143]: nm-pptp-service-6132 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) Nov 6 23:43:07 meera pppd[6136]: Exit. Nov 6 23:43:07 meera NetworkManager[1137]: <warn> VPN plugin failed: 1 Nov 6 23:43:07 meera NetworkManager[1137]: <info> VPN plugin state changed: stopped (6) Nov 6 23:43:07 meera NetworkManager[1137]: <info> VPN plugin state change reason: 0 Nov 6 23:43:07 meera NetworkManager[1137]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.
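For what it is worth, the boolean query itself can be issued asynchronously with very little code. Here is a sketch of the EXT_occlusion_query_boolean calls (drawOccludedObject is a placeholder for whatever geometry is being tested); it also makes the limitation explicit: only a pass/fail flag comes back, never a count.

// Sketch assuming GL_EXT_occlusion_query_boolean is available (e.g. iOS 5 on iPad 2).
GLuint query = 0;
glGenQueriesEXT(1, &query);

glBeginQueryEXT(GL_ANY_SAMPLES_PASSED_EXT, query);
drawOccludedObject();                               // placeholder for the geometry under test
glEndQueryEXT(GL_ANY_SAMPLES_PASSED_EXT);

// Read the result a frame or two later to avoid stalling the pipeline.
GLuint available = GL_FALSE;
glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_AVAILABLE_EXT, &available);
if (available) {
    GLuint anySamplesPassed = GL_FALSE;
    glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_EXT, &anySamplesPassed);
    // anySamplesPassed is only GL_TRUE or GL_FALSE; no fragment count is exposed.
}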

    Read the article

  • Same SELECT used in an INSERT has different execution plan

    - by amacias
    A customer complained that a query and its INSERT counterpart had different execution plans, and of course, the INSERT was slower. First lets look at the SELECT : SELECT ua_tr_rundatetime,        ua_ch_treatmentcode,        ua_tr_treatmentcode,        ua_ch_cellid,        ua_tr_cellid FROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,                         CH.cellid        AS UA_CH_CELLID         FROM    CH,                 DL         WHERE  CH.contactdatetime > SYSDATE - 5                AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,        (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,                         T.cellid        AS UA_TR_CELLID,                         T.rundatetime   AS UA_TR_RUNDATETIME         FROM    T,                 DL         WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLS WHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;  The query has 2 DISTINCT subqueries.  The execution plan shows one with DISTICT Placement transformation applied and not the other. The view in Step 5 has the prefix VW_DTP which means DISTINCT Placement. -------------------------------------------------------------------- | Id  | Operation                    | Name            | Cost (%CPU) -------------------------------------------------------------------- |   0 | SELECT STATEMENT             |                 |   272K(100) |*  1 |  HASH JOIN OUTER             |                 |   272K  (1) |   2 |   VIEW                       |                 |  4408   (1) |   3 |    HASH UNIQUE               |                 |  4408   (1) |*  4 |     HASH JOIN                |                 |  4407   (1) |   5 |      VIEW                    | VW_DTP_48BAF62C |  1660   (2) |   6 |       HASH UNIQUE            |                 |  1660   (2) |   7 |        TABLE ACCESS FULL     | DL              |  1644   (1) |   8 |      TABLE ACCESS FULL       | T               |  2744   (1) |   9 |   VIEW                       |                 |   267K  (1) |  10 |    HASH UNIQUE               |                 |   267K  (1) |* 11 |     HASH JOIN                |                 |   267K  (1) |  12 |      PARTITION RANGE ITERATOR|                 |   266K  (1) |* 13 |       TABLE ACCESS FULL      | CH              |   266K  (1) |  14 |      TABLE ACCESS FULL       | DL              |  1644   (1) -------------------------------------------------------------------- Query Block Name / Object Alias (identified by operation id): -------------------------------------------------------------    1 - SEL$1    2 - SEL$AF418D5F / TRT_CELLS@SEL$1    3 - SEL$AF418D5F    5 - SEL$F6AECEDE / VW_DTP_48BAF62C@SEL$48BAF62C    6 - SEL$F6AECEDE    7 - SEL$F6AECEDE / DL@SEL$3    8 - SEL$AF418D5F / T@SEL$3    9 - SEL$2        / CH_CELLS@SEL$1   10 - SEL$2   13 - SEL$2        / CH@SEL$2   14 - SEL$2        / DL@SEL$2 Predicate Information (identified by operation id): ---------------------------------------------------    1 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")    4 - access("T"."TREATMENTCODE"="ITEM_1")   11 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")   13 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5) The outline shows PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3") indicating that the QB3 is the one that got the transformation. 
Outline Data -------------   /*+       BEGIN_OUTLINE_DATA       IGNORE_OPTIM_EMBEDDED_HINTS       OPTIMIZER_FEATURES_ENABLE('11.2.0.3')       DB_VERSION('11.2.0.3')       ALL_ROWS       OUTLINE_LEAF(@"SEL$2")       OUTLINE_LEAF(@"SEL$F6AECEDE")       OUTLINE_LEAF(@"SEL$AF418D5F") PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3")       OUTLINE_LEAF(@"SEL$1")       OUTLINE(@"SEL$48BAF62C")       OUTLINE(@"SEL$3")       NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")       NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")       LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")       USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")       FULL(@"SEL$2" "CH"@"SEL$2")       FULL(@"SEL$2" "DL"@"SEL$2")       LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")       USE_HASH(@"SEL$2" "DL"@"SEL$2")       USE_HASH_AGGREGATION(@"SEL$2")       NO_ACCESS(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C")       FULL(@"SEL$AF418D5F" "T"@"SEL$3")       LEADING(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C" "T"@"SEL$3")       USE_HASH(@"SEL$AF418D5F" "T"@"SEL$3")       USE_HASH_AGGREGATION(@"SEL$AF418D5F")       FULL(@"SEL$F6AECEDE" "DL"@"SEL$3")       USE_HASH_AGGREGATION(@"SEL$F6AECEDE")       END_OUTLINE_DATA   */ The 10053 shows there is a comparative of cost with and without the transformation. This means the transformation belongs to Cost-Based Query Transformations (CBQT). In SEL$3 the optimization of the query block without the transformation is 6659.73 and with the transformation is 4408.41 so the transformation is kept. GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#3) DP: Checking validity of distinct placement for query block SEL$3 (#3) DP: Using search type: linear DP: Considering distinct placement on query block SEL$3 (#3) DP: Starting iteration 1, state space = (5) : (0) DP: Original query DP: Costing query block. DP: Updated best state, Cost = 6659.73 DP: Starting iteration 2, state space = (5) : (1) DP: Using DP transformation in this iteration. DP: Transformed query DP: Costing query block. DP: Updated best state, Cost = 4408.41 DP: Doing DP on the original QB. DP: Doing DP on the preserved QB. In SEL$2 the cost without the transformation is less than with it so it is not kept. GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#2) DP: Checking validity of distinct placement for query block SEL$2 (#2) DP: Using search type: linear DP: Considering distinct placement on query block SEL$2 (#2) DP: Starting iteration 1, state space = (3) : (0) DP: Original query DP: Costing query block. DP: Updated best state, Cost = 267936.93 DP: Starting iteration 2, state space = (3) : (1) DP: Using DP transformation in this iteration. DP: Transformed query DP: Costing query block. DP: Not update best state, Cost = 267951.66 To the same query an INSERT INTO is added and the result is a very different execution plan. 
INSERT  INTO cc               (ua_tr_rundatetime,                ua_ch_treatmentcode,                ua_tr_treatmentcode,                ua_ch_cellid,                ua_tr_cellid)SELECT ua_tr_rundatetime,       ua_ch_treatmentcode,       ua_tr_treatmentcode,       ua_ch_cellid,       ua_tr_cellidFROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,                        CH.cellid        AS UA_CH_CELLID        FROM    CH,                DL        WHERE  CH.contactdatetime > SYSDATE - 5               AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,       (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,                        T.cellid        AS UA_TR_CELLID,                        T.rundatetime   AS UA_TR_RUNDATETIME        FROM    T,                DL        WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLSWHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;----------------------------------------------------------| Id  | Operation                     | Name | Cost (%CPU)----------------------------------------------------------|   0 | INSERT STATEMENT              |      |   274K(100)|   1 |  LOAD TABLE CONVENTIONAL      |      |            |*  2 |   HASH JOIN OUTER             |      |   274K  (1)|   3 |    VIEW                       |      |  6660   (1)|   4 |     SORT UNIQUE               |      |  6660   (1)|*  5 |      HASH JOIN                |      |  6659   (1)|   6 |       TABLE ACCESS FULL       | DL   |  1644   (1)|   7 |       TABLE ACCESS FULL       | T    |  2744   (1)|   8 |    VIEW                       |      |   267K  (1)|   9 |     SORT UNIQUE               |      |   267K  (1)|* 10 |      HASH JOIN                |      |   267K  (1)|  11 |       PARTITION RANGE ITERATOR|      |   266K  (1)|* 12 |        TABLE ACCESS FULL      | CH   |   266K  (1)|  13 |       TABLE ACCESS FULL       | DL   |  1644   (1)----------------------------------------------------------Query Block Name / Object Alias (identified by operation id):-------------------------------------------------------------   1 - SEL$1   3 - SEL$3 / TRT_CELLS@SEL$1   4 - SEL$3   6 - SEL$3 / DL@SEL$3   7 - SEL$3 / T@SEL$3   8 - SEL$2 / CH_CELLS@SEL$1   9 - SEL$2  12 - SEL$2 / CH@SEL$2  13 - SEL$2 / DL@SEL$2Predicate Information (identified by operation id):---------------------------------------------------   2 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")   5 - access("T"."TREATMENTCODE"="DL"."TREATMENTCODE")  10 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")  12 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5)Outline Data-------------  /*+      BEGIN_OUTLINE_DATA      IGNORE_OPTIM_EMBEDDED_HINTS      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')      DB_VERSION('11.2.0.3')      ALL_ROWS      OUTLINE_LEAF(@"SEL$2")      OUTLINE_LEAF(@"SEL$3")      OUTLINE_LEAF(@"SEL$1")      OUTLINE_LEAF(@"INS$1")      FULL(@"INS$1" "CC"@"INS$1")      NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")      NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")      LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")      USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")      FULL(@"SEL$2" "CH"@"SEL$2")      FULL(@"SEL$2" "DL"@"SEL$2")      LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")      USE_HASH(@"SEL$2" "DL"@"SEL$2")      USE_HASH_AGGREGATION(@"SEL$2")      FULL(@"SEL$3" "DL"@"SEL$3")      FULL(@"SEL$3" "T"@"SEL$3")      LEADING(@"SEL$3" "DL"@"SEL$3" "T"@"SEL$3")      USE_HASH(@"SEL$3" "T"@"SEL$3")      USE_HASH_AGGREGATION(@"SEL$3")      END_OUTLINE_DATA  */ There is no DISTINCT Placement view 
and no hint.The 10053 trace shows a new legend "DP: Bypassed: Not SELECT"implying that this is a transformation that it is possible only for SELECTs. GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#4) DP: Checking validity of distinct placement for query block SEL$3 (#4) DP: Bypassed: Not SELECT. GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#3) DP: Checking validity of distinct placement for query block SEL$2 (#3) DP: Bypassed: Not SELECT. In 12.1 (and hopefully in 11.2.0.4 when released) the restriction on applying CBQT to some DMLs and DDLs (like CTAS) is lifted.This is documented in BugTag Note:10013899.8 Allow CBQT for some DML / DDLAnd interestingly enough, it is possible to have a one-off patch in 11.2.0.3. SQL> select DESCRIPTION,OPTIMIZER_FEATURE_ENABLE,IS_DEFAULT     2  from v$system_fix_control where BUGNO='10013899'; DESCRIPTION ---------------------------------------------------------------- OPTIMIZER_FEATURE_ENABLE  IS_DEFAULT ------------------------- ---------- enable some transformations for DDL and DML statements 11.2.0.4                           1
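For completeness, once the one-off patch for bug 10013899 is installed, the fix control can in principle be toggled like any other. This is a sketch only; fix controls and underscore parameters should normally be changed under Oracle Support's guidance.

-- Enable the 'CBQT for some DML/DDL' fix for the current session (sketch)
ALTER SESSION SET "_fix_control" = '10013899:1';

-- Confirm the per-session setting
SELECT session_id, bugno, value, description
FROM   v$session_fix_control
WHERE  bugno = 10013899;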

    Read the article

  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question. There was a class SomeClass: class SomeClass{ Thing[] things; public SomeClass(Thing[] things){ this.things = things; } // irrelevant stuff omitted public SomeClass copy(){ return new SomeClass(things); } } There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is: class Processor{ public SomeClass processObject(SomeClass object){ SomeClass copy = object.copy(); manipulateTheCopy(copy); return copy; } // irrelevant stuff omitted } I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state - because the original passed it's field things into the copy when creating it. This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects - they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method, that also invokes copy() on all of it's fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this: public SomeClass copy(){ Thing[] copyThings = new Thing[things.length]; for(int i=0; i<things.length; i++) copyThings[i] = things[i].copy(); return new SomeClass(copyThings); } And if Thing has object fields of it's own, than it's own copy() method must be appropriate: class Thing{ Apple apple; Pencil pencil; int number; public Thing(Apple apple, Pencil pencil, int number){ this.apple = apple; this.pencil = pencil; this.number = number; } public Thing copy(){ // 'number' is a primitve. return new Thing(apple.getCopy(), pencil.getCopy(), number); } } And so on. Of course, instead of all classes having a copy() method, the copying mechanism can happen in all of the getters and the constructors of classes (unless places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object - most classes would have to have copying mechanisms in their getters. My question is divided into two parts: How frequently do you need to get a copy of an object? Is this a regular issue? Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
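For comparison, here is a sketch of the same idea written with a copy constructor and a defensive copy in the normal constructor. It reuses the class names and the Thing.copy() method from the question; nothing else is assumed.

class SomeClass {
    private final Thing[] things;

    public SomeClass(Thing[] things) {
        this.things = deepCopy(things);   // defensive copy: the caller's array no longer aliases our state
    }

    public SomeClass(SomeClass other) {   // copy constructor as an alternative to copy()
        this.things = deepCopy(other.things);
    }

    private static Thing[] deepCopy(Thing[] source) {
        Thing[] copy = new Thing[source.length];
        for (int i = 0; i < source.length; i++) {
            copy[i] = source[i].copy();   // relies on Thing.copy() from the question
        }
        return copy;
    }
}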

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best. “A language that doesn‘t affect the way you think about programming is not worth knowing“ With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. 
I often find myself refactoring methods like this… void SomeMethod(bool doThis, bool doThat) { if (!(doThis ^ doThat)) throw new ArgumentException(“At least one arg should be true”); if (doThis) DoThis(); if (doThat) DoThat(); } …into a type-based solution, like this: enum Action { DoThis, DoThat, Both }; void SomeMethod(Action action) { if (action == Action.DoThis || action == Action.Both) DoThis(); if (action == Action.DoThat || action == Action.Both) DoThat(); } At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
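The same "make invalid states unrepresentable" idea can be written directly in Haskell, where the compiler forces every case of the type to be handled. This is only a sketch; doThis and doThat stand in for whatever the two branches actually do.

data Action = DoThis | DoThat | Both

doThis, doThat :: IO ()
doThis = putStrLn "doing this"
doThat = putStrLn "doing that"

someMethod :: Action -> IO ()
someMethod DoThis = doThis
someMethod DoThat = doThat
someMethod Both   = doThis >> doThat

main :: IO ()
main = someMethod Both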

    Read the article

  • OS X won't see Windows 7 in network (and vice versa)

    - by meds
    I've enabled SMB sharing in OS X Lion and have added folders to share, it says 'Windows Sharing: On' with a green circle next to it (from the sharing window) and that to access the volume I will need to to go to \\192.168.0.17. It also says that the OS X should be visible as 'macbook' in the network. Both my WIndows 7 and OS X are connected to the same network, yet when I try to go to \\192.168.0.17 or from the Mac try to go to my Windows system (smb://192.168.0.6) the two OSs don't see each other. Any ideas why? Attempting to ping the Mac from Windows results in this output in the command prompt: Pinging 192.168.0.17 with 32 bytes of data: Reply from 192.168.0.6: Destination host unreachable. Request timed out. Request timed out. Request timed out. Ping statistics for 192.168.0.17: Packets: Sent = 4, Received = 1, Lost = 3 (75% loss), ipconfig in Windows is: Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::8918:efd1:b05c:890f%21 IPv4 Address. . . . . . . . . . . : 192.168.0.6 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1 Ethernet adapter Bluetooth Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::98ab:63fc:3c07:d837%13 IPv4 Address. . . . . . . . . . . : 192.168.74.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::80ff:c575:7b50:3a10%14 IPv4 Address. . . . . . . . . . . : 192.168.21.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Tunnel adapter isatap.{2E97D0AE-9E18-4072-AC23-1979BA0DCB79}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{E260CE43-E9A7-4DE0-A88E-4EAFF68ACDDB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{A5130812-59CE-4DDF-9C35-9433BCED9831}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Teredo Tunneling Pseudo-Interface: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{134BCAE7-CFFF-4A98-8DA0-3708806AABEB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{8D9E3B8F-161C-4ACE-B211-3EDD694416B2}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . 
: in OS X: lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 options=3<RXCSUM,TXCSUM> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4> ether c8:2a:14:01:24:c1 media: autoselect (none) status: inactive en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether e0:f8:47:0c:fe:04 inet6 fe80::e2f8:47ff:fe0c:fe04%en1 prefixlen 64 scopeid 0x5 inet 192.168.0.17 netmask 0xffffff00 broadcast 192.168.0.255 media: autoselect status: active p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304 ether 02:f8:47:0c:fe:04 media: autoselect status: inactive fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078 lladdr 70:cd:60:ff:fe:d8:f1:32 media: autoselect <full-duplex> status: inactive

    Read the article

  • puma init.d for centos 6 fails with runuser: user /var/log/puma.log does not exist

    - by Rubytastic
    Trying to get a init.d/puma to work on Centos 6. It throws error runuser: user /var/log/puma.log does not exist I run this from the /srv/books/current folder but it fails. I tried to debug the values but not quite get what is missing and why it throws this error. #! /bin/sh # puma - this script starts and stops the puma daemon # # chkconfig: - 85 15 # description: Puma # processname: puma # config: /etc/puma.conf # pidfile: /srv/books/current/tmp/pids/puma.pid # Author: Darío Javier Cravero &lt;[email protected]> # # Do NOT "set -e" # Original script https://github.com/puma/puma/blob/master/tools/jungle/puma # It was modified here by Stanislaw Pankevich <[email protected]> # to run on CentOS 5.5 boxes. # Script works perfectly on CentOS 5: script uses its native daemon(). # Puma is being stopped/restarted by sending signals, control app is not used. # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ "$NETWORKING" = "no" ] && exit 0 # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/usr/local/bin:/usr/local/sbin/:/sbin:/usr/sbin:/bin:/usr/bin DESC="Puma rack web server" NAME=puma DAEMON=$NAME SCRIPTNAME=/etc/init.d/$NAME CONFIG=/etc/puma.conf JUNGLE=`cat $CONFIG` RUNPUMA=/usr/local/bin/run-puma # Skipping the following non-CentOS string # Load the VERBOSE setting and other rcS variables # . /lib/init/vars.sh # CentOS does not have these functions natively log_daemon_msg() { echo "$@"; } log_end_msg() { [ $1 -eq 0 ] && RES=OK; logger ${RES:=FAIL}; } # Define LSB log_* functions. # Depend on lsb-base (>= 3.0-6) to ensure that this file is present. . /lib/lsb/init-functions # # Function that performs a clean up of puma.* files # cleanup() { echo "Cleaning up puma temporary files..." echo $1; PIDFILE=$1/tmp/puma/puma.pid STATEFILE=$1/tmp/puma/puma.state SOCKFILE=$1/tmp/puma/puma.sock rm -f $PIDFILE $STATEFILE $SOCKFILE } # # Function that starts the jungle # do_start() { log_daemon_msg "=> Running the jungle..." for i in $JUNGLE; do dir=`echo $i | cut -d , -f 1` user=`echo $i | cut -d , -f 2` config_file=`echo $i | cut -d , -f 3` if [ "$config_file" = "" ]; then config_file="$dir/puma/config.rb" fi log_file=`echo $i | cut -d , -f 4` if [ "$log_file" = "" ]; then log_file="$dir/puma/puma.log" fi do_start_one $dir $user $config_file $log_file done } do_start_one() { PIDFILE=$1/puma/puma.pid if [ -e $PIDFILE ]; then PID=`cat $PIDFILE` # If the puma isn't running, run it, otherwise restart it. if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then do_start_one_do $1 $2 $3 $4 else do_restart_one $1 fi else do_start_one_do $1 $2 $3 $4 fi } do_start_one_do() { log_daemon_msg "--> Woke up puma $1" log_daemon_msg "user $2" log_daemon_msg "log to $4" cleanup $1; daemon --user $2 $RUNPUMA $1 $3 $4 } # # Function that stops the jungle # do_stop() { log_daemon_msg "=> Putting all the beasts to bed..." for i in $JUNGLE; do dir=`echo $i | cut -d , -f 1` do_stop_one $dir done } # # Function that stops the daemon/service # do_stop_one() { log_daemon_msg "--> Stopping $1" PIDFILE=$1/tmp/puma/puma.pid STATEFILE=$1/tmp/puma/puma.state echo $PIDFILE if [ -e $PIDFILE ]; then PID=`cat $PIDFILE` echo "Pid:" echo $PID if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then log_daemon_msg "---> Puma $1 isn't running." else log_daemon_msg "---> About to kill PID `cat $PIDFILE`" # pumactl --state $STATEFILE stop # Many daemons don't delete their pidfiles when they exit. 
kill -9 $PID fi cleanup $1 else log_daemon_msg "---> No puma here..." fi return 0 } # # Function that restarts the jungle # do_restart() { for i in $JUNGLE; do dir=`echo $i | cut -d , -f 1` do_restart_one $dir done } # # Function that sends a SIGUSR2 to the daemon/service # do_restart_one() { PIDFILE=$1/tmp/puma/puma.pid i=`grep $1 $CONFIG` dir=`echo $i | cut -d , -f 1` if [ -e $PIDFILE ]; then log_daemon_msg "--> About to restart puma $1" # pumactl --state $dir/tmp/puma/state restart kill -s USR2 `cat $PIDFILE` # TODO Check if process exist else log_daemon_msg "--> Your puma was never playing... Let's get it out there first" user=`echo $i | cut -d , -f 2` config_file=`echo $i | cut -d , -f 3` if [ "$config_file" = "" ]; then config_file="$dir/config/puma.rb" fi log_file=`echo $i | cut -d , -f 4` if [ "$log_file" = "" ]; then log_file="$dir/log/puma.log" fi do_start_one $dir $user $config_file $log_file fi return 0 } # # Function that statuss then jungle # do_status() { for i in $JUNGLE; do dir=`echo $i | cut -d , -f 1` do_status_one $dir done } # # Function that sends a SIGUSR2 to the daemon/service # do_status_one() { PIDFILE=$1/tmp/puma/pid i=`grep $1 $CONFIG` dir=`echo $i | cut -d , -f 1` if [ -e $PIDFILE ]; then log_daemon_msg "--> About to status puma $1" pumactl --state $dir/tmp/puma/state stats # kill -s USR2 `cat $PIDFILE` # TODO Check if process exist else log_daemon_msg "--> $1 isn't there :(..." fi return 0 } do_add() { str="" # App's directory if [ -d "$1" ]; then if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then str=$1 else echo "The app is already being managed. Remove it if you want to update its config." exit 1 fi else echo "The directory $1 doesn't exist." exit 1 fi # User to run it as if [ "`grep -c "^$2:" /etc/passwd`" -eq 0 ]; then echo "The user $2 doesn't exist." exit 1 else str="$str,$2" fi # Config file if [ "$3" != "" ]; then if [ -e $3 ]; then str="$str,$3" else echo "The config file $3 doesn't exist." exit 1 fi fi # Log file if [ "$4" != "" ]; then str="$str,$4" fi # Add it to the jungle echo $str >> $CONFIG log_daemon_msg "Added a Puma to the jungle: $str. You still have to start it though." } do_remove() { if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then echo "There's no app $1 to remove." else # Stop it first. do_stop_one $1 # Remove it from the config. sed -i "\\:^$1:d" $CONFIG log_daemon_msg "Removed a Puma from the jungle: $1." fi } case "$1" in start) [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" if [ "$#" -eq 1 ]; then do_start else i=`grep $2 $CONFIG` dir=`echo $i | cut -d , -f 1` user=`echo $i | cut -d , -f 2` config_file=`echo $i | cut -d , -f 3` if [ "$config_file" = "" ]; then config_file="$dir/config/puma.rb" fi log_file=`echo $i | cut -d , -f 4` if [ "$log_file" = "" ]; then log_file="$dir/log/puma.log" fi do_start_one $dir $user $config_file $log_file fi case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" if [ "$#" -eq 1 ]; then do_stop else i=`grep $2 $CONFIG` dir=`echo $i | cut -d , -f 1` do_stop_one $dir fi case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; status) # TODO Implement. log_daemon_msg "Status $DESC" "$NAME" if [ "$#" -eq 1 ]; then do_status else i=`grep $2 $CONFIG` dir=`echo $i | cut -d , -f 1` do_status_one $dir fi case "$?" 
in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; restart) log_daemon_msg "Restarting $DESC" "$NAME" if [ "$#" -eq 1 ]; then do_restart else i=`grep $2 $CONFIG` dir=`echo $i | cut -d , -f 1` do_restart_one $dir fi case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; add) if [ "$#" -lt 3 ]; then echo "Please, specifiy the app's directory and the user that will run it at least." echo " Usage: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log" echo " config and log are optionals." exit 1 else do_add $2 $3 $4 $5 fi case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; remove) if [ "$#" -lt 2 ]; then echo "Please, specifiy the app's directory to remove." exit 1 else do_remove $2 fi case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; *) echo "Usage:" >&2 echo " Run the jungle: $SCRIPTNAME {start|stop|status|restart}" >&2 echo " Add a Puma: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log" echo " config and log are optionals." echo " Remove a Puma: $SCRIPTNAME remove /path/to/app" echo " On a Puma: $SCRIPTNAME {start|stop|status|restart} PUMA-NAME" >&2 exit 3 ;; esac :
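
    One hedged reading of the error: the script starts each app with daemon --user <second field>, so "runuser: user /var/log/puma.log does not exist" means the second comma-separated field of /etc/puma.conf contains a log path where a system user is expected, most likely because the line was added with the arguments in the wrong order. Each line of the jungle config should follow the dir,user,config,log layout the script parses with cut; the paths and the deploy user below are illustrative only:

        # /etc/puma.conf -- one app per line: app_dir,user,config_file,log_file
        /srv/books/current,deploy,/srv/books/current/config/puma.rb,/var/log/puma.log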

    Read the article

  • jQuery, jQuery UI, and Dual Licensed Plugins (Dual Licensing)

    - by John Hartsock
    OK, I have read many posts regarding dual licensing using the MIT and GPL licenses, but I'm still curious, as the wording seems to be inclusive. Many of the dual licenses state that the software is licensed using "MIT AND GPL". The "AND" is what confuses me: it seems to me that the word "AND" in the terms means you will be licensing the product under both licenses. Most of the posts here on Stack Overflow state that you can license the software under one OR the other. jQuery specifically states "OR", whereas jQuery UI specifically states "AND". Another instance of the "AND" wording is jqGrid. I'm not a lawyer, but it seems to me that a legal interpretation of this would say that using the software means you're using it under both licenses. Has anyone who has contacted a lawyer gotten clarification or a definitive answer as to what is true? Can you use dual-licensed software products that state "AND" in the terms of agreement under either license?

    EDITED: Here is specifically what I'm talking about. On jquery.org/license you see the following stated: "You may use any jQuery project under the terms of either the MIT License or the GNU General Public License (GPL) Version 2", but in the header of the jQuery and jQuery UI libraries you see this: "* Dual licensed under the MIT and GPL licenses. * http://docs.jquery.com/License". The site says MIT or GPL, but the license statement in the software says MIT and GPL.

    Read the article

  • problem in fonts type and fonts size on printing

    - by user1400
    Hello, I have a PHP application that shows a report in a table, and I want to print the page. The page looks fine in print preview, but when I send the page to the printer the fonts come out small and different from the fonts I set in the CSS file. This is my CSS file:

        @page {
            size: A4 landscape;
            margin-top: 2cm;
            margin-bottom: 1cm;
            margin-left: 1cm;
            margin-right: 1cm;
        }
        table.print {
            text-align: right;
            border: #999 1px solid;
        }
        table.print td.e1 {
            border-top: #999 1px solid;
            padding: 5px 2px;
            text-align: right;
            font-size: 20pt;
            font-family: "stencil";
        }
        table.print td.e2 {
            border-top: #999 1px solid;
            padding: 5px 2px;
            text-align: right;
            font-size: 120%;
            font-family: "tahoma";
        }

    Thanks.
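
    A hedged sketch of one thing worth trying, not a guaranteed fix: print rendering can fall back to browser defaults when rules are not scoped to print media, and a decorative face like Stencil may simply not be available to the print engine, which forces a substitute. Scoping the rules to @media print, using absolute pt sizes, and giving a fallback family makes the intent unambiguous:

        @media print {
            table.print td.e1 {
                font-family: "Stencil", "Tahoma", sans-serif;  /* fallback if Stencil is unavailable when printing */
                font-size: 20pt;                               /* absolute pt sizes print predictably */
            }
            table.print td.e2 {
                font-family: "Tahoma", sans-serif;
                font-size: 14pt;                               /* percentages of the print default can come out small */
            }
        }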

    Read the article

  • how to rethrow same exception in sql server

    - by Shantanu Gupta
    I want to rethrow, in SQL Server, the same exception that occurred in my TRY block. I am able to rethrow the same message, but I want to rethrow the same error number.

        BEGIN TRANSACTION
        BEGIN TRY
            INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description)
            VALUES (@DomainName, @SubDomainId, @DomainCode, @Description)
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            DECLARE @ErrorSeverity INT;
            DECLARE @state INT;
            SELECT @ErrorSeverity = ERROR_SEVERITY(), @state = ERROR_STATE();
            RAISERROR(@@ERROR, @ErrorSeverity, @state);
            ROLLBACK TRANSACTION
        END CATCH

    The RAISERROR(@@ERROR, @ErrorSeverity, @state) line does raise an error, but always with error number 50000, whereas I want the error number I am passing via @@ERROR to be thrown, so that I can catch it on the front end, i.e.:

        catch (SqlException ex)
        {
            if (ex.Number == 2627)
                MessageBox.Show("Duplicate value cannot be inserted");
        }

    I want this behaviour, which can't be achieved with RAISERROR, and I don't want to define a custom error message on the back end.
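
    For reference, RAISERROR cannot re-raise a system error number (anything below 50000), which is why the catch block always surfaces 50000; the usual SQL Server 2005 workaround is to capture ERROR_NUMBER() and carry it in the re-raised message, to be parsed on the client (SQL Server 2012 and later can simply use THROW; to rethrow the original error). A minimal sketch of the 2005-era pattern; the message format string is an assumption, adjust to taste:

        BEGIN CATCH
            DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(4000),
                    @ErrorSeverity INT, @ErrorState INT;
            SELECT @ErrorNumber   = ERROR_NUMBER(),     -- e.g. 2627 for a unique key violation
                   @ErrorMessage  = ERROR_MESSAGE(),
                   @ErrorSeverity = ERROR_SEVERITY(),
                   @ErrorState    = ERROR_STATE();
            ROLLBACK TRANSACTION;
            -- Re-raise with the original number embedded in the text.
            RAISERROR('Original error %d: %s', @ErrorSeverity, @ErrorState, @ErrorNumber, @ErrorMessage);
        END CATCH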

    Read the article

  • SQL query to count and list the different counties per zip code

    - by Chris
    I have a sql server 2005 table called ZipCode, which has all the US ZIPs in it. For each entry, it lists the zip, county, city, state, and several other details. Some zipcodes span multiple cities and some even span multiple counties. As a result, some zipcodes appear many times in the table. I am trying to query the table to see which zipcodes go across multiple counties. This is what I have so far:

        select zipcode, count(zipcode) as total, county, state
        from zipcode
        group by zipcode, county, state
        order by zipcode

    Of 19248 records in the result set, here are the first several records returned:

        zipcode  total  county        state
        00501    2      SUFFOLK       NY
        00544    2      SUFFOLK       NY
        00801    3      SAINT THOMAS  VI
        00802    3      SAINT THOMAS  VI
        00803    3      SAINT THOMAS  VI
        00804    3      SAINT THOMAS  VI
        00805    1      SAINT THOMAS  VI
        00820    2      SAINT CROIX   VI
        00821    1      SAINT CROIX   VI
        00822    1      SAINT CROIX   VI
        00823    2      SAINT CROIX   VI
        00824    2      SAINT CROIX   VI

    In this particular example, each zip with a total of two or more happens to be in the table more than once, and it's because the cityaliasname (not shown) or some other column differs. But I just want to know which zips are in there more than once because the county column differs. I searched before posting this and I found many questions about counting records but I could not figure out how to apply them to my problem. Please forgive me if there is already a question whose answer applies to this question.
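
    A minimal sketch of one way to get "zips that span more than one county", assuming the table and column names above: group by zipcode alone, keep only groups with more than one distinct county, and join back if the county and state rows themselves are needed.

        -- zips that appear with more than one county
        SELECT zipcode, COUNT(DISTINCT county) AS counties
        FROM ZipCode
        GROUP BY zipcode
        HAVING COUNT(DISTINCT county) > 1;

        -- the county/state rows behind those zips
        SELECT DISTINCT z.zipcode, z.county, z.state
        FROM ZipCode z
        JOIN (SELECT zipcode
              FROM ZipCode
              GROUP BY zipcode
              HAVING COUNT(DISTINCT county) > 1) m ON m.zipcode = z.zipcode
        ORDER BY z.zipcode, z.county;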

    Read the article

  • ASP.NET MVC and ViewState

    - by nickyt
    Now I've seen some questions like this, but it's not exactly what I want to ask, so for all those screaming duplicate, I apologize :). I've barely touched ASP.NET MVC but from what I understand there is no ViewState/ControlState... fine. So my question is what is the alternative to retaining a control's state? Do we go back to old school ASP where we might simulate what ASP.NET ViewState/ControlState does by creating hidden form inputs with the control's state, or with MVC, do we just assume AJAX always and retain all state client-side and make AJAX calls to update? This question has some answers, http://stackoverflow.com/questions/1285547/maintaining-viewstate-in-asp-net-mvc, but not exactly what I'm looking for in an answer.

    UPDATE: Thanks for all the answers so far. Just to clear up what I'm not looking for and what I'm looking for:

    Not looking for:
    - Session solution
    - Cookie solution
    - Not looking to mimic WebForms in MVC

    What I am/was looking for:
    - A method that only retains the state on postback if data is not rebound to a control. Think WebForms with the scenario of only binding a grid on the initial page load, i.e. only rebinding the data when necessary.

    As I mentioned, I'm not trying to mimic WebForms, just wondering what mechanisms MVC offers.

    Read the article

  • jQuery addClass on $.post

    - by Tim
    I am basically trying to create a small registration form. If the username is already taken, I want to add the 'red' class; if not, then 'green'. The PHP works fine and returns either "YES" or "NO" to indicate whether the name is available. The CSS:

        input  { border: 1px solid #ccc; }
        .red   { border: 1px solid #c00; }
        .green { border: 1px solid green; background: #afdfaf; }

    The JavaScript I'm using is:

        $("#username").change(function () {
            var value = $("#username").val();
            if (value != '') {
                $.post("check.php", { value: value }, function (data) {
                    $("#test").html(data);
                    if (data == 'YES') {
                        $("#username").removeClass('red').addClass('green');
                    }
                    if (data == 'NO') {
                        $("#username").removeClass('green').addClass('red');
                    }
                });
            }
        });

    I've got the document.ready stuff too. It all works fine (the #test div's HTML changes to either "YES" or "NO") apart from the last part, where I check what the value of the data is. Here is the PHP:

        $value = $_POST['value'];
        if ($value != "") {
            $sql = "select * FROM users WHERE username='" . $value . "'";
            $query = mysql_query($sql) or die("Could not match data because " . mysql_error());
            $num_rows = mysql_num_rows($query);
            if ($num_rows > 0) {
                echo "NO";
            } else {
                echo "YES";
            }
        }
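
    If the #test div shows the right word but the string comparison still fails, the usual culprit is stray whitespace (or a BOM) around the echoed "YES"/"NO". A small sketch of the same handler with the response trimmed before comparing; everything else stays as in the code above:

        $.post("check.php", { value: value }, function (data) {
            var answer = $.trim(data);   // strip any whitespace/BOM around YES or NO
            $("#test").html(answer);
            $("#username")
                .toggleClass("green", answer === "YES")
                .toggleClass("red", answer === "NO");
        });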

    Read the article

  • How to configure replication? - This database is not enabled for publication.

    - by truthseeker
    Hi, I'm trying to configure replication on SQL Server 2005. I can do it using the wizard, but when I try to run the scripts generated by the wizard, these error messages appear:

        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159
        This database is not enabled for publication.
        Msg 18757, Level 16, State 1, Procedure sp_MSrepl_addpublication_snapshot, Line 66
        Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 168
        This database is not enabled for publication.
        Msg 14294, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 25
        Supply either @job_id or @job_name to identify the job.

    It's a bit strange, because when I run this script against a database where I had created and then removed a publication through the UI, everything works. The problem only appears when I run the script against a new database. What's more, I'm also using the sp_replicationdboption stored procedure; when I try to run it, it says: "The replication option 'publish' of database 'ReplicationTest00' has already been set to true." Please help me resolve this issue.
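
    One hedged explanation: sp_MSrepl_addpublication checks the database it runs in, so the generated script can fail with "This database is not enabled for publication" when it executes in a different database context (for example master), even though the publish option is already true for ReplicationTest00. A short sketch of the order that normally works; the publication name is illustrative only:

        -- mark the database as publishable (once)
        EXEC sp_replicationdboption
             @dbname  = N'ReplicationTest00',
             @optname = N'publish',
             @value   = N'true';
        GO
        -- then run the wizard-generated publication script in that database's context
        USE ReplicationTest00;
        GO
        -- EXEC sp_addpublication @publication = N'MyPublication', ... (rest of the generated script)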

    Read the article

  • Why unchecked jQuery UI radio button's label appears "checked" after refresh in FF and IE?

    - by Daj pan spokój
    I've got 2 radio buttons with labels; button A is checked by default in the HTML (checked="checked"). They're then processed with the jQuery UI radio button widget. When I check button B and refresh the page, B's label is displayed as checked (it has the CSS ui-state-active class) and A's label is displayed as unchecked, even though it's button A that is actually checked. It happens in Firefox (3) and IE (8). In Chrome (5) it works fine: after a refresh, A's label is displayed as checked.

        <input class="ui-helper-hidden-accessible" id="show-all-hotels" name="radio" CHECKED="CHECKED" type="radio">
        <label aria-disabled="false" role="button" class="ui-button ui-widget ui-state-default ui-button-text-only ui-corner-left" aria-pressed="false" for="show-all-hotels">
            <span class="ui-button-text">All tours</span>
        </label>
        <input class="ui-helper-hidden-accessible" id="show-availability" name="radio" type="radio">
        <label aria-disabled="false" role="button" aria-pressed="true" class="UI-STATE-ACTIVE ui-button ui-widget ui-state-default ui-button-text-only ui-corner-right" for="show-availability">
            <span class="ui-button-text">Only available at</span>
        </label>

    Do you know why this is happening, and how to make the checked button's label appear checked in all browsers?
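
    A plausible cause, hedged: Firefox and IE restore the previously selected radio on reload (form state persistence), so after the refresh B really is the checked input as far as those browsers are concerned, even though the markup still carries checked="checked" on A; Chrome simply resets to the markup default. Two small things to try, sketched below; the selector assumes the inputs share name="radio" as in the markup above:

        // One option is to stop the browser from restoring the old selection,
        // e.g. <form autocomplete="off">; another is to reset to the markup
        // default on load and re-sync the jQuery UI buttons with it:
        $(function () {
            $("#show-all-hotels").attr("checked", "checked");  // the input checked in the HTML
            $("input[name='radio']").button("refresh");        // repaint labels from the real checked state
        });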

    Read the article

  • jQuery autocomplete: taking JSON input and setting multiple fields from single field

    - by Sly
    I am trying to get the jQuery autocomplete plugin to take a local JSON variable as input. Once the user has selected one option from the autocomplete list, I want the adjacent address fields to be populated automatically. Here's the JSON variable that is declared as a global variable in the head of the HTML file:

        var JSON_address = {
            "1": {
                "origin":      { "nametag": "Home", "street": "Easy St", "city": "Emerald City", "state": "CA", "zip": "9xxxx" },
                "destination": { "nametag": "Work", "street": "Factory St", "city": "San Francisco", "state": "CA", "zip": "94104" }
            },
            "2": {
                "origin":      { "nametag": "Work", "street": "Umpa Loompa St", "city": "San Francisco", "state": "CA", "zip": "94104" },
                "destination": { "nametag": "Home", "street": "Easy St", "city": "Emerald City ", "state": "CA", "zip": "9xxxx" }
            }
        };

    I want the first field to display a list of "origin" nametags: "Home", "Work". Then, when "Home" is selected, the adjacent fields should be automatically populated with Street: Easy St, City: Emerald City, etc. Here's the code I have for the autocomplete:

        $(document).ready(function () {
            $("#origin_nametag_id").autocomplete(JSON_address, {
                autoFill: true,
                minChars: 0,
                dataType: 'json',
                parse: function (data) {
                    var rows = new Array();
                    for (var i = 0; i <= data.length; i++) {
                        rows[rows.length] = {
                            data: data[i],
                            value: data[i].origin.nametag,
                            result: data[i].origin.nametag
                        };
                    }
                    return rows;
                }
            }).change(function () {
                $("#street_address_id").autocomplete({
                    dataType: 'json',
                    parse: function (data) {
                        var rows = new Array();
                        for (var i = 0; i <= data.length; i++) {
                            rows[rows.length] = {
                                data: data[i],
                                value: data[i].origin.street,
                                result: data[i].origin.street
                            };
                        }
                        return rows;
                    }
                });
            });
        });

    So this question really has two subparts: 1) How do you get autocomplete to process the multi-dimensional JSON object? (When the field is clicked and text entered, nothing happens, no list.) 2) How do you get the other fields (street, city, etc.) to populate based upon the origin nametag sub-array? Thanks in advance!
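
    A minimal sketch of one way to wire this up, assuming the legacy (bassistance) autocomplete plugin and the JSON_address structure above: flatten the keyed object into a plain array of origin records, let the format callbacks render the nametag, and fill the sibling fields inside the result callback. The city, state and zip field ids below are hypothetical placeholders:

        // build a flat array of origin objects from the keyed JSON_address object
        var origins = [];
        $.each(JSON_address, function (key, trip) { origins.push(trip.origin); });

        $("#origin_nametag_id").autocomplete(origins, {
            autoFill: true,
            minChars: 0,
            formatItem:   function (row) { return row.nametag; },
            formatMatch:  function (row) { return row.nametag; },
            formatResult: function (row) { return row.nametag; }
        }).result(function (event, row) {
            // populate the adjacent address fields from the selected origin
            $("#street_address_id").val(row.street);
            $("#city_id").val(row.city);    // hypothetical id
            $("#state_id").val(row.state);  // hypothetical id
            $("#zip_id").val(row.zip);      // hypothetical id
        });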

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >