Search Results

Search found 3001 results on 121 pages for 'stopped'.


  • Can't reach only certain websites from my WiFi (with MacBook and iPhone)

    - by trampj
    I can't access certain websites from either my MacBook or my iPhone when connected to my WiFi. The same websites open fine from a Windows computer connected to the same WiFi. This is what happens when I try to ping one of them:

      PING ilpost.it (151.1.175.113): 56 data bytes
      Request timeout for icmp_seq 0
      Request timeout for icmp_seq 1
      Request timeout for icmp_seq 2
      ...

    And when I try to traceroute it:

      host-001:~ j$ traceroute www.ilpost.it
      traceroute to ilpost.it (151.1.175.113), 64 hops max, 52 byte packets
       1  vodafonedslrouter (192.168.1.1)  2.965 ms  0.743 ms  0.745 ms
       2  * 2.96.54.77.rev.vodafone.pt (77.54.96.2)  12.076 ms  10.871 ms
       3  77.41.30.213.rev.vodafone.pt (213.30.41.77)  14.145 ms  10.693 ms  11.960 ms
       4  85.205.11.49 (85.205.11.49)  9.658 ms  8.946 ms  9.085 ms
       5  85.205.13.105 (85.205.13.105)  57.497 ms  57.621 ms  48.080 ms
       6  188.111.129.17 (188.111.129.17)  49.483 ms  51.338 ms  48.852 ms
       7  85.205.25.174 (85.205.25.174)  47.891 ms  49.219 ms  47.821 ms
       8  * * *
       9  * * *
      10  * * *
      11  * * *

    I've flushed my DNS cache, but nothing changed. This is quite a problem: everything seems to die after the 85.205.25.174 hop and I don't know how to avoid it. Any suggestions? I should add that three days ago everything worked fine; then it stopped.
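
    One way to narrow this down (an editorial sketch, not part of the original question): since the Windows box on the same WiFi works, it is worth confirming that both machines resolve the same address, and ruling out a path-MTU black hole on the Mac. Commands as on OS X; the payload sizes assume a standard 1500-byte Ethernet MTU:

      # Compare what each machine resolves; if the Windows box gets a
      # different IP, this is a DNS problem rather than a routing one.
      dig +short ilpost.it

      # Probe for an MTU/fragmentation issue: -D sets the Don't Fragment
      # bit, and 1472 bytes of payload + 28 bytes of headers = 1500.
      # If the large probe fails while the smaller one succeeds, try
      # lowering the interface MTU and retesting.
      ping -D -s 1472 -c 4 151.1.175.113
      ping -D -s 1400 -c 4 151.1.175.113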

  • Can no longer duplicate display to external monitor on Windows 7

    - by rbeier
    We have a large TV at work - I connect my laptop to it to share my screen during meetings. Until today, my laptop display has been duplicating to the TV automatically when I connect the TV cable to the laptop. The display resolution would decrease automatically to be compatible with the TV. Today, however, it's stopped working. When I connect the cable to the TV, the display extends rather than duplicating. Using the Win+P key combination (or Fn+F7 on my Lenovo laptop), I can choose to duplicate the display - but when I do this, it ends up only displaying on the laptop. I can get it to display on the TV by hitting Win+P and choosing "projector only", but then I can't see what I'm doing on the laptop screen. I have a Lenovo W520 laptop running Windows 7, connected to the TV using a DisplayPort-to-HDMI converter cable. The TV's native resolution is 1280x720; the laptop's native resolution is 1600x900. I've tried booting with the TV cable already connected; I've tried manually lowering the display resolution on the laptop to 1280x720 before duplicating the display. Neither works. Does anyone have any other suggestions?
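
    A hedged workaround worth trying (not from the original post): Windows 7 ships DisplaySwitch.exe, which drives the same projection modes as the Win+P overlay, and invoking it directly sometimes behaves differently from the hotkey. The underlying cause on a W520 may still be the display driver, so treat this as a test as much as a fix:

      rem Force "Duplicate" mode directly; this is the same mechanism
      rem as choosing Duplicate in the Win+P overlay.
      %windir%\System32\DisplaySwitch.exe /clone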

  • ionice without effect

    - by tim
    The system is Ubuntu 10 LTS 64-bit (kernel 2.6.35.31) running on Xen 4.0; no services are active, cron is stopped, and the scheduler for the disk that /usr is mounted from is cfq. Running

      time find /usr -exec stat {} \; > /dev/null 2>&1 &

    gives

      real 0m35.760s
      user 0m0.270s
      sys  0m3.910s

    and

      time ionice -c3 find /usr -exec stat {} \; > /dev/null 2>&1 &

    gives

      real 0m36.110s
      user 0m0.310s
      sys  0m4.100s

    which is exactly as expected. Now I run both at the same time:

      time find /usr -exec stat {} \; > /dev/null 2>&1 &
      time ionice -c3 find /usr -exec stat {} \; > /dev/null 2>&1 &

    My expectation is that the ioniced version should be much slower, while the straight version should run about as fast as it does alone. But:

      straight:
      real 1m10.430s
      user 0m0.320s
      sys  0m3.940s

      ioniced:
      real 1m10.230s
      user 0m0.250s
      sys  0m4.020s

    which implies that ionice did not work at all. Any hints?
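
    A first thing to verify in a setup like this (an editorial sketch, not a diagnosis): ionice's best-effort and idle classes are only honored by the CFQ I/O scheduler, and in a Xen guest the paravirtual block devices often default to noop, with the real queueing happening in dom0. Checking which scheduler is active on the device that actually backs /usr rules this out:

      # Print the active I/O scheduler for every block device; the entry
      # in [brackets] is the one in use. ionice -c3 is a no-op unless the
      # device doing the real queueing runs CFQ.
      for f in /sys/block/*/queue/scheduler; do
          printf '%s: %s\n' "$f" "$(cat "$f")"
      done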

  • MySQL replication out of sync? What commands do I run to sync it back up?

    - by Alex
    I have a master-master replication setup. However, due to an auto-increment collision I got a replication error, and replication stopped. Someone told me to run:

      stop slave;
      SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
      start slave;

    It didn't work. Then they told me to run:

      SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 2;

    It didn't work either. Then, to test it out, I did:

      SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 99999;

    The slave starts, but it is not updating: I created a table on DB1 and it is not showing up on DB2. Below is the status output from both DB1 and DB2 (I ran them together):

      mysql> show master status\G
      *************************** 1. row ***************************
                  File: mysql-bin.000605
              Position: 2019727
          Binlog_Do_DB:
      Binlog_Ignore_DB:
      1 row in set (0.00 sec)

      mysql> show slave status\G
      *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host:
                      Master_User:
                      Master_Port:
                    Connect_Retry: 60
                  Master_Log_File: mysql-bin.000605
              Read_Master_Log_Pos: 2008810
                   Relay_Log_File: mysqld-relay-bin.001731
                    Relay_Log_Pos: 10176595
            Relay_Master_Log_File: mysql-bin.000470
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 4255373725
              Exec_Master_Log_Pos: 10176458
                  Relay_Log_Space: 135062517347
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: 1376343
      1 row in set (0.00 sec)

    How do I fix it so that they sync back up again? Thank you.
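
    An editorial note with a hedged sketch: the status above shows the SQL thread still executing mysql-bin.000470 while the I/O thread is already reading mysql-bin.000605, with roughly 135 GB of relay log queued and a skip counter in the billions. A slave that far diverged will generally not converge by skipping events; the conventional remedy is to rebuild it from a fresh dump. Credentials are omitted, and since this is master-master the procedure is applied to one direction at a time (the direction chosen here is an assumption):

      # On the good side: take a consistent dump that embeds the binlog
      # coordinates as a CHANGE MASTER TO statement at the top of the
      # file. --single-transaction avoids a long global lock for InnoDB.
      mysqldump --all-databases --single-transaction --master-data=1 > master.sql

      # On the broken side: stop replication, reload the data, restart.
      mysql -e "STOP SLAVE"
      mysql < master.sql
      mysql -e "START SLAVE"

      # Verify both threads run and the lag drains.
      mysql -e "SHOW SLAVE STATUS\G" | grep -E "Running|Seconds_Behind|Last_Err"

    To keep the original auto-increment collision from recurring, the usual setting is auto_increment_increment=2 on both masters, with auto_increment_offset=1 on one and 2 on the other.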

  • NFS server hangs after 3 minutes

    - by John P
    I have a VPS running CentOS 6.3 with fully updated NFS packages (nfs-utils-1.2.3-26.el6.x86_64). When I mount the NFS directory from the client, everything works perfectly fine for approximately 3 minutes, then the client hangs attempting to see the directory.

      service nfs status
      rpc.svcgssd is stopped
      rpc.mountd (pid 2544) is running...
      nfsd (pid 2609 2608 2607 2606 2605 2604 2603 2602) is running...
      rpc.rquotad (pid 2540) is running...

      cat /etc/exports
      /home/user XX.XX.XX.20(rw,async,no_root_squash)

    The client is running CentOS 5.8. The directory is mounted using:

      mount x.x.x.6:/home/user /mnt

    When everything is working, I get the following on the client:

      /usr/sbin/rpcinfo -p X.X.X.6 | grep mountd
      100005 1 udp 892 mountd
      100005 1 tcp 892 mountd
      100005 2 udp 892 mountd
      100005 2 tcp 892 mountd
      100005 3 udp 892 mountd
      100005 3 tcp 892 mountd

    When it stops working, rpcinfo just hangs on the client; running the same command on the server still returns data. There are no logs on the NFS server side that would indicate an issue. On the client side I see:

      cat /var/log/messages
      kernel: nfs: server X.X.X.6 not responding, still trying

    The client and server are plugged into the same switch, but they are on different networks. The server is a VPS while the client is a dedicated box. SELinux is in permissive mode on both client and server, and I've turned iptables off on the server to make sure it was not causing the issue. Any ideas would be helpful; right now I'm restarting NFS every two minutes from a cron job to keep it semi-working. Thanks
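
    Symptoms like this (a few minutes of normal traffic, then "not responding" while the server still answers locally) often point at something between the two hosts silently dropping traffic rather than at NFS itself, particularly with a VPS where the provider may filter packets. Two hedged checks, reusing the addresses from the question:

      # On the client: poll the server's RPC services so the exact moment
      # they become unreachable is visible, and from which side.
      watch -n 10 'rpcinfo -p X.X.X.6 | grep -E "mountd|nfs"'

      # Remount forcing TCP NFSv3 with short timeouts, so a stall surfaces
      # as an I/O error instead of an indefinite hang (these are
      # diagnostic settings, not production ones):
      umount -f /mnt
      mount -t nfs -o tcp,nfsvers=3,timeo=30,retrans=2,soft x.x.x.6:/home/user /mnt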

  • Android app does not show up on my device or the emulator in Eclipse

    - by Sam
    Hey everyone, I have no errors in my app code whatsoever, but when I try to run it on either my phone or the emulator/AVD in Eclipse, it doesn't show up on either one. This is my console output:

      [2011-02-04 08:14:58 - Versuch] Uploading Versuch.apk onto device 'CB511L2WTB'
      [2011-02-04 08:14:58 - Versuch] Installing Versuch.apk...
      [2011-02-04 08:15:01 - Versuch] Success!
      [2011-02-04 08:15:01 - Versuch] \Versuch\bin\Versuch.apk installed on device
      [2011-02-04 08:15:01 - Versuch] Done!

    And this is my LogCat output, which tells me nothing, but you are the experts ;)

      02-04 08:18:10.020: DEBUG/dalvikvm(22167): GC freed 2576 objects / 559120 bytes in 37ms
      02-04 08:18:10.700: DEBUG/dalvikvm(6709): GC freed 7692 objects / 478912 bytes in 41ms
      02-04 08:18:11.170: DEBUG/dalvikvm(31774): GC freed 3367 objects / 163464 bytes in 122ms
      02-04 08:18:13.230: DEBUG/dalvikvm(22167): GC freed 2790 objects / 552328 bytes in 38ms
      02-04 08:18:14.650: DEBUG/dalvikvm(6709): GC freed 8443 objects / 540440 bytes in 39ms
      02-04 08:18:16.260: DEBUG/dalvikvm(31921): GC freed 214 objects / 9824 bytes in 216ms
      02-04 08:18:16.670: DEBUG/dalvikvm(22167): GC freed 3232 objects / 561256 bytes in 40ms
      02-04 08:18:18.600: DEBUG/dalvikvm(6709): GC freed 7718 objects / 481952 bytes in 39ms
      02-04 08:18:19.210: DEBUG/dalvikvm(1129): GC freed 6898 objects / 275328 bytes in 109ms
      02-04 08:18:19.690: DEBUG/dalvikvm(22167): GC freed 2968 objects / 571232 bytes in 39ms
      02-04 08:18:21.440: DEBUG/dalvikvm(1212): GC freed 1020 objects / 49328 bytes in 395ms
      02-04 08:18:22.570: DEBUG/dalvikvm(6709): GC freed 7893 objects / 495616 bytes in 40ms
      02-04 08:18:23.060: DEBUG/dalvikvm(22167): GC freed 3117 objects / 561912 bytes in 41ms
      02-04 08:18:25.860: DEBUG/dalvikvm(22167): GC freed 2924 objects / 558448 bytes in 36ms
      02-04 08:18:26.350: DEBUG/dalvikvm(32098): GC freed 4662 objects / 495496 bytes in 290ms
      02-04 08:18:26.410: DEBUG/dalvikvm(22167): GC freed 1077 objects / 130680 bytes in 33ms
      02-04 08:18:27.080: DEBUG/dalvikvm(6709): GC freed 7912 objects / 485368 bytes in 40ms
      02-04 08:18:28.190: DEBUG/dalvikvm(22167): GC freed 953 objects / 767272 bytes in 33ms
      02-04 08:18:29.500: DEBUG/dalvikvm(1129): GC freed 6756 objects / 270480 bytes in 105ms
      02-04 08:18:30.500: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first
      02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136)
      02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15)
      02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68)
      02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46)
      02-04 08:18:31.090: DEBUG/dalvikvm(6709): GC freed 10545 objects / 682480 bytes in 49ms
      02-04 08:18:31.120: DEBUG/dalvikvm(1813): GC freed 5970 objects / 310912 bytes in 60ms
      02-04 08:18:31.320: DEBUG/dalvikvm(22167): GC freed 2468 objects / 539520 bytes in 39ms
      02-04 08:18:34.110: DEBUG/dalvikvm(22167): GC freed 2879 objects / 569008 bytes in 35ms
      02-04 08:18:34.920: DEBUG/dalvikvm(6709): GC freed 7029 objects / 424632 bytes in 35ms
      02-04 08:18:36.150: DEBUG/dalvikvm(9060): GC freed 564 objects / 27840 bytes in 89ms
      02-04 08:18:36.630: DEBUG/dalvikvm(22167): GC freed 2437 objects / 554000 bytes in 35ms
      02-04 08:18:38.760: DEBUG/dalvikvm(6709): GC freed 8309 objects / 545032 bytes in 36ms
      02-04 08:18:39.270: DEBUG/dalvikvm(1129): GC freed 6958 objects / 278352 bytes in 107ms
      02-04 08:18:39.970: DEBUG/dalvikvm(22167): GC freed 2915 objects / 560312 bytes in 38ms
      02-04 08:18:41.260: DEBUG/dalvikvm(6184): GC freed 373 objects / 26152 bytes in 205ms
      02-04 08:18:42.780: DEBUG/dalvikvm(6709): GC freed 7212 objects / 447696 bytes in 36ms
      02-04 08:18:43.160: DEBUG/dalvikvm(22167): GC freed 3106 objects / 561824 bytes in 39ms
      02-04 08:18:46.310: DEBUG/dalvikvm(22167): GC freed 3110 objects / 564080 bytes in 45ms
      02-04 08:18:46.650: DEBUG/dalvikvm(6709): GC freed 7508 objects / 468832 bytes in 36ms
      02-04 08:18:48.820: DEBUG/dalvikvm(31712): GC freed 13795 objects / 828232 bytes in 203ms
      02-04 08:18:49.040: DEBUG/dalvikvm(1129): GC freed 6918 objects / 276224 bytes in 109ms
      02-04 08:18:49.640: DEBUG/dalvikvm(22167): GC freed 2952 objects / 562168 bytes in 37ms
      02-04 08:18:50.630: DEBUG/dalvikvm(6709): GC freed 8332 objects / 549680 bytes in 35ms
      02-04 08:18:52.770: DEBUG/dalvikvm(22167): GC freed 3108 objects / 563192 bytes in 37ms
      02-04 08:18:54.400: DEBUG/dalvikvm(6709): GC freed 7509 objects / 469016 bytes in 35ms
      02-04 08:18:55.900: DEBUG/dalvikvm(22167): GC freed 3121 objects / 572920 bytes in 38ms
      02-04 08:18:58.150: DEBUG/dalvikvm(6709): GC freed 7408 objects / 465456 bytes in 35ms
      02-04 08:18:58.710: DEBUG/dalvikvm(1129): GC freed 6908 objects / 276440 bytes in 107ms
      02-04 08:18:59.190: DEBUG/dalvikvm(22167): GC freed 3160 objects / 563144 bytes in 38ms
      02-04 08:19:02.080: DEBUG/dalvikvm(6709): GC freed 7436 objects / 468040 bytes in 36ms
      02-04 08:19:02.380: DEBUG/dalvikvm(22167): GC freed 3104 objects / 557600 bytes in 39ms
      02-04 08:19:05.050: DEBUG/dalvikvm(22167): GC freed 2860 objects / 570072 bytes in 35ms
      02-04 08:19:05.810: DEBUG/dalvikvm(6709): GC freed 7508 objects / 469080 bytes in 35ms
      02-04 08:19:06.500: DEBUG/skia(22167): --- decoder->decode returned false
      02-04 08:19:07.960: DEBUG/dalvikvm(22167): GC freed 2747 objects / 520008 bytes in 36ms
      02-04 08:19:08.180: DEBUG/dalvikvm(1129): GC freed 7866 objects / 317304 bytes in 107ms
      02-04 08:19:09.540: DEBUG/dalvikvm(6709): GC freed 8220 objects / 539688 bytes in 36ms
      02-04 08:19:10.810: DEBUG/dalvikvm(22167): GC freed 2898 objects / 596824 bytes in 37ms
      02-04 08:19:13.360: DEBUG/dalvikvm(22167): GC freed 2503 objects / 398936 bytes in 35ms
      02-04 08:19:13.370: INFO/dalvikvm-heap(22167): Grow heap (frag case) to 5.029MB for 570264-byte allocation
      02-04 08:19:13.400: DEBUG/dalvikvm(22167): GC freed 702 objects / 24976 bytes in 31ms
      02-04 08:19:13.400: DEBUG/skia(22167): --- decoder->decode returned false
      02-04 08:19:13.540: DEBUG/dalvikvm(6709): GC freed 7481 objects / 466544 bytes in 36ms
      02-04 08:19:15.600: DEBUG/WifiService(1129): got ACTION_DEVICE_IDLE
      02-04 08:19:15.960: INFO/wpa_supplicant(2522): CTRL-EVENT-DRIVER-STATE STOPPED
      02-04 08:19:15.960: VERBOSE/WifiMonitor(1129): Event [CTRL-EVENT-DRIVER-STATE STOPPED]
      02-04 08:19:17.270: DEBUG/dalvikvm(22167): GC freed 2372 objects / 1266992 bytes in 36ms
      02-04 08:19:17.520: DEBUG/dalvikvm(6709): GC freed 7996 objects / 519128 bytes in 37ms
      02-04 08:19:18.150: DEBUG/dalvikvm(1129): GC freed 7110 objects / 285032 bytes in 108ms
      02-04 08:19:20.460: DEBUG/dalvikvm(22167): GC freed 3327 objects / 565264 bytes in 36ms
      02-04 08:19:21.250: DEBUG/dalvikvm(6709): GC freed 7632 objects / 486024 bytes in 37ms
      02-04 08:19:26.470: DEBUG/dalvikvm(31774): GC freed 345 objects / 16160 bytes in 96ms
      02-04 08:19:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first
      02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136)
      02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15)
      02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68)
      02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46)
      02-04 08:20:05.280: DEBUG/dalvikvm(1813): GC freed 741 objects / 36840 bytes in 91ms
      02-04 08:20:23.580: DEBUG/WifiService(1129): ACTION_BATTERY_CHANGED pluggedType: 2
      02-04 08:20:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first
      02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136)
      02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15)
      02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68)
      02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46)
      02-04 08:20:53.970: INFO/FastDormancyManager(1129): Fast Dormant executed. ExecuteCount:2683 NonExecuteCount:25773

    I really hope you can help me.
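
    An editorial observation with a hedged sketch: the console shows the APK installing successfully, and the only exceptions in the LogCat come from a different process (pid 22536, com.mercuryintermedia), so they are unrelated. When an app installs fine but never appears in the launcher, the usual cause is that no activity in AndroidManifest.xml carries the MAIN/LAUNCHER intent filter. The activity name below is a placeholder:

      <!-- Inside <application> in AndroidManifest.xml. The MAIN action
           plus the LAUNCHER category is what puts an icon in the app
           drawer; without it the app installs but cannot be launched
           from the home screen. ".MainActivity" is a placeholder for
           the real activity class. -->
      <activity android:name=".MainActivity">
          <intent-filter>
              <action android:name="android.intent.action.MAIN" />
              <category android:name="android.intent.category.LAUNCHER" />
          </intent-filter>
      </activity>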

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be many services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system a service may also invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but it becomes one when you have more than 10 services. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of Discovery Service

    Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service, and a nightmare when a service changes its endpoint at runtime. Hence we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints of every service. With it in place, when service X wants to invoke service Y, it just asks the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X uses one of them to send its request. When a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all other services will learn the new endpoint.

    WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria; all services with the discovery behavior enabled receive this message, and only the matching ones send their endpoints back to the client. The managed proxy discovery service works as described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service; for more information about the ad-hoc mode please refer to MSDN.

    Service Announcement and Probe

    The main functionality of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to request to the discovery service, and the discovery service finds the endpoint and responds. Sometimes contract and endpoint are not enough; versioning and extension attributes can also be involved, but this post only covers the case of contract and endpoint. When a client (or a service that needs to invoke another service) needs to connect to a target service, it first calls the discovery service's "Probe" operation with its criteria, which basically contain the contract type name of the target service. The discovery service then searches its endpoint repository by those criteria; the repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is "Probe".
    Finally, the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service is also responsible for learning that a new service is available when it goes online, and that a service has stopped when it goes offline. This feature is named "Announcement": when a service starts or stops, it announces this to the discovery service. So the basic functionality of a discovery service includes:

    1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

    WCF 4.0's discovery infrastructure classes cover all of these features.

    Discovery Service in WCF 4.0

    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the classes and interfaces necessary to build a WS-Discovery compliant discovery service, supporting both the ad-hoc and managed proxy modes. For the case discussed in this post we need a standalone discovery service, which means the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements the procedures of service announcement and probe, and exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all calls to the discovery service are made asynchronously, for better scalability and performance:

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message; we add the endpoint information to the repository here.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message; we remove the endpoint information from the repository here.
    3. OnBeginFind, OnEndFind: invoked when a client sends a probe message to find service endpoint information; we look up the endpoints matching the client's criteria in the repository here.
    4. OnBeginResolve, OnEndResolve: invoked when a client sends a resolve message. Unlike find, resolve returns exactly one service endpoint metadata to the client. We will NOT implement this method in this example.

    Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.ServiceModel.Discovery;
      using System.ServiceModel;

      namespace Phare.Service
      {
          [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
          public class ManagedProxyDiscoveryService : DiscoveryProxy
          {
              protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
              {
                  throw new NotImplementedException();
              }

              protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
              {
                  throw new NotImplementedException();
              }

              protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
              {
                  throw new NotImplementedException();
              }

              protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
              {
                  throw new NotImplementedException();
              }

              protected override void OnEndFind(IAsyncResult result)
              {
                  throw new NotImplementedException();
              }

              protected override void OnEndOfflineAnnouncement(IAsyncResult result)
              {
                  throw new NotImplementedException();
              }

              protected override void OnEndOnlineAnnouncement(IAsyncResult result)
              {
                  throw new NotImplementedException();
              }

              protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
              {
                  throw new NotImplementedException();
              }
          }
      }

    Then let's implement the online, offline and find methods one by one. The WCF discovery service gives us full flexibility in implementing the endpoint add, remove and find logic. For demo purposes we will use an in-memory dictionary to store the services' endpoint metadata; in the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used from multiple threads (this requires using System.Collections.Concurrent):

      [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
      public class ManagedProxyDiscoveryService : DiscoveryProxy
      {
          private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

          public ManagedProxyDiscoveryService()
          {
              _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
          }
      }

    Then we can simply implement the logic of service online and offline.
      protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
      {
          _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
          return new OnOnlineAnnouncementAsyncResult(callback, state);
      }

      protected override void OnEndOnlineAnnouncement(IAsyncResult result)
      {
          OnOnlineAnnouncementAsyncResult.End(result);
      }

      protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
      {
          EndpointDiscoveryMetadata endpoint = null;
          _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
          return new OnOfflineAnnouncementAsyncResult(callback, state);
      }

      protected override void OnEndOfflineAnnouncement(IAsyncResult result)
      {
          OnOfflineAnnouncementAsyncResult.End(result);
      }

    As for the find method, the FindRequestContext.Criteria parameter has a method named IsMatch, which we can use to evaluate whether a service's metadata satisfies the criteria. The implementation of the find method then looks like this:

      protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
      {
          _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
                   .Select(s => s.Value)
                   .All(meta =>
                   {
                       findRequestContext.AddMatchingEndpoint(meta);
                       return true;
                   });
          return new OnFindAsyncResult(callback, state);
      }

      protected override void OnEndFind(IAsyncResult result)
      {
          OnFindAsyncResult.End(result);
      }

    As you can see, we check every endpoint's metadata in the repository by invoking IsMatch and add each matching one to the request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the derived classes used in the methods above.
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Threading;

      namespace Phare.Service
      {
          abstract internal class AsyncResult : IAsyncResult
          {
              AsyncCallback callback;
              bool completedSynchronously;
              bool endCalled;
              Exception exception;
              bool isCompleted;
              ManualResetEvent manualResetEvent;
              object state;
              object thisLock;

              protected AsyncResult(AsyncCallback callback, object state)
              {
                  this.callback = callback;
                  this.state = state;
                  this.thisLock = new object();
              }

              public object AsyncState
              {
                  get { return state; }
              }

              public WaitHandle AsyncWaitHandle
              {
                  get
                  {
                      if (manualResetEvent != null)
                      {
                          return manualResetEvent;
                      }
                      lock (ThisLock)
                      {
                          if (manualResetEvent == null)
                          {
                              manualResetEvent = new ManualResetEvent(isCompleted);
                          }
                      }
                      return manualResetEvent;
                  }
              }

              public bool CompletedSynchronously
              {
                  get { return completedSynchronously; }
              }

              public bool IsCompleted
              {
                  get { return isCompleted; }
              }

              object ThisLock
              {
                  get { return this.thisLock; }
              }

              protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
                  where TAsyncResult : AsyncResult
              {
                  if (result == null)
                  {
                      throw new ArgumentNullException("result");
                  }

                  TAsyncResult asyncResult = result as TAsyncResult;

                  if (asyncResult == null)
                  {
                      throw new ArgumentException("Invalid async result.", "result");
                  }

                  if (asyncResult.endCalled)
                  {
                      throw new InvalidOperationException("Async object already ended.");
                  }

                  asyncResult.endCalled = true;

                  if (!asyncResult.isCompleted)
                  {
                      asyncResult.AsyncWaitHandle.WaitOne();
                  }

                  if (asyncResult.manualResetEvent != null)
                  {
                      asyncResult.manualResetEvent.Close();
                  }

                  if (asyncResult.exception != null)
                  {
                      throw asyncResult.exception;
                  }

                  return asyncResult;
              }

              protected void Complete(bool completedSynchronously)
              {
                  if (isCompleted)
                  {
                      throw new InvalidOperationException("This async result is already completed.");
                  }

                  this.completedSynchronously = completedSynchronously;

                  if (completedSynchronously)
                  {
                      this.isCompleted = true;
                  }
                  else
                  {
                      lock (ThisLock)
                      {
                          this.isCompleted = true;
                          if (this.manualResetEvent != null)
                          {
                              this.manualResetEvent.Set();
                          }
                      }
                  }

                  if (callback != null)
                  {
                      callback(this);
                  }
              }

              protected void Complete(bool completedSynchronously, Exception exception)
              {
                  this.exception = exception;
                  Complete(completedSynchronously);
              }
          }
      }

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.ServiceModel.Discovery;

      namespace Phare.Service
      {
          internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
          {
              public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
                  : base(callback, state)
              {
                  this.Complete(true);
              }

              public static void End(IAsyncResult result)
              {
                  AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
              }
          }

          sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
          {
              public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
                  : base(callback, state)
              {
                  this.Complete(true);
              }

              public static void End(IAsyncResult result)
              {
                  AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
              }
          }

          sealed class OnFindAsyncResult : AsyncResult
          {
              public OnFindAsyncResult(AsyncCallback callback, object state)
                  : base(callback, state)
              {
                  this.Complete(true);
              }

              public static void End(IAsyncResult result)
              {
                  AsyncResult.End<OnFindAsyncResult>(result);
              }
          }

          sealed class OnResolveAsyncResult : AsyncResult
          {
              EndpointDiscoveryMetadata matchingEndpoint;

              public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
                  : base(callback, state)
              {
                  this.matchingEndpoint = matchingEndpoint;
                  this.Complete(true);
              }

              public static EndpointDiscoveryMetadata End(IAsyncResult result)
              {
                  OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
                  return thisPtr.matchingEndpoint;
              }
          }
      }

    Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can use ServiceHost in a console application, a Windows service, or in IIS as usual. The following code hosts the discovery service we just created in a console application:

      static void Main(string[] args)
      {
          using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
          {
              host.Opened += (sender, e) =>
              {
                  host.Description.Endpoints.All((ep) =>
                  {
                      Console.WriteLine(ep.ListenUri);
                      return true;
                  });
              };

              try
              {
                  // retrieve the announcement and probe endpoints and the binding from configuration
                  var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
                  var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
                  var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
                  var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
                  var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
                  probeEndpoint.IsSystemEndpoint = false;
                  // append the service endpoints for announcement and probe
                  host.AddServiceEndpoint(announcementEndpoint);
                  host.AddServiceEndpoint(probeEndpoint);

                  host.Open();

                  Console.WriteLine("Press any key to exit.");
                  Console.ReadKey();
              }
              catch (Exception ex)
              {
                  Console.WriteLine(ex.ToString());
              }
          }

          Console.WriteLine("Done.");
          Console.ReadKey();
      }

    Notice that the discovery service needs two endpoints, one for announcement and one for probe. In this example I retrieve their addresses, and the binding both of them use, from the configuration file.
      <?xml version="1.0"?>
      <configuration>
        <startup>
          <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
        </startup>
        <appSettings>
          <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
          <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
          <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
        </appSettings>
      </configuration>

    This is the console screen when I ran my discovery service: as you can see, there are two endpoints listening, one for announcement messages and one for probe messages.

    Discoverable Service and Client

    Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send an online announcement message to the discovery service, as well as an offline message before it shuts down. Just create a simple service which converts the incoming string to upper case. The service contract and implementation look like this:

      [ServiceContract]
      public interface IStringService
      {
          [OperationContract]
          string ToUpper(string content);
      }

      public class StringService : IStringService
      {
          public string ToUpper(string content)
          {
              return content.ToUpper();
          }
      }

    Then host this service in a console application. To make the discovery service easy to test, the service address changes each time the host starts:

      static void Main(string[] args)
      {
          var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

          using (var host = new ServiceHost(typeof(StringService), baseAddress))
          {
              host.Opened += (sender, e) =>
              {
                  Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
              };

              host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

              host.Open();

              Console.WriteLine("Press any key to exit.");
              Console.ReadKey();
          }
      }

    Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service's announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior; once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable:

      var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
      var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
      var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
      var discoveryBehavior = new ServiceDiscoveryBehavior();
      discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
      host.Description.Behaviors.Add(discoveryBehavior);

    The ServiceDiscoveryBehavior uses the service extension and channel dispatcher to implement the announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.

    On the client side, once we have the discovery service, a client can invoke a service without knowing its endpoint.
    The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoints by passing criteria. In the code below I initialize a DiscoveryClient with the discovery service's probe endpoint address, create the find criteria from the service contract I want to use, and invoke the Find method. This sends the probe message to the discovery service, which returns all endpoints matching the criteria, so the result of the find method may contain more than one endpoint. In this example I simply return the first match; in the next post I will show how to extend our discovery service to make it work like a service load balancer.

      static EndpointAddress FindServiceEndpoint()
      {
          var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
          var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
          var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

          EndpointAddress address = null;
          FindResponse result = null;
          using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
          {
              result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
          }

          if (result != null && result.Endpoints.Any())
          {
              var endpointMetadata = result.Endpoints.First();
              address = endpointMetadata.Address;
          }
          return address;
      }

    Once we have probed the discovery service, we receive the endpoint, so in the client code we can create the channel factory from the endpoint and binding and invoke the service. When creating the client-side channel factory we need to make sure the client-side binding is the same as the service side's. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included, because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service; at that point the client won't need to create the binding itself, but will instead use the binding received from the discovery service.

      static void Main(string[] args)
      {
          Console.WriteLine("Say something...");
          var content = Console.ReadLine();
          while (!string.IsNullOrWhiteSpace(content))
          {
              Console.WriteLine("Finding the service endpoint...");
              var address = FindServiceEndpoint();
              if (address == null)
              {
                  Console.WriteLine("There is no endpoint matches the criteria.");
              }
              else
              {
                  Console.WriteLine("Found the endpoint {0}", address.Uri);

                  var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
                  factory.Opened += (sender, e) =>
                  {
                      Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
                  };
                  var proxy = factory.CreateChannel();
                  using (proxy as IDisposable)
                  {
                      Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
                  }
              }

              Console.WriteLine("Say something...");
              content = Console.ReadLine();
          }
      }

    Similarly, the discovery service probe endpoint and binding are defined in the configuration file.
      <?xml version="1.0"?>
      <configuration>
        <startup>
          <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
        </startup>
        <appSettings>
          <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
          <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
          <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
        </appSettings>
      </configuration>

    OK, now let's have a test. First start the discovery service, then start our discoverable service. When it starts, it announces itself to the discovery service and registers its endpoint in the repository, which here is the local dictionary. Then start the client and type something: the client asks the discovery service for the endpoint and establishes a connection to the discoverable service. More interestingly, do NOT close the client console, but terminate the discoverable service by pressing the Enter key; this makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it is now hosted on a new address. If we enter something in the client, we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

    Summary

    In this post I discussed the benefit of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes this example used an in-memory dictionary as the discovery endpoint metadata repository, and the find implementation simply returns the first matched endpoint. I also hard-coded the bindings between the discoverable service and the client. In the next post I will show how to solve these problems, as well as some additional features for production usage. You can download the code here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with FXO module to connect to our PSTN line. The card is correctly detected by Linux: [trixbox1.localdomain asterisk]# lspci 00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface I've run setup-pstn which produces the following output trixbox1.localdomain ~]# setup-pstn -------------------------------------------------------------- Detecting PSTN cards and USB PSTN Devices -------------------------------------------------------------- Hardware present! STOPPING ASTERISK Asterisk Stopped STOPPING FOP SERVER FOP Server Stopped Unloading DAHDI hardware modules: done Loading DAHDI hardware modules: wct4xxp: [ OK ] wcte12xp: [ OK ] wct1xxp: [ OK ] wcte11xp: [ OK ] wctdm24xxp: [ OK ] opvxa1200: [ OK ] wcfxo: [ OK ] wctdm: [ OK ] wcb4xxp: [ OK ] wctc4xxp: [ OK ] xpp_usb: [ OK ] Running dahdi_cfg: [ OK ] SETTING FILE PERMISSIONS Permissions OK STARTING ASTERISK Asterisk Started STARTING FOP SERVER FOP Server Started Chan Extension Context Language MOH Interpret Blocked State pseudo default en default In Service 1 from-pstn en default In Service dahdi_scan returns: dahdi_scan [1] active=yes alarms=OK description=Wildcard TDM400P REV I Board 5 name=WCTDM/4 manufacturer=Digium devicetype=Wildcard TDM400P REV I location=PCI Bus 00 Slot 10 basechan=1 totchans=4 irq=209 type=analog port=1,FXO port=2,none port=3,none port=4,none And asterisk can see the channel: > trixbox1*CLI> dahdi show channel 1 > Channel: 1LI> File Descriptor: 14 > Span: 11*CLI> Extension: I> Dialing: > noI> Context: from-pstn Caller ID: I> > Calling TON: 0 Caller ID name: > Mailbox: none Destroy: 0LI> InAlarm: > 1LI> Signalling Type: FXS Kewlstart > Radio: 0*CLI> Owner: <None> Real: > <None>> Callwait: <None> Threeway: > <None> Confno: -1LI> Propagated > Conference: -1 Real in conference: 0 > DSP: no1*CLI> Busy Detection: no TDD: > no1*CLI> Relax DTMF: no > Dialing/CallwaitCAS: 0/0 Default law: > ulaw Fax Handled: no Pulse phone: no > DND: no1*CLI> Echo Cancellation: > trixbox1128 taps trixbox1(unless TDM > bridged) currently OFF Actual > Confinfo: Num/0, Mode/0x0000 Actual > Confmute: No > Hookstate (FXS only): Onhook A cat of /etc/asterisk/dahdi.conf shows: [trixbox1.localdomain ~]# cat /etc/asterisk/dahdi-channels.conf ; Autogenerated by /usr/sbin/dahdi_genconf on Tue May 25 17:45:13 2010 ; If you edit this file and execute /usr/sbin/dahdi_genconf again, ; your manual changes will be LOST. ; Dahdi Channels Configurations (chan_dahdi.conf) ; ; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended ; to be #include-d by /etc/chan_dahdi.conf that will include the global settings ; ; Span 1: WCTDM/4 "Wildcard TDM400P REV I Board 5" (MASTER) ;;; line="1 WCTDM/4/0 FXSKS (SWEC: MG2)" signalling=fxs_ks callerid=asreceived group=0 context=from-pstn channel => 1 callerid= group= context=default I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but when ever I try to make an external call via it I get the "All Circuits are busy now, please try your call again later message". I have one outbound route which uses the dial pattern 9|. and the Trunk Zap/1 and one Zap Trunk which uses Zap Identifier (trunk name): 1 and has no Dial Rules. The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable. 
When running tail -f /var/log/asterisk/full and placing a call I get the following output: [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP TOS bits 184 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP CoS mark 5 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP TOS bits 136 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP CoS mark 6 [May 26 11:10:52] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:1] Macro("SIP/801-b7ce8c28", "user-callerid,SKIPTTL,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:1] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:2] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:3] ExecIf("SIP/801-b7ce8c28", "1?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:4] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:5] Set("SIP/801-b7ce8c28", "AMPUSERCIDNAME=Jona") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:6] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:7] Set("SIP/801-b7ce8c28", "AMPUSERCID=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:8] Set("SIP/801-b7ce8c28", "CALLERID(all)="Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:9] Set("SIP/801-b7ce8c28", "REALCALLERIDNUM=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:10] ExecIf("SIP/801-b7ce8c28", "0?Set(CHANNEL(language)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:11] GotoIf("SIP/801-b7ce8c28", "1?continue") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-user-callerid,s,20) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:20] NoOp("SIP/801-b7ce8c28", "Using CallerID "Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:2] Set("SIP/801-b7ce8c28", "_NODEST=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:3] Macro("SIP/801-b7ce8c28", "record-enable,801,OUT,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:1] GotoIf("SIP/801-b7ce8c28", "1?check") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-record-enable,s,4) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:4] AGI("SIP/801-b7ce8c28", "recordingcheck,20100526-111052,1274868652.1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Launched AGI Script /var/lib/asterisk/agi-bin/recordingcheck [May 26 11:10:52] VERBOSE[2858] logger.c: recordingcheck,20100526-111052,1274868652.1: Outbound recording not enabled [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28>AGI Script recordingcheck completed, returning 0 [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:5] MacroExit("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [901483890915@from-internal:4] Macro("SIP/801-b7ce8c28", "dialout-trunk,1,01483890915,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:1] Set("SIP/801-b7ce8c28", "DIAL_TRUNK=1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:2] GosubIf("SIP/801-b7ce8c28", "0?sub-pincheck,s,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:3] GotoIf("SIP/801-b7ce8c28", "0?disabletrunk,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:4] Set("SIP/801-b7ce8c28", "DIAL_NUMBER=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:5] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=tr") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:6] Set("SIP/801-b7ce8c28", "OUTBOUND_GROUP=OUT_1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:7] GotoIf("SIP/801-b7ce8c28", "1?nomax") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s,9) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:9] GotoIf("SIP/801-b7ce8c28", "0?skipoutcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:10] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:11] Macro("SIP/801-b7ce8c28", "outbound-callerid,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:1] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:2] ExecIf("SIP/801-b7ce8c28", "0?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:3] GotoIf("SIP/801-b7ce8c28", "1?normcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,6) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:6] Set("SIP/801-b7ce8c28", "USEROUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:7] Set("SIP/801-b7ce8c28", "EMERGENCYCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:8] Set("SIP/801-b7ce8c28", "TRUNKOUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:9] GotoIf("SIP/801-b7ce8c28", "1?trunkcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,12) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:12] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:13] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:14] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=prohib_passed_screen)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:12] ExecIf("SIP/801-b7ce8c28", "0?AGI(fixlocalprefix)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:13] Set("SIP/801-b7ce8c28", "OUTNUM=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [s@macro-dialout-trunk:14] Set("SIP/801-b7ce8c28", "custom=DAHDI/1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:15] ExecIf("SIP/801-b7ce8c28", "0?Set(DIAL_TRUNK_OPTIONS=M(setmusic^))") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:16] Macro("SIP/801-b7ce8c28", "dialout-trunk-predial-hook,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk-predial-hook:1] MacroExit("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:17] GotoIf("SIP/801-b7ce8c28", "0?bypass,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:18] GotoIf("SIP/801-b7ce8c28", "0?customtrunk") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:19] Dial("SIP/801-b7ce8c28", "DAHDI/1/01483890915,300,") in new stack [May 26 11:10:52] WARNING[2858] app_dial.c: Unable to create channel of type 'DAHDI' (cause 0 - Unknown) [May 26 11:10:52] VERBOSE[2858] logger.c: == Everyone is busy/congested at this time (1:0/0/1) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:20] Goto("SIP/801-b7ce8c28", "s-CHANUNAVAIL,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,1) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:1] GotoIf("SIP/801-b7ce8c28", "1?noreport") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,3) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:3] NoOp("SIP/801-b7ce8c28", "TRUNK Dial failed due to CHANUNAVAIL (hangupcause: 0) - failing through to other trunks") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:5] Macro("SIP/801-b7ce8c28", "outisbusy,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:1] Playback("SIP/801-b7ce8c28", "all-circuits-busy-now,noanswer") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'all-circuits-busy-now.ulaw' (language 'en') [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:2] Playback("SIP/801-b7ce8c28", "pls-try-call-later,noanswer") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'pls-try-call-later.ulaw' (language 'en') [May 26 11:10:54] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (macro-outisbusy, s, 2) exited non-zero on 'SIP/801-b7ce8c28' in macro 'outisbusy' [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (from-internal, 901483890915, 5) exited non-zero on 'SIP/801-b7ce8c28' [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [h@from-internal:1] Macro("SIP/801-b7ce8c28", "hangupcall") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:1] ResetCDR("SIP/801-b7ce8c28", "vw") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:2] NoCDR("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:3] GotoIf("SIP/801-b7ce8c28", "1?skiprg") in new stack [May 26 11:10:54] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,6) [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing 
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:6] GotoIf("SIP/801-b7ce8c28", "1?skipblkvm") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,9)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:9] GotoIf("SIP/801-b7ce8c28", "1?theend") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,11)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:11] Hangup("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (macro-hangupcall, s, 11) exited non-zero on 'SIP/801-b7ce8c28' in macro 'hangupcall'
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (from-internal, h, 1) exited non-zero on 'SIP/801-b7ce8c28'

I'm guessing I've missed a configuration step somewhere, but I have no idea where; any help is greatly appreciated.

    Read the article

  • httpd high cpu usage slowing down server response

    - by max
    My client has an image-sharing website with about 100,000 visitors per day. It has slowed down considerably since this morning. When I checked the processes I noticed high CPU usage from httpd. Some have suggested a DDoS attack. I'm not a webmaster and I have no idea what's going on.

top:

top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59
Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers
Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4151 apache 20 0 277m 84m 3784 R 50.2 0.5 0:01.98 httpd
4115 apache 20 0 210m 16m 4480 S 18.3 0.1 0:00.60 httpd
12885 root 39 19 4296 692 308 S 13.0 0.0 11:09.53 gzip
4177 apache 20 0 214m 20m 3700 R 12.3 0.1 0:00.37 httpd
2219 mysql 20 0 4257m 198m 5668 S 11.0 1.3 42:49.70 mysqld
3691 apache 20 0 206m 14m 6416 S 1.7 0.1 0:03.38 httpd
3934 apache 20 0 211m 17m 4836 S 1.0 0.1 0:03.61 httpd
4098 apache 20 0 209m 17m 3912 S 1.0 0.1 0:04.17 httpd
4116 apache 20 0 211m 17m 4476 S 1.0 0.1 0:00.43 httpd
3867 apache 20 0 217m 23m 4672 S 0.7 0.1 1:03.87 httpd
4146 apache 20 0 209m 15m 3628 S 0.7 0.1 0:00.02 httpd
4149 apache 20 0 209m 15m 3616 S 0.7 0.1 0:00.02 httpd
12884 root 39 19 22336 2356 944 D 0.7 0.0 0:19.21 tar
4054 apache 20 0 206m 12m 4576 S 0.3 0.1 0:00.32 httpd

another top:

top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64
Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie
Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers
Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4622 apache 20 0 209m 16m 3868 S 54.2 0.1 0:03.99 httpd
4514 apache 20 0 213m 20m 3924 R 50.8 0.1 0:04.93 httpd
4627 apache 20 0 221m 27m 4560 R 18.9 0.2 0:01.20 httpd
12885 root 39 19 4296 692 308 S 18.9 0.0 11:51.79 gzip
2219 mysql 20 0 4257m 199m 5668 S 18.3 1.3 43:19.04 mysqld
4512 apache 20 0 227m 33m 4736 R 5.6 0.2 0:01.93 httpd
4520 apache 20 0 213m 19m 4640 S 1.3 0.1 0:01.48 httpd
4590 apache 20 0 212m 19m 3932 S 1.3 0.1 0:00.06 httpd
4573 apache 20 0 210m 16m 3556 R 1.0 0.1 0:00.03 httpd
4562 root 20 0 15164 1388 952 R 0.7 0.0 0:00.08 top
98 root 20 0 0 0 0 S 0.3 0.0 0:04.89 kswapd0
100 root 39 19 0 0 0 S 0.3 0.0 0:02.85 khugepaged
4579 apache 20 0 209m 16m 3900 S 0.3 0.1 0:00.83 httpd
4637 apache 20 0 209m 15m 3668 S 0.3 0.1 0:00.03 httpd

ps aux:

[root@server ~]# ps aux | grep httpd
root 2236 0.0 0.0 207524 10124 ? Ss 15:09 0:03 /usr/sbin/httpd -k start -DSSL
apache 3087 2.7 0.1 226968 28232 ? S 20:04 0:06 /usr/sbin/httpd -k start -DSSL
apache 3170 2.6 0.1 221296 22292 ? R 20:05 0:05 /usr/sbin/httpd -k start -DSSL
apache 3171 9.0 0.1 225044 26768 ? R 20:05 0:17 /usr/sbin/httpd -k start -DSSL
apache 3188 1.5 0.1 223644 24724 ? S 20:05 0:03 /usr/sbin/httpd -k start -DSSL
apache 3197 2.3 0.1 215908 17520 ? S 20:05 0:04 /usr/sbin/httpd -k start -DSSL
apache 3198 1.1 0.0 211700 13000 ? S 20:05 0:02 /usr/sbin/httpd -k start -DSSL
apache 3272 2.4 0.1 219960 21540 ? S 20:06 0:03 /usr/sbin/httpd -k start -DSSL
apache 3273 2.0 0.0 211600 12804 ? S 20:06 0:03 /usr/sbin/httpd -k start -DSSL
apache 3279 3.7 0.1 229024 29900 ? S 20:06 0:05 /usr/sbin/httpd -k start -DSSL
apache 3280 1.2 0.0 0 0 ? Z 20:06 0:01 [httpd] <defunct>
apache 3285 2.9 0.1 218532 21604 ? S 20:06 0:04 /usr/sbin/httpd -k start -DSSL
apache 3287 30.5 0.4 265084 65948 ? R 20:06 0:43 /usr/sbin/httpd -k start -DSSL
apache 3297 1.9 0.1 216068 17332 ? S 20:06 0:02 /usr/sbin/httpd -k start -DSSL
apache 3342 2.7 0.1 216716 17828 ? S 20:06 0:03 /usr/sbin/httpd -k start -DSSL
apache 3356 1.6 0.1 217244 18296 ? S 20:07 0:01 /usr/sbin/httpd -k start -DSSL
apache 3365 6.4 0.1 226044 27428 ? S 20:07 0:06 /usr/sbin/httpd -k start -DSSL
apache 3396 0.0 0.1 213844 16120 ? S 20:07 0:00 /usr/sbin/httpd -k start -DSSL
apache 3399 5.8 0.1 215664 16772 ? S 20:07 0:05 /usr/sbin/httpd -k start -DSSL
apache 3422 0.7 0.1 214860 17380 ? S 20:07 0:00 /usr/sbin/httpd -k start -DSSL
apache 3435 3.3 0.1 216220 17460 ? S 20:07 0:02 /usr/sbin/httpd -k start -DSSL
apache 3463 0.1 0.0 212732 15076 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3492 0.0 0.0 207660 7552 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3493 1.4 0.1 218092 19188 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3500 1.9 0.1 224204 26100 ? R 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3501 1.7 0.1 216916 17916 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3502 0.0 0.0 207796 7732 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3505 0.0 0.0 207660 7548 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3529 0.0 0.0 207660 7524 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3531 4.0 0.1 216180 17280 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3532 0.0 0.0 207656 7464 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3543 1.4 0.1 217088 18648 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3544 0.0 0.0 207656 7548 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3545 0.0 0.0 207656 7560 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3546 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3547 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3548 2.3 0.1 216904 17888 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3550 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3551 0.0 0.0 207660 7536 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3552 0.2 0.0 214104 15972 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3553 6.5 0.1 216740 17712 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3554 6.3 0.1 216156 17260 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3555 0.0 0.0 207796 7716 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3556 1.8 0.0 211588 12580 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3557 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3565 0.0 0.0 207660 7520 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3570 0.0 0.0 207660 7516 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
apache 3571 0.0 0.0 207660 7504 ? S 20:08 0:00 /usr/sbin/httpd -k start -DSSL
root 3577 0.0 0.0 103316 860 pts/2 S+ 20:08 0:00 grep httpd

httpd error log:

[Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4
[Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924
[Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request
[Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml
[Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status
[Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml
[Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers
[Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers
[Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers
Not a JPEG file: starts with 0x89 0x50

There are also lots of these in the messages log:

Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 20:15:48 server named[2426]: client 203.88.6.9#20093: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 20:15:48 server named[2426]: client 203.88.6.9#8672: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:07 server named[2426]: client 203.88.6.9#39352: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:08 server named[2426]: client 203.88.6.9#25382: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:08 server named[2426]: client 203.88.6.9#9064: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:09 server named[2426]: client 203.88.6.9#61932: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:09 server named[2426]: client 203.88.23.9#4423: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:09 server named[2426]: client 203.88.6.9#40229: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#46128: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#35240: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.6.10#36774: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#28361: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.6.10#14970: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20216: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.10#31794: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#23042: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.6.10#11333: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.10#41807: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20092: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:14 server named[2426]: client 203.88.6.10#43526: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:15 server named[2426]: client 203.88.23.9#17173: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:15 server named[2426]: client 203.88.23.9#62412: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:15 server named[2426]: client 203.88.23.10#63961: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:15 server named[2426]: client 203.88.23.10#64345: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:15 server named[2426]: client 203.88.23.10#31030: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17098: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17197: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:16 server named[2426]: client 203.88.6.9#18114: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:16 server named[2426]: client 203.88.6.9#59138: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:45:17 server named[2426]: client 203.88.6.9#28715: query (cache) 'www.xxxmaza.com/A/IN' denied
Jul 1 15:48:33 server named[2426]: client 203.88.23.9#26355: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:34 server named[2426]: client 203.88.23.9#34473: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:34 server named[2426]: client 203.88.23.9#62658: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:34 server named[2426]: client 203.88.23.9#51631: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:35 server named[2426]: client 203.88.23.9#54701: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:36 server named[2426]: client 203.88.6.10#63694: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:36 server named[2426]: client 203.88.6.10#18203: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:37 server named[2426]: client 203.88.6.10#9029: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:38 server named[2426]: client 203.88.6.10#58981: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:48:38 server named[2426]: client 203.88.6.10#29321: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:53 server named[2426]: client 119.160.120.42#30696: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:54 server named[2426]: client 119.160.127.42#14038: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:55 server named[2426]: client 119.160.120.42#33586: query (cache) 'xxxmaza.com/A/IN' denied
Jul 1 15:49:56 server named[2426]: client 119.160.127.42#55114: query (cache) 'xxxmaza.com/A/IN' denied

    Read the article

  • Cannot login to Ubuntu 14.04 returns to the login screen

    - by Safder
    I updated to 14.04 from 12.04.3. I was using Cinnamon at that time. When I upgraded to 14.04 I got a serious problem: I am unable to log in, as whenever I hit Enter after entering my password it takes me straight back to the login screen. I can't even log in as Guest. So, here is my .xsession-errors file; possibly someone could help. Thanks in advance.

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd main process ended, respawning
init: at-spi2-registryd respawning too fast, stopped
init: gnome-settings-daemon main process (14717) killed by TRAP signal
init: gnome-settings-daemon main process ended, respawning
init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.1000.crash) main process (14607) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.1000.crash) main process (14623) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.1000.crash) main process (14663) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.1000.crash) main process (14620) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.1000.crash) main process (14612) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.1000.crash) main process (14616) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.1000.crash) main process (14629) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.1000.crash) main process (14625) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_update-notifier_system-crash-notification.1000.crash) main process (14662) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.127.crash) main process (14613) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.1000.crash) main process (14665) terminated with status 133
init: update-notifier-crash (/var/crash/linux-image-3.13.0-24-generic.0.crash) main process (14606) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_bin_Xorg.0.crash) main process (14609) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.127.crash) main process (14624) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.127.crash) main process (14622) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.127.crash) main process (14618) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.127.crash) main process (14628) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.127.crash) main process (14664) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.127.crash) main process (14608) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.127.crash) main process (14634) terminated with status 133
init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.127.crash) main process (14667) terminated with status 133
init: gnome-settings-daemon main process (14843) killed by TRAP signal
init: gnome-settings-daemon main process ended, respawning
init: gnome-settings-daemon main process (14850) killed by TRAP signal
init: gnome-settings-daemon main process ended, respawning
init: gnome-session (GNOME) main process (14727) killed by TRAP signal
init: gnome-settings-daemon main process (14859) killed by TRAP signal
init: logrotate main process (14570) killed by TERM signal
init: upstart-dbus-session-bridge main process (14683) terminated with status 1
init: Disconnected from notified D-Bus bus

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, following the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object:

TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
server.EnsureAuthenticated();
IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer));

General
First we create an IBuildDefinition object for the team project and set a name and description for it:

var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
buildDefinition.Name = "TestBuild";
buildDefinition.Description = "description here...";

Trigger
Next up, we set the trigger type. For this one, we set it to individual, which corresponds to the Continuous Integration - Build each check-in trigger option:

buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

Workspace
For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build.

buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

Build Defaults
In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

buildDefinition.BuildController = buildServer.GetBuildController(buildController);
buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

Process
So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

//Get default template
var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First();
buildDefinition.Process = defaultTemplate;

There are several process parameters that can be set for the default build process template. Only one of these is required: the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
The following code shows how to set the BuildSettings information for a build definition:

//Set process parameters
var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

//Set BuildSettings properties
BuildSettings settings = new BuildSettings();
settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
settings.PlatformConfigurations = new PlatformConfigurationList();
settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
process.Add("BuildSettings", settings);
buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

The other build process parameters of a build definition can be set using the same approach (see the sketch at the end of this post).

Retention Policy
This one is easy, we just clear the default settings and set our own:

buildDefinition.RetentionPolicyList.Clear();
buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

Save It!
And we're done, let's save the build definition:

buildDefinition.Save();

That's it!
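As a footnote, here is a minimal sketch of setting another process parameter using the same deserialize/serialize round-trip. This assumes the definition uses the DefaultTemplate and that a String parameter named BuildNumberFormat is exposed by it; adapt the parameter name and value to your own template:

//Deserialize the existing parameters, add or overwrite one, and serialize back
//(BuildNumberFormat and its value are illustrative; check your template's parameters)
var parameters = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);
parameters["BuildNumberFormat"] = "$(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.r)";
buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(parameters);
buildDefinition.Save();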

    Read the article

  • How can I get sikuli-ide to work?

    - by ayckoster
    I installed sikuli-ide with

sudo apt-get install sikuli-ide

Everything was fine until I tried to start it from the terminal. I typed

sikuli-ide

but the only response I got was

[info] locale: en_US

The application was not started; furthermore, there is no desktop file and sikuli-ide does not show up in Dash Home. I guess there is something wrong with the package. I run Ubuntu 12.10 64-bit. I then installed it (Sikuli-X-1.0rc3 (r905)-linux-x86_64.zip) from their page; now the IDE starts, but when I try to execute a simple script I get the following error:

[error] Stopped
[error] An error occurs at line 1
[error] Error message: Traceback (most recent call last):
  File "", line 1, in
  File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py", line 3, in
  File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py", line 22, in
java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1935)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1860)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821)
at java.lang.Runtime.load0(Runtime.java:792)
at java.lang.System.load(System.java:1059)
at com.wapmx.nativeutils.jniloader.NativeLoader.loadLibrary(NativeLoader.java:44)
at org.sikuli.script.Finder.(Finder.java:33)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.python.core.Py.loadAndInitClass(Py.java:895)
at org.python.core.Py.findClassInternal(Py.java:830)
at org.python.core.Py.findClassEx(Py.java:881)
at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:133)
at org.python.core.packagecache.PackageManager.findClass(PackageManager.java:28)
at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:122)
at org.python.core.PyJavaPackage.__findattr_ex__(PyJavaPackage.java:137)
at org.python.core.PyObject.__findattr__(PyObject.java:863)
at org.python.core.imp.import_name(imp.java:849)
at org.python.core.imp.importName(imp.java:884)
at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
at org.python.core.PyObject.__call__(PyObject.java:357)
at org.python.core.__builtin__.__import__(__builtin__.java:1173)
at org.python.core.imp.importFromAs(imp.java:978)
at org.python.core.imp.importFrom(imp.java:954)
at sikuli.Sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py:211)
at sikuli.Sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py)
at org.python.core.PyTableCode.call(PyTableCode.java:165)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.imp.createFromCode(imp.java:386)
at org.python.core.util.importer.importer_load_module(importer.java:109)
at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161)
at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source)
at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47)
at org.python.core.imp.loadFromLoader(imp.java:513)
at org.python.core.imp.find_module(imp.java:467)
at org.python.core.PyModule.impAttr(PyModule.java:100)
at org.python.core.imp.import_next(imp.java:715)
at org.python.core.imp.import_name(imp.java:824)
at org.python.core.imp.importName(imp.java:884)
at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
at org.python.core.PyObject.__call__(PyObject.java:357)
at org.python.core.__builtin__.__import__(__builtin__.java:1173)
at org.python.core.imp.importAll(imp.java:998)
at sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py:3)
at sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py)
at org.python.core.PyTableCode.call(PyTableCode.java:165)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.imp.createFromCode(imp.java:386)
at org.python.core.util.importer.importer_load_module(importer.java:109)
at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161)
at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source)
at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47)
at org.python.core.imp.loadFromLoader(imp.java:513)
at org.python.core.imp.find_module(imp.java:467)
at org.python.core.imp.import_next(imp.java:713)
at org.python.core.imp.import_name(imp.java:824)
at org.python.core.imp.importName(imp.java:884)
at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
at org.python.core.PyObject.__call__(PyObject.java:357)
at org.python.core.__builtin__.__import__(__builtin__.java:1173)
at org.python.core.imp.importAll(imp.java:998)
at org.python.pycode._pyx2.f$0(:1)
at org.python.pycode._pyx2.call_function()
at org.python.core.PyTableCode.call(PyTableCode.java:165)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.Py.runCode(Py.java:1261)
at org.python.core.Py.exec(Py.java:1305)
at org.python.util.PythonInterpreter.exec(PythonInterpreter.java:206)
at org.sikuli.script.ScriptRunner.runPython(ScriptRunner.java:61)
at org.sikuli.ide.SikuliIDE$ButtonRun.runPython(SikuliIDE.java:1572)
at org.sikuli.ide.SikuliIDE$ButtonRun$1.run(SikuliIDE.java:1677)
java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory

If I try to use the click() method from the GUI it fails. So I created my own click method, and it looks like this:

This cannot be executed and produces the error above.

    Read the article

  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)
Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

In this Document
  Purpose
  Last Review Date
  Instructions for the Reader
  Troubleshooting Details
    1. Scope and Application
    2. Definitions and Classifications
    3. How to Use This Guide
    4. Basic AQ Propagation Troubleshooting
    5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages
    6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment
    7. Performance Issues
  References

Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2. Information in this document applies to any platform.

Purpose
This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues.

Last Review Date
December 20, 2010

Instructions for the Reader
A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

Troubleshooting Details

1. Scope and Application
This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution. It can also be used as a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into the following parts:

Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide)
Section 3: How to Use this Guide (to be used as a starting point for determining the scope of the problem and what sections to consult)
Section 4: Basic AQ propagation troubleshooting (applies to both AQ propagation of user enqueued and dequeued messages as well as Oracle Streams propagations)
Section 5: Additional troubleshooting steps for AQ propagation of user enqueued and dequeued messages
Section 6: Additional troubleshooting steps for Oracle Streams propagation
Section 7: Performance issues

2. Definitions and Classifications
Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered.

2.1. What Type of Propagation is Being Used?

2.1.1. Buffered Messaging
For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent.
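As a quick illustration, the check might look like the following sketch ('<queue_name>' is a placeholder, and only a few of the view's columns are selected):

select QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS
from GV$BUFFERED_QUEUES
where QUEUE_NAME = '<queue_name>';
-- No rows returned means the queue has no buffered component, i.e. it is persistent.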
2.1.2. Propagation mode - queue-to-dblink vs queue-to-queue
As of 10.2, an AQ propagation can also be defined as queue-to-dblink, or queue-to-queue:

queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. A single propagation schedule is used to propagate messages to all subscribing queues. Hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database.

queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control over the propagation schedule for message delivery. This new propagation mode also supports transparent failover when propagating to a destination Oracle RAC system. With queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different.

The default is queue-to-dblink. To verify if queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked.
See the following note for a method to switch between the two modes:
Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink

2.1.3. Combined Capture and Apply (CCA) for Streams
In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10
SELECT CAPTURE_NAME, DECODE(OPTIMIZATION,0, 'No','Yes') OPTIMIZATION
FROM V$STREAMS_CAPTURE;

Also, see the following note:
Document 463820.1 Streams Combined Capture and Apply in 11g

2.2. Queue Table Compatibility
There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility:
8.0 - earliest version, deprecated in 10.2 onwards
8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information
10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0

2.3. Single vs Multiple Consumer Queue Tables
If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible.

3. How to Use This Guide

3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow?
Run the following query on the source database for the propagation (assuming that it is running):

select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>';

If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7.

3.2. Propagation Between Persistent User-Created Queues
See Sections 4 and 5 (and optionally Section 7 if performance is an issue).
3.3. Propagation Between Buffered User-Created Queues
See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

3.4. Propagation between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)
See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

3.5. Propagation between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)
Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

3.6. Messaging Gateway Propagations
This note does not apply to Messaging Gateway propagations.

4. Basic AQ Propagation Troubleshooting

4.1. Double-check Your Code
Make sure that you are consistent in your usage of the database link(s) names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

4.2. Verify that Job Queue Processes are Running

4.2.1. Versions 10.2 and Lower - DBA_JOBS Package
For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package. The propagation is performed by job queue process background processes. Therefore we need to verify that there are sufficient processes available for the propagation process. We should have at least 4 job queue processes running, and preferably more depending on the number of other jobs running in the database. It should be noted that for AQ-specific work, AQ will only ever use half of the job queue processes available.
An issue caused by an inadequate job queue processes parameter setting is described in the following note:
Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

4.2.1.1. Job Queue Processes in Initialization Parameter File
The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

4.2.1.2. Job Queue Processes in Memory
The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.1.3. OS PIDs Corresponding to Job Queue Processes
Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

and these SPIDs can be used to check at the operating system level that they exist.
In 8i a job queue process will have a name similar to: ora_snp1_<instance_name>. In 9i onwards you will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.2.2. Version 11.1 and Above - Oracle Scheduler
In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created.
If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:
Document 1083608.1 11g Streams and Oracle Scheduler

4.2.2.1. Job Queue Processes in Initialization Parameter File
The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

To set the JOB_QUEUE_PROCESSES parameter to its default value, run:

connect / as sysdba
alter system reset JOB_QUEUE_PROCESSES;

and then bounce the instance.

4.2.2.2. Job Queue Processes in Memory
The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.2.3. OS PIDs Corresponding to Job Queue Processes
Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';

and these SPIDs can be used to check at the operating system level that they exist. You will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.3. Check the Alert Log and Any Associated Trace Files
The first place to check for propagation failures is the alert logs at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails it will always write an error stack to the alert log. This error stack will also be written in a job queue process trace file, which will be written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and in the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing. This means that the problem could be with the set up of the schedule.
Thu Feb 14 10:40:05 2002
Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
ORA-04052: error occurred when looking up Remote object [email protected]: error occurred at recursive SQL level 4
ORA-02068: following severe error from SHANE816
ORA-03114: not connected to ORACLE
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
ORA-06512: at "SYS.DBMS_AQADM", line 548
ORA-06512: at line 1

In this example the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem.

Other potential errors that may be written to the alert log can be found in the following notes:
Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
Document 846297.1 AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [Kghsstream_copyn] (10.2, 11.1)
Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
Document 118884.1 How to unschedule a propagation schedule stuck in pending state
Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

4.3.1. Errors Related to Incorrect Network Configuration
The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:
- ORA-12154: TNS:could not resolve service name
- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
- ORA-12514: TNS:listener could not resolve SERVICE_NAME
- ORA-12541: TNS:no listener

4.4. Check the Database Links Exist and are Functioning Correctly
For schedules to remote databases, confirm the database link exists via:
SQL> col DBLINK for a45
SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
  2  from DBA_QUEUE_SCHEDULES
  3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

QNAME                          DBLINK
------------------------------ ---------------------------------------------
MY_QUEUE                       ORCL102B.WORLD

Connect as the owner of the link and select across it to verify it works and connects to the database we expect, i.e.

select * from ALL_QUEUES@ORCL102B.WORLD;

You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

4.5. Has Propagation Been Correctly Scheduled?
Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly.

SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      DESTINATION        S PROCESS
------- ---------- ------------------ - -----------
AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

AQ$_LOCAL in the destination column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation was to a remote (different) database, a database link will be in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty) the schedule is not currently executing. You may need to execute this statement a number of times to verify if a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

SCHEMA QNAME     LAST_RUN_DATE        NEXT_RUN_DATE
------ --------- -------------------- --------------------
AQADM  MULTIPLEQ 13-FEB-2002 13:18:57 13-FEB-2002 13:20:30

In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call. In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
The columns are more or less the same as DBA_QUEUE_SCHEDULES, so you just need to recognize the job and verify that it exists.

SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

JOB  WHAT
---- -----------------
720  next_date := sys.dbms_aqadm.aq$_propaq(job);

For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead:

SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';

JOB_NAME
------------------------------
AQ_JOB$_41

If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely.

4.6. Is the Schedule Executing but Failing to Complete?
Run the following query:

SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;

FAILURES LAST_ERROR_MSG
-------- -----------------------
1        ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueing
         ORA-02063: preceding line from SHANE816

The failures column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times, after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back-off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:
Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?
If a schedule is active and no errors are being reported then the source queue may not have any messages to be propagated.
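For illustration, re-enabling a schedule after fixing the underlying error might look like the following sketch (the queue, destination, and propagation names are placeholders reusing the examples above):

-- Plain AQ propagation schedule:
begin
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name  => 'AQADM.MULTIPLEQ',
    destination => 'SHANE816.WORLD');
end;
/

-- 10.2 and above Oracle Streams propagation:
begin
  DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'MY_PROPAGATION');
end;
/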
4.7. Do the Propagation Notification Queue Table and Queue Exist?
Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped, and the corresponding queue table should never be dropped.

10g and below
The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ$_PROP_TABLE_1

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ$_PROP_NOTIFY_1              YES     YES
AQ$_AQ$_PROP_TABLE_1_E         NO      NO

If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

11g and above
The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ_PROP_TABLE

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ_PROP_NOTIFY                 YES     YES
AQ$_AQ_PROP_TABLE_E            NO      NO

If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.
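As a sketch, starting the 10g notification queue for both enqueue and dequeue might look like this (on 11g the queue name would be SYS.AQ_PROP_NOTIFY instead):

begin
  DBMS_AQADM.START_QUEUE(
    queue_name => 'SYS.AQ$_PROP_NOTIFY_1',
    enqueue    => TRUE,
    dequeue    => TRUE);
end;
/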
4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?
Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

DESTINATION
-----------------------------------------------------------------------------
"AQADM"."INQ"@M2V102.ES

SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from [email protected];

OWNER    NAME   ENQUEUE     DEQUEUE
-------- ------ ----------- -----------
AQADM    INQ    YES         YES

4.9. Do the Target and Source Database Charactersets Differ?
If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether it is only a particular combination of charactersets which causes a problem.

4.10. Check the Queue Table Type Agreement
Propagation is not possible between queue tables whose types differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on. If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'. For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work, unless the queues are Oracle Streams ANYDATA queues.
See the following notes for issues caused by lack of type agreement:
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT

4.11. Enable Propagation Tracing

4.11.1. System Level
This is set in the init.ora/spfile as follows:

event="24040 trace name context forever, level 10"

and restart the instance. This event could not be set dynamically with an alter system command until version 10.2:

SQL> alter system set events '24040 trace name context forever, level 10';

To unset the event:

SQL> alter system set events '24040 trace name context off';

Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created.

4.11.2. Attaching to a Specific Process
We can also attach to an existing job queue process that is running a propagation schedule and trace it individually using the oradebug utility, as follows:

10.2 and below

connect / as sysdba
select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

-- For the process id (SPID) attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename to the file being written to
oradebug tracefile_name

11g

connect / as sysdba
col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';

-- For the process id (SPID) attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename to the file being written to
oradebug tracefile_name

4.11.3. Further Tracing
The previous tracing steps only trace the job queue process executing the propagation on the source.
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database.

The following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information. In order to identify the propagation receiver process, you need to execute the query as a user with privileges to access the v$ views in both the local and remote databases, so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link.

The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.

10.2 and below - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

10.2 and below - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=sr.PROCESS;

11g - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

11g - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=sr.PROCESS;

5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages

5.1. Check the Privileges of All Users Involved

Ensure that the owner of the database link has the necessary privileges on the AQ packages.

SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;

TABLE_NAME                     PRIVILEGE
------------------------------ ----------------------------------------
DBMS_LOCK                      EXECUTE
DBMS_AQ                        EXECUTE
DBMS_AQADM                     EXECUTE
DBMS_AQ_BQVIEW                 EXECUTE
QT52814_BUFFER                 SELECT

Note that when a queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table.

SQL> select * from USER_ROLE_PRIVS;

USERNAME                       GRANTED_ROLE                   ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
AQ_USER1                       AQ_ADMINISTRATOR_ROLE          NO  YES NO
AQ_USER1                       CONNECT                        NO  YES NO
AQ_USER1                       RESOURCE                       NO  YES NO

It is good practice to configure a central AQ administrative user. All admin and processing jobs are created, executed and administered as this user. This configuration is not mandatory, however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved.

Privileges for an AQ administrative user:
- Execute on DBMS_AQADM
- Execute on DBMS_AQ
- Granted the AQ_ADMINISTRATOR_ROLE

Privileges for an AQ user:
- Execute on DBMS_AQ
- Execute on the message payload
- Enqueue privileges on the remote queue
- Dequeue privileges on the originating queue

Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to log in to the destination through the database link has been granted privileges to use AQ.
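Where privileges are found to be missing, they can be granted along these lines; a minimal sketch with hypothetical user names (DBMS_AQADM.GRANT_QUEUE_PRIVILEGE handles the queue-level grants):

-- Package grants, run as a DBA:
GRANT EXECUTE ON DBMS_AQ TO aq_user1;
GRANT EXECUTE ON DBMS_AQADM TO aq_admin1;

-- Queue-level grant: allow AQ_USER1 to enqueue into the target queue
-- (run on the database that owns the queue).
BEGIN
  DBMS_AQADM.GRANT_QUEUE_PRIVILEGE(
    privilege    => 'ENQUEUE',
    queue_name   => 'AQADM.INQ',
    grantee      => 'AQ_USER1',
    grant_option => FALSE);
END;
/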
5.2. Verify Queue Payload Types

AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify that the source and destination payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking are stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier (OID) of the source queue and the address (database link) of the destination queue, i.e. [schema.]queue_name[@destination].

Prior to Oracle 9i, the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards, a transformation can be used so that payloads can be converted from one type to another.

The following procedural call, made on the source database, can verify whether we can propagate between the source and the destination queue tables:

connect aq_user1/<password>@<source_database>
set serverout on

DECLARE
  rc_value number;
BEGIN
  DBMS_AQADM.VERIFY_QUEUE_TYPES(
    src_queue_name  => 'AQ_USER1.Q_1',
    dest_queue_name => 'AQ_USER2.Q_2',
    destination     => 'dbl_aq_user2.es',
    rc              => rc_value);
  dbms_output.put_line('rc_value code is '||rc_value);
END;
/

If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required.

With regard to comparison of the types, the following SQL can be used to extract the DDL for a specific type, with '%' changed appropriately on the source and target. The output can then be compared between the two sites.

SET LONG 20000
set pagesize 50
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
SELECT DBMS_METADATA.GET_DDL('TYPE', t.type_name) from user_types t WHERE t.type_name like '%';
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');
5.3. Check Message State and Destination

The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the metadata associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table):

SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

QUEUE_TABLE
--------------------
MULTIPLEQTABLE

For a small number of messages in a multiple-consumer queue table, the following query can be run:

SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

MSG_STATE  CONSUMER_NAME  ADDRESS
---------- -------------- -------------------
READY      AQUSER2        AQADM.INQ@M2V102.ES
READY      AQUSER1
READY      AQUSER3        AQADM.INQ

In this example we see two messages ready to be propagated to other queues and one that is not. If the ADDRESS column is blank, the message is not scheduled for propagation and can only be dequeued from the queue on which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation.

If the ADDRESS column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES), which shows that the message should be propagated to a queue at a remote database. The third row does not include a database link, so the message will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.

A more realistic query in an environment where the queue table contains thousands of messages is:

8.0.3-compatible multiple-consumer queue tables and all compatibility single-consumer queue tables:

select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>
group by MSG_STATE, QUEUE;

8.1.3 and 10.0-compatible queue tables:

select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>
group by MSG_STATE, QUEUE, CONSUMER_NAME;

For multiple-consumer queue tables, if you do not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipient list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure, and also those enqueued using a recipient list.

SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

QUEUE      NAME      ADDRESS
---------- --------- -------------------
MULTIPLEQ  AQUSER2   AQADM.INQ@M2V102.ES
MULTIPLEQ  AQUSER1

In this example we have two subscribers registered with the queue: a local subscriber AQUSER1, and a remote subscriber AQUSER2 on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

For 8.1-style and above multiple-consumer queue tables, you can also check the following information at the target:

select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

For 8.0-style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
from AQ$<queue_table_name> t, table(t.history) h
where t.Q_NAME = '<QUEUE_NAME>';

A non-NULL TRANSACTION_ID indicates that the message was successfully propagated. Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination.
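For reference, recipients can be declared explicitly at enqueue time through the message properties. A minimal sketch, assuming a hypothetical one-attribute payload type and the queue and consumers from the example above:

DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
  payload   AQADM.MESSAGE_TYP := AQADM.MESSAGE_TYP('hello');  -- hypothetical payload type
BEGIN
  -- One local recipient and one remote recipient on AQADM.INQ@M2V102.ES.
  msg_props.recipient_list(1) := SYS.AQ$_AGENT('AQUSER1', NULL, NULL);
  msg_props.recipient_list(2) := SYS.AQ$_AGENT('AQUSER2', 'AQADM.INQ@M2V102.ES', NULL);

  DBMS_AQ.ENQUEUE(
    queue_name         => 'AQADM.MULTIPLEQ',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => payload,
    msgid              => msg_id);
  COMMIT;
END;
/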
6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment

6.1. Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this:

SELECT p.PROPAGATION_NAME,
       DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION)
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME
  AND MESSAGE_DELIVERY_MODE = 'PERSISTENT'
ORDER BY PROPAGATION_NAME;

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt to disable the propagation and then re-enable it can be made. In the examples below, for the propagation named STRMADMIN_PROPAGATE, where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:

10.2 and above

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter (the second parameter of stop_propagation) as TRUE:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation, as well as any old error messages, are cleared when the force parameter is set to TRUE. Therefore, if the propagation schedule is stopped with FORCE set to TRUE and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.

9.2 or 10.1

exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation:

exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

Typically, if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed.

6.2. Check Propagation Rule Sets and Transformations

Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated.
Finally, inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.

The following query shows which rule sets are assigned to a propagation:

select PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"
from DBA_PROPAGATION;

The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set:

set long 4000
select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and RULE_SET_NAME in (select RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

set long 4000
select c.PROPAGATION_NAME,
       rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r, DBA_PROPAGATION c
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and rsr.RULE_SET_OWNER = c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME = c.NEGATIVE_RULE_SET_NAME
and rsr.RULE_SET_NAME in (select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

6.3. Determining the Total Number of Messages and Bytes Propagated

As in Section 3.1, determining whether messages are flowing can show whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2) but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations, two views can assist with this by showing the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.

Source:

select QUEUE_SCHEMA, QUEUE_NAME, DBLINK, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTES
from V$PROPAGATION_SENDER;

Target:

select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS
from V$PROPAGATION_RECEIVER;

6.4. Check Buffered Subscribers

The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from:

select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS;

The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database).

6.5. Common Streams Propagation Errors

6.5.1. ORA-02082: A loopback database link must have a connection qualifier.

This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.
6.5.2. ORA-25307: Enqueue rate too high. Enable flow control

DBA_QUEUE_SCHEDULES will display this informational message for a propagation when automatic flow control (a 10g feature of Streams) has been invoked. Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control'. This is an informative message indicating that flow control has been automatically enabled to reduce the rate at which messages are being enqueued into the target queue.

This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will resume automatically when the target site is able to accept more messages.

The following document contains more information:
Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control

See the following document for one potential cause of this situation:
Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State

6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages

This error typically occurs when the target database is RAC, and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages.

6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For causes and fixes, refer to:
Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

6.5.5. Stopping or Dropping a Streams Propagation Hangs

See the following note:
Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang

6.6. Streams Propagation-Related Notes for Common Issues

Document 437838.1 Streams Specific Patches
Document 749181.1 How to Recover Streams After Dropping Propagation
Document 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
Document 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
Document 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]
Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
Document 333068.1 ORA-23603: Streams Enqueue Aborted Due To Low SGA
Document 363496.1 Ora-25315 Propagating on RAC Streams
Document 368237.1 Unable to Unschedule Propagation. Streams Queue is Invalid
Document 436332.1 dbms_propagation_adm.stop_propagation hangs
Document 727389.1 Propagation Fails With ORA-12528
Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
Document 1059029.1 Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
Document 556309.1 Changing Propagation/ queue_to_queue: false -> true does not work; no LCRs propagated
Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
Document 311021.1 Streams Propagation Process: Ora 12154 After Reboot with Transparent Application Failover TAF configured
Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747

7. Performance Issues

A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target rather than a slow propagation. Propagation can be inferred to be slow if the message statistics are changing, the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes and white papers for suggestions to increase performance:

Document 335516.1 Master Note for Streams Performance Recommendations
Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Document 780733.1 Streams Propagation Tuning with Network Parameters
White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, see APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORK

For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
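A quick way to check the capture state referenced above, run at the source site:

select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;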
References

NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ PROPAGATION ABORTED WITH ORA-600[OCIKSIN: INVALID STATUS] ON SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE AFTER UPGRADE
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process: Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation/ queue_to_queue: false -> true does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [Kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]

    Read the article

  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
Just a day before, I got a very interesting email. Here it is (modified a bit to make it relevant to this blog post).

"Pinal, We are facing a very strange issue. One of our queries related to backup files and backup sets has suddenly stopped working in SSMS. It works fine in the application and in the stored procedure, but when we run it in SSMS it gives the following error.

Msg 208, Level 16, State 1, Line 1
Invalid object name 'dbo.backupfile'.

Here are the queries we are trying to execute.

SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
FROM dbo.backupset;

SELECT logical_name, backup_size, file_type
FROM dbo.backupfile;

These queries give us details about the backup set and backup files from when the backup was taken."

When I receive this kind of email, usually I have no answer right away. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I first asked him to check two things:

If he was connected to the correct server? His answer was yes.
If he had enough permissions? His answer was that he was logged in as an admin.

This meant there was something more to it, and I asked him to send me a screenshot of his SSMS. He promptly sent it, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why the queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image.

The answer is very simple: the context of the database is the master database. To execute the above two queries, the context of the database has to be msdb. The tables backupset and backupfile belong to the msdb database only.

Here are two workarounds or solutions to the above problem:

1) Change context to MSDB

When run as follows, the two queries will not error out and will give the accurate desired result.

USE msdb
GO
SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
FROM dbo.backupset;

SELECT logical_name, backup_size, file_type
FROM dbo.backupfile;

2) Prefix the query with msdb

In cases where the above script is used in a stored procedure or as part of a bigger query, it is not possible to change the context of the whole query to a specific database. Use three-part naming and prefix the tables with msdb.

SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
FROM msdb.dbo.backupset;

SELECT logical_name, backup_size, file_type
FROM msdb.dbo.backupfile;

A very simple solution, but it sometimes keeps people wondering for an answer.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • default XNA 4.0 GameTime doesn't work well for 2D physics

    - by EusKoder
    I am developing a game using Visual Studio 2010 and XNA 4.0, a 2D platformer based on the MSDN Platformer Starter Kit. After advancing to some extent with the project, I tested it on computers with different hardware (CPU, graphics, etc.) and found that the speed at which game objects move is quite different from machine to machine. I implemented the starter kit's physics, which are based on elapsed time:

/// <summary>
/// Updates the player's velocity and position based on input, gravity, etc.
/// </summary>
public void ApplyPhysics(GameTime gameTime)
{
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

    Vector2 previousPosition = Position;

    // Base velocity is a combination of horizontal movement control and
    // acceleration downward due to gravity.
    velocity.X += movement * MoveAcceleration * elapsed;
    velocity.Y = MathHelper.Clamp(velocity.Y + GravityAcceleration * elapsed, -MaxFallSpeed, MaxFallSpeed);

    velocity.Y = DoJump(velocity.Y, gameTime);

    // Apply pseudo-drag horizontally.
    if (IsOnGround)
        velocity.X *= GroundDragFactor;
    else
        velocity.X *= GroundDragFactor; //velocity.X *= AirDragFactor;

    // Prevent the player from running faster than his top speed.
    velocity.X = MathHelper.Clamp(velocity.X, -MaxMoveSpeed, MaxMoveSpeed);

    // Apply velocity.
    Position += velocity * elapsed;
    Position = new Vector2((float)Math.Round(Position.X), (float)Math.Round(Position.Y));

    // If the player is now colliding with the level, separate them.
    HandleCollisions(gameTime);

    // If the collision stopped us from moving, reset the velocity to zero.
    if (Position.X == previousPosition.X)
        velocity.X = 0;

    if (Position.Y == previousPosition.Y)
    {
        velocity.Y = 0;
        jumpTime = 0.0f;
    }
}

I tested, for example, on one PC (pc1) with a 2.13GHz Intel Core 2 6400 / ATI Radeon HD 4670, and on another (pc2) with a 3.00GHz Intel Pentium D / Intel 82945G Express Chipset Family. The displacement difference, moving along the x axis at a supposedly constant velocity (position += velocity * gameTime.ElapsedGameTime.TotalSeconds), is 3 seconds out of a total of 20. For example, moving the player sprite 6000 pixels along the x axis takes 20 seconds on pc1, while pc2 covers the same distance in 17.

On a third PC (i7 2700k / Gigabyte GTX 560 Ti) the results are even worse: some time after starting, the game gets about 3 times slower. A debug window in the game shows the number of pixels moved each frame (counting updates per second with a counter variable, and using gameTime to measure one second, shows 63 fps), and the number stays constant, as if Update calls were being lost. On this PC, if I switch the game to fullscreen while it is running, the slow-down appears immediately; restoring window mode sometimes returns it to normal and sometimes does not.

Eventually I started a new project to test whether movement is constant on different PCs, loading only one sprite and printing its position on screen. The same thing occurs. I even tried moving a constant number of pixels explicitly (position += 5) and at different speeds, and still got different amounts of pixels moved in the same time on different PCs.

I have the game loop set to the defaults (fixedTimeStep = true; SynchronizeWithVerticalRetrace = true;). I have also tried turning fixed time step off and creating my own timestep as discussed in various posts (e.g. http://gafferongames.com/game-physics/fix-your-timestep/), but I can't achieve the desired result: moving the same number of pixels in X seconds on different computers running Windows. My attempt looked roughly like the sketch below.
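Here is roughly what I tried (the field names and the float-taking ApplyPhysics overload are mine, not the starter kit's):

// A minimal sketch of the accumulator pattern from the
// "Fix Your Timestep" article, adapted to XNA.
private float accumulator = 0f;
private const float FixedStep = 1f / 60f; // simulate at 60 Hz regardless of frame rate

protected override void Update(GameTime gameTime)
{
    // Accumulate the real elapsed time since the last frame.
    accumulator += (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Step the simulation in fixed increments so the player advances
    // the same distance per simulated second on every machine.
    while (accumulator >= FixedStep)
    {
        player.ApplyPhysics(FixedStep);
        accumulator -= FixedStep;
    }

    base.Update(gameTime);
}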
    All the PCs used for testing run Windows 7 Enterprise; pc1 is x86, the others are x64. The weirdest thing is that I can't find anyone describing the same problem, despite long nights of searching. Thanks for your help.

    Read the article

  • EVENT RECAP: Oracle Day & Product Fair - Ft. Lauderdale

    - by cwarticki
    Are you attending any of the Oracle Days and other events? They are fantastic! Keep track of the Oracle events by following @OracleEvents on Twitter. Also, stay in the know by subscribing to one of the several Oracle newsletters; those will keep you posted on upcoming in-person and webcast events. From the Oracle Events website, simply navigate to your geography and refine your options to locate what interests you. You can also perform keyword searches.

Today, I had the opportunity to participate in the Oracle Day & Product Fair in Ft. Lauderdale, Florida. Thanks to those who stopped by to ask your support questions and watched me demo My Oracle Support features and best practices. (Bob Stanoch, Sales Consulting Manager, giving the 2nd keynote address on Exadata below)

It was a pleasant surprise to run into my former Oracle colleague Josh Tieso. Josh (pictured right) is Sr. Oracle DBA at United Healthcare. He worked for Oracle Support years ago, but has been at UHC for the last 6 years. Josh is a member of the ERP DBA team, working with Exalogic, Oracle ERP R12, & RAC.

Along the exhibit/vendor row, I met with Marco Gangano, National Sales Manager at Mythics. It was great getting to meet Marco, and I look forward to working with his company with regards to Support Best Practices. In addition, Lissette Paez (left) was representing TAM Training. TAM Training is an award-winning Oracle University training partner. They cover training across the scope of Oracle products, with 7 facilities in the U.S. Lissette and I have done a couple of these Oracle Days before. It's great to see familiar faces.

A little while ago, I was down in this area to work with Citrix in an onsite session on Support Best Practices. Pablo Leon and Alberto Gonzalez (right) came to chat with me over at the Support booth. They wanted to know when I was giving my session. Unfortunately, not this time guys; I'm on booth duty only. Keep in touch.

Many thanks to our sponsors: BIAS, Cloudera, Intel and TekStream Solutions. Come attend one of the many Oracle Days & other events planned for you.

-Chris Warticki
Global Customer Management

    Read the article

  • 11 Types of Developers

    - by Lee Brandt
    Jack Dawson

Jack Dawson is the homeless drifter in Titanic. At one point in the movie he says, "I figure life's a gift, and I don't intend on wasting it." He is happy to wander wherever life takes him. He works himself from place to place, making just enough money to make it to his next adventure. The "Jack Dawson" developer clings to any new technology as the 'next big thing', and will find ways to shoe-horn it in to places where it is not a fit. He is very appealing to the other developers because they want to try the newest techniques and tools too. He will only stay until the new technology either bores him or becomes problematic. Jack will also be hard to find once the technology has been implemented, because he will be on to the next shiny thing.

However, having a Jack Dawson on your team can be beneficial. Jack can be a great ally when attempting to convince a stodgy, corporate entity to upgrade. Jack usually has an encyclopedic recall of all the new features of the technology upgrade and is more than happy to interject them in any conversation.

Tom Smykowski

Tom is the neurotic employee in Office Space, and is deathly afraid of being fired. He will do only what is necessary to keep the status quo. He believes that as long as nothing changes, his job is safe. He will scoff at anything new and be the naysayer during any change initiative.

Tom can be useful in offsetting Jack Dawson. Jack will constantly be pushing for change and Tom will constantly be fighting it. When you see that Jack is getting kind of bored with a new technology and Tom has finally stopped wetting himself at the mere mention of it, that is probably the sweet spot to begin implementing that new technology (provided it is the right tool for the job).

Ray Kinsella

Ray is the guy who built the Field of Dreams. He took a risk. Sometimes he screwed it up, but he knew he didn't want to end up regretting not attempting it. He constantly doubted himself, but he knew he had to keep going. Granted, he was doing what the voices in his head were telling him to do, but my point is he was driven to do something that most people considered crazy. Even when his friends, his wife and even he himself said he was crazy, somewhere inside he knew it was the right thing to do.

These are the innovators. These are the Bill Gates and Steve Jobs of the world. They take risks, they fail, they learn and they get better. Obviously, this kind of person thrives in start-ups and smaller companies, due to their natural aversion to bureaucracy. They want to see their ideas put into motion quickly, and withdrawn quickly if they don't work. Short feedback cycles are essential to Ray. He wants to know if his idea is working or not. He wants to modify or reverse his idea if it is not working or makes things worse. These are the agilistas. May I always be one.

    Read the article

  • My Mother Bought a Droid

    - by Ben Griswold
    I converted to iPhone two years ago when I left my former employer and my Blackberry behind. The truth is I half-heartedly purchased my iPhone. It was a great looking device, but as far as I was concerned, it was a mere toy compared to my Blackberry. I remember hiding the toy in my briefcase when attending business meetings because I didn't consider it to be professional enough. I've since owned all three generations of the iPhone and, well, iPhone seems to have caught on. I still miss the click of the Blackberry keyboard and the blinking red light letting me know that someone or something requires my attention, but I'm officially an iPhone fanboy now.

My mom called last weekend and asked if she should buy an iPhone. I talked her ear off about everything I love about iPhone. I went on for about twenty minutes. I couldn't help myself. I mentioned everything from my podcast subscriptions to the application which manages my workouts. I went as far as to say that someday all smart phones will be referred to as iPhones just like all tissues are referred to as Kleenex and all sodas are referred to as Cokes. I was really on a roll, and then I stopped. I had to… the call dropped.

There I was, strategically standing in the far corner of my backyard where I get the most reliable AT&T reception, and the call drops in the middle of my iPhone pitch. Folks, I don't care how good a salesperson you are, it's tough to recover from a situation like this. I dialed my mom back and jokingly asked if she was planning to make calls with her new phone. I explained that AT&T is bound to provide better service eventually, but I'm not sure she should wait. After all, I have troubles with the network in San Diego, and I can only imagine how bad it would be for her in Western Massachusetts.

Mom called back a few days later exclaiming, "I bought a Droid! I love this phone! I haven't done anything with it but make phone calls, but I love it." I had to laugh. My mom made the right call (pun intended). The iPhone is an amazing device, but owners are constantly reminded that its core function (it's a phone, remember?) is subpar. If you love gadgets, you're probably enthralled by iPhone's many bells and whistles and, relatively speaking, the terrible phone service might not amount to much. (Maybe it amounts to a rant on your blog.) The overall iPhone offering is so attractive that consumers are willing to wait for AT&T to straighten up their act, or wait until Apple grants a choice of carriers. But I don't see either of these remedies coming soon.

In the interim, I'm willing to take my iPhone for what it is and just continue to enjoy my favorite features while pretending that poor coverage isn't a big deal. With any luck, more and more reasonable folks will recognize that Android phones are legitimate players in the smart phone space, they will buy loads of them, and there will be plenty of functional phones to borrow when my "phone" is showing zero bars. Heck, I'm already covered when I visit my mom.

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.

The Scenario

In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.

The Problem

In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.

Test 1

This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue which wrapped NServiceBus. The background process would then process the message, and the test would check the message had been processed fine. If all was well, the NServiceBus.Host.exe process was stopped.

Test 2

In test 2 we would do a very similar thing, except that instead of starting the process, the test would install NServiceBus.Host.exe as a Windows service and start the service before the test; once the test was executed it would stop the service.

The Results of the Tests

Test 1 worked really well. However, test 2 didn't really work at all: instead of running the background process, we found that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.

The Solution

After trying a few things, we found that the permissions on the queue were not set correctly. Once this was resolved it all worked fine: CPU was not excessive, and it ran just like the console application.

I think the takeaways from this are:

Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials. When you install the generic host as a Windows service, by default it will use the default Windows credentials. For any production-like scenario you should be running the process under a domain account via the Windows service.

Make sure you have the queue set with the right permissions. For the credentials you have used to configure the generic host as a Windows service, ensure that this user has the appropriate permissions on any queues it will interact with. (A small scripted example of granting these rights follows at the end of this post.)

Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly, we didn't know there was an issue; we were just experiencing the high CPU condition. I am a little surprised that nothing was logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. My point here is that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.

Thanks to Ahmed Hashmi on my team who got this working in the end.
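On the permissions point above, the queue rights can also be granted in code through System.Messaging; a minimal sketch, with the queue path and account name as illustrative placeholders:

using System.Messaging;

class GrantQueuePermissions
{
    static void Main()
    {
        // Path of the endpoint's input queue (illustrative).
        const string queuePath = @".\private$\myendpoint";

        using (var queue = new MessageQueue(queuePath))
        {
            // Grant the service account the rights the host needs
            // to read from and write to its input queue.
            queue.SetPermissions(@"MYDOMAIN\svc-nservicebus",
                MessageQueueAccessRights.ReceiveMessage |
                MessageQueueAccessRights.PeekMessage |
                MessageQueueAccessRights.WriteMessage |
                MessageQueueAccessRights.GetQueueProperties);
        }
    }
}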

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

"The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it."

Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove those dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design.

"Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is 'manual'. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them."

The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and the technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before.

"This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that 'back into a solution'. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they've got a monstrosity of special cases each designed to handle one specific scenario. There's way more code than there should be and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise."

TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake. Combine it with whatever works for your team and business, but only good communication will help. A good naming scheme and structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series:

Step 10: Full-text catalogs and full-text indexing

This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.

First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating them. You may also get the following error messages along the way:

Msg 9954, Level 16, State 2, Line 1
The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.

This basically means the full-text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager.

Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it:

Msg 7624, Level 16, State 1, Line 1
Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.

A full population of full-text indexes can be a time- and resource-intensive operation. Obviously you will want to schedule it for low-usage hours if the database is restored to an existing production server. Also, bear in mind that any scheduled job that existed in the source server for populating the full-text catalog (e.g. a nightly process for incremental updates) will need to be re-created in the destination.

Step 11: Database collation considerations

Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database in a SQL Server instance with the same collation. Although not used commonly, SQL Server allows you to change a database's collation by using the ALTER DATABASE command:

ALTER DATABASE database_name COLLATE collation_name

You should not be using this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However, the columns of every new table, every new UDT and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation, and finally, every character-based system data type and user-defined data type will also be affected. And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following:

Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.

That is why it is important to test and check for collation-related issues. Collation also affects queries that use comparisons of character-based data. If errors arise due to the two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other (a short sketch follows below).

Continues…
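For example, a join between columns in mismatched collations can be made to work by casting one side; a hedged sketch (the table and collation names are illustrative):

SELECT a.CustomerID, b.OrderID
FROM dbo.Customers a
JOIN dbo.Orders b
  ON a.CustomerCode = b.CustomerCode COLLATE SQL_Latin1_General_CP1_CI_AS;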

    Read the article

  • Post MIX10 Decompression

    - by Dave Campbell
    With a big dose of reality, I walked into this place this morning and found out "yeah, I really do write .NET web apps and MS Access for a living" :( ... but it pays the bills and I've gotten *way* used to eating 3 times a day :)

MIX10 was great, although the buzz didn't seem as big as MIX09, and I'm not sure why. It also seemed like a different crowd, and other folks I talked to agreed with that. Of course now I can outwardly admit that the "Windows Phone 7 Series" is programmed with Silverlight ... how cool is that? I've been biting my tongue about that info for over a month!

I cloistered myself in Ballroom A for the week, not counting the keynotes. That's where the phone sessions were located. I tried to collect the full set, but ended up bailing on the last one because it was ending at the time that MIX10 was ending, and I hadn't spent a whole lot of time in 'The Commons'. I met a bunch of folks I've blogged about, or exchanged email with, and that's always fun. Renewed associations with folks I only see once or twice a year... it's way too long a list, and I don't want to mention some and leave off others.

I did have an opportunity to meet Charles Petzold... wow, that was interesting... I got into Windows development through Charles' Programming Windows 3.1 book 'back in the day'... couldn't find anyone at Honeywell who wanted to join my journey, so it was just me and 'Chuck' :) ... read every word of that book more than once... all marked up, tags sticking out of it. And now he's writing a WP7 book... gotta get it: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview)

I went through my Big List-o-BlogsTM last night and it took over 2 hours because of all the new content since MIX10. I've got 90 posts tagged as of 9PM on 3/21. If everybody stopped right now, it would take me 9 days to push what I have now, so you'll have to be patient!

I had another event on Thursday that was *extremely* tiring, so I ended up staying over another night. I drove back into the strip on Friday morning to try to find a non-cheesy souvenir for my wife, and didn't find much. Then I went to Blueberry Hill restaurant for 3 eggs, 3 strips of bacon, and 3 awesome potato pancakes. Check them out if you have time! And then hit the road. In case anyone is wondering, the 2-1/2 hour drive I took across Hoover Dam on Sunday afternoon only took 30 minutes on Friday afternoon... that was a more normal trip!

I thoroughly enjoyed the time I spent with everyone. Thanks to John Papa and his crew for the great Insider's party on Monday night... the Blues Brothers were a fun surprise and they did a good job! And the swag was great... thanks to all the contributors for a fun evening at their expense!

All I can say is stay tuned, go to live.visitmix.com/videos and watch everything, get the phone tools, start working... everything's different and everything's fun... jump in, it's all Silverlight!

Stay in the 'Light!

Technorati Tags: Silverlight    Silverlight 4    Windows Phone     MIX10

    Read the article

  • Listen to Online Radio with Antenna

    - by Asian Angel
    Are you looking for some fresh new music to listen to at home or at work? With Antenna you can listen to online radio stations from all over the world. Note: Requires Adobe AIR (download link at bottom of article).

Antenna in Action

Once you have completed the installation and started Antenna up, this is the window that you will see. The left side has a "browsing pane" where you can search for the stations that you would like to listen to, using the various categories. Based on the stations that you choose, the background map will change location to match the stations' locations.

Here is a closer look at the "Categories Bar". For our first example we used the "Country Category" to find our first station to listen to. When you choose a country, you will be presented with a list of the stations available for that country. To start listening to a particular station, just double-click on the appropriate entry line.

A closer look at the "browser pane" with our first station playing. Notice the "Reliability Indicator" that is available for each listing… some may be better than others, and you can use this to choose the best streaming stations from the list.

In the upper left corner you will notice three icons… each will open a small pop-up window with a specific purpose. The first icon opens the "About Window". If you need to contact Antenna's creator or would like to place a request for a station to be added to the app, this is the best way to do it. The second icon opens an Antenna-specific chat window. The third icon allows you to set a default location and make adjustments to some of the app's settings.

Recording Audio

The "Recording Function" is the only area where we experienced some "quirkiness" with the app. To start recording, press the "Round White Button"…

Note: Based on feedback on the app creator's webpage, some people have experienced the same problem we did during our tests, with the app failing to complete recordings. Hopefully this bug will be fixed in the next release.

Once recording has started, the button will turn red. Click on the button again to stop recording. Once you have stopped recording, you will see the following message window appear, and the main window will be shaded over with a whitish color until you click "OK".

Conclusion

Regardless of the slight quirkiness in recording online music, Antenna more than makes up for it with its terrific selection of online stations and streaming capability. New fresh music for you to listen to is only a click or two away…

Links

Download Antenna (Antenna Homepage)
Download Antenna at Softpedia
Download Adobe AIR

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as there was a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in.

    I saw on one of the Microsoft sites that .NET 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and I installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running various flavors of the MVC version of POP Forums, and all of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be, beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that everything ran fine locally in the dev Web host.

    My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VMs on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might be going on in IIS that doesn't happen in Visual Studio?

    In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container in a method called via a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.) but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle, or something else. (Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection, so that a class implementing ISettingsRepository called SettingsRepository is automagically mapped, I wouldn't have to worry about it.) In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense.

    The fix is sort of a hack: I don't set up the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS.
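    To make the ordering concrete, here is a rough sketch of the kind of wiring involved. The names (PopForumsActivation, Kick, PopForumsCoreModule, ServiceLocator, SetupRepositories) are illustrative stand-ins, not the actual POP Forums code:

        using System;
        using System.Web;
        using Microsoft.Web.Infrastructure.DynamicModuleHelper;
        using Ninject;

        // Runs before Application_Start; ASP.NET invokes this very early.
        [assembly: PreApplicationStartMethod(typeof(PopForumsActivation), "Kick")]

        public static class PopForumsActivation
        {
            public static void Kick()
            {
                // Build the container and register the core (non-repository) bindings.
                var kernel = new StandardKernel(new PopForumsCoreModule());
                ServiceLocator.SetKernel(kernel);

                // Register the error-logging HttpModule. Under ASP.NET 4.5 in IIS,
                // it gets instantiated before Application_Start has mapped any
                // repositories, so resolving ISettingsRepository here throws.
                DynamicModuleUtility.RegisterModule(typeof(PopForumsHttpModule));
            }
        }

        // Meanwhile, in global.asax, this runs later:
        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                // Repository bindings are added here so they can be swapped out,
                // e.g. for a multi-server or Oracle-backed implementation.
                PopForumsActivation.SetupRepositories(new SqlSingleWebServerModule());
            }
        }

    If the module resolves a repository as soon as it is constructed, the ordering above blows up under 4.5, because the binding it needs simply doesn't exist yet.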
    In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or it was getting to that code after Application_Start was called. Either way, it's a weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious, because it leaves the mobile/alternate view functionality very much broken.
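    Coming back to the HttpModule hack, for what it's worth, the deferred setup looks roughly like this. It's a minimal sketch with hypothetical names (the real work is assumed to live in a SetupErrorLogging method), not the actual POP Forums source:

        using System;
        using System.Web;

        public class PopForumsHttpModule : IHttpModule
        {
            private static readonly object SyncLock = new object();
            private static bool _initialized;

            public void Init(HttpApplication context)
            {
                // Don't touch the container here; the repository bindings
                // from Application_Start don't exist yet at this point.
                context.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                // By the first request, Application_Start has run, so it is
                // now safe to resolve repositories from the container.
                if (_initialized)
                    return;
                lock (SyncLock)
                {
                    if (_initialized)
                        return;
                    SetupErrorLogging(); // resolves ISettingsRepository, etc.
                    _initialized = true;
                }
            }

            private static void SetupErrorLogging()
            {
                // Resolve dependencies from the container and wire up
                // exception logging here.
            }

            public void Dispose() { }
        }

    The trade-off, as noted above, is that any exception thrown before the first request completes this setup simply goes unlogged.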

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >