Search Results

Search found 4332 results on 174 pages for 'resolve'.


  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
    We had an issue recently with an unhandled exception in an ASP.NET C# application bringing down IIS and every application pool it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in Task Manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy. Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault:
    - IIS: 7.5
    - .NET: 4.0
    - Windows Server 2008 R2
    - Faulted on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument. (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands.)
    - The exception generated was SocketException.
    - The faulting module according to Event Viewer was KERNELBASE.dll.
    The issue was resolved by wrapping the call in a try-catch, logging the exception and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.
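    For reference, a minimal sketch of the try-catch fix described above, using the non-deprecated Dns.GetHostEntry in place of Dns.Resolve; the logging call and fallback helper are illustrative, not part of the original code:
        string host = "www.some-nonexistent-domain.example";  // hypothetical input from the page
        try
        {
            // GetHostEntry is the supported replacement for the deprecated Dns.Resolve
            System.Net.IPHostEntry entry = System.Net.Dns.GetHostEntry(host);
            // ... use entry.AddressList ...
        }
        catch (System.Net.Sockets.SocketException ex)
        {
            // Catch the lookup failure here so it never escapes the page lifecycle:
            // log it and show generic content instead.
            Logger.Error(ex);          // hypothetical logging helper
            ShowGenericContent();      // hypothetical fallback helper
        }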

  • 503 Error After Microsoft Request Routing Is Installed - 32 bit 64 bit madness

    - by KenB
    I have a requirement to install the Microsoft Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Microsoft Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets "HTTP Error 503. The service is unavailable." The error details in the Windows event log say: The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349. I can make this error go away by changing the application pool to run in 32-bit mode, i.e. by setting "Enable 32-Bit Applications" to true. However, I would prefer not to have to do that to resolve the issue. My questions are:
    1. Why is the Microsoft Request Routing feature trying to load a 32-bit version? Isn't there a 64-bit version of it?
    2. How do I resolve this issue without having to change my application pool to 32-bit mode?
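    As an illustration of the workaround described above, the same "Enable 32-Bit Applications" setting can be flipped from the command line, and the ARR module registration (including its bitness preCondition) can be inspected with appcmd; the application pool name below is a placeholder:
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/globalModules
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true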

  • ESXi 5 VM Putty session hangs, vSphere client timing out

    - by user192702
    First of all, I believe this is an ESXi issue, but let me know if you have seen this. It started about a year ago: occasionally, when I putty via SSH to my VM guests, anything that makes the session display a lot of output at once will hang the session, and I have to start a new one, quite often only to find the same behaviour. By "display a lot of output" I mean any of the following:
    1) tail -f filename
    2) Paste a long command
    3) less filename
    If I type one character at a time this won't happen. I tried searching online and it always points me to flow control settings, but the various suggestions I've tried have never resolved the issue. Since last week, I've noticed I'm not able to connect to my POP3 server from Outlook (it times out from Outlook's perspective). Today I tried to connect to the ESXi host via the vSphere client and it times out as well. The exact behaviour and error I saw are similar to the one posted at the following URL, but the suggested technique also failed to resolve the issue. http://davidcocke.blogspot.hk/2012/02/unable-to-login-with-vsphere-client.html Has anyone experienced this before? Any suggestions on how to troubleshoot this?

  • How to install ia32-libs on Wheezy?

    - by javano
    I have seen a couple of questions on ServerFault relating to installing ia32-libs on a 64bit machine but the solutions aren't working for me (I don't think any of these questions where for Wheezy specifically I'm not sure how to proceed); root@server:/home/# apt-get install -f ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 php5 : Depends: libapache2-mod-php5 (>= 5.4.4-14+deb7u2) but it is not going to be installed or libapache2-mod-php5filter (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-cgi (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-fpm (>= 5.4.4-14+deb7u2) but it is not going to be installed php5-mysql : Depends: phpapi-20100525 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. root@server:/home/# sudo apt-get install ia32-libs-i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs-i386:i386 : Depends: freeglut3:i386 (>= 2.6.0-1) but it is not going to be installed Depends: lesstif2:i386 (>= 1:0.95.2-1) but it is not going to be installed Depends: libacl1:i386 (>= 2.2.49-4) but it is not going to be installed Depends: libasyncns0:i386 (>= 0.3-1.1) but it is not going to be installed Depends: libattr1:i386 (>= 1:2.4.44-2) but it is not going to be installed Depends: libaudio2:i386 (>= 1.9.2-4) but it is not going to be installed Depends: libaudiofile1:i386 (>= 0.2.6-8) but it is not going to be installed Depends: libavahi-client3:i386 (>= 0.6.27-2+squeeze1) but it is not going to be installed Depends: libavahi-common3:i386 (>= 0.6.27-2+squeeze1) but it is not going to be installed Depends: libbsd0:i386 (>= 0.2.0-1) but it is not going to be installed Depends: libcap2:i386 (>= 1:2.19-3) but it is not going to be installed Depends: libcomerr2:i386 (>= 1.41.12-4stable1) but it is not going to be installed Depends: libcups2:i386 (>= 1.4.4-7+squeeze1) but it is not going to be installed Depends: libcurl3:i386 (>= 7.21.0-2) but it is not going to be installed Depends: libdbus-1-3:i386 (>= 1.2.24-4+squeeze1) but it is not going to be installed Depends: libdirectfb-1.2-9:i386 (>= 1.2.10.0-4) but it is not going to be installed Depends: libdrm-intel1:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libdrm-radeon1:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libdrm2:i386 (>= 2.4.21-1~squeeze3) but it is not going to be installed Depends: libedit2:i386 (>= 2.11-20080614-2) but it is not going to be installed Depends: libesd0:i386 (>= 0.2.41-8) but it is not going to be installed Depends: libexif12:i386 (>= 0.6.19-1) but it is not going to be installed Depends: libexpat1:i386 (>= 2.0.1-7) but it is not going to be installed Depends: libflac8:i386 (>= 1.2.1-2+b1) 
but it is not going to be installed Depends: libfltk1.1:i386 (>= 1.1.10-2+b1) but it is not going to be installed Depends: libfontconfig1:i386 (>= 2.8.0-2.1) but it is not going to be installed Depends: libfreetype6:i386 (>= 2.4.2-2.1+squeeze3) but it is not going to be installed Depends: libgcrypt11:i386 (>= 1.4.5-2) but it is not going to be installed Depends: libgdbm3:i386 (>= 1.8.3-9) but it is not going to be installed Depends: libgl1-mesa-dri:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libgl1-mesa-glx:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libglu1-mesa:i386 (>= 7.7.1-5) but it is not going to be installed Depends: libgnutls26:i386 (>= 2.8.6-1) but it is not going to be installed Depends: libgpg-error0:i386 (>= 1.6-1) but it is not going to be installed Depends: libgphoto2-2:i386 (>= 2.4.6-3) but it is not going to be installed Depends: libgphoto2-port0:i386 (>= 2.4.6-3) but it is not going to be installed Depends: libgssapi-krb5-2:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libice6:i386 (>= 2:1.0.6-2) but it is not going to be installed Depends: libidn11:i386 (>= 1.15-2) but it is not going to be installed Depends: libieee1284-3:i386 (>= 0.2.11-6) but it is not going to be installed Depends: libjack-jackd2-0:i386 (>= 1.9.5~dfsg-14) but it is not going to be installed or libjack0:i386 (>= 1:0.118+svn3796-7) but it is not going to be installed Depends: libjpeg62:i386 (>= 6b1-1) but it is not going to be installed Depends: libjpeg8:i386 (>= 8b-1) but it is not going to be installed Depends: libk5crypto3:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libkeyutils1:i386 (>= 1.4-1) but it is not going to be installed Depends: libkrb5-3:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: libkrb5support0:i386 (>= 1.8.3+dfsg-4squeeze2) but it is not going to be installed Depends: liblcms1:i386 (>= 1.18.dfsg-1.2+b3) but it is not going to be installed Depends: libltdl7:i386 (>= 2.2.6b-2) but it is not going to be installed Depends: liblzo2-2:i386 (>= 2.03-2) but it is not going to be installed Depends: libmpg123-0:i386 (>= 1.12.1-3) but it is not going to be installed Depends: libnspr4-0d:i386 (>= 4.8.6-1) but it is not going to be installed Depends: libnss3-1d:i386 (>= 3.12.8-1+squeeze4) but it is not going to be installed Depends: libogg0:i386 (>= 1.2.0~dfsg-1) but it is not going to be installed Depends: libopenal1:i386 (>= 1:1.12.854-2) but it is not going to be installed Depends: libpam0g:i386 (>= 1.1.1-6.1+squeeze1) but it is not going to be installed Depends: libpng12-0:i386 (>= 1.2.44-1+squeeze1) but it is not going to be installed Depends: libpopt0:i386 (>= 1.16-1) but it is not going to be installed Depends: libpulse0:i386 (>= 0.9.21-3+squeeze1) but it is not going to be installed Depends: libsamplerate0:i386 (>= 0.1.7-3) but it is not going to be installed Depends: libsane:i386 (>= 1.0.21-9) but it is not going to be installed Depends: libsasl2-2:i386 (>= 2.1.23.dfsg1-7) but it is not going to be installed Depends: libsdl1.2debian:i386 (>= 1.2.15) but it is not going to be installed Depends: libselinux1:i386 (>= 2.0.96-1) but it is not going to be installed Depends: libsigc++-2.0-0c2a:i386 (>= 2.2.4.2-1) but it is not going to be installed Depends: libsm6:i386 (>= 2:1.1.1-1) but it is not going to be installed Depends: libsndfile1:i386 (>= 1.0.21-3+squeeze1) but it is not going to be installed Depends: libsqlite3-0:i386 (>= 3.7.3-1) but it is not going to be 
installed Depends: libssh2-1:i386 (>= 1.2.6-1) but it is not going to be installed Depends: libssl1.0.0:i386 (>= 1) but it is not going to be installed Depends: libstdc++5:i386 (>= 1:3.3.6-20) but it is not going to be installed Depends: libsvga1:i386 (>= 1:1.4.3-29) but it is not going to be installed Depends: libsysfs2:i386 (>= 2.1.0+repack-1) but it is not going to be installed Depends: libtasn1-3:i386 (>= 2.7-1) but it is not going to be installed Depends: libtdb1:i386 (>= 1.2.1-2+b1) but it is not going to be installed Depends: libtiff4:i386 (>= 3.9.4-5+squeeze3) but it is not going to be installed Depends: libts-0.0-0:i386 (>= 1.0-7) but it is not going to be installed Depends: libusb-0.1-4:i386 (>= 2:0.1.12-16) but it is not going to be installed Depends: libuuid1:i386 (>= 2.17.2-9) but it is not going to be installed Depends: libvorbis0a:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libvorbisenc2:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libvorbisfile3:i386 (>= 1.3.1-1) but it is not going to be installed Depends: libwrap0:i386 (>= 7.6.q-19) but it is not going to be installed Depends: libx11-6:i386 (>= 2:1.3.3-4) but it is not going to be installed Depends: libx86-1:i386 (>= 1.1+ds1-6) but it is not going to be installed Depends: libxau6:i386 (>= 1:1.0.6-1) but it is not going to be installed Depends: libxaw7:i386 (>= 2:1.0.7-1) but it is not going to be installed Depends: libxcb-render-util0:i386 (>= 0.3.6-1) but it is not going to be installed Depends: libxcb-render0:i386 (>= 1.6-1) but it is not going to be installed Depends: libxcb1:i386 (>= 1.6-1) but it is not going to be installed Depends: libxcomposite1:i386 (>= 1:0.4.2-1) but it is not going to be installed Depends: libxcursor1:i386 (>= 1:1.1.10-2) but it is not going to be installed Depends: libxdamage1:i386 (>= 1:1.1.3-1) but it is not going to be installed Depends: libxdmcp6:i386 (>= 1:1.0.3-2) but it is not going to be installed Depends: libxext6:i386 (>= 2:1.1.2-1) but it is not going to be installed Depends: libxfixes3:i386 (>= 1:4.0.5-1) but it is not going to be installed Depends: libxft2:i386 (>= 2.1.14-2) but it is not going to be installed Depends: libxi6:i386 (>= 2:1.3-6) but it is not going to be installed Depends: libxinerama1:i386 (>= 2:1.1-3) but it is not going to be installed Depends: libxml2:i386 (>= 2.7.8.dfsg-2+squeeze1) but it is not going to be installed Depends: libxmu6:i386 (>= 2:1.0.5-2) but it is not going to be installed Depends: libxmuu1:i386 (>= 2:1.0.5-2) but it is not going to be installed Depends: libxp6:i386 (>= 1:1.0.0.xsf1-2) but it is not going to be installed Depends: libxpm4:i386 (>= 1:3.5.8-1) but it is not going to be installed Depends: libxrandr2:i386 (>= 2:1.3.0-3) but it is not going to be installed Depends: libxrender1:i386 (>= 1:0.9.6-1) but it is not going to be installed Depends: libxslt1.1:i386 (>= 1.1.26-6) but it is not going to be installed Depends: libxss1:i386 (>= 1:1.2.0-2) but it is not going to be installed Depends: libxt6:i386 (>= 1:1.0.7-1) but it is not going to be installed Depends: libxtst6:i386 (>= 2:1.1.0-3) but it is not going to be installed Depends: libxv1:i386 (>= 2:1.0.5-1) but it is not going to be installed Depends: libxxf86vm1:i386 (>= 1:1.1.0-2) but it is not going to be installed Depends: odbcinst1debian2:i386 (>= 2.2.14p2-1) but it is not going to be installed Depends: libodbc1:i386 but it is not going to be installed Depends: xaw3dg:i386 (>= 1.5+E-18) but it is not going to be installed php5 : Depends: 
libapache2-mod-php5 (>= 5.4.4-14+deb7u2) but it is not going to be installed or libapache2-mod-php5filter (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-cgi (>= 5.4.4-14+deb7u2) but it is not going to be installed or php5-fpm (>= 5.4.4-14+deb7u2) but it is not going to be installed php5-mysql : Depends: phpapi-20100525 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. root@server:/home/# dpkg --print-architecture amd64 root@server:/home/# dpkg --print-foreign-architectures i386 root@server:/home/# lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 7.1 (wheezy) Release: 7.1 Codename: wheezy root@server:/home/# uname -a Linux servername 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux root@server:/home/# cat /etc/apt/sources.list deb http://ftp.us.debian.org/debian stable main contrib non-free
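    For context, the usual multiarch preparation on Wheezy is shown below; the dpkg --print-foreign-architectures output above suggests the first step has already been done on this box, so this is only a sketch of the expected sequence, not a guaranteed fix for the dependency breakage shown:
        dpkg --add-architecture i386
        apt-get update
        apt-get install ia32-libs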

  • Lots of strange IP addresses in my Windows Firewall logs. Concern?

    - by gmoore
    I was trying to debug a Samba sharing issue with Mac OS X, so I turned on logging for my Windows Firewall. I didn't expect a lot of connections, but the log filled up quickly. Here's a sample:
        2009-12-21 08:49:32 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56335 139 - - - - - - - - -
        2009-12-21 08:49:33 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56337 139 - - - - - - - - -
        2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.73.242 1389 53 - - - - - - - - -
        2009-12-21 08:50:02 CLOSE TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
        2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.71.226 60290 53 - - - - - - - - -
        2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
        2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
        2009-12-21 08:50:04 CLOSE TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
        2009-12-21 08:50:41 CLOSE UDP 192.168.0.3 192.168.0.4 137 50300 - - - - - - - - -
    I can pick out the local IP addresses (192.168.0.3 is my Windows XP machine, 192.168.0.4 is Mac OS X) as I debug the Samba issue. But some of the others resolve to Comcast (my ISP), and others resolve to odd hosts like van-dns.com and navisite.net. It doesn't look like any connection sent or received any bytes. I used the reference here: http://technet.microsoft.com/en-us/library/cc758040%28WS.10%29.aspx. Is it a cause for concern?
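    To illustrate the check mentioned above, each unfamiliar remote address from the log can be reverse-resolved individually; the addresses below are taken from the sample, where the port-53 entries are ordinary outbound DNS lookups and the port-80 entries are outbound HTTP:
        C:\> nslookup 68.87.73.242
        C:\> nslookup 212.96.161.238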

  • IPTABLES & IP-routed network solution for HOST net and VM's subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node running on Debian and a bunch of guest VMs. This is what my networking looks like:
        # network interface settings
        auto lo
        iface lo inet loopback
        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
    And I've got two working subnet solutions. The first:
        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0
    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:
        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
    I can even NAT internal addresses:
        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345
    My inexperienced mind is ready to give each VM two network adapters, one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that is a dumb way to resolve the problem and that everything can be handled with iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, without success. Looking for advice (and good reading links for networking beginners).
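    For comparison, a commonly used NAT-bridge pattern on Proxmox combines both stanzas above into one, masquerading out through the physical interface (eth0) rather than through the bridge itself; a sketch only, with the addresses copied from the question:
        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up   iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o eth0 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o eth0 -j MASQUERADE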

  • DNS lookup fails with all the Mac workstations

    - by user39564
    Hi, I am having this insane problem. We are Mac-heavy users: around 10 workstations, one Xserve server, two Windows workstations and one Linux box (mine). Last year I added an A record to our domain's DNS. A few months ago we had to change it to a new IP. But all the Mac workstations fail to resolve the record properly and still resolve to the old IP, even after 2 months. On both the Windows workstations and my Linux box a simple nslookup resolves to the proper IP. However, on ALL the Mac workstations, dig and nslookup report the old IP address. From my Linux workstation:
        jp@lo:~$ nslookup - 208.67.222.222 client.xyz.com
        Server:   208.67.222.222
        Address:  208.67.222.222#53
        Non-authoritative answer:
        Name:     client.xyz.com
        Address:  68.71.40.xx
    But when I try the exact same command from any Mac workstation, I get the old IP:
        $ nslookup - 208.67.222.222 client.xyz.com
        Server:   208.67.222.222
        Address:  208.67.222.222#53
        Non-authoritative answer:
        Name:     client.xyz.com
        Address:  98.143.155.xx
    The strange thing is that this only happens on our internal network. No problem from home nor from another server. I did try to flush the DNS, don't worry. It did not help. I am starting to wonder if my router (OpenWRT) or Mac OS X Server is in some way spoofing the DNS requests and acting as a cache. Any suggestions/comments would be appreciated. Thank you, JP
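    For reference, the cache flush mentioned above is done differently depending on the OS X release, and scutil shows which resolvers the Mac is actually consulting; a sketch of the usual commands:
        # OS X 10.5 / 10.6
        dscacheutil -flushcache
        # OS X 10.7 and later
        sudo killall -HUP mDNSResponder
        # list the resolvers and search domains the machine is really using
        scutil --dns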

  • What is the reason for this DNSSEC validation failure of dnsviz.net?

    - by grifferz
    On trying to resolve dnsviz.net from a host using an Unbound resolver that is configured to use DNSSEC validation, the result is "no servers could be reached":
        $ dig -t soa dnsviz.net
        ; <<>> DiG 9.6-ESV-R4 <<>> -t soa dnsviz.net
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached
    Nothing is logged by Unbound to suggest why this is the case. Here is the /etc/unbound/unbound.conf:
        server:
            verbosity: 1
            interface: 192.168.0.8
            interface: 127.0.0.1
            interface: ::0
            access-control: 0.0.0.0/0 refuse
            access-control: ::0/0 refuse
            access-control: 127.0.0.0/8 allow_snoop
            access-control: 192.168.0.0/16 allow_snoop
            chroot: ""
            auto-trust-anchor-file: "/etc/unbound/root.key"
            val-log-level: 2
        python:
        remote-control:
            control-enable: yes
    If I add:
        module-config: "iterator"
    (thus disabling DNSSEC validation) then I am able to resolve this host normally. The domain and its DNSSEC check out fine according to http://dnscheck.iis.se/ so there must be something wrong with my resolver configuration. What is it and how do I go about debugging that?
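    One way to confirm that validation itself is what breaks the lookup (as the module-config change above suggests) is to repeat the query against the same resolver with the Checking Disabled bit set, and with DNSSEC records requested; a sketch:
        dig -t soa dnsviz.net +cd        # CD bit set: the resolver returns the answer without validating
        dig -t soa dnsviz.net +dnssec    # request RRSIGs to see what the resolver hands back
        unbound-host -C /etc/unbound/unbound.conf -d -v dnsviz.net   # verbose lookup through the same config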

  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with bind. If I want to resolve any domain name that is in the zone file, it works fine. However, when I try to resolve anything that does not belong to the zone file, the lookup fails. I know that the DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:
        options {
            directory "/var/cache/bind";
            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };
    I have also tried using only one IP address and it still did not work. The content of /etc/bind/named.conf is:
        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
    So there is no problem with including the options file. Any recommendations for fixing this problem?
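    A quick way to separate a forwarder problem from a bind problem is to query each forwarder directly with dig and, if they answer, make bind use them exclusively; the forward only directive below is standard options syntax, shown as a sketch:
        dig @131.181.127.32 www.example.com
        dig @131.181.59.48 www.example.com

        options {
            ...
            forwarders { 131.181.127.32; 131.181.59.48; };
            forward only;
            ...
        };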

  • Can't run utilities/.exe's that use the network from a [DFS] windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:
        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1
    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds?
    Edit: The commands/tools do work just fine when run from local drives.
    Edit 2: Wget example:
        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15-- http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'
    wget can neither use DNS to resolve the IP nor can it use HTTP if given an IP directly.
    Edit 3: The problem seems to be tied to DFS shares. The tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

  • How to install wordpress without a web browser

    - by bvandrunen
    What I am trying to do is automate WordPress website creation for the company I work for. We have lots of information in our database for our customers and we want to create a WordPress website for each customer. The process works great and we have no trouble with the creation of websites or the transfer of data or anything like that. The problem we do have is that when we buy a new domain (http://www.newdomain.com) our process breaks if the domain takes more than 15 minutes to resolve (we call a stored procedure which installs all the data after the URL is called to install WordPress). We have tried looping, where the process checks to see if the domain resolves and keeps trying, but eventually it fails. So what we are looking for is a way to run the install for a URL without the domain actually resolving yet. I have seen possibilities where you can change the wp-config file, but this doesn't work for us since we have more than one domain and it changes the source URL for all the domains. What we really need is a way for us to manually start the install script, either through the database or some other way that doesn't check whether the domain has resolved or is pointing at the server. Thanks for any suggestions.
    EDIT: All we do to install WordPress is call this URL: http://"newdomain".com/wp-admin/install.php?step=2 - if you change settings in the backend, calling this URL will install WordPress without having to go through the wp-admin/install.php form.
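    To illustrate the call described in the edit above without waiting for public DNS, the install URL can be requested with the hostname pinned to the web server's address (curl's --resolve option, or an explicit Host header); the IP below is a placeholder:
        # pin newdomain.com to the web server's IP for this one request
        curl --resolve newdomain.com:80:203.0.113.10 "http://newdomain.com/wp-admin/install.php?step=2"
        # the same idea with a Host header against the raw IP
        curl -H "Host: newdomain.com" "http://203.0.113.10/wp-admin/install.php?step=2"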

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup: I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set the OpenVPN client to run as a service, which is great anyway because it means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user cannot log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet, it is just like browsing from our corporate network.
    The Question(s): So, I'm wondering how the OpenVPN client behaves when the computer is already physically on the same network as the OpenVPN server. Right now, the client is configured to connect to the public DNS name, which resolves to the public IP address, which will NOT get reflected back to the OpenVPN server, so it is effectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter.
    Alternatively: Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal DNS resolve the name to the private IP and allow them to connect directly? Would traffic then go over the VPN connection (which I do not want when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network?
    TL;DR: What's a sane way of handling the OpenVPN client running as an always-on service when the client and server will often be on the same network?
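    For reference, the client-side behaviour in question is governed by a handful of directives in the OpenVPN client config; a sketch with placeholder names:
        client
        dev tun
        proto udp
        remote vpn.example.com 1194   # public DNS name, as described above
        resolv-retry infinite         # keep retrying name resolution
        connect-retry 5               # seconds to wait between reconnection attempts
        redirect-gateway def1         # send all traffic over the tunnel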

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
    Hi, I'm having loads of trouble trying to integrate proxy, rewrite, and SSL altogether in Apache 2. A brief history: my application runs on port 8080 and, before adding SSL, I used the proxy to strip the 8080 from the URLs to and from the server. So instead of www.example.com:8080/myapp, the client app accessed everything via www.example.com/myapp. Here is the conf that accomplished this:
        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /myapp http://www.example.com:8080/myapp
        ProxyPassReverse /myapp http://www.example.com:8080/myapp
    What I'm trying to do now is force all requests to myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number as my application used to. Simply changing the ports from 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are already reaching the end of the line.
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}
        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        SSLProxyEngine on
        ProxyPass /myapp https://www.example.com:8443/mloyalty
        ProxyPassReverse /myapp https://www.example.com:8433/mloyalty
    As this stands, a request to anything on the server other than /myapp loads fine over HTTP. If I make a browser HTTP request to /myapp, it redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application then resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (HTTP and HTTPS) to anything one level beyond just /myapp, i.e. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information there is for me to give, but I think my goals should be clear.
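    A sketch of how the pieces above are usually arranged, with the plain-HTTP vhost only issuing the redirect and the proxy directives living in the SSL vhost; hostnames and paths are copied from the question, certificate directives omitted:
        <VirtualHost *:80>
            ServerName www.example.com
            RewriteEngine On
            RewriteRule ^/myapp(.*)$ https://www.example.com/myapp$1 [R=301,L]
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            # SSLCertificateFile / SSLCertificateKeyFile omitted in this sketch
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPass        /myapp https://www.example.com:8443/myapp
            ProxyPassReverse /myapp https://www.example.com:8443/myapp
        </VirtualHost>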

  • Linux Centos 6 becomes unavailable from time to time - OS&network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older release (6.1) with an older kernel (I do not remember exactly which version). There is cPanel installed, and from time to time the server becomes unavailable (network connection). What I've checked (via KVMoIP):
    - load average is completely normal
    - it does not lack memory or disk space
    - no console notifications when the problem occurs
    - checked all access logs and there is no sign that it could be caused by a client script
    - cannot even access the local interface (127.0.0.1) or the main IP address
    - running tcpdump I can only see packets arriving at the server - no responses
    - all services seem to be running properly (mail, sql, http, ssh)
    - checked crontab and all clients' crontabs too
    - network port utilisation is low (up to several Mbit/s)
    - arriving packet rate is low - hundreds per second (according to tcpdump)
    - the console (via KVMoIP) works fine, no lags
    - there is no conntrack at this server
    - there is no IPv6 at this server
    - flushing iptables and unloading modules does not resolve the problem
    - restarting the network does not resolve the problem, no errors appear
    - it also occurs when two separate networks (and multiple gateways) are configured, as well as with one IP, one default gateway and one network - so it seems independent of the network configuration
    - it seems to repeat randomly (load, packet rate, bandwidth usage independent)
    - checked the server with different rootkit detection tools - it seems to be clean
    - the server has been rebooted, which did not change anything
    - there are no interface errors
    - it appears randomly, can be once a week or several times per day
    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - when the problem occurs there is traffic at the interface in only one direction, and I can not even ping loopback. Any ideas? Recommended checks? Anything I did not check above?

  • Problem compiling hive with ant

    - by conandor
    I compiling with Solaris 10 SPARC, jdk 1.6 from Sun, Ant 1.7.1 from OpenCSW. I have no problem running hadoop 0.17.2.1 However, I have problem compiling/integrating hive with the error 'cannot find symbol', although I followed the tutorial. I have the hive source code from SVN exactly from tutorial. How can I know the hive version I compiling and how can I compile against hadoop 0.17.2.1? Please advice. Thank you. -bash-3.00$ export PATH=/usr/jdk/instances/jdk1.6.0/bin:/usr/bin:/opt/csw/bin:/opt/webstack/bin -bash-3.00$ export JAVA_HOME=/usr/jdk/instances/jdk1.6.0 -bash-3.00$ export HADOOP=/export/home/mywork/hadoop-0.17.2.1/bin/hadoop -bash-3.00$ /opt/csw/bin/ant package -Dhadoop.version=0.17.2.1 Buildfile: build.xml jar: create-dirs: compile-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks deploy-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks jar: init: compile: ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 25878ms :: artifacts dl 37ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/82ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.17.2.1 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.17.2.1) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 12041ms :: artifacts dl 30ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default 
| 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/39ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.18.3 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.18.3) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 11107ms :: artifacts dl 36ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/49ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.19.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.19.0) ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#shims;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.17.2.1 in hadoop-source [ivy:retrieve] found hadoop#core;0.18.3 in hadoop-source [ivy:retrieve] found hadoop#core;0.19.0 in hadoop-source [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 9969ms :: artifacts dl 33ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 4 | 0 | 0 | 0 || 4 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#shims [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 4 already retrieved (0kB/57ms) install-hadoopcore-internal: build_shims: [echo] Compiling shims against hadoop 0.20.0 (/export/home/mywork/hive/build/hadoopcore/hadoop-0.20.0) jar: [echo] Jar: shims create-dirs: compile-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks 
deploy-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks jar: init: install-hadoopcore: install-hadoopcore-default: ivy-init-dirs: ivy-download: [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar [get] To: /export/home/mywork/hive/build/ivy/lib/ivy-2.1.0.jar [get] Not modified - so not downloaded ivy-probe-antlib: ivy-init-antlib: ivy-init: ivy-retrieve-hadoop-source: [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:retrieve] :: loading settings :: file = /export/home/mywork/hive/ivy/ivysettings.xml [ivy:retrieve] :: resolving dependencies :: org.apache.hadoop.hive#common;working@kaili [ivy:retrieve] confs: [default] [ivy:retrieve] found hadoop#core;0.20.0 in hadoop-source [ivy:retrieve] :: resolution report :: resolve 4864ms :: artifacts dl 13ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 1 | 0 | 0 | 0 || 1 | 0 | --------------------------------------------------------------------- [ivy:retrieve] :: retrieving :: org.apache.hadoop.hive#common [ivy:retrieve] confs: [default] [ivy:retrieve] 0 artifacts copied, 1 already retrieved (0kB/52ms) install-hadoopcore-internal: setup: compile: [echo] Compiling: common jar: [echo] Jar: common create-dirs: compile-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks deploy-ant-tasks: create-dirs: init: compile: [echo] Compiling: anttasks jar: init: dynamic-serde: compile: [echo] Compiling: hive [javac] Compiling 167 source files to /export/home/mywork/hive/build/serde/classes [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:30: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorFactory [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java:31: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorUtils [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/MetadataTypedColumnsetSerDe.java:31: cannot find symbol [javac] symbol : class MetadataListStructObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector [javac] import org.apache.hadoop.hive.serde2.objectinspector.MetadataListStructObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:33: cannot find symbol [javac] symbol : class BooleanObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:35: cannot find symbol [javac] symbol : class DoubleObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:36: cannot find symbol [javac] symbol : class FloatObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.FloatObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:39: cannot find symbol [javac] symbol : class ShortObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/SerDeUtils.java:40: cannot find symbol [javac] symbol : class StringObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:44: cannot find symbol [javac] symbol : class BooleanObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:46: cannot find symbol [javac] symbol : class DoubleObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:47: cannot find symbol [javac] symbol : class FloatObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.FloatObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:50: cannot find symbol [javac] symbol : class ShortObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java:51: cannot find symbol [javac] symbol : class StringObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java:43: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorFactory [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarSerDe.java:41: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorFactory 
[javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:26: cannot find symbol [javac] symbol : class LazySimpleStructObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:39: cannot find symbol [javac] symbol: class LazySimpleStructObjectInspector [javac] LazyNonPrimitive<LazySimpleStructObjectInspector> { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java:68: cannot find symbol [javac] symbol : class LazySimpleStructObjectInspector [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyStruct [javac] public LazyStruct(LazySimpleStructObjectInspector oi) { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java:36: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorFactory [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java:37: cannot find symbol [javac] symbol : class PrimitiveObjectInspectorUtils [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeString.java:23: cannot find symbol [javac] symbol : class StringObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypei16.java:23: cannot find symbol [javac] symbol : class ShortObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.ShortObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeDouble.java:23: cannot find symbol [javac] symbol : class DoubleObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.DoubleObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDeTypeBool.java:23: cannot find symbol [javac] symbol : class BooleanObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.objectinspector.primitive [javac] import org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:20: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does 
not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyBooleanObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:37: cannot find symbol [javac] symbol: class LazyBooleanObjectInspector [javac] LazyPrimitive<LazyBooleanObjectInspector, BooleanWritable> { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java:39: cannot find symbol [javac] symbol : class LazyBooleanObjectInspector [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyBoolean [javac] public LazyBoolean(LazyBooleanObjectInspector oi) { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:21: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyByteObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:37: cannot find symbol [javac] symbol: class LazyByteObjectInspector [javac] LazyPrimitive<LazyByteObjectInspector, ByteWritable> { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyByte.java:39: cannot find symbol [javac] symbol : class LazyByteObjectInspector [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyByte [javac] public LazyByte(LazyByteObjectInspector oi) { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:23: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyDoubleObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:31: cannot find symbol [javac] symbol: class LazyDoubleObjectInspector [javac] LazyPrimitive<LazyDoubleObjectInspector, DoubleWritable> { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDouble.java:33: cannot find symbol [javac] symbol : class LazyDoubleObjectInspector [javac] location: class org.apache.hadoop.hive.serde2.lazy.LazyDouble [javac] public LazyDouble(LazyDoubleObjectInspector oi) { [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:25: cannot find symbol [javac] symbol : class LazyObjectInspectorFactory [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:26: cannot find symbol [javac] symbol : class LazySimpleStructObjectInspector [javac] location: package org.apache.hadoop.hive.serde2.lazy.objectinspector [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:27: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyBooleanObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:28: package 
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyByteObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:29: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyDoubleObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:30: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyFloatObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:31: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:32: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyLongObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:33: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:34: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyShortObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java:35: package org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive does not exist [javac] import org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyStringObjectInspector; [javac] ^ [javac] /export/home/mywork/hive/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFloat.java:

  • BES Express - failed to run Administration Service

    - by Max Gontar
    Hi! I have installed BES Express on Windows Server 7 with Exchange 2007 following by RIM tutorial JDK 1.6.18, JDK\bin included into Path variable After reboot I've run Blackberry Administration Service and receive such error in browser window: HTTP Status 500 - -------------------------------------------------------------------------------- type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: org.apache.hivemind.ApplicationRuntimeException: Missing classpath resource '/com/rim/bes/bas/web/adminconsole/pages/login/SystemError.page'. [context:/WEB-INF/webAdminConsole.application, line 25, column 37] org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:60) $ServletRequestServicer_12d51b7c397.service($ServletRequestServicer_12d51b7c397.java) org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55) $ServletRequestServicerFilter_12d51b7c393.service($ServletRequestServicerFilter_12d51b7c393.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52) $ServletRequestServicerFilter_12d51b7c391.service($ServletRequestServicerFilter_12d51b7c391.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53) $ServletRequestServicerFilter_12d51b7c395.service($ServletRequestServicerFilter_12d51b7c395.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) $ServletRequestServicer_12d51b7c305.service($ServletRequestServicer_12d51b7c305.java) org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:123) com.rim.bes.bas.web.common.BASApplicationServlet.doService(BASApplicationServlet.java:153) org.apache.tapestry.ApplicationServlet.doGet(ApplicationServlet.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.console.LoginDispatcher.processRequest(LoginDispatcher.java:146) com.rim.bes.bas.web.console.LoginDispatcher.doGet(LoginDispatcher.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.common.ResponseHeadersFilter.doFilter(ResponseHeadersFilter.java:85) com.rim.bes.bas.web.console.VSJSupportFilter.doFilter(VSJSupportFilter.java:228) org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) root cause org.apache.hivemind.ApplicationRuntimeException: Missing classpath resource '/com/rim/bes/bas/web/adminconsole/pages/login/SystemError.page'. 
[context:/WEB-INF/webAdminConsole.application, line 25, column 37] org.apache.tapestry.error.ExceptionPresenterImpl.presentException(ExceptionPresenterImpl.java:64) com.rim.bes.bas.web.common.CommonExceptionPresenter.presentException(CommonExceptionPresenter.java:269) com.rim.bes.bas.web.common.CommonExceptionPresenter.presentException(CommonExceptionPresenter.java:113) $ExceptionPresenter_12d51b7c2cf.presentException($ExceptionPresenter_12d51b7c2cf.java) org.apache.tapestry.engine.AbstractEngine.activateExceptionPage(AbstractEngine.java:121) org.apache.tapestry.engine.AbstractEngine.service(AbstractEngine.java:280) org.apache.tapestry.services.impl.InvokeEngineTerminator.service(InvokeEngineTerminator.java:60) $WebRequestServicer_12d51b7c3b5.service($WebRequestServicer_12d51b7c3b5.java) com.rim.bes.bas.web.console.ObjectCacheServiceFilter.service(ObjectCacheServiceFilter.java:72) $WebRequestServicerFilter_12d51b7c3b9.service($WebRequestServicerFilter_12d51b7c3b9.java) $WebRequestServicer_12d51b7c3bb.service($WebRequestServicer_12d51b7c3bb.java) com.rim.bes.bas.web.common.ServiceFilter.service(ServiceFilter.java:84) $WebRequestServicerFilter_12d51b7c3b7.service($WebRequestServicerFilter_12d51b7c3b7.java) $WebRequestServicer_12d51b7c3bb.service($WebRequestServicer_12d51b7c3bb.java) $WebRequestServicer_12d51b7c3b1.service($WebRequestServicer_12d51b7c3b1.java) org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:56) $ServletRequestServicer_12d51b7c397.service($ServletRequestServicer_12d51b7c397.java) org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55) $ServletRequestServicerFilter_12d51b7c393.service($ServletRequestServicerFilter_12d51b7c393.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52) $ServletRequestServicerFilter_12d51b7c391.service($ServletRequestServicerFilter_12d51b7c391.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53) $ServletRequestServicerFilter_12d51b7c395.service($ServletRequestServicerFilter_12d51b7c395.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) $ServletRequestServicer_12d51b7c305.service($ServletRequestServicer_12d51b7c305.java) org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:123) com.rim.bes.bas.web.common.BASApplicationServlet.doService(BASApplicationServlet.java:153) org.apache.tapestry.ApplicationServlet.doGet(ApplicationServlet.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.console.LoginDispatcher.processRequest(LoginDispatcher.java:146) com.rim.bes.bas.web.console.LoginDispatcher.doGet(LoginDispatcher.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.common.ResponseHeadersFilter.doFilter(ResponseHeadersFilter.java:85) com.rim.bes.bas.web.console.VSJSupportFilter.doFilter(VSJSupportFilter.java:228) org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) root cause org.apache.hivemind.ApplicationRuntimeException: Missing classpath resource '/com/rim/bes/bas/web/adminconsole/pages/login/SystemError.page'. 
[context:/WEB-INF/webAdminConsole.application, line 25, column 37] org.apache.tapestry.asset.ClasspathAssetFactory.createAbsoluteAsset(ClasspathAssetFactory.java:61) $AssetFactory_12d51b7c400.createAbsoluteAsset($AssetFactory_12d51b7c400.java) org.apache.tapestry.asset.ContextAssetFactory.createAsset(ContextAssetFactory.java:66) $AssetFactory_12d51b7c3fe.createAsset($AssetFactory_12d51b7c3fe.java) $AssetFactory_12d51b7c404.createAsset($AssetFactory_12d51b7c404.java) $AssetFactory_12d51b7c2ff.createAsset($AssetFactory_12d51b7c2ff.java) org.apache.tapestry.asset.AssetSourceImpl.findAsset(AssetSourceImpl.java:64) $AssetSource_12d51b7c3fa.findAsset($AssetSource_12d51b7c3fa.java) org.apache.tapestry.services.impl.NamespaceResourcesImpl.findSpecificationResource(NamespaceResourcesImpl.java:61) org.apache.tapestry.services.impl.NamespaceResourcesImpl.getPageSpecification(NamespaceResourcesImpl.java:71) org.apache.tapestry.engine.Namespace.locatePageSpecification(Namespace.java:264) org.apache.tapestry.engine.Namespace.getPageSpecification(Namespace.java:172) org.apache.tapestry.resolver.PageSpecificationResolverImpl.resolve(PageSpecificationResolverImpl.java:131) $PageSpecificationResolver_12d51b7c3f4.resolve($PageSpecificationResolver_12d51b7c3f4.java) $PageSpecificationResolver_12d51b7c3f5.resolve($PageSpecificationResolver_12d51b7c3f5.java) org.apache.tapestry.pageload.PageSource.getPage(PageSource.java:115) $IPageSource_12d51b7c2e5.getPage($IPageSource_12d51b7c2e5.java) org.apache.tapestry.engine.RequestCycle.loadPage(RequestCycle.java:268) org.apache.tapestry.engine.RequestCycle.getPage(RequestCycle.java:251) org.apache.tapestry.error.ExceptionPresenterImpl.presentException(ExceptionPresenterImpl.java:40) com.rim.bes.bas.web.common.CommonExceptionPresenter.presentException(CommonExceptionPresenter.java:269) com.rim.bes.bas.web.common.CommonExceptionPresenter.presentException(CommonExceptionPresenter.java:113) $ExceptionPresenter_12d51b7c2cf.presentException($ExceptionPresenter_12d51b7c2cf.java) org.apache.tapestry.engine.AbstractEngine.activateExceptionPage(AbstractEngine.java:121) org.apache.tapestry.engine.AbstractEngine.service(AbstractEngine.java:280) org.apache.tapestry.services.impl.InvokeEngineTerminator.service(InvokeEngineTerminator.java:60) $WebRequestServicer_12d51b7c3b5.service($WebRequestServicer_12d51b7c3b5.java) com.rim.bes.bas.web.console.ObjectCacheServiceFilter.service(ObjectCacheServiceFilter.java:72) $WebRequestServicerFilter_12d51b7c3b9.service($WebRequestServicerFilter_12d51b7c3b9.java) $WebRequestServicer_12d51b7c3bb.service($WebRequestServicer_12d51b7c3bb.java) com.rim.bes.bas.web.common.ServiceFilter.service(ServiceFilter.java:84) $WebRequestServicerFilter_12d51b7c3b7.service($WebRequestServicerFilter_12d51b7c3b7.java) $WebRequestServicer_12d51b7c3bb.service($WebRequestServicer_12d51b7c3bb.java) $WebRequestServicer_12d51b7c3b1.service($WebRequestServicer_12d51b7c3b1.java) org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:56) $ServletRequestServicer_12d51b7c397.service($ServletRequestServicer_12d51b7c397.java) org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55) $ServletRequestServicerFilter_12d51b7c393.service($ServletRequestServicerFilter_12d51b7c393.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52) 
$ServletRequestServicerFilter_12d51b7c391.service($ServletRequestServicerFilter_12d51b7c391.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53) $ServletRequestServicerFilter_12d51b7c395.service($ServletRequestServicerFilter_12d51b7c395.java) $ServletRequestServicer_12d51b7c399.service($ServletRequestServicer_12d51b7c399.java) $ServletRequestServicer_12d51b7c305.service($ServletRequestServicer_12d51b7c305.java) org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:123) com.rim.bes.bas.web.common.BASApplicationServlet.doService(BASApplicationServlet.java:153) org.apache.tapestry.ApplicationServlet.doGet(ApplicationServlet.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.console.LoginDispatcher.processRequest(LoginDispatcher.java:146) com.rim.bes.bas.web.console.LoginDispatcher.doGet(LoginDispatcher.java:79) javax.servlet.http.HttpServlet.service(HttpServlet.java:617) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) com.rim.bes.bas.web.common.ResponseHeadersFilter.doFilter(ResponseHeadersFilter.java:85) com.rim.bes.bas.web.console.VSJSupportFilter.doFilter(VSJSupportFilter.java:228) org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) Can you help me? Thank you! same question on bbforums

    Read the article

  • Constructor parameters for dependent classes with Unity Framework

    - by Onisemus
    I just started using the Unity Application Block to try to decouple my classes and make it easier for unit testing. I ran into a problem though that I'm not sure how to get around. Looked through the documentation and did some Googling but I'm coming up dry. Here's the situation: I have a facade-type class which is a chat bot. It is a singleton class which handles all sort of secondary classes and provides a central place to launch and configure the bot. I also have a class called AccessManager which, well, manages access to bot commands and resources. Boiled down to the essence, I have the classes set up like so. public class Bot { public string Owner { get; private set; } public string WorkingDirectory { get; private set; } private IAccessManager AccessManager; private Bot() { // do some setup // LoadConfig sets the Owner & WorkingDirectory variables LoadConfig(); // init the access mmanager AccessManager = new MyAccessManager(this); } public static Bot Instance() { // singleton code } ... } And the AccessManager class: public class MyAccessManager : IAccessManager { private Bot botReference; public MyAccesManager(Bot botReference) { this.botReference = botReference; SetOwnerAccess(botReference.Owner); } private void LoadConfig() { string configPath = Path.Combine( botReference.WorkingDirectory, "access.config"); // do stuff to read from config file } ... } I would like to change this design to use the Unity Application Block. I'd like to use Unity to generate the Bot singleton and to load the AccessManager interface in some sort of bootstrapping method that runs before anything else does. public static void BootStrapSystem() { IUnityContainer container = new UnityContainer(); // create new bot instance Bot newBot = Bot.Instance(); // register bot instance container.RegisterInstance<Bot>(newBot); // register access manager container.RegisterType<IAccessManager,MyAccessManager>(newBot); } And when I want to get a reference to the Access Manager inside the Bot constructor I can just do: IAcessManager accessManager = container.Resolve<IAccessManager>(); And elsewhere in the system to get a reference to the Bot singleton: // do this Bot botInstance = container.Resolve<Bot>(); // instead of this Bot botInstance = Bot.Instance(); The problem is the method BootStrapSystem() is going to blow up. When I create a bot instance it's going to try to resolve IAccessManager but won't be able to because I haven't registered the types yet (that's the next line). But I can't move the registration in front of the Bot creation because as part of the registration I need to pass the Bot as a parameter! Circular dependencies!! Gah!!! This indicates to me I have a flaw in the way I have this structured. But how do I fix it? Help!!
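    One way out is to let the container do all the wiring in a single direction instead of new-ing collaborators inside constructors. The sketch below is only illustrative, not a drop-in fix: it reuses the type names from the question, assumes the Unity 2.x API (Microsoft.Practices.Unity), drops the hand-rolled singleton in favour of a lifetime manager, and introduces a hypothetical Initialize method so the access manager receives the two values it actually needs (owner and working directory) rather than a back-reference to the whole Bot.

    using Microsoft.Practices.Unity;

    public interface IAccessManager
    {
        void Initialize(string owner, string workingDirectory);
    }

    public class MyAccessManager : IAccessManager
    {
        public void Initialize(string owner, string workingDirectory)
        {
            // Read access.config from workingDirectory and grant the owner access here.
        }
    }

    public class Bot
    {
        public string Owner { get; private set; }
        public string WorkingDirectory { get; private set; }

        private readonly IAccessManager accessManager;

        // Unity supplies the registered IAccessManager, so Bot never news it up
        // and the Bot <-> AccessManager construction cycle disappears.
        public Bot(IAccessManager accessManager)
        {
            this.accessManager = accessManager;
            Owner = "owner";               // stand-in for LoadConfig()
            WorkingDirectory = @"C:\bot";  // stand-in for LoadConfig()
            this.accessManager.Initialize(Owner, WorkingDirectory);
        }
    }

    public static class Bootstrapper
    {
        public static IUnityContainer BootStrapSystem()
        {
            var container = new UnityContainer();
            container.RegisterType<IAccessManager, MyAccessManager>(new ContainerControlledLifetimeManager());
            container.RegisterType<Bot>(new ContainerControlledLifetimeManager()); // container-managed singleton
            return container;
        }
    }

    With this shape, BootStrapSystem() can register both types before anything is constructed, because no constructor runs until the first container.Resolve<Bot>() call elsewhere in the system.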

    Read the article

  • DNS problems on CentOS fresh install

    - by Rick Koshi
    I'm having some DNS issues on a new box I'm installing with CentOS 6.2. I am able to look up names using nslookup, dig, or host. I am able to ping machines by name or by IP address. However, when I try other tools, such as ssh, wget, or yum, they are unable to resolve names. For example: # wget http://www.google.com --2012-03-08 14:48:06-- http://www.google.com/ Resolving www.google.com... failed: Name or service not known. wget: unable to resolve host address `www.google.com' # ssh www.google.com ssh: Could not resolve hostname www.google.com: Name or service not known # ping -c 1 www.google.com PING www.l.google.com (74.125.113.106) 56(84) bytes of data. 64 bytes from vw-in-f106.1e100.net (74.125.113.106): icmp_seq=1 ttl=46 time=43.6 ms --- www.l.google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 59ms rtt min/avg/max/mdev = 43.665/43.665/43.665/0.000 ms # host www.google.com www.google.com is an alias for www.l.google.com. www.l.google.com has address 74.125.113.99 www.l.google.com has address 74.125.113.103 www.l.google.com has address 74.125.113.104 www.l.google.com has address 74.125.113.105 www.l.google.com has address 74.125.113.106 www.l.google.com has address 74.125.113.147 My /etc/nsswitch.conf file is the default, including this (standard) line: hosts: files dns /etc/resolv.conf is as set up by DHCP: ; generated by /sbin/dhclient-script nameserver 192.168.1.254 192.168.1.254 is a working DNS server (my DSL modem, working for years with other machines) Anyone know why ping would work, but ssh/wget would fail? Per NcA's suggestion, I tried changing /etc/resolv.conf to point to 8.8.8.8. Oddly enough, this does make it work. Obviously, my DSL modem is responding to DNS requests in some way that some parts of Linux's resolution system don't like. Looking at the tcpdump, I am unable to see what the difference is. Certainly, both servers are sending the same addresses. Here's the output from tcpdump -nn -X with the server set to the DNS server on the DSL modem. It's clearly replying with the correct addresses, but ssh/wget don't seem happy with it for some reason: 15:53:52.133580 IP 192.168.1.254.53 > 192.168.1.2.54836: 33157 7/0/0 CNAME www.l.google.com., A 74.125.115.105, A 74.125.115.106, A 74.125.115.147, A 74.125.115.99, A 74.125.115.103, A 74.125.115.104 (148) 0x0000: 4500 00b0 e33a 0000 ff11 53b1 c0a8 01fe E....:....S..... 0x0010: c0a8 0102 0035 d634 009c 7528 8185 8180 .....5.4..u(.... 0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo 0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com......... 0x0040: 0001 0007 acd0 0008 0377 7777 016c c010 .........www.l.. 0x0050: c02c 0001 0001 0000 0001 0004 4a7d 7369 .,..........J}si 0x0060: c02c 0001 0001 0000 0001 0004 4a7d 736a .,..........J}sj 0x0070: c02c 0001 0001 0000 0001 0004 4a7d 7393 .,..........J}s. 0x0080: c02c 0001 0001 0000 0001 0004 4a7d 7363 .,..........J}sc 0x0090: c02c 0001 0001 0000 0001 0004 4a7d 7367 .,..........J}sg 0x00a0: c02c 0001 0001 0000 0001 0004 4a7d 7368 .,..........J}sh 15:53:52.135669 IP 192.168.1.254.53 > 192.168.1.2.54836: 65062- 0/0/0 (32) 0x0000: 4500 003c e33b 0000 ff11 5424 c0a8 01fe E..<.;....T$.... 0x0010: c0a8 0102 0035 d634 0028 98f9 fe26 8000 .....5.4.(...&.. 0x0020: 0001 0000 0000 0000 0377 7777 0667 6f6f .........www.goo 0x0030: 676c 6503 636f 6d00 001c 0001 gle.com..... I'm not enough of an expert to know if this is malformed in some way, but ping seems to do the right thing with it. 
For comparison, here's the same thing when querying 8.8.8.8: 15:57:27.990270 IP 8.8.8.8.53 > 192.168.1.2.49028: 59114 7/0/0 CNAME www.l.google.com., A 74.125.113.105, A 74.125.113.103, A 74.125.113.106, A 74.125.113.147, A 74.125.113.104, A 74.125.113.99 (148) 0x0000: 4500 00b0 5530 0000 2f11 6453 0808 0808 E...U0../.dS.... 0x0010: c0a8 0102 0035 bf84 009c 39f8 e6ea 8180 .....5....9..... 0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo 0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com......... 0x0040: 0001 0001 516a 0008 0377 7777 016c c010 ....Qj...www.l.. 0x0050: c02c 0001 0001 0000 0116 0004 4a7d 7169 .,..........J}qi 0x0060: c02c 0001 0001 0000 0116 0004 4a7d 7167 .,..........J}qg 0x0070: c02c 0001 0001 0000 0116 0004 4a7d 716a .,..........J}qj 0x0080: c02c 0001 0001 0000 0116 0004 4a7d 7193 .,..........J}q. 0x0090: c02c 0001 0001 0000 0116 0004 4a7d 7168 .,..........J}qh 0x00a0: c02c 0001 0001 0000 0116 0004 4a7d 7163 .,..........J}qc 15:57:28.018909 IP 8.8.8.8.53 > 192.168.1.2.49028: 31984 1/1/0 CNAME www.l.google.com. (102) 0x0000: 4500 0082 7b1b 0000 2f11 3e96 0808 0808 E...{.../.>..... 0x0010: c0a8 0102 0035 bf84 006e c67e 7cf0 8180 .....5...n.~|... 0x0020: 0001 0001 0001 0000 0377 7777 0667 6f6f .........www.goo 0x0030: 676c 6503 636f 6d00 001c 0001 c00c 0005 gle.com......... 0x0040: 0001 0001 517f 0008 0377 7777 016c c010 ....Q....www.l.. 0x0050: c030 0006 0001 0000 0258 0026 036e 7334 .0.......X.&.ns4 0x0060: c010 0964 6e73 2d61 646d 696e c010 0016 ...dns-admin.... 0x0070: 91f3 0000 0384 0000 0384 0000 0708 0000 ................ 0x0080: 003c .< I still don't know why the server's reply is adequate for ping but not for ssh/wget. If anyone has ideas, I'd be happy to hear them. For now, though, I can either refer to an outside DNS server or set up my own server on the new box. It's a workaround that seems like it should be unnecessary, but will allow me to proceed.

    Read the article

  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding implementation of the State Design Pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system - so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve this. Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems - I've omitted them in this code snippet): #include <iostream> #include <memory> using namespace std; class Context { public: friend class State; Context() { } private: State* m_state; }; class State { public: State() { } virtual void Trigger1() = 0; virtual void Trigger2() = 0; }; class LLT : public State { public: LLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class ALL : public State { public: ALL() { } void Trigger1() { new LLT(); } void Trigger2() { new DH(); } }; // DL needs to 'know' about DH. class DL : public State { public: DL() { } void Trigger1() { new ALL(); } void Trigger2() { new DH(); } }; class HLT : public State { public: HLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class AHL : public State { public: AHL() { } void Trigger1() { new DH(); } void Trigger2() { new HLT(); } }; // DH needs to 'know' about DL. class DH : public State { public: DH () { } void Trigger1() { new AHL(); } void Trigger2() { new DL(); } }; int main() { auto_ptr<LLT> llt (new LLT); auto_ptr<ALL> all (new ALL); auto_ptr<DL> dl (new DL); auto_ptr<HLT> hlt (new HLT); auto_ptr<AHL> ahl (new AHL); auto_ptr<DH> dh (new DH); return 0; } The problem is basically that in the State Pattern, state transitions are made by invoking the the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor because it's not possible to pre-define both of the constructors of the 'problem' states. I had a look at this article, and the template method which seemed to be the ideal solution - but it doesn't compile and my knowledge of templates is a rather limited... The other idea I had is to try and introduce a Helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclasse's constructor. But I think that was rather ambitious... Finally, would a direct implmentation of the Factory Method Design Pattern be the best way to resolve the entire problem?
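    The question is C++, so the most direct fix is to declare Trigger1/Trigger2 inside each class and move their bodies below all of the class definitions; at that point plain forward declarations are enough and the circular reference disappears. A design-level alternative, sketched here in C# for consistency with the other samples on this page (it is an illustration, not the asker's code), is a table-driven State pattern: triggers return a state identifier and a single factory owns construction, so no concrete state ever names another.

    using System;
    using System.Collections.Generic;

    public enum StateId { LLT, ALL, DL, HLT, AHL, DH }

    public abstract class State
    {
        // Triggers return the id of the next state instead of constructing it.
        public abstract StateId Trigger1();
        public abstract StateId Trigger2();
    }

    public class LLT : State { public override StateId Trigger1() { return StateId.DH; }  public override StateId Trigger2() { return StateId.DL; } }
    public class ALL : State { public override StateId Trigger1() { return StateId.LLT; } public override StateId Trigger2() { return StateId.DH; } }
    public class DL  : State { public override StateId Trigger1() { return StateId.ALL; } public override StateId Trigger2() { return StateId.DH; } }
    public class HLT : State { public override StateId Trigger1() { return StateId.DH; }  public override StateId Trigger2() { return StateId.DL; } }
    public class AHL : State { public override StateId Trigger1() { return StateId.DH; }  public override StateId Trigger2() { return StateId.HLT; } }
    public class DH  : State { public override StateId Trigger1() { return StateId.AHL; } public override StateId Trigger2() { return StateId.DL; } }

    public class Context
    {
        // The only place that knows how to build concrete states.
        private static readonly Dictionary<StateId, Func<State>> Factory = new Dictionary<StateId, Func<State>>
        {
            { StateId.LLT, () => new LLT() }, { StateId.ALL, () => new ALL() },
            { StateId.DL,  () => new DL()  }, { StateId.HLT, () => new HLT() },
            { StateId.AHL, () => new AHL() }, { StateId.DH,  () => new DH()  },
        };

        private State current;

        public Context(StateId initial) { current = Factory[initial](); }

        public void Fire1() { current = Factory[current.Trigger1()](); }
        public void Fire2() { current = Factory[current.Trigger2()](); }
    }

    The same shape carries back to C++: because the states only hand back an enum, only the context's factory needs the complete class definitions, and it can sit in one translation unit below all of them.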

    Read the article

  • Configure TFS portal afterwards

    Update #1 January 8th, 2010: There is an updated post on this topic for Beta 2: http://www.ewaldhofman.nl/post/2009/12/10/Configure-TFS-portal-afterwards-Beta-2.aspx Update #2 October 10th, 2010: In the new Team Foundation Server Power Tools September 2010, there is now a command to create a portal. tfpt addprojectportal   Add or move portal for an existing team project Usage: tfpt addprojectportal /collection:uri                              /teamproject:"project name"                              /processtemplate:"template name"                              [/webapplication:"webappname"]                              [/relativepath:"pathfromwebapp"]                              [/validate]                              [/verbose] /collection Required. URL of Team Project Collection. /teamproject Required. Specifies the name of the team project. /processtemplate Required. Specifies that name of the process template. /webapplication The name of the SharePoint Web Application. Must also specify relativepath. /relativepath The path for the site relative to the root URL for the SharePoint Web Application. Must also specify webapplication. /validate Specifies that the user inputs are to be validated. If specified, only validation will be done and no portal setting will be changed. /verbose Switches on the verbose mode. I created a new Team Project in TFS 2010 Beta 1 and choose not to configure SharePoint during the creation of the Team Project. Of course I found out fairly quickly that a portal for TFS is very useful, especially the Iteration and the Product backlog workbooks and the dashboard reports. This blog describes how you can configure the sharepoint portal afterwards. Update: September 9th, 2009 Adding the portal afterwards is much easier as described below. Here are the steps Step 1: Create a new temporary project (with a SharePoint site for it). Open the Team Explorer Right click in the Team Explorer the root node (i.e. the project collection) Select "New team project" from the menu Walk throught he wizard and make sure you check the option to create the portal (which is by default checked) Step 2: Disable the site for the new project Open the Team Explorer Select the team project you created in step 1 In the menu click on Team -> Show Project Portal. In the menu click on Team -> Team Project Settings -> Portal Settings... The following dialog pops up Uncheck the option "Enable team project portal" Confirm the dialog with OK Step 3: Enable the site for the original one. Point it to the newly created site. Open the Team Explorer Select the team project you want to add the portal to In the menu open Team -> Team Project Settings -> Portal Settings... The same dialog as in step 2 pops up Check the option "Enable team project portal" Click on the "Configure URL" button The following dialog pops up   In the dialog select in the combobox of the web application the TFS server Enter in the Relative site path the text "sites/[Project Collection Name]/[Team Project Name created in step 1]" Confirm the "Specify an existing SharePoint Site" with OK Check the "Reports and dashboards refer to data for this team project" option Confirm the dialog "Project Portal Settings" with OK Step 4: Delete the temporary project you created. In Beta 1, I have found no way to delete a team project. Maybe it will be available in TFS 2010 Beta 2. 
Original post Step 1: Create new portal site Go to the sharepoint site of your project collection (/sites//default.aspx">/sites//default.aspx">http://<servername>/sites/<project_collection_name>/default.aspx) Click on the Site Actions at the left side of the screen and choose the option Site Settings In the site settings, choose the Sites and workspaces option Create a new site Enter the values for the Title, the description, the site address. And choose for the TFS2010 Agile Dashboard as template. Create the site, by clicking on the Create button Step 2: Integrate portal site with team project Open Visual Studio Open the Team Explorer (View -> Team Explorer) Select in the Team Explorer tool window the Team Project for which you are create a new portal Open the Project Portal Settings (Team -> Team Project Settings -> Portal Setings...) Check the Enable team project portal checkbox Click on Configure URL... You will get a new dialog as below Enter the url to the TFS server in the web application combobox And specify the relative site path: sites/<project collection>/<site name> Confirm with OK Check in the Project Portal Settings dialog the checkbox "Reports and dashboards refer to data for this team project" Confirm the settings with OK (this takes a while...) When you now browse to the portal, you will see that the dashboards are now showing up with the data for the current team project. Step 3: Download process template To get a copy of the documents that are default in a team project, we need to have a fresh set of files that are not attached to a team project yet. You can do that with the following steps. Start the Process Template Manager (Team -> Team Project Collection Settings -> Process Template Manager...) Choose the Agile process template and click on download Choose a folder to download Step 4: Add Product and Iteration backlog Go to the Team Explorer in Visual Studio Make sure the team project is in the list of team projects, and expand the team project Right click the Documents node, and choose New Document Library Enter "Shared Documents", and click on Add Right click the Shared Documents node and choose Upload Document Go the the file location where you stored the process template from step 3 and then navigate to the subdirectory "Agile Process Template 5.0\MSF for Agile Software Development v5.0\Windows SharePoint Services\Shared Documents\Project Management" Select in the Open Dialog the files "Iteration Backlog" and "Product Backlog", and click Open Step 5: Bind Iteration backlog workbook to the team project Right click on the "Iteration Backlog" file and select Edit, and confirm any warning messages Place your cursor in cell A1 of the Iteration backlog worksheet Switch to the Team ribbon and click New List. Select your Team Project and click Connect From the New List dialog, select the Iteration Backlog query in the Workbook Queries folder. The final step is to add a set of document properties that allow the workbook to communicate with the TFS reporting warehouse. Before we create the properties we need to collect some information about your project. The first piece of information comes from the table created in the previous step.  As you collect these properties, copy them into notepad so they can be used in later steps. Property How to retrieve the value? 
[Table name] Switch to the Design ribbon and select the Table Name value in the Properties portion of the ribbon [Project GUID] In the Visual Studio Team Explorer, right click your Team Project and select Properties.  Select the URL value and copy the GUID (long value with lots of characters) at the end of the URL [Team Project name] In the Properties dialog, select the Name field and copy the value [TFS server name] In the Properties dialog, select the Server Name field and copy the value [UPDATE] I have found that this is not correct: you need to specify the instance of your SQL Server. The value is used to create a connection to the TFS cube. Switch back to the Iteration Backlog workbook. Click the Office button and select Prepare – Properties. Click the Document Properties – Server drop down and select Advanced Properties. Switch to the Custom tab and add the following properties using the values you collected above. Variable name Value [Table name]_ASServerName [TFS server name] [Table name]_ASDatabase tfs_warehouse [Table name]_TeamProjectName [Team Project name] [Table name]_TeamProjectId [Project GUID] Click OK to close the properties dialog. It is possible that the Estimated Work (Hours) is showing the #REF! value. To resolve that change the formula with: =SUMIFS([Table name][Original Estimate]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Original Estimate]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Also the Total Remaining Work in the Individual Capacity table may contain #REF! values. To resolve that change the formula with: =SUMIFS([Table name][Remaining Work]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Assigned To];[Team Member];[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Remaining Work]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Assigned To];[Team Member];VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Save and close the workbook. Step 6: Bind Product backlog workbook to the team project Repeat the steps for binding the Iteration backlog for thiw workbook too. In the worksheet Capacity, the formula of the Storypoints might be missing. You can resolve it with: =IF([Iteration]="";"";SUMIFS([Table name][Story Points];[Table name][Iteration Path];[Iteration]&"*")) Example =IF([Iteration]="";"";SUMIFS(VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Story Points];VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Iteration Path];[Iteration]&"*"))
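    Collecting the Team Project name and GUID by hand in step 5 is easy to get wrong. If the TFS 2010 client assemblies are installed, a small console helper can list them for every project in the collection. Treat this as a sketch only: the collection URL is a placeholder and the GUID is simply taken from the tail of the project URI (vstfs:///Classification/TeamProject/<guid>).

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Server;

    class ListProjectGuids
    {
        static void Main()
        {
            // Replace with your own collection URL.
            var tfs = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            tfs.EnsureAuthenticated();

            var css = tfs.GetService<ICommonStructureService>();
            foreach (ProjectInfo project in css.ListAllProjects())
            {
                // Project URIs look like vstfs:///Classification/TeamProject/<guid>
                string guid = project.Uri.Substring(project.Uri.LastIndexOf('/') + 1);
                Console.WriteLine("{0}\t{1}", project.Name, guid);
            }
        }
    }

    The name and GUID printed for your project are the values to paste into the [Table name]_TeamProjectName and [Table name]_TeamProjectId custom document properties described above.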

    Read the article

  • Automatically create bug resolution task using the TFS 2010 API

    - by Bob Hardister
    My customer requires bug resolution to be approved and tracked.  To minimize the overhead for developers I implemented a TFS 2010 server-side plug-in to automatically create a child resolution task for the bug when the “CCB” field is set to approved. The CCB field is a custom field.  I also added the story points field to the bug WIT for sizing purposes. Redundant tasks will not be created unless the bug title is changed or the prior task is closed. The program writes an audit trail to a log file visible in the TFS Admin Console Log view. Here’s the code. BugAutoTask.cs /* SPECIFICATION * When the CCB field on the bug is set to approved, create a child task where the task: * name = Resolve bug [ID] - [Title of bug] * assigned to = same as assigned to field on the bug * same area path * same iteration path * activity = Bug Resolution * original estimate = bug points * * The source code is used to build a dll (Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll), * which needs to be copied to * C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins * on ALL TFS application-tier servers. * * Author: Bob Hardister. */ using System; using System.Collections.Generic; using System.IO; using System.Xml; using System.Text; using System.Diagnostics; using System.Linq; using Microsoft.TeamFoundation.Common; using Microsoft.TeamFoundation.Framework.Server; using Microsoft.TeamFoundation.WorkItemTracking.Client; using Microsoft.TeamFoundation.WorkItemTracking.Server; using Microsoft.TeamFoundation.Client; using System.Collections; namespace BugAutoTaskCreation { public class BugAutoTask : ISubscriber { public EventNotificationStatus ProcessEvent(TeamFoundationRequestContext requestContext, NotificationType notificationType, object notificationEventArgs, out int statusCode, out string statusMessage, out ExceptionPropertyCollection properties) { statusCode = 0; properties = null; statusMessage = String.Empty; // Error message for for tracing last code executed and optional fields string lastStep = "No field values found or set "; try { if ((notificationType == NotificationType.Notification) && (notificationEventArgs.GetType() == typeof(WorkItemChangedEvent))) { WorkItemChangedEvent workItemChange = (WorkItemChangedEvent)notificationEventArgs; // see ConnectToTFS() method below to select which TFS instance/collection // to connect to TfsTeamProjectCollection tfs = ConnectToTFS(); WorkItemStore wiStore = tfs.GetService<WorkItemStore>(); lastStep = lastStep + ": connection to TFS successful "; // Get the work item that was just changed by the user. WorkItem witem = wiStore.GetWorkItem(workItemChange.CoreFields.IntegerFields[0].NewValue); lastStep = lastStep + ": retrieved changed work item, ID:" + witem.Id + " "; // Filter for Bug work items only if (witem.Type.Name == "Bug") { // DEBUG lastStep = lastStep + ": changed work item is a bug "; // Filter for CCB (i.e. 
Baseline Status) field set to approved only bool BaselineStatusChange = false; if (workItemChange.ChangedFields != null) { ProcessBugRevision(ref lastStep, workItemChange, wiStore, ref witem, ref BaselineStatusChange); } } } } catch (Exception e) { Trace.WriteLine(e.Message); Logger log = new Logger(); log.WriteLineToLog(MsgLevel.Error, "Application error: " + lastStep + " - " + e.Message + " - " + e.InnerException); } statusCode = 1; statusMessage = "Bug Auto Task Evaluation Completed"; properties = null; return EventNotificationStatus.ActionApproved; } // PRIVATE METHODS private static void ProcessBugRevision(ref string lastStep, WorkItemChangedEvent workItemChange, WorkItemStore wiStore, ref WorkItem witem, ref bool BaselineStatusChange) { foreach (StringField field in workItemChange.ChangedFields.StringFields) { // DEBUG lastStep = lastStep + ": last changed field is - " + field.Name + " "; if (field.Name == "Baseline Status") { lastStep = lastStep + ": retrieved bug baseline status field value, bug ID:" + witem.Id + " "; BaselineStatusChange = (field.NewValue != field.OldValue); if ((BaselineStatusChange) && (field.NewValue == "Approved")) { // Instanciate logger Logger log = new Logger(); // *** Create resolution task for this bug *** // ******************************************* // Get the team project and selected field values of the bug work item Project teamProject = witem.Project; int bugID = witem.Id; string bugTitle = witem.Fields["System.Title"].Value.ToString(); string bugAssignedTo = witem.Fields["System.AssignedTo"].Value.ToString(); string bugAreaPath = witem.Fields["System.AreaPath"].Value.ToString(); string bugIterationPath = witem.Fields["System.IterationPath"].Value.ToString(); string bugChangedBy = witem.Fields["System.ChangedBy"].OriginalValue.ToString(); string bugTeamProject = witem.Project.Name; lastStep = lastStep + ": all mandatory bug field values found "; // Optional fields Field bugPoints = witem.Fields["Microsoft.VSTS.Scheduling.StoryPoints"]; if (bugPoints.Value != null) { lastStep = lastStep + ": all mandatory and optional bug field values found "; } // Initialize child resolution task title string childTaskTitle = "Resolve bug " + bugID + " - " + bugTitle; // At this point I can check if a resolution task (of the same name) // for the bug already exist // If so, do not create a new resolution task bool createResolutionTask = true; WorkItem parentBug = wiStore.GetWorkItem(bugID); WorkItemLinkCollection links = parentBug.WorkItemLinks; foreach (WorkItemLink wil in links) { if (wil.LinkTypeEnd.Name == "Child") { WorkItem childTask = wiStore.GetWorkItem(wil.TargetId); if ((childTask.Title == childTaskTitle) && (childTask.State != "Closed")) { createResolutionTask = false; log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID: " + bugID + ". 
Task not created as open one of the same name already exist, ID:" + childTask.Id); } } } if (createResolutionTask) { // Define the work item type of the new work item WorkItemTypeCollection workItemTypes = wiStore.Projects[teamProject.Name].WorkItemTypes; WorkItemType wiType = workItemTypes["Task"]; // Setup the new task and assign field values witem = new WorkItem(wiType); witem.Fields["System.Title"].Value = "Resolve bug " + bugID + " - " + bugTitle; witem.Fields["System.AssignedTo"].Value = bugAssignedTo; witem.Fields["System.AreaPath"].Value = bugAreaPath; witem.Fields["System.IterationPath"].Value = bugIterationPath; witem.Fields["Microsoft.VSTS.Common.Activity"].Value = "Bug Resolution"; lastStep = lastStep + ": all mandatory task field values set "; // Optional fields if (bugPoints.Value != null) { witem.Fields["Microsoft.VSTS.Scheduling.OriginalEstimate"].Value = bugPoints.Value; lastStep = lastStep + ": all mandatory and optional task field values set "; } // Check for validation errors before saving the new task and linking it to the bug ArrayList validationErrors = witem.Validate(); if (validationErrors.Count == 0) { witem.Save(); // Link the new task (child) to the bug (parent) var linkType = wiStore.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy]; // Fetch the work items to be linked var parentWorkItem = wiStore.GetWorkItem(bugID); int taskID = witem.Id; var childWorkItem = wiStore.GetWorkItem(taskID); // Add a new link to the parent relating the child and save it parentWorkItem.Links.Add(new WorkItemLink(linkType.ForwardEnd, childWorkItem.Id)); parentWorkItem.Save(); log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID:" + bugID + ", which automatically created child resolution task, ID:" + taskID); } else { log.WriteLineToLog(MsgLevel.Error, "Error in creating bug resolution child task for bug ID:" + bugID); foreach (Field taskField in validationErrors) { log.WriteLineToLog(MsgLevel.Error, " - Validation Error in task field: " + taskField.ReferenceName); } } } } } } } private TfsTeamProjectCollection ConnectToTFS() { // Connect to TFS string tfsUri = string.Empty; // Production TFS instance production collection tfsUri = @"xxxx"; // Production TFS instance admin collection //tfsUri = @"xxxxx"; // Local TFS testing instance default collection //tfsUri = @"xxxxx"; TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new System.Uri(tfsUri)); tfs.EnsureAuthenticated(); return tfs; } // HELPERS public string Name { get { return "Bug Auto Task Creation Event Handler"; } } public SubscriberPriority Priority { get { return SubscriberPriority.Normal; } } public enum MsgLevel { Info, Warning, Error }; public Type[] SubscribedTypes() { return new Type[1] { typeof(WorkItemChangedEvent) }; } } } Logger.cs using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Windows.Forms; namespace BugAutoTaskCreation { class Logger { // fields private string _ApplicationDirectory = @"C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs"; private string _LogFileName = @"\CFG_ACCT_AT_OWS_BugAutoTaskCreation.log"; private string _LogFile; private string _LogTimestamp = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss"); private string _MsgLevelText = string.Empty; // default constructor public Logger() { // check for a prior log file FileInfo logFile = new FileInfo(_ApplicationDirectory + _LogFileName); if (!logFile.Exists) { 
CreateNewLogFile(ref logFile); } } // properties public string ApplicationDirectory { get { return _ApplicationDirectory; } set { _ApplicationDirectory = value; } } public string LogFile { get { _LogFile = _ApplicationDirectory + _LogFileName; return _LogFile; } set { _LogFile = value; } } // PUBLIC METHODS public void WriteLineToLog(BugAutoTask.MsgLevel msgLevel, string logRecord) { try { // set msgLevel text if (msgLevel == BugAutoTask.MsgLevel.Info) { _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Warning) { _MsgLevelText = "[Warning @" + MsgTimeStamp() + "] "; } else if (msgLevel == BugAutoTask.MsgLevel.Error) { _MsgLevelText = "[Error @" + MsgTimeStamp() + "] "; } else { _MsgLevelText = "[Error: unsupported message level @" + MsgTimeStamp() + "] "; } // write a line to the log file StreamWriter logFile = new StreamWriter(_ApplicationDirectory + _LogFileName, true); logFile.WriteLine(_MsgLevelText + logRecord); logFile.Close(); } catch (Exception) { throw; } } // PRIVATE METHODS private void CreateNewLogFile(ref FileInfo logFile) { try { string logFilePath = logFile.FullName; // write the log file header _MsgLevelText = "[Info @" + MsgTimeStamp() + "] "; string cpu = string.Empty; if (Environment.Is64BitOperatingSystem) { cpu = " (x64)"; } StreamWriter newLog = new StreamWriter(logFilePath, false); newLog.Flush(); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText + "Team Foundation Server Administration Log"); newLog.WriteLine(_MsgLevelText + "Version : " + "1.0.0 Author: Bob Hardister"); newLog.WriteLine(_MsgLevelText + "DateTime : " + _LogTimestamp); newLog.WriteLine(_MsgLevelText + "Type : " + "OWS Custom TFS API Plug-in"); newLog.WriteLine(_MsgLevelText + "Activity : " + "Bug Auto Task Creation for CCB Approved Bugs"); newLog.WriteLine(_MsgLevelText + "Area : " + "Build Explorer"); newLog.WriteLine(_MsgLevelText + "Assembly : " + "Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll"); newLog.WriteLine(_MsgLevelText + "Location : " + @"C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins"); newLog.WriteLine(_MsgLevelText + "User : " + Environment.UserDomainName + @"\" + Environment.UserName); newLog.WriteLine(_MsgLevelText + "Machine : " + Environment.MachineName); newLog.WriteLine(_MsgLevelText + "System : " + Environment.OSVersion + cpu); newLog.WriteLine(_MsgLevelText + "===================================================================="); newLog.WriteLine(_MsgLevelText); newLog.Close(); } catch (Exception) { throw; } } private string MsgTimeStamp() { string msgTimestamp = string.Empty; return msgTimestamp = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss:fff"); } } }

    Read the article

  • CodePlex Daily Summary for Tuesday, October 30, 2012

    CodePlex Daily Summary for Tuesday, October 30, 2012Popular ReleasesCleverBobCat: CleverBobCat 0.4: Added: ModuleCleverResourceTransferMCEBuddy 2.x: MCEBuddy 2.3.6: Changelog for 2.3.6 (32bit and 64bit) 1. Fixed a bug in multichannel audio conversion failure. AAC does not support 6 channel audio, MCEBuddy now checks for it and force the output to 2 channel if AAC codec is specified 2. Fixed a bug in Original Broadcast Date and Time. Original Broadcast Date and Time is reported in UTC timezone in WTV metadata. TVDB and MovieDB dates are reported in network timezone. It is assumed the video is recorded and converted on the same machine, i.e. local timezone...ZXMAK2: Version 2.6.8.0: Whats new: add Spectrum +3 model; tape serializer: show extended info for crc bad blocksMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.73: Fix issue in Discussion #401101 (unreferenced var in a for-in statement was getting removed). add the grouping operator to the parsed output so that unminified parsed code is closer to the original. Will still strip unneeded parens later, if minifying. more cleaning of references as they are minified out of the code.RiP-Ripper & PG-Ripper: PG-Ripper 1.4.03: changes FIXED: Kitty-Kats new Forum UrlJobboard Light Edition: Version 2.2: Major Release Parallalization Added Job posting now working Job Editing now working Search Box Clear Filters added Search criteria updated (more simple) jobdetails page displays missing selections parameters added to url Cleaned UI moreMJP's DirectX 11 Samples: MSAA Resolve Filtering: Sample application and source code from the article "Experimenting with Reconstruction Filters for MSAA Resolve" http://mynameismjp.wordpress.com/2012/10/28/msaa-resolve-filters/Liberty: v3.4.0.1 Release 28th October 2012: Change Log -Fixed -H4 Fixed the save verification screen showing incorrect mission and difficulty information for some saves -H4 Hopefully fixed the issue where progress did not save between missions and saves would not revert correctly -H3 Fixed crashes that occurred when trying to load player information -Proper exception dialogs will now show in place of crashesPlayer Framework by Microsoft: Player Framework for Windows 8 (Preview 7): This release is compatible with the version of the Smooth Streaming SDK released today (10/26). Release 1 of the player framework is expected to be available next week. IMPROVEMENTS & FIXESIMPORTANT: List of breaking changes from preview 6 Support for the latest smooth streaming SDK. Xaml only: Support for moving any of the UI elements outside the MediaPlayer (e.g. into the appbar). Note: Equivelent changes to the JS version due in coming week. Support for localizing all text used in t...Send multiple SMS via Way2SMS C#: SMS 1.1: Added support for 160by2Quick Launch: Quick Launch 1.0: A Lightweight and Fast Way to Manage and Launch Thousands of Tools and ApplicationsPress Win+Q and start to search and run. http://www.codeplex.com/Download?ProjectName=quicklaunch&DownloadId=523536Orchard Project: Orchard 1.6: Please read our release notes for Orchard 1.6: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.Media Companion: Media Companion 3.507b: Once again, it has been some time since our release, and there have been a number changes since then. 
It is hoped that these changes will address some of the issues users have been experiencing, and of course, work continues! New Features: Added support for adding Home Movies. Option to sort Movies by votes. Added 'selectedBrowser' preference used when opening links in an external browser. Added option to fallback to getting runtime from the movie file if not available on IMDB. Added new Big...MSBuild Extension Pack: October 2012: Release Blog Post The MSBuild Extension Pack October 2012 release provides a collection of over 475 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...NAudio: NAudio 1.6: Release notes at http://mark-dot-net.blogspot.co.uk/2012/10/naudio-16-release-notes-10th.htmlPowerShell Community Extensions: 2.1 Production: PowerShell Community Extensions 2.1 Release NotesOct 25, 2012 This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.Umbraco CMS: Umbraco 4.9.1: Umbraco 4.9.1 is a bugfix release to fix major issues in 4.9.0 BugfixesThe full list of fixes can be found in the issue tracker's filtered results. A summary: Split buttons work again, you can now also scroll easier when the list is too long for the screen Media and Content pickers have information of the full path of the picked item Fixed: Publish status may not be accurate on nodes with large doctypes Fixed: 2 media folders and recycle bins after upgrade to 4.9 The template/code ...AcDown????? - AcDown Downloader Framework: AcDown????? v4.2.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...Rawr: Rawr 5.0.2: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. 
The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] Support for connecting to CRM Online via Office 365 (OSDP) [NEW] Current connection information and loaded ribbon name are displayed in the status bar [IMPROVED] Connect dialog minor improvements and error message descriptions [IMPROVED] Connecting to a CRM server will close currently loaded ribbon upon confirmation (if another ribbon was loaded previously) [FIX] Fixed bug in Open Ribbon dialog which would not allow to refresh entity list more than onceNew Projects10010dshjlahfajhflkjhkjhherkjhfkja: 10010dshjlahfajhflkjhkjhherkjhfkjaA supplementary Machine Learning and Evolutionary Computation suite for Orange: ML & EC tools such as: - Optimization Meta-heuristics (EDAs, cGA) - Feature Selection - Kernel Methods - Dependency NetworksATH TaskList 2012: ATH TaskList 2012Canlipe: This is a simple C# / Mono (.NET) static blog generator. Nothing fancy, just the basic functionality at the moment. The code is really dirty too.com.sogeti.certif: Basic but essential source code example for SharePoint 2010Duhking: A quasi-duck typing library for .Net. It doesn't provide "real" duck typing, but it kinda looks a little bit like a duck if you squint and look at it from a distance. This library is built on and for .Net 3.5,Guestbook 2: A highly customizable program to track people visiting a booth via recording their name and some basic information. Written in C#.Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.Isel - Projecto: projecto para isel....killStudentMain: killStudentMainKWSystem-WithClient: c++??La villa del seis: La villa del seis is a multiplatform point-and-click graphical adventure. Also, you can play it like a text adventure (interactive fiction) on a text browser (like Links, w3m or Lynx) or in a normal browser without JavaScript support.malversuchen: Das ist ein TEstPhotoCloud: This is a sample ASP.NET Web Pages app that uses Twitter authentication, HTMl5 drag and drop file uploading, blob storage, and SignalR to build a photo sharing platnosci-test: testSharePoint GSC: SharePoint GSC is a set of SharePoint utilities developed in General de Software. SHCODE: SH Project please contact little_deer@hotmail.com for more details.SimpleImageCacherRT: WinRT????????????????????。Windows.UI.XAML.Controls.Image????Source?????????????????????。???????????????…SLIP Framework: SilverLightweight Prism FrameworkSmallCheck: F# exhaustive testing framework. Port of Haskell SmallCheck test librarySolution organizer: Help developers organizing their Visual Studio 2010 solutionsSukul: Web 2.0 Portal for powering modern CMStestjabbr1029: heTFS Reflect Work Items Links: This small utility allows complex TFS to TFS migration to preserve work item links during the process. It is used in complement with the TFS Integration tools.XactiSource Application Framework: The XactiSource Application Framework is an attempt to add basic functionality that can be used from one project to the next. Yasminoku: Yasminoku is an open source "Sudoku" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. Includes sudoku solver. This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.Zalgo for Word: Zalgo addin for Word 2010

    Read the article

  • CodePlex Daily Summary for Monday, October 29, 2012

    CodePlex Daily Summary for Monday, October 29, 2012

    Popular Releases
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.73: Fix issue in Discussion #401101 (unreferenced var in a for-in statement was getting removed). Add the grouping operator to the parsed output so that unminified parsed code is closer to the original. Will still strip unneeded parens later, if minifying. More cleaning of references as they are minified out of the code.
    - RiP-Ripper & PG-Ripper: PG-Ripper 1.4.03: Changes: FIXED: Kitty-Kats new forum URL.
    - MJP's DirectX 11 Samples: MSAA Resolve Filtering: Sample application and source code from the article "Experimenting with Reconstruction Filters for MSAA Resolve" http://mynameismjp.wordpress.com/2012/10/28/msaa-resolve-filters/
    - Liberty: v3.4.0.1 Release 28th October 2012: Change log (fixes): H4 - fixed the save verification screen showing incorrect mission and difficulty information for some saves; H4 - hopefully fixed the issue where progress did not save between missions and saves would not revert correctly; H3 - fixed crashes that occurred when trying to load player information; proper exception dialogs will now show in place of crashes.
    - Player Framework by Microsoft: Player Framework for Windows 8 (Preview 7): This release is compatible with the version of the Smooth Streaming SDK released today (10/26). Release 1 of the player framework is expected to be available next week. Improvements and fixes (IMPORTANT: list of breaking changes from Preview 6): support for the latest Smooth Streaming SDK; XAML only: support for moving any of the UI elements outside the MediaPlayer (e.g. into the app bar) - note: equivalent changes to the JS version are due in the coming week; support for localizing all text used in t...
    - Send multiple SMS via Way2SMS C#: SMS 1.1: Added support for 160by2.
    - Quick Launch: Quick Launch 1.0: A lightweight and fast way to manage and launch thousands of tools and applications. Press Win+Q and start to search and run. http://www.codeplex.com/Download?ProjectName=quicklaunch&DownloadId=523536
    - Orchard Project: Orchard 1.6: Please read our release notes for Orchard 1.6: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.
    - Media Companion: Media Companion 3.507b: Once again, it has been some time since our last release, and there have been a number of changes since then. It is hoped that these changes will address some of the issues users have been experiencing, and of course, work continues! New features: added support for adding home movies; option to sort movies by votes; added 'selectedBrowser' preference used when opening links in an external browser; added option to fall back to getting runtime from the movie file if not available on IMDB; added new Big...
    - MSBuild Extension Pack: October 2012: Release blog post. The MSBuild Extension Pack October 2012 release provides a collection of over 475 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following: System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...
    - NAudio: NAudio 1.6: Release notes at http://mark-dot-net.blogspot.co.uk/2012/10/naudio-16-release-notes-10th.html
    - PowerShell Community Extensions: 2.1 Production: PowerShell Community Extensions 2.1 Release Notes (Oct 25, 2012). This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.
    - Umbraco CMS: Umbraco 4.9.1: Umbraco 4.9.1 is a bugfix release to fix major issues in 4.9.0. Bugfixes: the full list of fixes can be found in the issue tracker's filtered results. A summary: split buttons work again, and you can now scroll more easily when the list is too long for the screen; media and content pickers show the full path of the picked item; fixed: publish status may not be accurate on nodes with large doctypes; fixed: 2 media folders and recycle bins after upgrade to 4.9; the template/code ...
    - AcDown Downloader Framework: AcDown v4.2.2: A downloader for Acfun, Bilibili, YouTube and other sites, written in C# and requiring .NET Framework 2.0; runs on 32/64-bit Windows XP/Vista/7/8 (on Windows XP the .NET Framework 2.0 x86 runtime must be installed) or on 32/64-bit Linux via Mono...
    - Rawr: Rawr 5.0.2: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data hosted on Curse. The addon does not perform calculations like Rawr, it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - MCEBuddy 2.x: MCEBuddy 2.3.5: Changelog for 2.3.5 (32-bit and 64-bit): 1. Fixed a bug causing MCEBuddy to crash during or after installation on Windows XP. 2. Bugfix for a resource leak with UPnP which would lead to a failure after many days. 3. Increased the UPnP discovery re-scan interval from 10 minutes to 30 minutes. 4. Added support for specifying TVDB and IMDB IDs in the conversion task page (forcing the internet lookup for metadata).
    - SharePoint 2010 - Community Projects: LifeInSharePoint - SP2013 Branding for 2010: The SharePoint 2013 masterpage for 2010 has the following features: SharePoint sandbox solution (Office 365 compatible); custom Site Actions menu link; "Focus on Content" - hide site navigation to allow the user to focus on content only; get the SharePoint 2013 interface without having to upgrade :)
    - Perspective : Easy 2D and 3D programming with Silverlight: Version 3.0.1: A major release, for Silverlight 5. What's new? New since version 3.0: antialiasing, better dynamic scene support. New since version 2: a high-level 3D framework for Silverlight 5, similar to Perspective 3D for WPF. New since the 3D-only version: a full-featured demo application. http://www.odewit.net//Articles/Sl3DIntro/BitmapTexture.png Documentation: see Readme.txt in the sources for release details. Perspective : Easy 3D programming with Silverlight 5. Perspective : Basic 3D mod...
    - CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] Support for connecting to CRM Online via Office 365 (OSDP). [NEW] Current connection information and loaded ribbon name are displayed in the status bar. [IMPROVED] Connect dialog minor improvements and error message descriptions. [IMPROVED] Connecting to a CRM server will close the currently loaded ribbon upon confirmation (if another ribbon was loaded previously). [FIX] Fixed bug in Open Ribbon dialog which would not allow refreshing the entity list more than once.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] marked member as obsolete. WAF: fix recent file list remove issue. WAF: minor code improvements. BookLibrary: fix Blend design time support o...

    New Projects
    - 7COM0207 Web Scripting and Content Creation
    - Access Viewer and Reporter: An Access reader. The project provides the ability to explore an Access database file, query it, and generate reports.
    - AlienHack HackMap: AlienHack is an open-source hackmap for Thaicybergames (Cybergames) for Warcraft III, written in VB.NET.
    - CleverBobCat: CleverBobCat is a plugin that provides functionality for BobCat Ind wheeled equipment for studying and colonizing planets in Kerbal Space Program.
    - ColorTranslator for Win8 Apps: This project adds the ColorTranslator class to the Windows.UI namespace so that you can convert colors as easily as in desktop WinForms apps.
    - Dnascout: A tool to search a DNA strand of {ACTG} characters for matches against a specified substring, calculate distances between matching substrings (span), and also generate lists of all unique/redundant strands in the file.
    - dSUtilities: Utilities for Plugin Manager and file system manipulation.
    - EnhancedRefinement: SharePoint web parts for FAST search refinement.
    - Entity SQL for Linq to SQL: Entity SQL from Linq to SQL & ALinq.
    - FrameRateCounter: A C# XNA 3.0 GameComponent that calculates and displays the framerate of a game.
    - Friend Analyzer: A Facebook friend parser that applies k-Means in order to determine relevant user categories and the most important person in said group.
    - Give Me Menu: Project of TEP.
    - HeloWorld: A simple HelloWorld project; the author missed an 'l' in the name and got in, since a HelloWorld project must already have existed.
    - LprConfigurer: The configuration tool for the kise LPR project.
    - LTI Samples: This project is a collection of LTI sample projects.
    - MCDA4ArcMap: Multi-criteria decision analysis for ArcMap.
    - My Diary: A Windows 8 Metro-style diary.
    - none111: none
    - OMR.Crawler: An experimental crawler for web and desktop content; a simple implementation is provided in the console project.
    - Plasmid OS: Plasmid is a free, open-source operating system developed in C# on top of COSMOS. It is designed for speed and stability, and is planned to comply with POSIX.
    - proj_2_forum: Forum site (forumowisko).
    - RSLight: RSLight
    - SLRTC#: Done for an IT competition by CS students from Romania.
    - Speech Recognition: An application that uses speech commands to perform tasks.
    - StoragePeriod: StoragePeriod

    Read the article

  • Fake ISAPI Handler to serve static files with extensions that are rewritten by the URL rewriter

    - by developerit
    Introduction
    I often map the .html extension to the ASP.NET DLL in order to use a URL rewriter with .html extensions. Recently, in the new version of www.nouvelair.ca, we renamed all URLs to end with .html. This works great, but it failed when we used FCK Editor: static .html files would no longer be served, because we had mapped the .html extension to the .NET Framework. What can we do to keep the .html extension in our rewriter and still get the normal IIS behavior for static .html files?

    Analysis
    I thought that this could be resolved with a simple HTTP handler. We would map the URLs of static files in our rewriter to this handler, which would read the static file and serve it, just as IIS would do.

    Implementation
    This is how I coded the class. Note that this may not be bullet proof. I only tested it once, and I am sure that the logic behind IIS is more complicated than this. If you find errors or think of possible improvements, let me know.

    Imports System.Web
    Imports System.Web.Services

    ' Author: Nicolas Brassard
    ' For: Solutions Nitriques inc. http://www.nitriques.com
    ' Date Created: April 18, 2009
    ' Last Modified: April 18, 2009
    ' License: CPOL (http://www.codeproject.com/info/cpol10.aspx)
    ' Files: ISAPIDotNetHandler.ashx
    '        ISAPIDotNetHandler.ashx.vb
    ' Class: ISAPIDotNetHandler
    ' Description: Fake ISAPI handler to serve static files.
    '   Useful when you want to serve a static file that has a rewritten extension.
    '   Example: I often map the html extension to the asp.net dll in order to use a url rewriter with .html.
    '   If you want to still serve static html files, add a rewriter rule to redirect html files to this handler.
    Public Class ISAPIDotNetHandler
        Implements System.Web.IHttpHandler

        Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            ' Since we are doing the job IIS normally does with html files,
            ' we set the content type to match html.
            ' You may want to customize this with your own logic if you want to serve
            ' txt, xml or any other text file.
            context.Response.ContentType = "text/html"

            ' We begin a Try here. Any error that occurs will result in a 404 Page Not Found error.
            ' We replicate the behavior of IIS when it doesn't find the corresponding file.
            Try
                ' Declare a local variable containing the value of the query string
                Dim uri As String = context.Request("fileUri")

                ' If the value in the query string is null,
                ' throw an error to generate a 404
                If String.IsNullOrEmpty(uri) Then
                    Throw New ApplicationException("No fileUri")
                End If

                ' If the value in the query string doesn't end with .html, block the access.
                ' Without this check the handler would be a HUGE security hole, since it could
                ' permit full read access to .aspx, .config, etc.
                If Not uri.ToLower.EndsWith(".html") Then
                    ' throw an error to generate a 404
                    Throw New ApplicationException("Extension not allowed")
                End If

                ' Map the file on the server.
                ' If the file doesn't exist on the server, an exception is thrown and a 404 is generated.
                Dim fullPath As String = context.Server.MapPath(uri)

                ' Read the actual file
                Dim stream As IO.StreamReader = FileIO.FileSystem.OpenTextFileReader(fullPath)

                ' Write the file into the response
                context.Response.Output.Write(stream.ReadToEnd)

                ' Close and dispose the stream
                stream.Close()
                stream.Dispose()
                stream = Nothing
            Catch ex As Exception
                ' Set the status code of the response
                context.Response.StatusCode = 404 ' Page not found

                ' For testing and debugging only! This may cause a security leak.
                ' context.Response.Output.Write(ex.Message)
            Finally
                ' In all cases, flush and end the response
                context.Response.Flush()
                context.Response.End()
            End Try
        End Sub

        ' Automatically generated by Visual Studio
        ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
            Get
                Return False
            End Get
        End Property
    End Class

    Conclusion
    As you can see, with our static files mapped to this handler using a query string (e.g. /ISAPIDotNetHandler.ashx?fileUri=index.html), you get the same behavior as if you had asked for the URI /index.html. Finally, test this only in IIS with the html extension mapped to aspnet_isapi.dll. URL rewriting will work in Cassini (the internal web server shipped with Visual Studio), but it is not the same as with IIS, since EVERY request is handled by .NET.

    Versions
    First release
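    To show how the handler might be wired into a site, here is one possible routing hook. It is only a sketch: it assumes the handler is deployed at /ISAPIDotNetHandler.ashx and uses a Global.asax rewrite, whereas the article's own site performed this step inside its existing URL-rewriter rules.

    ' Global.asax.vb - hypothetical routing sketch, not part of the original article.
    Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
        Dim path As String = Request.Url.AbsolutePath
        ' Send only static .html requests to the fake ISAPI handler; other requests are untouched.
        If path.EndsWith(".html", StringComparison.OrdinalIgnoreCase) Then
            Context.RewritePath("~/ISAPIDotNetHandler.ashx?fileUri=" & HttpUtility.UrlEncode(path))
        End If
    End Sub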

    Read the article

  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)
    Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

    In this Document
      Purpose
      Last Review Date
      Instructions for the Reader
      Troubleshooting Details
        1. Scope and Application
        2. Definitions and Classifications
        3. How to Use This Guide
        4. Basic AQ Propagation Troubleshooting
        5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages
        6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment
        7. Performance Issues
      References

    Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2. Information in this document applies to any platform.

    Purpose
    This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other, more specific notes on Oracle Streams propagation and Advanced Queuing propagation issues.

    Last Review Date
    December 20, 2010

    Instructions for the Reader
    A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

    Troubleshooting Details

    1. Scope and Application
    This note is intended for database administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution. It can also be used as a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area.
    This guide is divided into six parts:
    Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide)
    Section 3: How to Use This Guide (a starting point for determining the scope of the problem and which sections to consult)
    Section 4: Basic AQ propagation troubleshooting (applies both to AQ propagation of user-enqueued and dequeued messages and to Oracle Streams propagations)
    Section 5: Additional troubleshooting steps for AQ propagation of user-enqueued and dequeued messages
    Section 6: Additional troubleshooting steps for Oracle Streams propagation
    Section 7: Performance issues

    2. Definitions and Classifications
    Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered.

    2.1. What Type of Propagation is Being Used?

    2.1.1. Buffered Messaging
    For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent.

    2.1.2. Propagation mode - queue-to-dblink vs queue-to-queue
    As of 10.2, an AQ propagation can also be defined as queue-to-dblink or queue-to-queue:
    queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink.
A single propagation schedule is used to propagate messages to all subscribing queues. Hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database. queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control on the propagation schedule for message delivery. This new propagation mode also supports transparent failover when propagating to a destination Oracle RAC system. With queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different. The default is queue-to-dblink. To verify if queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked.See the following note for a method to switch between the two modes:Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink 2.1.3. Combined Capture and Apply (CCA) for Streams In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use: COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10SELECT CAPTURE_NAME, DECODE(OPTIMIZATION,0, 'No','Yes') OPTIMIZATIONFROM V$STREAMS_CAPTURE; Also, see the following note:Document 463820.1 Streams Combined Capture and Apply in 11g 2.2. Queue Table Compatibility There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility: 8.0 - earliest version, deprecated in 10.2 onwards 8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information 10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0 2.3. Single vs Multiple Consumer Queue Tables If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible. 3. How to Use This Guide 3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow? Run the following query on the source database for the propagation (assuming that it is running): select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>'; If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7. 3.2. Propagation Between Persistent User-Created Queues See Sections 4 and 5 (and optionally Section 6 if performance is an issue). 3.3. 
Propagation Between Buffered User-Created Queues See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue). 3.4. Propagation between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization) See Sections 4 and 6 (and optionally Section 7 if performance is an issue). 3.5. Propagation between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization) Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply. 3.6. Messaging Gateway Propagations This note does not apply to Messaging Gateway propagations. 4. Basic AQ Propagation Troubleshooting 4.1. Double-check Your Code Make sure that you are consistent in your usage of the database link(s) names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct. 4.2. Verify that Job Queue Processes are Running 4.2.1. Versions 10.2 and Lower - DBA_JOBS Package For versions 10.2 and lower, a scheduled propagation is managed by DBMS_JOB package. The propagation is performed by job queue process background processes. Therefore we need to verify that there are sufficient processes available for the propagation process. We should have at least 4 job queue processes running and preferably more depending on the number of other jobs running in the database. It should be noted that for AQ specific work, AQ will only ever use half of the job queue processes available.An issue caused by an inadequate job queue processes parameter setting is described in the following note:Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self 4.2.1.1. Job Queue Processes in Initalization Parameter File The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via connect / as sysdbaalter system set JOB_QUEUE_PROCESSES=10; 4.2.1.2. Job Queue Processes in Memory The following command will show how many job queue processes are currentlyin use by this instance (this may be different than what is in the init.ora/spfile): connect / as sysdbashow parameter job; 4.2.1.3. OS PIDs Corresponding to Job Queue Processes Identify the operating system process ids (spids) of job queue processes involved in propagation via select p.SPID, p.PROGRAM from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOBand j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'; and these SPIDs can be used to check at the operating system level that they exist.In 8i a job queue process will have a name similar to: ora_snp1_<instance_name>.In 9i onwards you will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999. 4.2.2. Version 11.1 and Above - Oracle Scheduler In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created. 
If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run.See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:Document 1083608.1 11g Streams and Oracle Scheduler 4.2.2.1. Job Queue Processes in Initalization Parameter File The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via connect / as sysdbaalter system set JOB_QUEUE_PROCESSES=10; To set the JOB_QUEUE_PROCESSES parameter to its default value, run: connect / as sysdbaalter system reset JOB_QUEUE_PROCESSES; and then bounce the instance. 4.2.2.2. Job Queue Processes in Memory The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile): connect / as sysdbashow parameter job; 4.2.2.3. OS PIDs Corresponding to Job Queue Processes Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via col PROGRAM for a30select p.SPID, p.PROGRAM, j.JOB_namefrom v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j where s.SID=jr.SESSION_ID and s.PADDR=p.ADDRand jr.JOB_name=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%'; and these SPIDs can be used to check at the operating system level that they exist.You will see a coordinator process: ora_cjq0_ and multiple slave processes: ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999. 4.3. Check the Alert Log and Any Associated Trace Files The first place to check for propagation failures is the alert logs at all sites (local and if relevant all remote sites). When a job queue process attempts to execute a schedule and fails it will always write an error stack to the alert log. This error stack will also be written in a job queue process trace file, which will be written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and in the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing. This means that the problem could be with the set up of the schedule. In this example the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem. 
Thu Feb 14 10:40:05 2002 Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:ORA-04052: error occurred when looking up Remote object [email protected]: error occurred at recursive SQL level 4ORA-02068: following severe error from SHANE816ORA-03114: not connected to ORACLEORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770ORA-06512: at "SYS.DBMS_AQADM", line 548ORA-06512: at line 1 Other potential errors that may be written to the alert log can be found in the following notes:Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)Document 846297.1 AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn] (10.2, 11.1)Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)Document 118884.1 How to unschedule a propagation schedule stuck in pending stateDocument 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueDocument 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757 4.3.1. Errors Related to Incorrect Network Configuration The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by tnsnames.ora file or database links being configured incorrectly: - ORA-12154: TNS:could not resolve service name- ORA-12505: TNS:listener does not currently know of SID given in connect descriptor- ORA-12514: TNS:listener could not resolve SERVICE_NAME - ORA-12541: TNS-12541 TNS:no listener 4.4. Check the Database Links Exist and are Functioning Correctly For schedules to remote databases confirm the database link exists via. 
    SQL> col DBLINK for a45
    SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
      2  from DBA_QUEUE_SCHEDULES
      3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

    QNAME                          DBLINK
    ------------------------------ ---------------------------------------------
    MY_QUEUE                       ORCL102B.WORLD

    Connect as the owner of the link and select across it to verify that it works and connects to the database we expect, i.e.

    select * from ALL_QUEUES@ORCL102B.WORLD;

    You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

    4.5. Has Propagation Been Correctly Scheduled?
    Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check that the destination is as intended and spelled correctly.

    SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

    SCHEMA  QNAME      DESTINATION        S PROCESS
    ------- ---------- ------------------ - -----------
    AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

    AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation was to a remote (different) database, a database link would be in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

    SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

    SCHEMA  QNAME      LAST_RUN_DATE         NEXT_RUN_DATE
    ------- ---------- --------------------- ---------------------
    AQADM   MULTIPLEQ  13-FEB-2002 13:18:57  13-FEB-2002 13:20:30

    In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call. In 10g and below, scheduling propagation posts a job in the DBA_JOBS view.
The columns are more or less the same as DBA_QUEUE_SCHEDULES so you just need to recognize the job and verify that it exists. SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';JOB WHAT---- ----------------- 720 next_date := sys.dbms_aqadm.aq$_propaq(job); For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead: SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';JOB_NAME------------------------------AQ_JOB$_41 If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely. 4.6. Is the Schedule Executing but Failing to Complete? Run the following query: SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;FAILURES LAST_ERROR_MSG------------ -----------------------1 ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueingORA-02063: preceding line from SHANE816 The failures column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?If a schedule is active and no errors are being reported then the source queue may not have any messages to be propagated. 4.7. Do the Propagation Notification Queue Table and Queue Exist? Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped and the corresponding queue table never be dropped. 
    10g and below
    The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

    SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
      2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

    QUEUE_TABLE
    ------------------------------
    AQ$_PROP_TABLE_1

    SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
      2  from DBA_QUEUES where owner='SYS'
      3  and QUEUE_TABLE like '%PROP_TABLE%';

    NAME                           ENQUEUE DEQUEUE
    ------------------------------ ------- -------
    AQ$_PROP_NOTIFY_1              YES     YES
    AQ$_AQ$_PROP_TABLE_1_E         NO      NO

    If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

    11g and above
    The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

    SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
      2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

    QUEUE_TABLE
    ------------------------------
    AQ_PROP_TABLE

    SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
      2  from DBA_QUEUES where owner='SYS'
      3  and QUEUE_TABLE like '%PROP_TABLE%';

    NAME                           ENQUEUE DEQUEUE
    ------------------------------ ------- -------
    AQ_PROP_NOTIFY                 YES     YES
    AQ$_AQ_PROP_TABLE_E            NO      NO

    If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

    4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?
    Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

    SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

    DESTINATION
    -----------------------------------------------------------------------------
    "AQADM"."INQ"@M2V102.ES

    SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from [email protected];

    OWNER    NAME   ENQUEUE     DEQUEUE
    -------- ------ ----------- -----------
    AQADM    INQ    YES         YES

    4.9. Do the Target and Source Database Charactersets Differ?
    If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether it is only a particular combination of charactersets which causes a problem.

    4.10. Check the Queue Table Type Agreement
    Propagation is not possible between queue tables whose types differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on.
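    As a quick illustration of that check, the PL/SQL below can be run at the source site. This is only a sketch: the queue names and database link are placeholders rather than objects taken from this note, and a fuller worked example appears in Section 5.2.

    SET SERVEROUTPUT ON
    DECLARE
      rc_value NUMBER;
    BEGIN
      -- Placeholder queue names and database link; substitute your own objects.
      DBMS_AQADM.VERIFY_QUEUE_TYPES(
        src_queue_name  => 'SRC_SCHEMA.SRC_QUEUE',
        dest_queue_name => 'DST_SCHEMA.DST_QUEUE',
        destination     => 'TARGET_DBLINK',   -- omit or leave NULL for a local destination
        rc              => rc_value);
      DBMS_OUTPUT.PUT_LINE('rc = ' || rc_value);   -- 1: types verified, 0: types do not agree
    END;
    /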
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'.For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work, unless the queues are Oracle Streams ANYDATA queues.See the following notes for issues caused by lack of type agreement:Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote QueueDocument 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Chararacter Length Semantics in the ADT 4.11. Enable Propagation Tracing 4.11.1. System Level This is set it in the init.ora/spfile as follows: event="24040 trace name context forever, level 10" and restart the instanceThis event cannot be set dynamically with an alter system command until version 10.2: SQL> alter system set events '24040 trace name context forever, level 10'; To unset the event: SQL> alter system set events '24040 trace name context off'; Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created. 4.11.2. Attaching to a Specific Process We can also attach to an existing job queue processes that is running a propagation schedule and trace it individually using the oradebug utility, as follows:10.2 and below connect / as sysdbaselect p.SPID, p.PROGRAM from v$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 11g connect / as sysdbacol PROGRAM for a30select p.SPID, p.PROGRAM, j.JOB_NAMEfrom v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%';-- For the process id (SPID) attach to it via oradebug and generate the following traceoradebug setospid <SPID>oradebug unlimitoradebug Event 10046 trace name context forever, level 12oradebug Event 24040 trace name context forever, level 10-- Trace the process for 5 minutesoradebug Event 10046 trace name context offoradebug Event 24040 trace name context off-- The following command returns the pathname/filename to the file being written tooradebug tracefile_name 4.11.3. Further Tracing The previous tracing steps only trace the job queue process executing the propagation on the source. 
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database.These following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information.In order to identify the propagation receiver process you need to execute the query as a user with privileges to access the v$ views in both the local and remote databases so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link.The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.10.2 and below - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from v$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 10.2 and below - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%' and pl.SPID=sr.PROCESS; 11g - Windows select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1); 11g - Unix select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process" from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME and j.JOB_NAME like '%AQ_JOB$_%%' and pl.SPID=sr.PROCESS;   5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages 5.1. Check the Privileges of All Users Involved Ensure that the owner of the database link has the necessary privileges on the aq packages. SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;TABLE_NAME PRIVILEGE------------------------------ ----------------------------------------DBMS_LOCK EXECUTEDBMS_AQ EXECUTEDBMS_AQADM EXECUTEDBMS_AQ_BQVIEW EXECUTEQT52814_BUFFER SELECT Note that when queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table. SQL> select * from USER_ROLE_PRIVS;USERNAME GRANTED_ROLE ADM DEF OS_------------------------------ ------------------------------ ---- ---- ---AQ_USER1 AQ_ADMINISTRATOR_ROLE NO YES NOAQ_USER1 CONNECT NO YES NOAQ_USER1 RESOURCE NO YES NO It is good practice to configure central AQ administrative user. All admin and processing jobs are created, executed and administered as this user. 
This configuration is not mandatory however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved. Privileges for an AQ Administrative user Execute on DBMS_AQADM Execute on DBMS_AQ Granted the AQ_ADMINISTRATOR_ROLE Privileges for an AQ user Execute on DBMS_AQ Execute on the message payload Enqueue privileges on the remote queue Dequeue privileges on the originating queue Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to login to the destination through the database link has been granted privileges to use AQ. 5.2. Verify Queue Payload Types AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify if the source and destination's payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking will be stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier OID of the source queue and the address database link of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call made on the source database can verify whether we can propagate between the source and the destination queue tables. connect aq_user1/[email protected] serverout onDECLARErc_value number;BEGINDBMS_AQADM.VERIFY_QUEUE_TYPES(src_queue_name => 'AQ_USER1.Q_1', dest_queue_name => 'AQ_USER2.Q_2',destination => 'dbl_aq_user2.es',rc => rc_value);dbms_output.put_line('rc_value code is '||rc_value);END;/ If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types the following sql can be used to extract the DDL for a specific type with' %' changed appropriately on the source and target. This can then be compared for the source and target. SET LONG 20000 set pagesize 50 EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE',false); SELECT DBMS_METADATA.GET_DDL('TYPE',t.type_name) from user_types t WHERE t.type_name like '%'; EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT'); 5.3. Check Message State and Destination The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the meta-data associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table). 
SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';QUEUE_TABLE --------------------MULTIPLEQTABLE For a small amount of messages in a multiple-consumer queue table, the following query can be run: SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';MSG_STATE CONSUMER_NAME ADDRESS-------------- ----------------------- -------------READY AQUSER2 [email protected] AQUSER1READY AQUSER3 AQADM.INQ In this example we see 2 messages ready to be propagated to remote queues and 1 that is not. If the address column is blank, the message is not scheduled for propagation and can only be dequeued from the queue upon which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the address column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES). This demonstrates that the message should be propagated to a queue at a remote database. The third row does not include a database link so will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.A more realistic query in an environment where the queue table contains thousands of messages is8.0.3-compatible multiple-consumer queue table and all compatibility single-consumer queue tables select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>  group by MSG_STATE, QUEUE; 8.1.3 and 10.0-compatible queue tables select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>group by MSG_STATE, QUEUE, CONSUMER_NAME; For multiple-consumer queue tables, if you did not see the expected CONSUMER_NAME , check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view. This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure and also those enqueued using a recipient list. SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;QUEUE NAME ADDRESS---------- ----------- -------------MULTIPLEQ AQUSER2 [email protected] AQUSER1 In this example we have 2 subscribers registered with the queue. We have a local subscriber AQUSER1, and a remote subscriber AQUSER2, on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue every message enqueued to this queue will be propagated to INQ at M2V102.ES.For 8.1 style and above multiple consumer queue tables, you can also check the following information at the target: select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>'; For 8.0 style queues, if the queue table supports multiple consumers you can obtain the same information from the history column of the queue table: select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGIDfrom AQ$<queue_table_name> t, table(t.history) h where t.Q_NAME = '<QUEUE_NAME>'; A non-NULL TRANSACTION_ID indicates that the message was successfully propagated. 
Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination. 6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment 6.1. Is the Propagation Enabled? For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this: SELECT p.PROPAGATION_NAME, DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled','N', 'Enabled') SCHEDULE_DISABLED, s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSGFROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION pWHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION) AND s.SCHEMA = p.SOURCE_QUEUE_OWNER AND s.QNAME = p.SOURCE_QUEUE_NAME AND MESSAGE_DELIVERY_MODE = 'PERSISTENT' order by PROPAGATION_NAME; At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt to disable the propagation and then re-enable it can be made. In the examples below, for the propagation named STRMADMIN_PROPAGATE where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:10.2 and above exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE'); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); If the above does not fix the problem, stop the propagation specifying the force parameter (2nd parameter on stop_propagation) as TRUE: exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true); exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE'); The statistics for the propagation as well as any old error messages are cleared when the force parameter is set to TRUE. Therefore if the propagation schedule is stopped with FORCE set to TRUE, and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.9.2 or 10.1 exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms.aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation: exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD'); Typically if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed. 6.2. Check Propagation Rule Sets and Transformations Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated. 
Finally inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.The following query shows what rule sets are assigned to a propagation: select PROPAGATION_NAME, RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"from DBA_PROPAGATION; The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set: set long 4000select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES rwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER and RULE_SET_NAME in(select RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;   set long 4000select c.PROPAGATION_NAME, rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET ,rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,r.RULE_CONDITION CONDITION fromDBA_RULE_SET_RULES rsr, DBA_RULES r ,DBA_PROPAGATION cwhere rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER andrsr.RULE_SET_OWNER=c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME=c.NEGATIVE_RULE_SET_NAMEand rsr.RULE_SET_NAME in(select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION) order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME; 6.3. Determining the Total Number of Messages and Bytes Propagated As in Section 3.1, determining if messages are flowing can be instructive to see whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2), but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations two views are available that can assist with this that can show the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the Source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.Source: select QUEUE_SCHEMA, QUEUE_NAME, DBLINK,HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTESfrom V$PROPAGATION_SENDER; Target: select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS from V$PROPAGATION_RECEIVER; 6.4. Check Buffered Subscribers The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from: select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS; The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database). 6.5. Common Streams Propagation Errors 6.5.1. ORA-02082: A loopback database link must have a connection qualifier. This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database. 6.5.2. ORA-25307: Enqueue rate too high. 
Enable flow control DBA_QUEUE_SCHEDULES will display this informational message for propagation when the automatic flow control (10g feature of Streams) has been invoked.Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control. This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which messages are being enqueued into at target queue.This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will be resumed automatically when the target site is able to accept more messages.The following document contains more information:Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow controlSee the following document for one potential cause of this situation:Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State 6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages This error typically occurs when the target database is RAC and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages. 6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation For cause/fixes refer to:Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation 6.5.5. Stopping or Dropping a Streams Propagation Hangs See the following note:Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang 6.6. Streams Propagation-Related Notes for Common Issues Document 437838.1 Streams Specific PatchesDocument 749181.1 How to Recover Streams After Dropping PropagationDocument 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environmentDocument 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not ResolveDocument 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'Document 333068.1 ORA-23603: Streams Enqueue Aborted Eue To Low SGADocument 363496.1 Ora-25315 Propagating on RAC StreamsDocument 368237.1 Unable to Unschedule Propagation. 
Streams Queue is InvalidDocument 436332.1 dbms_propagation_adm.stop_propagation hangsDocument 727389.1 Propagation Fails With ORA-12528Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop.RulesetDocument 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waitsDocument 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams EnvironmentDocument 1059029.1 Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagationDocument 556309.1 Changing Propagation/ queue_to_queue : false -> true does does not work; no LCRs propagatedDocument 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''Document 311021.1 Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configuredDocument 359971.1 STREAMS propagation to Primary of physical Standby configuation errors with Ora-01033, Ora-02068Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747 7. Performance Issues A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this more usually is due to a slow apply process at the target rather than a slow propagation. Propagation could be inferred to be slow if the message statistics are changing, and the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:Document 335516.1 Master Note for Streams Performance RecommendationsDocument 730036.1 Overview for Troubleshooting Streams Performance IssuesDocument 780733.1 Streams Propagation Tuning with Network ParametersWhite Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdfWhite Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, See APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORKFor basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable. 
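    As an illustration of the kind of network tuning that those notes and white papers describe, the tnsnames.ora entry used by the propagation database link can specify a larger session data unit (SDU) and larger TCP send/receive buffers. The entry below is only a sketch: the alias, host, service name and sizes are placeholders, and the referenced documents should be used to choose values appropriate to a real system.

    # Hypothetical tnsnames.ora entry for the propagation destination.
    # SDU, SEND_BUF_SIZE and RECV_BUF_SIZE are shown with placeholder values only.
    TARGETDB.WORLD =
      (DESCRIPTION =
        (SDU = 32767)
        (SEND_BUF_SIZE = 1048576)
        (RECV_BUF_SIZE = 1048576)
        (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = targetdb))
      )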
References

NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Adavanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ PROPAGATION ABORTED WITH ORA-600[OCIKSIN: INVALID STATUS] ON SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE AFTER UPGRADE
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Eue To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Chararacter Length Semantics in the ADT.
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuation errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation/ queue_to_queue : false -> true does does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop.Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view ?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]
