Search Results

Search found 1134 results on 46 pages for 'blender 2 67'.


  • Radius Authorization against ActiveDirectory and the users file

    - by mohrphium
    I have a problem with my freeradius server configuration. I want to be able to authenticate users against Windows ActiveDirectory (2008 R2) and the users file, because some of my co-workers are not listed in AD. We use the freeradius server to authenticate WLAN users. (PEAP/MSCHAPv2) AD Authentication works great, but I still have problems with the /etc/freeradius/users file When I run freeradius -X -x I get the following: Mon Jul 2 09:15:58 2012 : Info: ++++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. Mon Jul 2 09:15:58 2012 : Info: ++++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 1 length 13 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: +++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: ++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/default Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] EAP Identity Mon Jul 2 09:15:58 2012 : Info: [eap] processing type tls Mon Jul 2 09:15:58 2012 : Info: [tls] Initiate Mon Jul 2 09:15:58 2012 : Info: [tls] Start returned 1 Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns handled Sending Access-Challenge of id 199 to 192.168.61.11 port 3072 EAP-Message = 0x010200061920 Message-Authenticator = 0x00000000000000000000000000000000 State = 0x85469e2a854487589fb1196910cb8ae3 Mon Jul 2 09:15:58 2012 : Info: Finished request 125. Mon Jul 2 09:15:58 2012 : Debug: Going to the next request Mon Jul 2 09:15:58 2012 : Debug: Waking up in 2.4 seconds. After that it repeats the login attempt and at some point tries to authenticate against ActiveDirectory with ntlm, which doesn't work since the user exists only in the users file. Can someone help me out here? Thanks. PS: Hope this helps, freeradius trying to auth against AD: Mon Jul 2 09:15:58 2012 : Info: ++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. 
Mon Jul 2 09:15:58 2012 : Info: ++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[control] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 7 length 67 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[smbpasswd] returns notfound Mon Jul 2 09:15:58 2012 : Info: ++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] Request found, released from the list Mon Jul 2 09:15:58 2012 : Info: [eap] EAP/mschapv2 Mon Jul 2 09:15:58 2012 : Info: [eap] processing type mschapv2 Mon Jul 2 09:15:58 2012 : Info: [mschapv2] # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: [mschapv2] +- entering group MS-CHAP {...} Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] Told to do MS-CHAPv2 for testtest with NT-Password Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --username=%{mschap:User-Name:-None} -> --username=testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] No NT-Domain was found in the User-Name. Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: %{mschap:NT-Domain} -> Mon Jul 2 09:15:58 2012 : Info: [mschap] ... expanding second conditional Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --domain=%{%{mschap:NT-Domain}:-AD.CXO.NAME} -> --domain=AD.CXO.NAME Mon Jul 2 09:15:58 2012 : Info: [mschap] mschap2: 82 Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --challenge=%{mschap:Challenge:-00} -> --challenge=dd441972f987d68b Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --nt-response=%{mschap:NT-Response:-00} -> --nt-response=7e6c537cd5c26093789cf7831715d378e16ea3e6c5b1f579 Mon Jul 2 09:15:58 2012 : Debug: Exec-Program output: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program-Wait: plaintext: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program: returned: 1 Mon Jul 2 09:15:58 2012 : Info: [mschap] External script failed. Mon Jul 2 09:15:58 2012 : Info: [mschap] FAILED: MS-CHAP2-Response is incorrect Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns reject Mon Jul 2 09:15:58 2012 : Info: [eap] Freeing handler Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns reject Mon Jul 2 09:15:58 2012 : Info: Failed to authenticate the user. Mon Jul 2 09:15:58 2012 : Auth: Login incorrect (mschap: External script says Logon failure (0xc000006d)): [testtest] (from client techap01 port 0 via TLS tunnel) PPS: Maybe the problem is located here: In /etc/freeradius/modules/ntlm_auth I have set ntlm to: program = "/usr/bin/ntlm_auth --request-nt-key --domain=AD.CXO.NAME --username=%{mschap:User-Name} --password=%{User-Password}" I need this, so users can login without adding @ad.cxo.name to their usernames. 
But how can I tell freeradius to try both logins: testtest@ad.cxo.name (against AD - should fail) and testtest (against the users file - should work)?
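
    A minimal sketch of the kind of users-file entry this is describing, assuming FreeRADIUS 2.x syntax: the password is a placeholder, and MS-CHAP-Use-NTLM-Auth is the per-entry control attribute the mschap module consults (where supported) before shelling out to ntlm_auth, which is what keeps a local-only user away from Active Directory.

        # /etc/freeradius/users -- hypothetical entry for a user that exists only locally
        testtest  Cleartext-Password := "local-secret", MS-CHAP-Use-NTLM-Auth := No

        # everyone else keeps being checked against Active Directory via ntlm_auth
        DEFAULT   MS-CHAP-Use-NTLM-Auth := Yes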

    Read the article

  • Linux IPTables / routing issue

    - by Jon
    Hi all, EDIT 1/3/10 22:00 GMT - rewrote some of it after further investigation It has been a while since I looked at IPtables and I seem to be worse than before as I can not seem to get my webserver online. Below is my firewall rules on the gateway server that is running the dhcp server accessing the net. The webserver is inside my network on a static IP (192.168.0.98, default port). When I use Nmap or GRC.com I see that port 80 is open on the gateway server but when I browse to it, (via public URL. http://www.houseofhawkins.com) it always fails with a connection error, (nmap cannot connect and figure out what the web server is either). I can nmap the webserver and browse to it just fine via same IP inside my network. I believe it is my IPTable rules that are not letting it through. Internally I can route all my requests. Each machine can browse to the website and traffic works just fine. I can MSTSC / ssh to all the webservers internally and they inturn can connect to the web. IPTABLE: *EDIT - Added new firewall rules 2/3/10 * #!/bin/sh iptables="/sbin/iptables" modprobe="/sbin/modprobe" depmod="/sbin/depmod" EXTIF="eth2" INTIF="eth1" load () { $depmod -a $modprobe ip_tables $modprobe ip_conntrack $modprobe ip_conntrack_ftp $modprobe ip_conntrack_irc $modprobe iptable_nat $modprobe ip_nat_ftp echo "enable forwarding.." echo "1" > /proc/sys/net/ipv4/ip_forward echo "enable dynamic addr" echo "1" > /proc/sys/net/ipv4/ip_dynaddr # start firewall # default policies $iptables -P INPUT DROP $iptables -F INPUT $iptables -P OUTPUT DROP $iptables -F OUTPUT $iptables -P FORWARD DROP $iptables -F FORWARD $iptables -t nat -F #echo " Opening loopback interface for socket based services." $iptables -A INPUT -i lo -j ACCEPT $iptables -A OUTPUT -o lo -j ACCEPT #echo " Allow all connections OUT and only existing and related ones IN" $iptables -A INPUT -i $INTIF -j ACCEPT $iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A OUTPUT -o $EXTIF -j ACCEPT $iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A FORWARD -i $EXTIF -o $INTIF -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A FORWARD -i $INTIF -o $EXTIF -j ACCEPT $iptables -A FORWARD -j LOG --log-level 7 --log-prefix "Dropped by firewall: " $iptables -A INPUT -j LOG --log-level 7 --log-prefix "Dropped by firewall: " $iptables -A OUTPUT -j LOG --log-level 7 --log-prefix "Dropped by firewall: " #echo " Enabling SNAT (MASQUERADE) functionality on $EXTIF" $iptables -t nat -A POSTROUTING -o $EXTIF -j MASQUERADE $iptables -A INPUT -i $INTIF -j ACCEPT $iptables -A OUTPUT -o $INTIF -j ACCEPT #echo " Allowing packets with ICMP data (i.e. ping)." $iptables -A INPUT -p icmp -j ACCEPT $iptables -A OUTPUT -p icmp -j ACCEPT $iptables -A INPUT -p udp -i $INTIF --dport 67 -m state --state NEW -j ACCEPT #echo " Port 137 is for NetBIOS." $iptables -A INPUT -i $INTIF -p udp --dport 137 -j ACCEPT $iptables -A OUTPUT -o $INTIF -p udp --dport 137 -j ACCEPT #echo " Opening port 53 for DNS queries." $iptables -A INPUT -p udp -i $EXTIF --sport 53 -j ACCEPT #echo " opening Apache webserver" $iptables -A PREROUTING -t nat -i $EXTIF -p tcp --dport 80 -j DNAT --to 192.168.0.96:80 $iptables -A FORWARD -p tcp -m state --state NEW -d 192.168.0.96 --dport 80 -j ACCEPT } flush () { echo "flushing rules..." $iptables -P FORWARD ACCEPT $iptables -F INPUT $iptables -P INPUT ACCEPT echo "rules flushed" } case "$1" in start|restart) flush load ;; stop) flush ;; *) echo "usage: start|stop|restart." 
;; esac exit 0 route info: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 5e0412a6.bb.sky * 255.255.255.255 UH 0 0 0 eth2 192.168.0.0 * 255.255.255.0 U 0 0 0 eth1 default 5e0412a6.bb.sky 0.0.0.0 UG 100 0 0 eth2 ifconfig: eth1 Link encap:Ethernet HWaddr 00:22:b0:cf:4a:1c inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::222:b0ff:fecf:4a1c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:79023 errors:0 dropped:0 overruns:0 frame:0 TX packets:57786 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:11580918 (11.5 MB) TX bytes:22872030 (22.8 MB) Interrupt:17 Base address:0x2b00 eth2 Link encap:Ethernet HWaddr 00:0c:f1:7c:45:5b inet addr:94.4.18.166 Bcast:94.4.18.166 Mask:255.255.255.255 inet6 addr: fe80::20c:f1ff:fe7c:455b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:57038 errors:0 dropped:0 overruns:0 frame:0 TX packets:34532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:21631721 (21.6 MB) TX bytes:7685444 (7.6 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1517 (1.5 KB) TX bytes:1517 (1.5 KB) EDIT OK so as requested I will try and expand on my infrastructure: I previously had it setup with a Sky broadband modem router that did the DHCP and I used its web interface to port forward the web across to the web server. The network looked something like this: I have now replaced the sky modem with a dlink modem which gives the IP to the gateway server that now does the DHCP. It looks like: The internet connection is a standard broadband connection with a dynamic IP, (use zoneedit.com to keep it updated). I have tried it on each of the webservers(one Ubuntu Apache server and one WS2008 IIS7). I think there must also be an issue with my IPTable rules as it can route to my win7 box which has the default IIS7 page and that would not display when I forwarded all port 80 to it. I would be really grateful for any and all help with this. Thanks Jon
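
    For reference, the usual DNAT/FORWARD pair for this kind of port forward is sketched below, reusing the script's own variables and assuming the web server really is the 192.168.0.98 host mentioned above (the script currently forwards to .96, which may simply be a typo):

        # hypothetical correction: forward inbound HTTP on the external interface to the internal web server
        $iptables -t nat -A PREROUTING -i $EXTIF -p tcp --dport 80 -j DNAT --to-destination 192.168.0.98:80
        $iptables -A FORWARD -i $EXTIF -o $INTIF -p tcp -d 192.168.0.98 --dport 80 \
                  -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT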

    Read the article

  • Chef-solo cannot locate an nginx recipe template

    - by crftr
    I have been recently experimenting with Chef. I thought I would attempt to rebuild my personal web server using chef-solo. It's an AWS instance running the Amazon 64bit Linux AMI. My first objective is to install nginx. I have cloned the Opscode cookbook repository, and am using their nginx cookbook. My problem appears to be that chef-solo cannot find a template after it has started the process. The command I'm using is chef-solo -j /etc/chef/dna.json dna.json { "nginx": { "user": "ec2-user" }, "recipes": [ "nginx" ] } solo.rb file_cache_path "/var/chef-solo" cookbook_path "/var/chef-solo/cookbooks" ...the output [root@ip-10-202-221-135 chef-solo]# chef-solo -j /etc/chef/dna.json /usr/lib64/ruby/gems/1.9.1/gems/systemu-2.2.0/lib/systemu.rb:29: Use RbConfig instead of obsolete and deprecated Config. [Fri, 27 Jan 2012 19:41:36 +0000] INFO: *** Chef 0.10.8 *** [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Setting the run_list to ["nginx"] from JSON [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List is [recipe[nginx]] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List expands to [nginx] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Starting Chef Run for ip-10-202-221-135.ec2.internal [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Running start handlers [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Start handlers complete. [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Missing gem 'mysql' [Fri, 27 Jan 2012 19:41:38 +0000] INFO: Processing package[nginx] action install (nginx::default line 21) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing directory[/var/log/nginx] action create (nginx::default line 23) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxensite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxdissite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[nginx.conf] action create (nginx::default line 38) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/etc/nginx/sites-available/default] action create (nginx::default line 46) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: template[/etc/nginx/sites-available/default] mode changed to 644 [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (nginx::default line 46) has had an error [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (/var/chef-solo/cookbooks/nginx/recipes/default.rb:46:in `from_file') had an error: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `rename' /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `block in mv' /usr/lib64/ruby/1.9.1/fileutils.rb:1515:in `block in fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:1531:in `fu_each_src_dest0' /usr/lib64/ruby/1.9.1/fileutils.rb:1513:in `fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:508:in `mv' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:47:in `block in action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:48:in `block in render_template' /usr/lib64/ruby/1.9.1/tempfile.rb:316:in `open' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:45:in `render_template' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:99:in `render_with_context' 
/usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:39:in `action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource.rb:440:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:45:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block (2 levels) in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `each' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:94:in `block in execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:85:in `step' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:104:in `iterate' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:92:in `execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:76:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:312:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:160:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:192:in `block in run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `loop' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application.rb:67:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/bin/chef-solo:25:in `<top (required)>' /usr/bin/chef-solo:19:in `load' /usr/bin/chef-solo:19:in `<main>' [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Running exception handlers [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Exception handlers complete [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Stacktrace dumped to /var/chef-solo/chef-stacktrace.out [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Errno::ENOENT: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) What am I doing incorrectly?
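
    The ENOENT is raised while FileUtils.mv moves the rendered template out of /tmp into /etc/nginx/sites-available/default, so one plausible cause is simply that the target directory does not exist on this AMI's nginx package. A minimal, hypothetical pre-step (directory names assumed from the error message) would be:

        mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
        chef-solo -j /etc/chef/dna.json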

    Read the article

  • UnicodeEncodeError when uploading files in Django admin

    - by Samuel Linde
    Note: I asked this question on StackOverflow, but I realize this might be a more proper place to ask this kind of question. I'm trying to upload a file called 'Testaråäö.txt' via the Django admin app. I'm running Django 1.3.1 with Gunicorn 0.13.4 and Nginx 0.7.6.7 on a Debian 6 server. Database is PostgreSQL 8.4.9. Other Unicode data is saved to the database with no problem, so I guess the problem must be with the filesystem somehow. I've set http { charset utf-8; } in my nginx.conf. LC_ALL and LANG is set to 'sv_SE.UTF-8'. Running 'locale' verifies this. I even tried setting LC_ALL and LANG in my nginx init script just to make sure locale is set properly. Here's the traceback: Traceback (most recent call last): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/handlers/base.py", line 111, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 307, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 197, in inner return view(request, *args, **kwargs) File "/srv/django/letebo/app/cms/admin.py", line 81, in change_view return super(PageAdmin, self).change_view(request, obj_id) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 28, in _wrapper return bound_func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 24, in bound_func return func(self, *args2, **kwargs2) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/transaction.py", line 217, in inner res = func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 985, in change_view self.save_formset(request, form, formset, change=True) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 677, in save_formset formset.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 482, in save return self.save_existing_objects(commit) + self.save_new_objects(commit) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 613, in save_new_objects self.new_objects.append(self.save_new(form, commit=commit)) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 717, in save_new obj.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 460, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 504, in save_base self.save_base(cls=parent, origin=org, using=using) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 543, in save_base for f in meta.local_fields if not isinstance(f, 
AutoField)] File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 255, in pre_save file.save(file.name, file, save=False) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 92, in save self.name = self.storage.save(name, content) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 48, in save name = self.get_available_name(name) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 74, in get_available_name while self.exists(name): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 218, in exists return os.path.exists(self.path(name)) File "/srv/.virtualenvs/letebo/lib/python2.6/genericpath.py", line 18, in exists st = os.stat(path) UnicodeEncodeError: 'ascii' codec can't encode characters in position 52-54: ordinal not in range(128) I tried running Gunicorn with debugging turned on, and the file uploads without any problem at all. I suppose this must mean that the issue is with Nginx. Still beats me where to look, though. Here are the raw response headers from Gunicorn and Nginx, if it makes any sense: Gunicorn: HTTP/1.1 302 FOUND Server: gunicorn/0.13.4 Date: Thu, 09 Feb 2012 14:50:27 GMT Connection: close Transfer-Encoding: chunked Expires: Thu, 09 Feb 2012 14:50:27 GMT Vary: Cookie Last-Modified: Thu, 09 Feb 2012 14:50:27 GMT Location: http://my-server.se:8000/admin/cms/page/15/ Cache-Control: max-age=0 Content-Type: text/html; charset=utf-8 Set-Cookie: messages="yada yada yada"; Path=/ Nginx: HTTP/1.1 500 INTERNAL SERVER ERROR Server: nginx/0.7.67 Date: Thu, 09 Feb 2012 14:50:57 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: close Vary: Cookie 500 UPDATE: Both locale.getpreferredencoding() and sys.getfilesystemencoding() outputs 'UTF-8'. locale.getdefaultlocale() outputs ('sv_SE', 'UTF8'). This seem correct to me, so I'm still not sure why I keep getting these errors.
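
    Since os.stat() is what raises the UnicodeEncodeError, the locale that matters is the one in the Gunicorn worker's environment rather than Nginx's. A minimal sketch of exporting it where Gunicorn is launched (the init-script excerpt and bind address are placeholders):

        # hypothetical excerpt from the script that starts Gunicorn,
        # run from the Django project directory so settings.py is found
        export LANG=sv_SE.UTF-8
        export LC_ALL=sv_SE.UTF-8
        exec gunicorn_django --bind 127.0.0.1:8000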

    Read the article

  • Does an ESEUTIL defrag of an Exchange store also perform an integrity check/repair on it?

    - by Bigbio2002
    Earlier this morning, store.exe fuzzled up in one way or another, which necessitated a restart of our Exchange server. It came back online with no errors or problems, all the transaction logs replayed successfully, and all the stores mounted as normal. To me, it was just one of those random crashes; however, our consultant suspects it was caused by corruption in one of the stores. Perhaps he's correct, since he has far more experience than me, but that's not the point. To fix the suspected errors, he's planinng to run an ESEUTIL defrag (via PerfectDisk) to fix them, which he claims will also fix any errors present. From what I understand, defrag, verify, and repair are 3 separate actions, and a defrag does not imply any kind of integrity check. Is this correct? Are there any dangers of running a straight-up defrag on a database that might be corrupt? Edit: Here's the first error in the event log, which indicated the start of the problems we were having. Anyone know what it might indicate? Event Type: Error Event Source: Microsoft Exchange Server Event Category: None Event ID: 1000 Date: 11/23/2011 Time: 8:15:47 AM User: N/A Computer: SERVER Description: Faulting application exsp.dll, version 6.5.7638.1, stamp 430e735b, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51f0a, debug? 0, fault address 0x0000bef7. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 41 00 70 00 70 00 6c 00 A.p.p.l. 0008: 69 00 63 00 61 00 74 00 i.c.a.t. 0010: 69 00 6f 00 6e 00 20 00 i.o.n. . 0018: 46 00 61 00 69 00 6c 00 F.a.i.l. 0020: 75 00 72 00 65 00 20 00 u.r.e. . 0028: 20 00 65 00 78 00 73 00 .e.x.s. 0030: 70 00 2e 00 64 00 6c 00 p...d.l. 0038: 6c 00 20 00 36 00 2e 00 l. .6... 0040: 35 00 2e 00 37 00 36 00 5...7.6. 0048: 33 00 38 00 2e 00 31 00 3.8...1. 0050: 20 00 34 00 33 00 30 00 .4.3.0. 0058: 65 00 37 00 33 00 35 00 e.7.3.5. 0060: 62 00 20 00 69 00 6e 00 b. .i.n. 0068: 20 00 6b 00 65 00 72 00 .k.e.r. 0070: 6e 00 65 00 6c 00 33 00 n.e.l.3. 0078: 32 00 2e 00 64 00 6c 00 2...d.l. 0080: 6c 00 20 00 35 00 2e 00 l. .5... 0088: 32 00 2e 00 33 00 37 00 2...3.7. 0090: 39 00 30 00 2e 00 34 00 9.0...4. 0098: 34 00 38 00 30 00 20 00 4.8.0. . 00a0: 34 00 39 00 63 00 35 00 4.9.c.5. 00a8: 31 00 66 00 30 00 61 00 1.f.0.a. 00b0: 20 00 66 00 44 00 65 00 .f.D.e. 00b8: 62 00 75 00 67 00 20 00 b.u.g. . 00c0: 30 00 20 00 61 00 74 00 0. .a.t. 00c8: 20 00 6f 00 66 00 66 00 .o.f.f. 00d0: 73 00 65 00 74 00 20 00 s.e.t. . 00d8: 30 00 30 00 30 00 30 00 0.0.0.0. 00e0: 62 00 65 00 66 00 37 00 b.e.f.7. 00e8: 0d 00 0a 00 ....
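
    For what it's worth, ESEUTIL treats these as separate switches: /g is the read-only integrity check, /d the offline defragmentation and /p the hard repair, with ISINTEG handling the application-level checks. A hedged sketch for an Exchange 2003-era store (the database path is a placeholder):

        rem read-only integrity check, no changes written
        eseutil /g "D:\Exchsrvr\MDBDATA\priv1.edb"

        rem offline defragmentation -- rebuilds the file, it is not a repair
        eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb"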

    Read the article

  • CD/DVD burn error in ImgBurn and Nero

    - by bobby
    I am getting the errors shown below when I try to burn a CD/DVD on my DVD writer. I am seeing this error for every CD/DVD I try to burn. I am not able to write any CDs or DVDs using ImgBurn. The burn log below is a failed burn in Nero. What could be causing this error? Nero Burning ROM bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*) Windows XP 6.1 IA32 WinAspi: - NT-SPTI used Nero Version: 7.11.3. Internal Version: 7, 11, 3, (Nero Express) Recorder: <HL-DT-ST DVDRAM GSA-H12N> Version: UL01 - HA 1 TA 1 - 7.11.3.0 Adapter driver: <IDE> HA 1 Drive buffer : 2048kB Bus Type : default CD-ROM: <ATAPI-CD ROM-DRIVE-52MAX > Version: 52PP - HA 1 TA 0 - 7.11.3.0 Adapter driver: <IDE> HA 1 === Scsi-Device-Map === === CDRom-Device-Map === ATAPI-CD ROM-DRIVE-52MAX F: CdRom0 HL-DT-ST DVDRAM GSA-H12N G: CdRom1 ======================= AutoRun : 1 Excluded drive IDs: WriteBufferSize: 83886080 (0) Byte BUFE : 0 Physical memory : 958MB (981560kB) Free physical memory: 309MB (317024kB) Memory in use : 67 % Uncached PFiles: 0x0 Use Inquiry : 1 Global Bus Type: default (0) Check supported media : Disabled (0) 11.6.2010 CD Image 10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450 LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL 10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186 HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated 10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500 Turn on Disc-At-Once, using CD-R/RW media 10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307 Last possible write address on media: 359848 ( 79:59.73) Last address to be written: 318783 ( 70:52.33) 10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319 Write in overburning mode: NO (enabled: CD) 10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988 Recorder: HL-DT-ST DVDRAM G SA-H12N; CDR co de: 00 97 27 18; O SJ entry from: Pla smon Data systems Ltd. 
ATIP Data: Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A ( LO 79:59.74) Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 0 0 00 (invalid) 10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493 >>> Protocol of DlgWaitCD activities: <<< ========================================= 10:43:02 AM #8 Text 0 File ThreadedTransferInterface.cpp, Line 785 Nero Report 1 Nero Burning ROM Setup items (after recorder preparation) 0: TRM_DATA_MODE1 (2 - CD-ROM Mode 1, Joliet) 2 indices, index0 (150) not provided original disc pos #0 + 318784 (318784) = #318784/70:50.34 not relocatable, disc pos for caching/writing not required/not required -> TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blocks [G: HL-DT-ST DVDRAM GSA-H12N] -------------------------------------------------------------- 10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986 Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO DAO infos: ========== MCN: "" TOCType: 0x00; Se ssion Clo sed, disc fixated Tracks 1 to 1: Idx 0 Idx 1 Next T rk 1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 6531768 32, ISRC "" DAO layout: =========== ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________ -150 | lead-in | 0 | 0x41 | 0 | 0 | 0x00 -150 | 1 | 0 | 0x41 | 0 | 0 | 0x00 0 | 1 | 1 | 0x41 | 318784 | 318784 | 0x00 318784 | lead-out | 1 | 0x41 | 0 | 0 | 0x00 10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240 SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME 10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286 Caching options: cache CDRom or Network-Yes, small files-Yes (<64KB) 10:43:02 AM #12 Phase 24 File dlgbrnst.cpp, Line 1767 Caching of files started 10:43:02 AM #13 Text 0 File Burncd.cpp, Line 4405 Cache writing successful. 
10:43:02 AM #14 Phase 25 File dlgbrnst.cpp, Line 1767 Caching of files completed 10:43:02 AM #15 Phase 36 File dlgbrnst.cpp, Line 1767 Burn process started at 48x (7,200 KB/s) 10:43:02 AM #16 Text 0 File ThreadedTransferInterface.cpp, Line 2733 Verifying disc position of item 0 (not relocatable, no disc pos, no patch infos, orig at #0): write at #0 10:43:02 AM #17 Text 0 File MMC.cpp, Line 17806 StartDAO : CD-Text - Off 10:43:02 AM #18 Text 0 File MMC.cpp, Line 22488 Set BUFE: Buffer underrun protection -> ON 10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034 CueData, Len=32 41 00 00 14 00 00 00 00 41 01 00 10 00 00 00 00 41 01 01 10 00 00 02 00 41 aa 01 14 00 46 34 22 10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268 Pipe memory size 83836800 10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405 10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later 10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181 CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502) Sense Key: 0x04 (KEY_HARDWARE_ERROR) Nero Report 2 Nero Burning ROM Sense Code: 0x08 Sense Qual: 0x03 CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00 Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03 Buffer x0c7d9a40: Len x10000 0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24 0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16 0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23 10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306 DMA-driver error, CRC error G: HL-DT-ST DVDRAM GSA-H12N 10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767 Burn process failed at 48x (7,200 KB/s) 10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287 SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME 10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412 DriveLocker: UnLockVolume completed 10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450 UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL Existing drivers: Registry Keys: HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon Nero Report 3

    Read the article

  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when i convert videos via terminal and download them from ftp into pc the audio works fine. I use: if($ext == "avi" && $convert_avi == true) { $convert_source = _VIDEOS_DIR_PATH.$new_name; $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4"; $converted_file = _VIDEOS_DIR_PATH.$conv_name; $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file; echo exec($ffmpeg_command); $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1"; $result = @mysql_query($sql); unlink($convert_source); } This code to convert avi to mp4 ffmpeg concole output: root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4 ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger - s libavutil 50. 43. 0 / 50. 43. 0 libavcodec 52.123. 0 / 52.123. 0 libavformat 52.111. 0 / 52.111. 0 libavdevice 52. 5. 0 / 52. 5. 0 libavfilter 1. 80. 0 / 1. 80. 0 libswscale 0. 14. 1 / 0. 14. 1 libpostproc 51. 2. 0 / 51. 2. 0 [mp3 @ 0x191d4100] Header missing [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'sample.avi': Metadata: encoder : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr, Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param: [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0 [libx264 @ 0x191ce5a0] Default settings detected, using medium profile [libx264 @ 0x191ce5a0] using SAR=45/44 [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S [libx264 @ 0x191ce5a0] profile High, level 3.1 [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2 6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off 1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l Output #0, mp4, to 'goodsample.mp4': Metadata: encoder : Lavf52.111.0 Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31 Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] 
for help [mp3 @ 0x191d4100] Header missing Error while decoding stream #0.1 [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/ Error while decoding stream #0.1 frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688% frame I:11 Avg QP:16.78 size: 51456 [libx264 @ 0x191ce5a0] frame P:784 Avg QP:20.81 size: 8954 [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size: 1659 [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4% [libx264 @ 0x191ce5a0] mb I I16..4: 31.1% 59.8% 9.1% [libx264 @ 0x191ce5a0] mb P I16..4: 1.8% 2.6% 0.2% P16..4: 24.3% 7.0% 4.0 [libx264 @ 0x191ce5a0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 22.7% 0.8% 0.2 [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6% [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5. [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10% [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6 [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7 [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3% [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4% [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2% [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4% [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3% [libx264 @ 0x191ce5a0] kb/s:1209.03 I know there is couple errors tough, but i dont know hot to fix it. Also i would be very thankfull if someone can help reduce video size but is not main problem video weights as original avi but sill.
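
    The repeated "Header missing" / "Error while decoding stream #0.1" lines suggest the MP3 track in the source AVI has damaged frames, which would carry over into the output. One variation that is commonly tried, keeping the old-style option names this ffmpeg 0.7 build accepts and only changing the audio settings (output name is a placeholder), is:

        ffmpeg -i sample.avi -vcodec libx264 -s 1280x720 -qscale 5 -r 29.970 \
               -acodec libmp3lame -ab 128k -ar 44100 -ac 2 -async 1 output.mp4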

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .Net client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might occur zero, once every few days, or a half-dozen times a day for some users. It will never occur for the first web service call of a user. And the subsequent (usually the same) call will always work immediately after the failure. The failures happen across a variety of methods in the service and usually happens between 15-20 seconds (according to the log) from the time of the request. Looking in the IIS site log for the particular call will show one or the other of the following windows error codes: 121: The semaphore timeout period has elapsed. 1236: The network connection was aborted by the local system. Some additional environment details: Running on internal network web farm consisting of two servers running IIS7 on Windows Server 2008 OS. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMWare virtual machines, not sure if that is a surprise anymore or not. The web service is a .Net 2.0/3.5 compiled .asmx web service that has its own application pool (.Net 2.0, integrated pipeline). Only has Windows Authentication enabled. We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. Have tried using the same and different application pool - no effect on the error. This site isn't hit as often as the primary site and has never had an error. As mentioned, the error will only happen when called from the .Net client - not from other applications. The client application is always creating a new web service object for each request and setting the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Those users running in Citrix doesn't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and are located in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx). I've checked the OS logs to see if I can see any problems but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not, I'll have to check out the logs of previous errors. I'm probably grasping for straws but the details: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 7/8/2009 1:39:50 PM Event ID: 5159 Task Category: Filtering Platform Connection Level: Information Keywords: Audit Failure User: N/A Computer: is071019.<**.net Description: The Windows Filtering Platform has blocked a bind to a local port. 
Application Information: Process ID: 1260 Application Name: \device\harddiskvolume1\windows\system32\svchost.exe Network Information: Source Address: 0.0.0.0 Source Port: 54802 Protocol: 17 Filter Information: Filter Run-Time ID: 0 Layer Name: Resource Assignment Layer Run-Time ID: 36 I've also tried to use Failed Request Tracing in IIS7 but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked out the DNS and any NIC settings are correct so there is no 'flapping'. Everything pans out. I'm not sure that they checked out any domain controller servers though to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge on what to investigate from the networking side of things - although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
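
    Not an answer to the underlying network question, but the classic client-side mitigation for "a connection that was expected to be kept alive was closed" against an .asmx proxy is to disable HTTP keep-alive on the generated SoapHttpClientProtocol class, so each call opens a fresh TCP connection. A minimal sketch (the proxy class name is hypothetical):

        // Partial class extending the generated web service proxy (name assumed).
        public partial class ErpService : System.Web.Services.Protocols.SoapHttpClientProtocol
        {
            protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
            {
                var request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false;   // avoid reusing a connection the server may have dropped
                return request;
            }
        }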

    Read the article

  • Linux server: Apache httpd processes drive I/O wait close to 100% and lock up the server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my linux server has started locking up from time to time. The pattern is always the same as far as I can tell from top and iotop commands around the time it starts happening: One or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100% and in the iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through apache and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce this scenario successfully, but until very recently I had no problems at all checking in/out of SVN. But sometimes it just happens for no detectable reason at all it seems. So it seems like there is some issue with my Apache that leads it to have processes use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue. EDIT: OK now it's happening again: This is top: [root@server ~]# top top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01 Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 10390 apache 20 0 2774m 936m 6200 D 2.0 47.4 1:52.27 httpd 2149 root 20 0 927m 13m 1040 S 0.7 0.7 1:50.46 namecoind 11 root 20 0 0 0 0 R 0.3 0.0 0:30.10 events/0 23 root 20 0 0 0 0 S 0.3 0.0 0:17.88 kblockd/1 2049 root 20 0 382m 4932 2880 D 0.3 0.2 0:03.67 httpd 2144 root 20 0 1702m 69m 1164 S 0.3 3.5 5:19.68 bitcoind 6325 root 20 0 15164 1100 656 R 0.3 0.1 0:11.09 top 10311 apache 20 0 387m 9496 7320 D 0.3 0.5 0:01.89 httpd 10313 apache 20 0 391m 10m 7364 D 0.3 0.5 0:02.40 httpd 10466 apache 20 0 399m 12m 7392 D 0.3 0.7 0:02.41 httpd 10599 apache 20 0 391m 9324 7340 D 0.3 0.5 0:00.15 httpd 10628 apache 20 0 384m 7620 4052 D 0.3 0.4 0:00.01 httpd 10633 apache 20 0 384m 7048 3504 D 0.3 0.3 0:00.01 httpd 10634 apache 20 0 384m 8012 4048 D 0.3 0.4 0:00.02 httpd 10638 apache 20 0 400m 22m 9.8m D 0.3 1.1 0:01.93 httpd 10640 apache 20 0 385m 8288 4028 D 0.3 0.4 0:00.03 httpd 10641 apache 20 0 401m 21m 6376 D 0.3 1.1 0:01.45 httpd 10759 apache 20 0 385m 8816 3480 D 0.3 0.4 0:01.45 httpd 10773 apache 20 0 384m 8044 3464 D 0.3 0.4 0:00.02 httpd This is an iotop snapshot: Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 10732 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 58.48 % httpd 876 be/3 root 0.00 B/s 52.68 K/s 0.00 % 52.98 % [jbd2/dm-1-8] 10906 be/4 root 124.17 K/s 0.00 B/s 0.00 % 23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1 2156 be/4 root 206.94 K/s 0.00 B/s 0.00 % 21.15 % bitcoind 10904 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 10773 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 14.77 % httpd 10641 be/4 apache 15.05 K/s 0.00 B/s 0.00 % 11.57 % httpd 10399 be/4 apache 1057.29 K/s 0.00 B/s 43.16 % 10.56 % httpd 10682 be/4 sw-cp-se 158.03 K/s 0.00 B/s 0.00 % 7.45 % sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d 
auto_prepend_file=auth.php3 -u psaadm 10774 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 6.53 % httpd 10624 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 5.53 % httpd 10356 be/4 apache 899.26 K/s 0.00 B/s 35.52 % 4.01 % httpd 10795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 3.93 % httpd 10804 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 3.08 % httpd 4379 be/4 root 2.89 M/s 0.00 B/s 99.99 % 0.00 % namecoind 10619 be/4 apache 462.80 K/s 0.00 B/s 7.80 % 0.00 % httpd 10636 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 0.00 % httpd 10716 be/4 mysql 105.35 K/s 0.00 B/s 5.92 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 1988 be/4 root 18.81 K/s 0.00 B/s 0.00 % 0.00 % spamd_full.sock I also ran lsof -p for pid 10390 which was way up top under the top command and this is the bottom line where I can sort of see what request this was and it says CLOSE_WAIT: httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT) I'm still not sure what exactly is causing this all to happen though? I killed that service but %wa and load average remain high, I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd" and after waiting a while for it to cycle through all those then doing /etc/init.d/httpd start
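
    One way to see exactly which request each stuck httpd worker is serving (to complement top, iotop and lsof) is Apache's mod_status scoreboard; a minimal sketch for httpd.conf, assuming mod_status is loaded and Apache 2.2-style access directives:

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    After an apachectl graceful, http://localhost/server-status lists the client and URL for every busy worker, which should point at the vhost or handler doing the heavy I/O.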

    Read the article

  • Bind: DNS records not propagating

    - by realtebo
    I've elfoip.net with bind $ whois elfoip.net | grep 'Name Server' Name Server: NS.ELFOIP.NET I need elfoip.net be able to serve third levels domain, like mickymouse.elfoip.net, etc... Yes, I'm trying to create an other useless dyndns clone. i've added some third level as A RR. Eg: executing this from the server itself $ dig @localhost mattinauno.elfoip.net ;; ANSWER SECTION: mattinauno.elfoip.net. 60 IN A 192.81.221.113 I was expecting in one or two days, from my pc i can digit in browser mattinauno.elfoip.net and get page a 192.81.221.113 But this is not happening. Are there any prerequisites to satisfy to allow dns of my isp to be able to forward dns resolution of *.elfoip.net to MY dns ? (Or to ask to him and then cache ?) TTL of zone is set a 5m I've not AllowQuey directive, is it necessary for other dns to cache from mine ? I've cheched the zone with bind utility named-checkzone but no error detected. How to diagnose why other dns doesn't take in account RR from mine ? from my home pc dig @ns.elfoip.net mattinauno.elfoip.net ;; ANSWER SECTION: mattinauno.elfoip.net. 60 IN A 192.81.221.113 ;; AUTHORITY SECTION: elfoip.net. 300 IN NS ns.elfoip.net. but dig @8.8.8.8 mattinauno.elfoip.net give no answers Whole zone file: note I've used nsupdate, so this file has been re-edited and re-formatted from this utility ! root@mirko:/var/named# cat elfoip.net.db $ORIGIN . $TTL 300 ; 5 minutes elfoip.net IN SOA ns.elfoip.net. hostmaster.elfoip.net. ( 2013062314 ; serial 3600 ; refresh (1 hour) 600 ; retry (10 minutes) 86400 ; expire (1 day) 60 ; minimum (1 minute) ) NS ns.elfoip.net. A 109.168.99.6 $ORIGIN elfoip.net. $TTL 60 ; 1 minute google A 173.194.35.56 maiscai A 192.81.221.113 mattinadue A 192.81.221.113 mattinauno A 192.81.221.113 $TTL 300 ; 5 minutes ns A 109.168.99.6 $TTL 60 ; 1 minute prova A 208.67.222.222 prova2 A 13.23.34.45 A 13.23.34.46 www CNAME elfoip.net. EDIT: added named.conf.local zone "elfoip.net" { type master; // file "/etc/bind/elfoip.net.db"; file "/var/named/elfoip.net.db"; allow-update { key elfoip.net ; }; }; EDIT: I've no setup list-on directive *EDIT Added a TCPDUMP after [email protected] wwww.elfoip.net from a machine which uses my company internal dns, who allow recursive query. root@mirko:~# tcpdump -i eth0 'port 53' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 11:57:23.293611 IP host9-210-static.22-87-b.business.telecomitalia.it.45958 > mirko.elfoip.net.domain: 61337+ A? www.elfoip.net. (32) 11:57:23.294114 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.45958: 61337* 2/1/1 CNAME elfoip.net., A 109.168.99.6 (95) 11:57:23.294554 IP mirko.elfoip.net.59571 > google-public-dns-a.google.com.domain: 45851+ PTR? 9.210.22.87.in-addr.arpa. (42) 11:57:23.330444 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.59571: 45851 1/0/0 PTR host9-210-static.22-87-b.business.telecomitalia.it. (106) 11:57:23.331181 IP mirko.elfoip.net.44171 > google-public-dns-a.google.com.domain: 33339+ PTR? 8.8.8.8.in-addr.arpa. (38) 11:57:23.439405 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.44171: 33339 1/0/0 PTR google-public-dns-a.google.com. (82) 11:57:31.350654 IP host9-210-static.22-87-b.business.telecomitalia.it.30108 > mirko.elfoip.net.domain: 38269 [1au] A? ns.elfoip.net. 
(42) 11:57:31.351117 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.30108: 38269* 1/1/1 A 109.168.99.6 (72) If I dig @8.8.8.8 www.elfoip.net, NOTHING happens in the dump log!
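
    The tcpdump shows the ISP resolver reaching ns.elfoip.net and getting answers, while queries sent via 8.8.8.8 never arrive, so the next things to check are the delegation the parent .net servers publish and whether port 53 from arbitrary outside addresses actually reaches 109.168.99.6. A few standard diagnostics, run from a network outside 109.168.99.0/24:

        # walk the delegation from the root down and see where it stops
        dig +trace www.elfoip.net

        # ask a .net gTLD server directly which NS/glue records it hands out
        dig @a.gtld-servers.net elfoip.net NS

        # query the authoritative server directly, bypassing any resolver
        dig @109.168.99.6 www.elfoip.net A +norecurse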

    Read the article

  • How to restart RoR services after the server has been rebooted

    - by Alan DeLonga
    Update I have been searching around to see what services would possibly need to be restarted in my project after reboot. One of them was thinking sphinx, which I finally got to the point where it logs: [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections But I still cant run searchd or searchd --stop because there was no generated sphinx.conf file in the etc/sphinxsearch for more info refer to this open thread on thinking_sphinx after reboot I then turned to looking into restarting unicorn or thin based on some insight I got. The issue is when I check my gems I see one for thin AND unicorn. But when I try to start either one of them they have no file residing in etc/init.d/ where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace running ruby 1.9.2p290 rails (3.2.8, 3.2.7, 3.2.0) nginx/1.1.19 notice that there are gems for unicorn and thin but there is no unicorn.rb or thin.rb in my config folder for my app... I am still super lost if any one can give me some insight on some steps to take to figure this out I would really appreciate it. Anything would help, thanks for reading. thin 1.4.1 unicorn 4.3.1 When I run unicorn I get the same issue as referenced here : > /usr/local/bin/unicorn start /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError) from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>' from /usr/local/bin/unicorn:19:in `load' from /usr/local/bin/unicorn:19:in `<main>' When I run thin it just opens a command line prompt... 
/usr/local/bin/thin start >> Using rack adapter Other gems: * LOCAL GEMS * actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1) I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace, we rebooted through the option on their site. I contacted them and checked the status of our server, the port is open and operational. The problem is the app is not running when you go to the address for the site. Then when I put in the ip address of the server it just says "Welcome to Nginx". But in a log files I see: [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete I am not very versed in server side set up. I have also never worked on a Rails project that had to have specific services started before the application will start. Any insight as to how to figure out what services need to be restarted and how to go about restarting them would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
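
    For reference, unicorn's positional argument is a rackup file (defaulting to config.ru), which is why "unicorn start" fails with "rackup file (start) not readable"; both servers are normally launched from the application root. A hedged sketch, with the application path and ports as placeholders:

        cd /path/to/the/rails/app                       # wherever config.ru lives
        bundle exec unicorn -E production -p 8080 -D    # daemonized; no unicorn.rb is required
        # or, if the app was served by thin instead:
        bundle exec thin start -e production -p 3000 -d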

    Read the article

  • IPTables masquerading with one NIC

    - by Tuinslak
    Hi, I am running an OpenVPN server with only one NIC. This is my current layout: public.ip > Cisco firewall > lan.ip > OpenVPN server lan.ip = 192.168.22.70 The Cisco firewall forwards the requests to the oVPN server, thus so far everything works and clients are able to connect. However, all clients connected should be able to access 3 networks: lan1: 192.168.200.0 (vpn lan) > tun0 lan2: 192.168.110.0 (office lan) > eth1 (gw 192.168.22.1) lan3: 192.168.22.0 (server lan) > eth1 (broadcast network) So tun0 is mapped to eth1. Iptables output: # iptables-save # Generated by iptables-save v1.4.2 on Wed Feb 16 14:14:20 2011 *filter :INPUT ACCEPT [327:26098] :FORWARD DROP [305:31700] :OUTPUT ACCEPT [291:27378] -A INPUT -i lo -j ACCEPT -A INPUT -i tun0 -j ACCEPT -A INPUT -i ! tun0 -p udp -m udp --dport 67 -j REJECT --reject-with icmp-port-unreachable -A INPUT -i ! tun0 -p udp -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A FORWARD -d 192.168.200.0/24 -i tun0 -j DROP -A FORWARD -s 192.168.200.0/24 -i tun0 -j ACCEPT -A FORWARD -d 192.168.200.0/24 -i eth1 -j ACCEPT COMMIT # Completed on Wed Feb 16 14:14:20 2011 # Generated by iptables-save v1.4.2 on Wed Feb 16 14:14:20 2011 *nat :PREROUTING ACCEPT [302:26000] :POSTROUTING ACCEPT [3:377] :OUTPUT ACCEPT [49:3885] -A POSTROUTING -o eth1 -j MASQUERADE COMMIT # Completed on Wed Feb 16 14:14:20 2011 Yet, clients are unable to ping any ip (including 192.168.200.1, which is the oVPN's IP) When the machine was directly connected to the internet, with 2 NICs, it was quite simply solved with masquerading and adding static routes in the oVPN client's config. However, as masquerading won't accept virtual interfaces (eth0:0, etc) I am unable to get masquerading to work again (and I'm not even sure whether I need virtual interfaces). Thanks. 
Edit: OpenVPN server: # ifconfig eth1 Link encap:Ethernet HWaddr ba:e6:64:ec:57:ac inet addr:192.168.22.70 Bcast:192.168.22.255 Mask:255.255.255.0 inet6 addr: fe80::b8e6:64ff:feec:57ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6857 errors:0 dropped:0 overruns:0 frame:0 TX packets:4044 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:584046 (570.3 KiB) TX bytes:473691 (462.5 KiB) Interrupt:14 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:334 errors:0 dropped:0 overruns:0 frame:0 TX packets:334 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:33773 (32.9 KiB) TX bytes:33773 (32.9 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.200.1 P-t-P:192.168.200.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ifconfig on a client: # ifconfig eth0 Link encap:Ethernet HWaddr 00:22:64:71:11:56 inet addr:192.168.110.94 Bcast:192.168.110.255 Mask:255.255.255.0 inet6 addr: fe80::222:64ff:fe71:1156/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3466 errors:0 dropped:0 overruns:0 frame:0 TX packets:1838 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:997924 (974.5 KiB) TX bytes:332406 (324.6 KiB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:37847 errors:0 dropped:0 overruns:0 frame:0 TX packets:37847 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2922444 (2.7 MiB) TX bytes:2922444 (2.7 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.200.30 P-t-P:192.168.200.29 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:689 errors:0 dropped:18 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:468778 (457.7 KiB) wlan0 Link encap:Ethernet HWaddr 00:16:ea:db:ae:86 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:704699 errors:0 dropped:0 overruns:0 frame:0 TX packets:730176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:520385963 (496.2 MiB) TX bytes:225210422 (214.7 MiB) static routes line at the end of the client's config (I've been playing around with the 192.168.200.0 -- (un)commenting to see if anything changes): route 192.168.200.0 255.255.255.0 route 192.168.110.0 255.255.255.0 route 192.168.22.0 255.255.255.0 route on a vpn client: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.200.29 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 192.168.22.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.200.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.110.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.110.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 192.168.110.1 0.0.0.0 UG 0 0 0 eth0 edit: Weirdly enough, if I set push "redirect-gateway def1" in the server config, (and thus routes all traffic through VPN, which is not what I want), it seems to work.
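For comparison, a minimal forward-and-NAT sketch for a single-NIC OpenVPN server, using the 192.168.200.0/24 tunnel subnet from the question; the ip_forward sysctl and the state-match rule are assumptions about what may still be missing, and no virtual interfaces are needed because MASQUERADE is keyed on the outgoing interface, not on an address:

    sysctl -w net.ipv4.ip_forward=1                               # let the kernel route between tun0 and eth1
    iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -o eth1 -j MASQUERADE
    iptables -A FORWARD -i tun0 -o eth1 -s 192.168.200.0/24 -j ACCEPT
    iptables -A FORWARD -i eth1 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT

FORWARD rules are evaluated top to bottom, so any earlier rule that matches (such as the DROP on traffic to 192.168.200.0/24 from tun0 in the dump above) wins before these are reached.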

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into. Background My network is SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi† | | | | SMC8014 ASA-5505 GS608v2 gigE DIR-655 rev A3 gigE †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built- in router/firewall. Endpoints: MacBook Pro 17" mid-2010 iPhone 4S Fedora 12 Linux server running reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone. How I'm measuring "slower" I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that. Results Speedtest.net, closest server over Comcast business-class: CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8 Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0 Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9 iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6 Speedtest Mini on Linux PC: CONFIG | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> NetGear | 97.2 | 76.9 Mac <-> Ethernet <-> D-Link | 8.2 | 24.2 Mac <-> WiFi <-> D-Link | 1.0 | 8.6 Slow typing in SSH: Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth. What I've tried Swapping all "good" and "bad" cables Re-plugging "bad" cable from D-Link to Netgear and watching it be the "good" cable pulling cables away from power lines Verify that the Mac auto-detects the D-Link as gigE Try to verify the link speed of the D-Link <- Netgear connection, but the firmware doesn't report that Verify that the D-Link sees no TX/RX errors or collisions Use different Ethernet ports on both Netgear and D-Link Reset the D-Link to factory settings Upgrade the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest Reboot everything at least once On the Mac, disable Wi-Fi during the Ethernet tests, and unplug Ethernet during the Wi-Fi tests Using iStumbler, verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apt building) Verify that the only client connected to the Wi-Fi was the iPhone Verify that nothing was being chatty on my network according to the WISH log Enable and disable all sorts of D-Link settings, including forcing WAN auto-detect to gigE So. I don't mind buying a new access point—I wouldn't mind having a dual-link network—but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled. 
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
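One way to separate the link layer from anything TCP-related is to measure raw throughput and the negotiated link speed directly; a sketch assuming iperf is installed on both ends and the Linux PC's interface is eth0 (the server address is a placeholder):

    ethtool eth0 | grep -E 'Speed|Duplex'   # on the Linux PC: a damaged cable or port often renegotiates to 100 or 10 Mb/s, or half duplex
    iperf -s                                # on the Linux PC: throughput server
    iperf -c 192.168.1.10 -t 30             # on the Mac: run once through the D-Link path, once through the Netgear path

If the D-Link path tops out near 10 or 100 Mb/s in iperf while the Netgear path does not, the problem sits below TCP, which would fit a port or cable that renegotiated after the move.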

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up BugZilla to receive bug reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this I installed the test-taint package from the default Perl repository, which installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I ran checksetup.pl, which informs me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN gives the same result. I assume XML-RPC is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint function working? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. 
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
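Once xmlrpc.cgi itself loads, the interface can be exercised without bz_webservice_demo.pl by calling a method that needs no login; a minimal sketch using XMLRPC::Lite, which ships with the SOAP-Lite distribution already listed as installed (the URL is a placeholder for this installation):

    use strict;
    use warnings;
    use XMLRPC::Lite;

    # Bugzilla.version is a standard WebService method and requires no authentication
    my $call = XMLRPC::Lite
        ->proxy('http://localhost/bugzilla/xmlrpc.cgi')
        ->call('Bugzilla.version');
    die 'XML-RPC fault: ' . $call->faultstring . "\n" if $call->fault;
    print 'Bugzilla version: ' . $call->result->{version} . "\n";

A timeout here rather than a fault would point back at the IIS/CGI gateway rather than at Test-Taint.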

    Read the article

  • I added some options to stop spam with Postfix, but now won't send email to remote domains

    - by willdanceforfun
    I had a working Postfix server, but added a few lines to my main.cf in a hope to block some common spam. Those lines I added were: smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit It appears my postfix is now receiving normal emails fine, and blocking spam emails. But when I now try to use this server myself to send to a remote domain (an email not on my server) I get bounced, with maillog saying something like this: Nov 12 06:19:36 srv postfix/smtpd[11756]: NOQUEUE: reject: RCPT from unknown[xx.xx.x.xxx]: 450 4.1.2 <[email protected]>: Recipient address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[192.168.1.100]> Is that saying 'domain not found' for gmail.com? Why is that recipient address rejected? An output of my postconf-n is: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailbox_size_limit = 0 mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = primarydomain.net myhostname = mail.primarydomain.net myorigin = $myhostname newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = $mydestination, primarydomain.net, secondarydomain.org sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_client_restrictions = permit_sasl_authenticated smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth smtpd_sasl_type = dovecot smtpd_sender_restrictions = reject_unknown_sender_domain soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_domains = mail.secondarydomain.org virtual_alias_maps = hash:/etc/postfix/virtual Any insight greatly appreciated. 
Edit: here is the dig mx gmail.com from the server: ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> mx gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31766 ;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 4, ADDITIONAL: 14 ;; QUESTION SECTION: ;gmail.com. IN MX ;; ANSWER SECTION: gmail.com. 1207 IN MX 5 gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 30 alt3.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 20 alt2.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 40 alt4.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 10 alt1.gmail-smtp-in.l.google.com. ;; AUTHORITY SECTION: gmail.com. 109168 IN NS ns1.google.com. gmail.com. 109168 IN NS ns4.google.com. gmail.com. 109168 IN NS ns3.google.com. gmail.com. 109168 IN NS ns2.google.com. ;; ADDITIONAL SECTION: alt1.gmail-smtp-in.l.google.com. 207 IN A 173.194.70.27 alt1.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4001:c02::1b gmail-smtp-in.l.google.com. 200 IN A 173.194.67.26 gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:400c:c05::1b alt3.gmail-smtp-in.l.google.com. 207 IN A 74.125.143.27 alt3.gmail-smtp-in.l.google.com. 249 IN AAAA 2a00:1450:400c:c05::1b alt2.gmail-smtp-in.l.google.com. 207 IN A 173.194.69.27 alt2.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4008:c01::1b alt4.gmail-smtp-in.l.google.com. 207 IN A 173.194.79.27 alt4.gmail-smtp-in.l.google.com. 249 IN AAAA 2607:f8b0:400e:c01::1a ns2.google.com. 281970 IN A 216.239.34.10 ns3.google.com. 281970 IN A 216.239.36.10 ns4.google.com. 281970 IN A 216.239.38.10 ns1.google.com. 281970 IN A 216.239.32.10
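The dig output shows the server can resolve gmail.com, so one commonly suggested change is to the restriction order: smtpd_recipient_restrictions is evaluated left to right, and in the list above reject_unknown_recipient_domain runs before permit_mynetworks and permit_sasl_authenticated, so even an authenticated submission is rejected the moment the recipient-domain DNS check hiccups. A sketch of that reordering only, with the RBL list abbreviated, not a drop-in main.cf:

    smtpd_recipient_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_unauth_destination,
        reject_invalid_hostname,
        reject_unknown_recipient_domain,
        reject_unauth_pipelining,
        reject_rbl_client bl.spamcop.net,
        reject_rbl_client zen.spamhaus.org,
        permit

Continuation lines in main.cf only need to start with whitespace, as above.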

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
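One approach that sends only web traffic through the tunnel is policy routing instead of redirect-gateway: keep OpenVPN from installing the provider's routes, mark outgoing packets to ports 80 and 443, and give marked packets their own routing table via tun0. A sketch only; the mark value and table number are arbitrary, and it assumes the routed tun setup from the config above:

    # client.cfg: keep the tun interface and its address, ignore pushed routes
    route-nopull

    # shell, after the tunnel is up
    iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
    iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE   # marked packets leave with the tun0 source address
    ip rule add fwmark 1 table 100
    ip route add default dev tun0 table 100
    ip route flush cache

Because SSH and MySQL ports are never marked, they keep the instance's normal default route, which avoids the lockout described above.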

    Read the article

  • Getting ConnectionTimeoutException with the host did not accept the connection within timeout

    - by Sanjana
    Can Some one Help me, how we can solve the following problem. nested exception is org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 10000 ms at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.convertHttpInvokerAccessException(HttpInvokerClientInterceptor.java:211) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.invoke(HttpInvokerClientInterceptor.java:144) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy19.isEmployeeToken(Unknown Source) at com.clickandbuy.webapps.surfer.commons.ContextUtils.isEmployeeToken(ContextUtils.java:375) at com.clickandbuy.webapps.surfer.commons.ContextUtils.validateLogin(ContextUtils.java:248) at sun.reflect.GeneratedMethodAccessor1364.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.el.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:329) at org.jboss.el.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:274) at org.jboss.el.parser.AstMethodSuffix.getValue(AstMethodSuffix.java:59) at org.jboss.el.parser.AstValue.getValue(AstValue.java:67) at org.jboss.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:186) at org.springframework.binding.expression.el.BindingValueExpression.getValue(BindingValueExpression.java:54) at org.springframework.binding.expression.el.ELExpression.getValue(ELExpression.java:54) at org.springframework.webflow.action.EvaluateAction.doExecute(EvaluateAction.java:77) at org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188) at org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145) at org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51) at org.springframework.webflow.engine.ActionState.doEnter(ActionState.java:101) at org.springframework.webflow.engine.State.enter(State.java:194) at org.springframework.webflow.engine.Flow.start(Flow.java:535) at org.springframework.webflow.engine.impl.FlowExecutionImpl.start(FlowExecutionImpl.java:364) at org.springframework.webflow.engine.impl.FlowExecutionImpl.start(FlowExecutionImpl.java:222) at org.springframework.webflow.executor.FlowExecutorImpl.launchExecution(FlowExecutorImpl.java:140) at org.springframework.webflow.mvc.servlet.FlowHandlerAdapter.handle(FlowHandlerAdapter.java:193) at org.springframework.webflow.mvc.servlet.FlowController.handleRequest(FlowController.java:174) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at 
com.clickandbuy.webapps.surfer.commons.filter.LogUserIPFilter.doFilter(LogUserIPFilter.java:61) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.clickandbuy.webapps.surfer.commons.filter.AddHeaderFilter.doFilter(AddHeaderFilter.java:54) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.clickandbuy.webapps.commons.filter.SessionSizeFilter.doFilter(SessionSizeFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) Caused by: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 10000 ms at org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:155) at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:125) at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361) at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323) at org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor.executePostMethod(CommonsHttpInvokerRequestExecutor.java:195) at org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor.doExecuteRequest(CommonsHttpInvokerRequestExecutor.java:129) at org.springframework.remoting.httpinvoker.AbstractHttpInvokerRequestExecutor.executeRequest(AbstractHttpInvokerRequestExecutor.java:136) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.executeRequest(HttpInvokerClientInterceptor.java:191) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.executeRequest(HttpInvokerClientInterceptor.java:173) at 
org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.invoke(HttpInvokerClientInterceptor.java:141) ... 57 more Caused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:519) at sun.reflect.GeneratedMethodAccessor284.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:140) ... 70 more Thanks In Advance Sanjana
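The 10000 ms in the trace is the Commons HttpClient connect timeout used underneath Spring's HttpInvoker client, so the first thing to check is plain reachability of the target host and port; if the service is reachable but slow to accept connections, the timeout can be raised by handing the interceptor a preconfigured HttpClient. A sketch with a made-up service URL and interface, not the project's actual wiring:

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
    import org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor;
    import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

    public class TokenServiceClientFactory {

        // placeholder for the remote interface the proxy exposes
        public interface TokenService {
            boolean isEmployeeToken(String token);
        }

        public static TokenService createProxy() {
            HttpClient httpClient = new HttpClient(new MultiThreadedHttpConnectionManager());
            // connect timeout (the 10000 ms from the trace) and read timeout, in milliseconds
            httpClient.getHttpConnectionManager().getParams().setConnectionTimeout(30000);
            httpClient.getHttpConnectionManager().getParams().setSoTimeout(60000);

            HttpInvokerProxyFactoryBean factory = new HttpInvokerProxyFactoryBean();
            factory.setServiceUrl("http://backend.example.com/remoting/tokenService"); // placeholder URL
            factory.setServiceInterface(TokenService.class);
            factory.setHttpInvokerRequestExecutor(new CommonsHttpInvokerRequestExecutor(httpClient));
            factory.afterPropertiesSet();
            return (TokenService) factory.getObject();
        }
    }

A longer timeout only hides the problem if the host is actually unreachable, for example because a firewall silently drops the connection attempts.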

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with boost python a bit and ran into a problem. I tried to expose a C++ class to python which posed no problems. But I can't seem to manage to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using boost 1_42 prebuild by boostpro. I build the library using cmake and the vs2010 compiler. I have a very simple setup. The header-file (tutorial.h) looks like the following: #include <iostream> namespace TestBoostPython{ class TestClass { private: double m_x; public: TestClass(double x); double Get_x() const; void Set_x(double x); }; std::ostream &operator<<(std::ostream &ostr, const TestClass &ts); }; and the corresponding cpp-file looks like: #include <boost/python.hpp> #include "tutorial.h" using namespace TestBoostPython; TestClass::TestClass(double x) { m_x = x; } double TestClass::Get_x() const { return m_x; } void TestClass::Set_x(double x) { m_x = x; } std::ostream &operator<<(std::ostream &ostr, TestClass &ts) { ostr << ts.Get_x() << "\n"; return ostr; } BOOST_PYTHON_MODULE(testme) { using namespace boost::python; class_<TestClass>("TestClass", init<double>()) .add_property("x", &TestClass::Get_x, &TestClass::Set_x) .def(str(self)) ; } The CMakeLists.txt looks like the following: CMAKE_MINIMUM_REQUIRED(VERSION 2.8) project (testme) FIND_PACKAGE( Boost REQUIRED ) FIND_PACKAGE( Boost COMPONENTS python REQUIRED ) FIND_PACKAGE( PythonLibs REQUIRED ) set(Boost_USE_STATIC_LIBS OFF) set(Boost_USE_MULTITHREAD ON) INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS}) INCLUDE_DIRECTORIES ( ${PYTHON_INCLUDE_PATH} ) add_library(testme SHARED tutorial.cpp) target_link_libraries(testme ${Boost_PYTHON_LIBRARY}) target_link_libraries(testme ${PYTHON_LIBRARY} The build error I get is the following: Compiling... tutorial.cpp C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided with [ U=boost::python::api::object ] C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators::visit' with [ U=boost::python::api::object ] C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit,classT>(const V &,classT &)' being compiled with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_, V=boost::python::def_visitor ] C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor::visit>(classT &) const' being compiled with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_ ] .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_ &boost::python::class_::def(const boost::python::def_visitor &)' being compiled with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ] Does anyone have any idea on what went wrrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from python. I'd be very greatful for assistance. Thank you, Rickard
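Two things commonly bite in this exact spot: the operator<< definition in the .cpp takes TestClass& while the header declares const TestClass&, so it is a different overload from the declared one, and inside the class_<>::def call the unqualified name str can resolve to boost::python::str (the Python string type) rather than the operator helper, which tends to produce exactly this kind of object_operators::visit error. A sketch of the wrapper against the same header, offered as a pattern rather than a verified fix for this toolchain:

    #include <boost/python.hpp>
    #include "tutorial.h"

    using namespace boost::python;
    using namespace TestBoostPython;

    // Match the header exactly: str(self) wraps the object as const, so only a
    // const TestClass& overload of operator<< is usable.
    std::ostream &TestBoostPython::operator<<(std::ostream &ostr, const TestClass &ts)
    {
        ostr << ts.Get_x() << "\n";
        return ostr;
    }

    BOOST_PYTHON_MODULE(testme)
    {
        class_<TestClass>("TestClass", init<double>())
            .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
            .def(self_ns::str(self_ns::self))   // qualified to avoid picking up boost::python::str
            ;
    }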

    Read the article

  • Class initialization issues loading java.util.LogManager in Android Dalvik VM

    - by Freddy B. Rose
    I've done changes in an Android native library and installed a new system.img file but am now getting an unrelated Error on startup. I can get past it by swallowing the error but I wanted to know if anyone can explain what the issue is. The Android implementation of Logger.java claims that it is Forcing the LogManager to be initialized since its class init code performs necessary one-time setup. But this forced initialization results in a NoClassDefFoundError. I'm thinking that it has something to do with the class not having been preloaded by Zygote yet but am not that familiar with the whole class loaders and VM business. If anyone has some insight it would be greatly appreciated. Thanks. I/Zygote ( 1253): Preloading classes... D/skia ( 1253): ------ build_power_table 1.4 D/skia ( 1253): ------ build_power_table 0.714286 W/dalvikvm( 1253): Exception Ljava/lang/StackOverflowError; thrown during Ljava/util/logging/LogManager;. W/dalvikvm( 1253): Exception Ljava/lang/NoClassDefFoundError; thrown during Ljava/security/Security;. W/dalvikvm( 1253): Exception Ljava/lang/ExceptionInInitializerError; thrown during Landroid/net/http/HttpsConnection;. E/Zygote ( 1253): Error preloading android.net.http.HttpsConnection. E/Zygote ( 1253): java.lang.ExceptionInInitializerError E/Zygote ( 1253): at java.lang.Class.classForName(Native Method) E/Zygote ( 1253): at java.lang.Class.forName(Class.java:237) E/Zygote ( 1253): at java.lang.Class.forName(Class.java:183) E/Zygote ( 1253): at com.android.internal.os.ZygoteInit.preloadClasses(ZygoteInit.java:295) E/Zygote ( 1253): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:590) E/Zygote ( 1253): at dalvik.system.NativeStart.main(Native Method) E/Zygote ( 1253): Caused by: java.lang.ExceptionInInitializerError E/Zygote ( 1253): at javax.net.ssl.KeyManagerFactory$1.run(KeyManagerFactory.java:57) E/Zygote ( 1253): at javax.net.ssl.KeyManagerFactory$1.run(KeyManagerFactory.java:56) E/Zygote ( 1253): at java.security.AccessController.doPrivilegedImpl(AccessController.java:264) E/Zygote ( 1253): at java.security.AccessController.doPrivileged(AccessController.java:84) E/Zygote ( 1253): at javax.net.ssl.KeyManagerFactory.getDefaultAlgorithm(KeyManagerFactory.java:55) E/Zygote ( 1253): at org.apache.harmony.xnet.provider.jsse.SSLParameters.(SSLParameters.java:142) E/Zygote ( 1253): at org.apache.harmony.xnet.provider.jsse.SSLContextImpl.engineInit(SSLContextImpl.java:82) E/Zygote ( 1253): at android.net.http.HttpsConnection.initializeEngine(HttpsConnection.java:101) E/Zygote ( 1253): at android.net.http.HttpsConnection.(HttpsConnection.java:65) E/Zygote ( 1253): ... 
6 more E/Zygote ( 1253): Caused by: java.lang.NoClassDefFoundError: java.util.logging.LogManager E/Zygote ( 1253): at java.util.logging.Logger.initHandler(Logger.java:419) E/Zygote ( 1253): at java.util.logging.Logger.log(Logger.java:1094) E/Zygote ( 1253): at java.util.logging.Logger.warning(Logger.java:906) E/Zygote ( 1253): at org.apache.harmony.luni.util.MsgHelp.loadBundle(MsgHelp.java:61) E/Zygote ( 1253): at org.apache.harmony.luni.util.Msg.getString(Msg.java:60) E/Zygote ( 1253): at java.io.BufferedInputStream.read(BufferedInputStream.java:316) E/Zygote ( 1253): at java.io.FilterInputStream.read(FilterInputStream.java:138) E/Zygote ( 1253): at java.io.BufferedInputStream.fillbuf(BufferedInputStream.java:157) E/Zygote ( 1253): at java.io.BufferedInputStream.read(BufferedInputStream.java:243) E/Zygote ( 1253): at java.util.Properties.load(Properties.java:302) E/Zygote ( 1253): at java.security.Security$1.run(Security.java:80) E/Zygote ( 1253): at java.security.Security$1.run(Security.java:67) E/Zygote ( 1253): at java.security.AccessController.doPrivilegedImpl(AccessController.java:264) E/Zygote ( 1253): at java.security.AccessController.doPrivileged(AccessController.java:84) E/Zygote ( 1253): at java.security.Security.(Security.java:66) E/Zygote ( 1253): ... 15 more W/dalvikvm( 1253): threadid=3: thread exiting with uncaught exception (group=0x2aac6170)
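The mechanism the Logger comment describes is ordinary forced class initialization: asking for a class with initialize=true runs its static initializer right there, so anything that initializer cannot load is reported at the forcing site instead of at the original caller, which is why LogManager's failure surfaces inside Zygote's preload loop. A stripped-down illustration with made-up class names, not Android code:

    public class ForcedInitDemo {

        static final class Eager {
            // runs once, at class initialization time, like LogManager's one-time setup
            static {
                System.out.println("Eager initialized");
            }
        }

        public static void main(String[] args) throws Exception {
            // 'true' forces Eager's static initializer to run now; an exception inside it
            // would be reported here as ExceptionInInitializerError or NoClassDefFoundError
            Class.forName(Eager.class.getName(), true, ForcedInitDemo.class.getClassLoader());
        }
    }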

    Read the article

  • HgWebDir push permission denied error

    - by Gregg
    I have a new Fedora 12 server that I am attempting to set up Mercurial on. I have yum installed mercurial, and most things seem to work fine. However, after setting up hgwebdir.cgi through apache, I am unable to do an hg push to the only repo currently being hosted. The error I get back is: searching for changes abort: HTTP Error 500: Permission denied: .hg/store/lock httpd is running as user apache UID PID PPID C STIME TTY TIME CMD root 1691 1 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1694 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1695 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1696 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1697 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1698 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1699 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1700 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1701 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd and I set permissions so that the apache user owns the whole repo and everything. In a last ditch attempt, I even made the repo globally writable. [root@builds .hg]# ll total 424K drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 . drwxrwxrwx. 19 apache apache 4.0K 2010-04-15 13:33 .. -rw-rw-rw-. 2 apache apache 57 2010-04-13 11:42 00changelog.i -rw-rw-rw-. 1 apache apache 93 2010-04-16 15:33 branchheads.cache -rw-rw-rw-. 1 apache apache 192K 2010-04-15 13:33 dirstate -rw-r--r--. 1 apache apache 156 2010-04-19 14:43 hgrc -rw-rw-rw-. 1 apache apache 42 2010-04-15 13:33 last-message.txt -rw-rw-rw-. 2 apache apache 23 2010-04-13 11:42 requires drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 store -rw-rw-rw-. 1 apache apache 45 2010-04-14 14:08 tags.cache -rw-rw-rw-. 1 apache apache 7 2010-04-16 15:33 undo.branch -rw-rw-rw-. 1 apache apache 192K 2010-04-16 15:33 undo.dirstate [root@builds .hg]# cd store [root@builds store]# ll total 308K drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 . drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 .. -rw-rw-rw-. 1 apache apache 20K 2010-04-16 15:33 00changelog.i -rw-rw-rw-. 1 apache apache 81K 2010-04-16 15:33 00manifest.i drwxrwxrwx. 17 apache apache 4.0K 2010-04-13 11:47 data drwxrwxrwx. 3 apache apache 4.0K 2010-04-13 11:43 dh -rw-rw-rw-. 2 apache apache 177K 2010-04-15 11:03 fncache -rw-rw-rw-. 1 apache apache 67 2010-04-16 15:33 undo I have a clone of the repo elsewhere on the machine running as a different user. If I set the the default value in the [paths] section of the clones hgrc file to the local filepath on the server, the push works fine, but if I switch it to use the url, I get the error every time. Some possible quirks in how I've set this up... hgwebdir.cgi is sitting in /var/www/cgi-bin and the repo is a child of /opt/hg. I turned off suexec as well, and this doesn't seem to clear up the issue. The only line I added in the apache config to get hgwebdir running is: ScriptAlias /hg "/var/www/cgi-bin/hgwebdir.cgi" The hgweb.config is also in /var/www/cgi-bin and it's contents are: [collections] /opt/hg = /opt/hg [trusted] users = * [web] baseurl = /hg push_ssl = false allow_push = * The repo browser is working fine, it's just push that doesn't work. Apache error_log doesn't have anything in about this error at all.
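Since the Unix permissions above look right, one thing worth ruling out on a fresh Fedora install is SELinux: httpd runs confined, and a repository under /opt rather than /var/www is normally not labeled as writable web content, which can surface exactly as a failure to create .hg/store/lock. A sketch of how that is usually checked and relabeled; the context type name can vary between policy versions:

    getenforce                          # Enforcing means SELinux could be the blocker
    ausearch -m avc -ts recent          # look for denials mentioning /opt/hg
    setenforce 0                        # temporary test only; retry the push
    # if the push now works, relabel instead of leaving SELinux permissive
    semanage fcontext -a -t httpd_sys_rw_content_t "/opt/hg(/.*)?"
    restorecon -Rv /opt/hg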

    Read the article

  • WCF client encrypt message to JAVA WS using username_token with message protection client policy

    - by Alex
    I am trying to create a WCF client APP that is consuming a JAVA WS that uses username_token with message protection client policy. There is a private key that is installed on the server and a public certificate file was exported from the JKS keystore file. I have installed the public key into certificate store via MMC under Personal certificates. I am trying to create a binding that will encrypt the message and pass the username as part of the payload. I have been researching and trying the different configurations for about a day now. I found a similar situation on the msdn forum: http://social.msdn.microsoft.com/Forums/en/wcf/thread/ce4b1bf5-8357-4e15-beb7-2e71b27d7415 This is the configuration that I am using in my app.config <customBinding> <binding name="certbinding"> <security authenticationMode="UserNameOverTransport"> <secureConversationBootstrap /> </security> <httpsTransport requireClientCertificate="true" /> </binding> </customBinding> <endpoint address="https://localhost:8443/ZZZService?wsdl" binding="customBinding" bindingConfiguration="cbinding" contract="XXX.YYYPortType" name="ServiceEndPointCfg" /> And this is the client code that I am using: EndpointAddress endpointAddress = new EndpointAddress(url + "?wsdl"); P6.WCF.Project.ProjectPortTypeClient proxy = new P6.WCF.Project.ProjectPortTypeClient("ServiceEndPointCfg", endpointAddress); proxy.ClientCredentials.UserName.UserName = UserName; proxy.ClientCredentials.ClientCertificate.SetCertificate(StoreLocation.CurrentUser, StoreName.My, X509FindType.FindByThumbprint, "67 87 ba 28 80 a6 27 f8 01 a6 53 2f 4a 43 3b 47 3e 88 5a c1"); var projects = proxy.ReadProjects(readProjects); This is the .NET CLient error I get: Error Log: Invalid security information. On the Java WS side I trace the log : SEVERE: Encryption is enabled but there is no encrypted key in the request. I traced the SOAP headers and payload and did confirm the encrypted key is not there. Headers: {expect=[100-continue], content-type=[text/xml; charset=utf-8], connection=[Keep-Alive], host=[localhost:8443], Content-Length=[731], vsdebuggercausalitydata=[uIDPo6hC1kng3ehImoceZNpAjXsAAAAAUBpXWdHrtkSTXPWB7oOvGZwi7MLEYUZKuRTz1XkJ3soACQAA], SOAPAction=[""], Content-Type=[text/xml; charset=utf-8]} Payload: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><s:Header><o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"><o:UsernameToken u:Id="uuid-5809743b-d6e1-41a3-bc7c-66eba0a00998-1"><o:Username>admin</o:Username><o:Password>admin</o:Password></o:UsernameToken></o:Security></s:Header><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><ReadProjects xmlns="http://xmlns.dev.com/WS/Project/V1"><Field>ObjectId</Field><Filter>Id='WS-Demo'</Filter></ReadProjects></s:Body></s:Envelope> I have also tryed some other bindings but with no success: <basicHttpBinding> <binding name="basicHttp"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="Certificate"/> </security> </binding> </basicHttpBinding> <wsHttpBinding> <binding name="wsBinding"> <security mode="Message"> <message clientCredentialType="UserName" negotiateServiceCredential="false" /> </security> </binding> </wsHttpBinding> Your help will be greatly aprreciatted! Thanks!
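The Java side's complaint that no encrypted key is present is consistent with the binding above: authenticationMode="UserNameOverTransport" only adds a UsernameToken to the header and leaves all protection to TLS, so nothing is encrypted at the message layer, and the captured payload shows exactly that. For a username-token-with-message-protection policy the usual WCF shape is a symmetric binding derived from the service's certificate; a rough sketch in code, where the credentials, the de-spaced thumbprint, the address, and the exact MessageSecurityVersion the Java stack expects are all assumptions:

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.Security.Cryptography.X509Certificates;

    // "username over certificate": the client generates a symmetric key, encrypts it
    // with the service's public certificate (the EncryptedKey the Java side expects),
    // and protects the body with it
    var security = SecurityBindingElement.CreateUsernameForCertificateBindingElement();
    security.IncludeTimestamp = true;

    var binding = new CustomBinding(
        security,
        new TextMessageEncodingBindingElement(),
        new HttpsTransportBindingElement());

    var proxy = new P6.WCF.Project.ProjectPortTypeClient(
        binding, new EndpointAddress("https://localhost:8443/ZZZService"));
    proxy.ClientCredentials.UserName.UserName = "user";       // placeholder
    proxy.ClientCredentials.UserName.Password = "password";   // placeholder
    // the service's public certificate; thumbprints pasted from MMC often carry
    // spaces or hidden characters that break the lookup
    proxy.ClientCredentials.ServiceCertificate.SetDefaultCertificate(
        StoreLocation.CurrentUser, StoreName.My,
        X509FindType.FindByThumbprint, "6787ba2880a627f801a6532f4a433b473e885ac1");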

    Read the article

  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes, and I will need to overwrite some. I launch a command that does an archetype:generate in a directory where the generated project already exists. Is there a parameter that lets me overwrite existing archetypes? I have searched the Maven definitive guide, but it states that the only parameters accepted are: -DgroupId -DartifactId -Dversion -DpackageName -DarchetypeGroupId -DarchetypeArtifactId -DarchetypeVersion -DinteractiveMode I could just search the directory and delete the files, but this process is going to be done automatically (so no human involved, no brains involved) and I wouldn't like the machine deleting things around. Thanks for all! Edit: I almost forgot, here is some Maven trace: [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Batch mode [INFO] Archetype defined by properties [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0 [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: foo.tecnologia [INFO] Parameter: packageName, Value: foo.tecnologia [INFO] Parameter: basedir, Value: C:\temp\Desarrollo [INFO] Parameter: package, Value: foo.tecnologia [INFO] Parameter: version, Value: 1.0 [INFO] Parameter: artifactId, Value: Foo-Lib-Test [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286) at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69) at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at 
org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at com.foo.model.CSMavenCli.main(CSMavenCli.java:391) at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399) at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167) at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93) at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58) at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67) at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Unknown Source) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory Directory Foo-Lib-Test already exists - please run from a clean directory [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010 [INFO] Final Memory: 15M/28M [INFO] 
------------------------------------------------------------------------
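    The plugin output above lists no overwrite switch for archetype:generate, and none is documented among the parameters quoted in the question, so one workaround is to delete the previously generated project directory just before re-running the goal. The following is a minimal sketch of that cleanup step using only JDK APIs; the class and method names (ArchetypeOutputCleaner, deleteRecursively) are illustrative, while the base directory and artifactId are taken from the trace above.

        import java.io.File;
        import java.io.IOException;

        // Minimal sketch (assumption: deleting the old output is acceptable):
        // remove the previously generated project directory so the
        // "Directory ... already exists" check in archetype:generate never fires.
        public class ArchetypeOutputCleaner {

            public static void main(String[] args) throws IOException {
                // Base directory and artifactId as they appear in the trace above.
                File outputDir = new File("C:\\temp\\Desarrollo", "Foo-Lib-Test");
                if (outputDir.exists()) {
                    deleteRecursively(outputDir);
                }
                // It should now be safe to invoke archetype:generate again,
                // e.g. through the existing MavenAdmin.generateArchetype(...) call.
            }

            // Depth-first delete: children first, then the directory itself.
            private static void deleteRecursively(File file) throws IOException {
                File[] children = file.listFiles();
                if (children != null) {
                    for (File child : children) {
                        deleteRecursively(child);
                    }
                }
                if (!file.delete()) {
                    throw new IOException("Could not delete " + file);
                }
            }
        }

    Because the path is built only from the known base directory plus the artifactId that archetype:generate is about to recreate, the automated process never touches anything outside its own previous output.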


  • NetBeans Platform projects - problems with wrapped jar files that have dependencies

    - by I82Much
    For starters, this question is not so much about programming in the NetBeans IDE as about developing a NetBeans project (e.g. using the NetBeans Platform framework). I am attempting to use the BeanUtils library to introspect my domain models and provide the properties to display in a property sheet. Sample code:

        public class MyNode extends AbstractNode implements PropertyChangeListener {

            private static final PropertyUtilsBean bean = new PropertyUtilsBean();

            // snip

            protected Sheet createSheet() {
                Sheet sheet = Sheet.createDefault();
                Sheet.Set set = Sheet.createPropertiesSet();
                APIObject obj = getLookup().lookup(APIObject.class);
                PropertyDescriptor[] descriptors = bean.getPropertyDescriptors(obj);
                for (PropertyDescriptor d : descriptors) {
                    Method readMethod = d.getReadMethod();
                    Method writeMethod = d.getWriteMethod();
                    Class valueType = d.getClass();
                    Property p = new PropertySupport.Reflection(obj, valueType, readMethod, writeMethod);
                    set.put(p);
                }
                sheet.put(set);
                return sheet;
            }
        }

    I have created a wrapper module around commons-beanutils-1.8.3.jar and added a dependency on that module in the module containing the above code. Everything compiles fine. When I attempt to run the program and open the property sheet view (i.e., the above code actually gets run), I get the following error:

        java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:254)
            at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:259)
        Caused: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory starting from ModuleCL@64e48e45[org.apache.commons.beanutils] with possible defining loaders [ModuleCL@75da931b[org.netbeans.libs.commons_logging]] and declared parents []
            at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:261)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:254)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:399)
        Caused: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
            at org.apache.commons.beanutils.PropertyUtilsBean.<init>(PropertyUtilsBean.java:132)
            at org.myorg.myeditor.MyNode.<clinit>(MyNode.java:35)
            at org.myorg.myeditor.MyEditor.<init>(MyEditor.java:33)
            at org.myorg.myeditor.OpenEditorAction.actionPerformed(OpenEditorAction.java:13)
            at org.openide.awt.AlwaysEnabledAction$1.run(AlwaysEnabledAction.java:139)
            at org.netbeans.modules.openide.util.ActionsBridge.implPerformAction(ActionsBridge.java:83)
            at org.netbeans.modules.openide.util.ActionsBridge.doPerformAction(ActionsBridge.java:67)
            at org.openide.awt.AlwaysEnabledAction.actionPerformed(AlwaysEnabledAction.java:142)
            at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2028)
            at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2351)
            at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
            at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
            at javax.swing.AbstractButton.doClick(AbstractButton.java:389)
            at com.apple.laf.ScreenMenuItem.actionPerformed(ScreenMenuItem.java:95)
            at java.awt.MenuItem.processActionEvent(MenuItem.java:627)
            at java.awt.MenuItem.processEvent(MenuItem.java:586)
            at java.awt.MenuComponent.dispatchEventImpl(MenuComponent.java:317)
            at java.awt.MenuComponent.dispatchEvent(MenuComponent.java:305)
        [catch] at java.awt.EventQueue.dispatchEvent(EventQueue.java:638)
            at org.netbeans.core.TimableEventQueue.dispatchEvent(TimableEventQueue.java:125)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

    I understand that BeanUtils uses the commons-logging component. I have tried adding commons-logging in two different ways (creating a wrapper library around the commons-logging library, and putting a dependency on the Commons Logging Integration library). Neither solves the problem. I noticed that the same problem occurs with other wrapped libraries: if they themselves have external dependencies, the ClassNotFoundExceptions propagate like mad, even if I've wrapped the jars of the libraries they require and added them as dependencies to the original wrapped library module.

    I'm at my wits' end here. I noticed similar problems while googling:

      - Is there a known bug on NB Module dependency
      - Same issue I'm facing but when wrapping a different jar
      - NetBeans stance on this

    None of the three applies to me, and none conclusively helps. (A JDK-only workaround sketch follows below.) Thank you, Nick
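    One way to sidestep the wrapped commons-beanutils module (and therefore its transitive commons-logging requirement) is to build the property sheet with the JDK's own java.beans.Introspector, which needs no extra NetBeans module dependencies at all. The sketch below is a hedged alternative to the createSheet() in the question, not the poster's actual fix: APIObject is the domain type from the question, IntrospectorNode is an illustrative class name, and the sketch passes d.getPropertyType() (the type of the property itself) to PropertySupport.Reflection rather than d.getClass().

        import java.beans.BeanInfo;
        import java.beans.IntrospectionException;
        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;

        import org.openide.nodes.AbstractNode;
        import org.openide.nodes.Children;
        import org.openide.nodes.Node;
        import org.openide.nodes.PropertySupport;
        import org.openide.nodes.Sheet;
        import org.openide.util.lookup.Lookups;

        // Sketch: a node whose property sheet is built with JDK introspection,
        // so no commons-beanutils (and no commons-logging) module is required.
        public class IntrospectorNode extends AbstractNode {

            public IntrospectorNode(APIObject obj) {
                super(Children.LEAF, Lookups.singleton(obj));
            }

            @Override
            @SuppressWarnings("unchecked")
            protected Sheet createSheet() {
                Sheet sheet = Sheet.createDefault();
                Sheet.Set set = Sheet.createPropertiesSet();
                APIObject obj = getLookup().lookup(APIObject.class);
                try {
                    // Object.class as the stop class skips getClass() and friends.
                    BeanInfo info = Introspector.getBeanInfo(obj.getClass(), Object.class);
                    for (PropertyDescriptor d : info.getPropertyDescriptors()) {
                        if (d.getReadMethod() == null) {
                            continue; // skip write-only or indexed-only properties
                        }
                        // Use the property's declared type, not PropertyDescriptor.class.
                        Node.Property p = new PropertySupport.Reflection(
                                obj, d.getPropertyType(), d.getReadMethod(), d.getWriteMethod());
                        p.setName(d.getName());
                        set.put(p);
                    }
                } catch (IntrospectionException ex) {
                    // Leave the property set empty if introspection fails.
                }
                sheet.put(set);
                return sheet;
            }
        }

    If the wrapped commons-beanutils module is still wanted, the "declared parents []" part of the trace suggests the commonly recommended fix of making the beanutils wrapper module itself declare a dependency on the commons-logging module (rather than adding that dependency only to the consuming module); that is an assumption based on the trace, not something confirmed in the question.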


  • Spring JDBC

    - by Adhir
    I am getting the following exception when using Derby to do an UPDATE in an Oracle database:

        org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.derby.client.am.DisconnectException: A communication error has been detected. Communication protocol being used: Reply.fill(). Communication API being used: InputStream.read(). Location where the error was detected: insufficient data. Communication function detecting the error: *. Protocol specific error codes(s) TCP/IP SOCKETS
            at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:82)
            at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:522)
            at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:737)
            at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:795)
            at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:803)
            at com.poc.data.dao.UserDerbyDao.create(UserDerbyDao.java:19)
            at com.poc.register.RegisterUtil.registerUser(RegisterUtil.java:34)
            at com.poc.service.MyService.doRegister(MyService.java:108)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:175)
            at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:67)
            at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:166)
            at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:114)
            at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:74)
            at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:114)
            at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:66)
            at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:658)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:616)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:607)
            at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:309)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:425)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:590)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Unknown Source)
        Caused by: org.apache.derby.client.am.DisconnectException: A communication error has been detected. Communication protocol being used: Reply.fill(). Communication API being used: InputStream.read(). Location where the error was detected: insufficient data. Communication function detecting the error: *. Protocol specific error codes(s) TCP/IP SOCKETS
            at org.apache.derby.client.net.NetAgent.throwCommunicationsFailure(Unknown Source)
            at org.apache.derby.client.net.Reply.fill(Unknown Source)
            at org.apache.derby.client.net.Reply.ensureALayerDataInBuffer(Unknown Source)
            at org.apache.derby.client.net.Reply.readDssHeader(Unknown Source)
            at org.apache.derby.client.net.Reply.startSameIdChainParse(Unknown Source)
            at org.apache.derby.client.net.NetConnectionReply.readExchangeServerAttributes(Unknown Source)
            at org.apache.derby.client.net.NetConnection.readServerAttributesAndKeyExchange(Unknown Source)
            at org.apache.derby.client.net.NetConnection.flowServerAttributesAndKeyExchange(Unknown Source)
            at org.apache.derby.client.net.NetConnection.flowUSRIDPWDconnect(Unknown Source)
            at org.apache.derby.client.net.NetConnection.flowConnect(Unknown Source)
            at org.apache.derby.client.net.NetConnection.(Unknown Source)
            at org.apache.derby.jdbc.ClientDriver.connect(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:291)
            at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:277)
            at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:259)
            at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:240)
            at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:113)
            at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79)
            ... 37 more

    Any help would be appreciated (see the DataSource sketch below). Thanks in advance, Adhir
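    The root cause is thrown while the Derby network client (org.apache.derby.jdbc.ClientDriver) is opening its socket, so a DisconnectException like this usually means the DataSource points at a Derby network URL that nothing is listening on, or at a server that does not speak Derby's network protocol (an Oracle listener, for instance). Below is a minimal sketch of a DriverManagerDataSource wired for each backend; the host names, port, database/SID, credentials and the users table are placeholders rather than values taken from the question.

        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.datasource.DriverManagerDataSource;

        // Sketch: configure the DataSource for the database actually being targeted.
        // All connection details below are placeholder values.
        public class DataSourceConfigSketch {

            // Derby Network Server: the server must be started and listening
            // (default port 1527); otherwise the client throws DisconnectException.
            static DriverManagerDataSource derbyDataSource() {
                DriverManagerDataSource ds = new DriverManagerDataSource();
                ds.setDriverClassName("org.apache.derby.jdbc.ClientDriver");
                ds.setUrl("jdbc:derby://localhost:1527/mydb;create=true");
                ds.setUsername("app");
                ds.setPassword("app");
                return ds;
            }

            // Oracle: use the Oracle thin driver and URL, not the Derby client.
            static DriverManagerDataSource oracleDataSource() {
                DriverManagerDataSource ds = new DriverManagerDataSource();
                ds.setDriverClassName("oracle.jdbc.OracleDriver");
                ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");
                ds.setUsername("scott");
                ds.setPassword("tiger");
                return ds;
            }

            public static void main(String[] args) {
                // Pick whichever backend the UPDATE is really meant for.
                JdbcTemplate jdbc = new JdbcTemplate(oracleDataSource());
                jdbc.update("UPDATE users SET active = ? WHERE id = ?", 1, 42);
            }
        }

    If Derby really is the intended backend, its network server has to be running before the web application connects (the Derby distribution ships a startNetworkServer script); if the target is Oracle, the Derby client driver and the jdbc:derby URL should not appear in the configuration at all.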

