Search Results

Search found 15035 results on 602 pages for 'request'.


  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue -- mainly during our busiest time of the day, between 10:30-11:45AM -- where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below.

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
           at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
           at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
           at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all the web servers communicate with. I was originally using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it!

    I have attempted creating multiple application pools, adding additional servers, changing the TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they could use read-only session state instead of read/write per page request -- as I understand it, read/write state basically causes every page to make round-trips to the state server even if state isn't used on the page. Unfortunately, the application is developed by our product company, and they insist that it is something with my environment, because other clients do not have this sort of issue. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application...

    Microsoft's position on the issue is that the session state needs to be reduced; the return code being reported back from the state server indicates its buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity!
    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
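    On the read-only idea, for reference, a hedged sketch of what that looks like in ASP.NET (the attributes shown are standard settings; the state host name is a placeholder). Pages marked ReadOnly take a non-exclusive read and skip the end-of-request write back to the state server:

        <%@ Page Language="C#" EnableSessionState="ReadOnly" %>

        <!-- or as a site-wide default in web.config -->
        <configuration>
          <system.web>
            <pages enableSessionState="ReadOnly" />
            <sessionState mode="StateServer"
                          stateConnectionString="tcpip=statehost:42424"
                          stateNetworkTimeout="30" />
          </system.web>
        </configuration>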


  • Auth-Type := Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.1X EAP authentication set up using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to.

    I have configured group membership checking in /etc/freeradius/modules/ldap like so:

        groupname_attribute = cn
        groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))"

    and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options, however.

        copy_request_to_tunnel = yes
        use_tunneled_reply = yes

    I'm running eapol_test like this to test the setup:

        eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP

    with the following peap-mschapv2.conf file:

        network={
            ssid="Example-EAP"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="mgorven"
            anonymous_identity="anonymous"
            password="foobar"
            phase2="autheap=MSCHAPV2"
        }

    With the following in /etc/freeradius/users, and running freeradius -Xx, I can see that the LDAP group retrieval works and that the SSID is extracted:

        DEFAULT Ldap-Group == "employees"

        Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven))))
        Debug: rlm_ldap::ldap_groupcmp: User found in group employees
        ...
        Info: expand: %{7} -> Example-EAP

    Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"
        DEFAULT Auth-Type := Reject

    But this immediately rejects the Access-Request in the outer tunnel, because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests, like so:

        DEFAULT Ldap-Group == "employees"
        DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1"
                Auth-Type := Reject,
                Reply-Message = "User does not belong to any groups which may access this SSID."

    Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept:

        Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member.
        Info: [files] users: Matched entry DEFAULT at line 209
        Info: ++[files] returns ok
        ...
        Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel)
        Info: WARNING: Empty section. Using default return values.
        ...
        Info: [peap] Got tunneled reply code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Got tunneled reply RADIUS code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Tunneled authentication was successful.
        Info: [peap] SUCCESS
        Info: [peap] Saving tunneled attributes for later
        ...
        Sending Access-Accept of id 11 to 172.16.2.44 port 60746
            Reply-Message = "User does not belong to any groups which may access this SSID."
            User-Name = "mgorven"

    and eapol_test reports:

        RADIUS message: code=2 (Access-Accept) identifier=11 length=233
           Attribute 18 (Reply-Message) length=64
              Value: 'User does not belong to any groups which may access this SSID.'
           Attribute 1 (User-Name) length=9
              Value: 'mgorven'
        ...
        SUCCESS

    Why isn't the request being rejected, and is this the right way to implement this?


  • HTTP request via iptables --to-destination IP redirect results in no response

    - by Wouter Vegter
    I have two Ubuntu servers, each with its own IP address. Let's call them server1 and server2, having respectively IPs 1.1.1.1 and 2.2.2.2. I have nginx running on server2. The sole purpose I want server1 to have is to redirect all incoming HTTP (so port 80) requests to server2, without clients noticing that their request is being redirected.

    I tried the following command on server1:

        iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2

    But when I enter 1.1.1.1 in my browser I get no response: the page keeps trying to load without giving any message or error message (I get a time-out after 2-3 mins). When I remove the above iptables rule, I immediately do get a "page not found" error when I enter 1.1.1.1 in my browser; so something is working, but not as it should. When I enter 1.1.1.1, I want the HTML page to load that is hosted on 2.2.2.2 -- and when I enter 2.2.2.2 in my browser, I do see that webpage load.

    Could anyone please help me with this? I have been searching for quite some time (on Server Fault & Google), so that's why I ask. Many thanks for reading my question!

    Update: Thank you all for your information. Unfortunately I still get no response. I have the following iptables configuration:

        root@ip-10-48-238-216:/home/ubuntu# sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

        root@ip-10-48-238-216:/home/ubuntu# sudo iptables -t nat -L
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source               destination
        DNAT       tcp  --  anywhere             anywhere            tcp dpt:www to:2.2.2.2

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source               destination

    When I run tcpdump and make a request via Chrome to 1.1.1.1, I get the following:

        root@ip-10-48-238-216:/home/ubuntu# sudo tcpdump -i eth0 port 80 -vv
        tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:56:18.346625 IP (tos 0x0, ttl 52, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xb398 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
        13:56:18.346662 IP (tos 0x0, ttl 51, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9ee0 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
        13:56:18.598747 IP (tos 0x0, ttl 52, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xac40 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
        13:56:18.598777 IP (tos 0x0, ttl 51, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9788 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
        ^C
        4 packets captured
        4 packets received by filter
        0 packets dropped by kernel

    The addresses mentioned relate to the following:

        212-123-161-112.ip.telfort.nl.16386 : my personal computer
        ww1dc1.shopreme.com.www : DNS of server2 (2.2.2.2)
        ip-10-48-238-216.eu-west-1.compute.internal.www : Amazon Web Services EC2 internal address of server1 (1.1.1.1)

    However, the tcpdump log on server2 (2.2.2.2) stays empty, and I get no response back in my browser. I am able to ping from server1 to server2. And net.ipv4.ip_forward is set to 1, and so is /proc/sys/net/ipv4/ip_forward. Could there be anything else that is missing?
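    A hedged sketch of the usually missing piece: a DNAT-only rule rewrites the destination, but server2 then answers the client directly from 2.2.2.2, and the client drops those replies because it opened the connection to 1.1.1.1. Source-NATing the forwarded traffic on server1 keeps the replies flowing back through it (and since this is EC2, the instance's source/destination check may also need disabling):

        # on server1 (sketch)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2
        iptables -t nat -A POSTROUTING -p tcp -d 2.2.2.2 --dport 80 -j MASQUERADE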


  • HTTP request, strange socket behaviour

    - by hoodoos
    I experience strange behaviour when doing HTTP requests through sockets. Here is the request:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 10926
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    Then goes the body of my request, with the given content length. After that I receive something like:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    So everything seems to be fine here. I read all contents from the network stream and successfully receive the response. But the socket which I'm polling on switches its modes like this:

        write ( I write headers and request here )
        read  ( after headers are sent I begin to receive the response )
        write ( STRANGE BEHAVIOUR HERE. WHY? Here I really send nothing )
        read  ( here it switches back to read again )

    The last two steps can repeat several times. So I want to ask: what leads to the socket's mode change? In this case it's not a big problem, but when I use gzip compression in my request (no idea how it's related) and ask the server to send a gzipped response to me, like this:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 1076
        Accept-Encoding: gzip
        Content-Encoding: gzip
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    I receive a response like this:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Encoding: gzip
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 07:26:33 GMT

        2000
        ?

    I receive a chunk size and the gzip header; it's all okay. And here's what is happening with my poor little socket meanwhile:

        write ( I write headers and request here )
        read  ( after headers are sent I begin to receive the response )
        write ( STRANGE BEHAVIOUR HERE. And it finally sits here forever waiting for me to send something! But according to HTTP I don't have to send anything more! )

    What can it be related to? What does it want me to send? Is it the remote web server's problem, or am I missing something?

    PS All actual service references and logins/passwords are replaced with fake ones :)
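    A hedged note on the mode flipping: select()/poll() report a socket as writable whenever the local send buffer has room -- it carries no message from the peer, so a socket that is always registered for write events will keep flipping to "write" (and will sit "writable" forever at the end) even though the server expects nothing. The usual fix is to request write-readiness only while bytes are actually queued; a minimal sketch (in Python for brevity, the same applies to any select/poll loop):

        import select

        def poll_once(sock, outbuf, timeout=1.0):
            # Only watch for writability while there is pending output;
            # "writable" just means the send buffer has room.
            rlist = [sock]
            wlist = [sock] if outbuf else []
            readable, writable, _ = select.select(rlist, wlist, [], timeout)
            return readable, writable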


  • With SQLAlchemy, how to dynamically bind to a database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via SQLAlchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security.

    Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date.

    We're using SQLAlchemy's declarative package, though I can't see that this has any bearing on the matter.

    Most examples of SQLAlchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files.

    My question is, how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request?

    We have this working -- we think -- but I'm not only not certain of the safety of it, I also think it looks incredibly heavyweight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

        # in lib/base.py on the BaseController class
        def __call__(self, environ, start_response):
            # note: web session contains {'username': XXX, 'password': YYY}
            url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
            url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
            finance = create_engine(url1)
            staff = create_engine(url2)
            db_configure(staff, finance)  # see below
            # ... etc

        # in another file
        Session = scoped_session(sessionmaker())

        def db_configure(staff, finance):
            s = Session()
            from db.finance import Employee, Customer, Invoice
            for c in [Employee, Customer, Invoice]:
                s.bind_mapper(c, finance)
            from db.staff import Project, Hour
            for c in [Project, Hour]:
                s.bind_mapper(c, staff)
            s.close()  # prevents leaking connections between sessions?

    So the create_engine() calls occur on every request... I can see that being needed, and the connection pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way.

    Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
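    A hedged sketch of a lighter-weight shape for the same idea (untested; it assumes each model module exposes its declarative Base, and whether Session.configure(binds=...) accepts base classes on 0.5 would need verifying): cache one engine per connection URL -- and therefore per credential pair, so users never share engines -- and bind whole bases instead of one mapper at a time:

        from sqlalchemy import create_engine
        from sqlalchemy.orm import scoped_session, sessionmaker

        Session = scoped_session(sessionmaker())
        _engines = {}  # keyed by URL, which includes the user's credentials

        def engine_for(url):
            if url not in _engines:
                _engines[url] = create_engine(url)
            return _engines[url]

        def db_configure(username, password):
            finance = engine_for('postgres://%s:%s@server1/finance' % (username, password))
            staff = engine_for('postgres://%s:%s@server2/staff' % (username, password))
            # hypothetical names: each model module's declarative Base covers
            # every class mapped in it, replacing the per-table bind_mapper loop
            from db.finance import Base as FinanceBase
            from db.staff import Base as StaffBase
            Session.configure(binds={FinanceBase: finance, StaffBase: staff})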


  • Objects not loading on second request in RestKit

    - by Holger Edward Wardlow Sindbæk
    I'm sending off 2 requests simultaneously with RestKit, and I receive a response back from both of them, but only one of the requests receives any objects. If I send them off one by one, then my object loader receives all objects.

    First request:

        self.firstObjectManager = [RKObjectManager sharedManager];
        [self.firstObjectManager loadObjectsAtResourcePath:[NSString stringWithFormat:@"/%@.json", subUrl]
            usingBlock:^(RKObjectLoader *loader){
                [loader.mappingProvider setObjectMapping:designArrayMapping forKeyPath:@""];
                loader.userData = @"design";
                loader.delegate = self;
                [loader sendAsynchronously];
        }];

    Second request:

        self.secondObjectManager = [RKObjectManager sharedManager];
        [self.secondObjectManager loadObjectsAtResourcePath:[NSString stringWithFormat:@"/%@.json", subUrl]
            usingBlock:^(RKObjectLoader *loader){
                [loader.mappingProvider setObjectMapping:designerMapping forKeyPath:@""];
                loader.userData = @"designer";
                loader.delegate = self;
                [loader sendAsynchronously];
        }];

    My object loader:

        -(void)objectLoader:(RKObjectLoader *)objectLoader didLoadObjects:(NSArray *)objects {
            //NSLog(@"This happened: %@", objectLoader.userData);
            if (objectLoader.userData == @"design") {
                NSLog(@"Design happened: %i", objects.count);
            } else if (objectLoader.userData == @"designer") {
                NSLog(@"designer: %@", [objects objectAtIndex:0]);
            }
        }

    The log output:

        2012-11-18 14:36:19.607 RestKitTest5[14220:c07] Started loading of request: designer
        2012-11-18 14:36:22.575 RestKitTest5[14220:c07] I restkit.network:RKRequest.m:680:-[RKRequest updateInternalCacheDate] Updating cache date for request <RKObjectLoader: 0x95c3160> to 2012-11-18 19:36:22 +0000
        2012-11-18 14:36:22.576 RestKitTest5[14220:c07] response code: 200
        2012-11-18 14:36:22.584 RestKitTest5[14220:c07] Design happened: 0
        2012-11-18 14:36:22.603 RestKitTest5[14220:c07] I restkit.network:RKRequest.m:680:-[RKRequest updateInternalCacheDate] Updating cache date for request <RKObjectLoader: 0xa589b50> to 2012-11-18 19:36:22 +0000
        2012-11-18 14:36:22.605 RestKitTest5[14220:c07] response code: 200
        2012-11-18 14:36:22.606 RestKitTest5[14220:c07] designer: <DesignerData: 0xa269fc0>

    Update: setting my base URL:

        RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"];
        [RKObjectManager setSharedManager:[RKObjectManager managerWithBaseURL:baseURL]];

    Solution: the problem was that I used the shared manager for both object managers, so I ended up doing:

        RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"];
        RKObjectManager *myObjectManager = [[RKObjectManager alloc] initWithBaseURL:baseURL];
        self.firstObjectManager = myObjectManager;

    and:

        RKURL *baseURL = [RKURL URLWithBaseURLString:@"http://iphone.meer.li"];
        RKObjectManager *myObjectManager = [[RKObjectManager alloc] initWithBaseURL:baseURL];
        self.secondObjectManager = myObjectManager;


  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can check out, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.

    Versions:

        The server OS is Debian 6.0.7 (Squeeze)
        Apache 2.2.16-6+squeeze11
        Server Subversion 1.6.12dfsg-7
        Clients are running Windows
        Clients tried: TortoiseSVN 1.8.2 Build 24708 64bit, SVN CLI client 1.8.3 (r1516576)

    Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On

            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>

            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    Tortoise produces the following error during a tag operation in its tag result window:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

    You'll see that the second to last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes, this is a test repository on a test server, I am experiencing this same issue in this test setup, where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.


  • org.apache.commons.httpclient.HttpClient stuck on request

    - by Roman
    Hi all, I have this code:

        while (!lastPage && currentPage < maxPageSize) {
            StringBuilder request = new StringBuilder("http://catalog.bizrate.com/services/catalog/v1/us/" + " some more ...");
            currentPage++;
            HttpClient client = new HttpClient(new MultiThreadedHttpConnectionManager());
            client.getHttpConnectionManager().getParams().setConnectionTimeout(15000);
            GetMethod get = new GetMethod(request.toString());
            HostConfiguration configuration = new HostConfiguration();
            int iGetResultCode = client.executeMethod(configuration, get);
            if (iGetResultCode != HttpStatus.SC_OK) {
                System.err.println("Method failed: " + get.getStatusLine());
                return;
            }
            XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(get.getResponseBodyAsStream());
            while (reader.hasNext()) {
                int type = reader.next();
                // some more xml parsing ...
            }
            reader.close();
            get.releaseConnection();
        }

    Somehow the code gets stuck from time to time on the line executing the request. I can't find the configuration for a request timeout (as opposed to the connection timeout). Can someone help me, or is there something I am doing basically wrong? The client I am using is org.apache.commons.httpclient.HttpClient.
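    A hedged sketch for Commons HttpClient 3.x: the read timeout is the socket timeout (SO_TIMEOUT), which is separate from the connection timeout already being set above. It can be set on the connection manager or per method:

        // read timeout on all connections from this manager
        client.getHttpConnectionManager().getParams().setConnectionTimeout(15000);
        client.getHttpConnectionManager().getParams().setSoTimeout(15000);

        // or per request
        get.getParams().setSoTimeout(15000);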


  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following:

        var options = {
            host: 'my.url',
            port: 80,
            path: '/login',
            method: 'POST'
        };

        var req = http.request(options, function(res){
            console.log('status: ' + res.statusCode);
            console.log('headers: ' + JSON.stringify(res.headers));
            res.setEncoding('utf8');
            res.on('data', function(chunk){
                console.log("body: " + chunk);
            });
        });

        req.on('error', function(e) {
            console.log('problem with request: ' + e.message);
        });

        // write data to request body
        req.write('data\n');
        req.write('data\n');
        req.end();

    This chunk is taken more or less from the node.js website, so it should be correct. The only thing I don't see is how to include the username and password in the options variable to actually log in. This is how I deal with the data in the server node.js (I use Express):

        app.post('/login', function(req, res){
            var user = {};
            user.username = req.body.username;
            user.password = req.body.password;
            ...
        });

    How can I add those username and password fields to the options variable to have it logged in? Thanks
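    A hedged sketch of the usual approach (it assumes Express's body-parsing middleware is enabled on the server, since that is what populates req.body): the credentials go in the request body, not in options -- form-encode them, declare the content type, and write the string as the body:

        var querystring = require('querystring');

        var body = querystring.stringify({ username: 'myuser', password: 'mypass' });
        var options = {
            host: 'my.url',
            port: 80,
            path: '/login',
            method: 'POST',
            headers: {
                'Content-Type': 'application/x-www-form-urlencoded',
                'Content-Length': Buffer.byteLength(body)
            }
        };
        var req = http.request(options, function(res){ /* as before */ });
        req.write(body);
        req.end();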


  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading stdout into a StringIO and then parsing for things like Registrar, Name Servers, etc.

    My question is: there seem to be many paid whois services, which means there must be a reason why people don't just use this Ubuntu package. I'm wondering whether there's a limit on the number of requests a single IP may make to the whois servers the package queries? I will probably be making 250 domain lookups per IP, or maybe more.

    Also, I've found that some domains aren't searchable:

        qmul.ac.uk is searchable
        kat.ph is not searchable
        ahram.org.eg is not searchable

    Any particular reason for that?
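    A hedged note on how this works: the whois package has no server of its own -- it just picks the appropriate registry/registrar whois server and queries it directly, and it is those registries that rate-limit by source IP (limits vary per TLD and are mostly undocumented; paid services largely exist for parsed output and higher quotas). The unsearchable domains are likely TLDs whose registries, at least at the time, offered no public whois server. A minimal sketch of the lookup with a crude throttle:

        import subprocess
        import time

        def whois_lookup(domain, pause=2.0):
            # Shell out to the system whois binary and pace the calls,
            # since the upstream registry servers are what rate-limit.
            proc = subprocess.Popen(['whois', domain], stdout=subprocess.PIPE)
            out, _ = proc.communicate()
            time.sleep(pause)  # crude throttle between lookups
            return out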


  • KeepAlive packets over a SOAP request

    - by Nycto
    I've been debugging some SOAP requests we are making between two servers on the same VLAN. The app on one server is written in PHP, the app on the other is written in Java. I can control and make changes to the PHP code, but I can't affect the Java server. The PHP app forms the XML using the DOMDocument objects, then sends the request using the cURL extension.

    When the SOAP request took longer than 5 minutes to complete, it would always wait until the max timeout limit and exit with a message like this:

        Operation timed out after 900000 milliseconds with 0 bytes received

    After sniffing the packets that were being sent, it turns out that the problem was caused by a 5 minute timeout in the network that was closing what it thought was a stale connection. There were two ways to fix it: bump up the timeout in iptables, or start sending keep-alive packets over the request. To be thorough, I would like to implement both solutions. Bumping up the timeout was easy for ops to do, but sending keep-alive packets is turning out to be difficult. The cURL library itself supports this (see the --keepalive-time flag for the CLI app), but it doesn't appear that this has been implemented in the PHP cURL extension. I even checked the source to make sure it wasn't an undocumented feature.

    So my question is this: how the heck can I get these packets sent? I see a few clear options, but I don't like any of them:

        1. Write a wrapper that will kick off the request by shell_execing the CLI app. This is a hack that I just don't like.
        2. Update the cURL extension to support this. This is a non-option according to ops.
        3. Open the socket myself. I know just enough to be dangerous. I also haven't seen a way to do this with fsockopen, but I could be missing something.
        4. Switch to another library. What exists that supports this?

    Thanks for any help you can offer.
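    A hedged footnote for anyone on a newer stack: PHP 5.5+ (with libcurl >= 7.25) did eventually expose exactly this in the cURL extension, so if upgrading is ever an option the whole problem reduces to:

        // requires PHP >= 5.5 and libcurl >= 7.25
        curl_setopt($ch, CURLOPT_TCP_KEEPALIVE, 1);   // enable TCP keep-alive probes
        curl_setopt($ch, CURLOPT_TCP_KEEPIDLE, 120);  // idle seconds before the first probe
        curl_setopt($ch, CURLOPT_TCP_KEEPINTVL, 60);  // seconds between probes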


  • CalDAV: request error problem

    - by Louis W
    We have an Xserve which hosts our internal calendars through CalDAV. I personally have 4 calendars from the server set to show in my iCal. Frequently throughout the day, at random moments, I will get the error below. If I hit Ignore, it will just continue to work without any issues. This is more of an annoyance than anything (the Dock icon jumping all the time). I have full permission to read anything, so it is not an actual permission issue. Anyone know what might be the issue?

        Request Error
        Access to account "FOO" is not permitted.
        The server has responded: "HTTP/1.1 403 Forbidden"
        to operation CalDAVAccountRefreshQueueableOperation


  • ASP.NET Ajax - Asynch request has separate session???

    - by Marcus King
    We are writing a search application that saves the search criteria to session state and executes the search inside of an ASP.NET UpdatePanel. Sometimes, when we execute multiple searches in succession, the 2nd or 3rd search will return results from the first set of search criteria. Example: in our first search we do a lookup on "John Smith" -- John Smith results are displayed. In the second search we do a lookup on "Bob Jones" -- John Smith results are displayed.

    We save all of the search criteria in session state, as I said, and read it from session state inside of the Ajax request to build the DB query. When we put breakpoints in VS, everything behaves as normal, but without them we get the original search criteria and results. My guess is that because they are saved in session, the Ajax request somehow gets its own session, saves the criteria to that, and then retrieves the criteria from that session every time, while the non-async code is able to see when the criteria are modified and saves the changes to state accordingly; because they are from two different sessions, there is a disparity between what is saved and what is read.

    EDIT: To elaborate more, there was a suggestion of appending the search criteria to the query string, which normally is good practice, and I agree that's how it should be done, but given our requirements I don't see it as viable. The requirement is that the user fills out the input controls and hits search, and there is no page reload; the only thing they see is a progress indicator on the page, and they still have the ability to navigate and use other features on the current page. If I were to add the criteria to the query string, I would have to make another request, causing the whole page to load, which depending on the search criteria can take a really long time. This is why we are using an Ajax call to perform the search and why we aren't causing another full page request... I hope this clarifies the situation.


  • Apache server access log shows another domain's requests getting a 301 redirect

    - by user3162764
    I found that my Apache 2 access log (Debian) includes some entries not related to my domain, which got a '301' redirect:

        ,-,-,[19/Aug/2014:10:09:54 +0800],"GET /admin.php HTTP/1.0",301,493,,,
        ,-,-,[19/Aug/2014:10:09:55 +0800],"GET /administrator/index.php HTTP/1.0",301,521,,,
        ,-,-,[19/Aug/2014:10:09:55 +0800],"GET /wp-login.php HTTP/1.0",301,499,,,

    Obviously those requests are not for my domain, but according to this source, Debian denies all proxy requests by default: https://wiki.apache.org/httpd/ClientDeniedByServerConfiguration

    Besides, I cannot find mod_proxy under /etc/apache2/mods-enabled. I am anxious about two things: 1. Is the server acting as an open proxy? 2. Why is HTTP 301 returned? Thanks.
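    A hedged reading: these look like routine bot scans sent to the server's IP with an arbitrary (or missing) Host header; with mod_proxy absent, the server is not proxying -- it is answering them itself as the default virtual host, and the 301 is presumably whatever redirect rule that vhost carries. One common way to verify and silence this (sketch, Apache 2.2 syntax) is a catch-all default vhost that refuses unknown hosts:

        # first-listed vhost is the default; unknown Host headers land here
        <VirtualHost *:80>
            ServerName default.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>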


  • On boot, firmware request for Intel GMA 3100 chipset times out

    - by Yannick M.
    I am currently in the process of installing a Gentoo Linux box with a vanilla 2.6.29-r5 kernel plus the gentoo-xen-kernel patches, in order to run the Xen hypervisor. After rebooting with the new kernel, the boot process seemed to hang on:

        [    0.863005] platform microcode: firmware: requesting intel-ucode/06-0f-07
        [   60.863442] Microcode Update Driver: v2.00-xen <[email protected]>, Peter Oruba

    Apparently the firmware request times out after 60 seconds (/sys/class/firmware/timeout) and booting just continues. I have done some research and found that on RHEL 4 this problem was related to the mount point of /sys changing, so that the firmware.agent hotplug script couldn't parse the line correctly. However, I am having some difficulty tracking down how to fix this on Gentoo. Any and all ideas are greatly appreciated! Thanks
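    In the meantime, a hedged workaround using the same sysfs knob mentioned above: shortening the 60-second window so a missing microcode file doesn't stall the boot (the microcode driver will simply carry on without it):

        # e.g. early in a local boot script
        echo 10 > /sys/class/firmware/timeout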


  • onComplete for Ajax.Request sometimes does not hide the load animation in Prototype

    - by TenJack
    I have an Ajax.Request in which I use onLoading and onComplete to show and hide a loading animation GIF. The problem is that every 10 clicks or so, the animation fails to hide and just stays there animating, even though the Ajax request has returned successfully. I have a number of div elements, each of which has its own loading animation and an onclick with the Ajax.Request, which looks like this:

        <div id="word_block_<%= word_obj.word %>" class="word_block" >
          <%= image_tag("ajax-loader_word_block.gif",
                        :id => "load_animation_#{word_obj.word}",
                        :style => 'display:none') %>
          <a href="#" onclick="new Ajax.Request('/test/ajax_load', {asynchronous:true, evalScripts:true,
              onLoading:function() { Element.show('load_animation_<%= word_obj.word %>')},
              onComplete:function(){ Element.hide('load_animation_<%= word_obj.word %>')}});
              return false;">Click Here</a>
        </div>

    Does it look like anything could be wrong with this? Maybe I should try removing the inline onclick and adding an onclick programmatically with JavaScript? I really have no idea why this keeps happening. I am using the Prototype library with Ruby on Rails.
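    One hedged possibility: if anything in the response handling throws (with evalScripts:true, the returned JavaScript is executed, for example), Prototype routes the error to onException, and the spinner may be left showing. A global safety net using Prototype's Ajax.Responders hook (a sketch; the selector is an assumption based on the markup above):

        // responders fire for every Ajax.Request on the page
        Ajax.Responders.register({
          onException: function(request, e) {
            // hide any spinner left showing when a request blows up
            $$('.word_block img').invoke('hide');
          }
        });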


  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container, and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's child-container feature to create a child container per request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there), and are now trying to decide on a unit testing strategy.

    Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increased when we decided to use service location from our domain layer to "lazily" resolve dependencies from the container, so now we have even more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well.

    So we're wondering what the bright folks out there in the SO community do to tackle this predicament. How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and have the service-location bits get to those precise containers? I hope the question is clear, thanks!
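    For what it's worth, one pattern that can ease this (a minimal sketch; the type and member names here are invented, not Unity APIs): put the "where do the containers live" decision behind an abstraction, so Global.asax installs an HttpApplication/HttpContext-backed provider while test setup installs a plain object the test controls.

        // hypothetical abstraction, not part of Unity itself
        public interface IContainerProvider
        {
            IUnityContainer AppContainer { get; }
            IUnityContainer RequestContainer { get; }
        }

        public static class ContainerLocator
        {
            // set once in Global.asax for the web app,
            // and per-test in an MSTest [TestInitialize]
            public static IContainerProvider Current { get; set; }
        }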


  • Application Request Routing (ARR) - Single Server Reverse Proxy(ish) Setup

    - by Justin
    I have one web server that has two .NET apps running on it. These are set up on the server as app1.mydomain.com and app2.mydomain.com. I would like to be able to take any request going to app1.mydomain.com/subfolder and rewrite it to app2.mydomain.com/subfolder using ARR. I am having difficulty getting this to work on a single server, and all the ARR examples on the net seem to imply that I require another server dedicated to ARR sitting in front of the two web servers. Is what I am attempting to do possible on one web server, and if so, how?! Thanks all.
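    A hedged sketch of the usual single-server recipe: enable "proxy" in ARR's server-level settings, then let URL Rewrite in app1's web.config forward the subfolder with a Rewrite action to an absolute URL (rule name and subfolder are placeholders):

        <rewrite>
          <rules>
            <rule name="proxy-subfolder" stopProcessing="true">
              <match url="^subfolder/(.*)" />
              <action type="Rewrite" url="http://app2.mydomain.com/subfolder/{R:1}" />
            </rule>
          </rules>
        </rewrite>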


  • Why won't this HTTP POST request work?

    - by dubbeat
    Hi, I'm wondering why this HTTP POST request won't work for my iPhone app. I know for a fact that the URL is correct and that the variables I'm sending are correct, but for some reason the request is not being received by the aspx page.

        NSMutableString *httpBodyString;
        NSURL *url;
        NSMutableString *urlString;

        httpBodyString = [NSMutableString stringWithFormat:@"%@%@%@%@%@", @"?g=",
                          promoValueObject.country, @"&c=", promoData.artistid, @"&d=iphone"];
        urlString = [[NSMutableString alloc] initWithString:@"http://www.mysite.com/stats_promo.aspx"];
        url = [[NSURL alloc] initWithString:urlString];
        [urlString release];

        NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url];
        [url release];
        [urlRequest setHTTPMethod:@"POST"];
        [urlRequest setHTTPBody:[httpBodyString dataUsingEncoding:NSISOLatin1StringEncoding]];
        //[httpBodyString release];

        NSURLConnection *connectionResponse = [[NSURLConnection alloc] initWithRequest:urlRequest delegate:self];
        if (!connectionResponse) {
            NSLog(@"Failed to submit request");
        } else {
            NSLog(@"--------- Request submitted ---------");
            NSLog(@"connection: %@ method: %@, encoded body: %@, body: %@",
                  connectionResponse, [urlRequest HTTPMethod], [urlRequest HTTPBody], httpBodyString);
        }
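    A hedged sketch of two things worth ruling out: the leading "?" is query-string syntax and shouldn't appear in a POST body, and if stats_promo.aspx reads Request.QueryString rather than Request.Form, POSTed fields will never show up there at all. Declaring the form content type also helps server stacks parse the body:

        // form-encode the body without the leading "?" and say so in the headers
        NSString *body = [NSString stringWithFormat:@"g=%@&c=%@&d=iphone",
                          promoValueObject.country, promoData.artistid];
        [urlRequest setValue:@"application/x-www-form-urlencoded"
          forHTTPHeaderField:@"Content-Type"];
        [urlRequest setHTTPBody:[body dataUsingEncoding:NSUTF8StringEncoding]];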


  • Replies to a request coming over a relay go to the relay's internal IP, not to the original request's source IP

    - by seaquest
    dhcpd running on Linux gets a DHCP request via dhcrelay, which is running on another remote machine:

        Oct  6 10:09:46 2012 dhcpd: DHCPDISCOVER from 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81

        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
        10:35:01.112500 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.81.67 > 192.168.0.1.67: BOOTP/DHCP, Request from 00:1e:68:06:eb:37, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
          Gateway IP: 172.16.17.81
          Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    It matches a subnet and sends a reply. However, the reply does not go to the relay's external IP (192.168.0.81). Instead, it goes to the internal interface IP of the machine running dhcrelay, and I think because of this the remote machine running dhcrelay, or dhcrelay itself, is discarding the packet:

        Oct  6 10:09:46 2012 dhcpd: DHCPOFFER on 172.16.17.11 to 00:1e:68:06:eb:37 (oguz-U300) via 172.16.17.81

        10:35:02.050108 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.0.1.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0xe378fc7e, flags: [none] (0x0000)
          Your IP: 172.16.17.11
          Gateway IP: 172.16.17.81
          Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    Is this normal behaviour?

    Machine running dhcrelay:

        eth1(ext) Link encap:Ethernet  HWaddr 00:90:0B:21:43:F4
                  inet addr:192.168.0.81  Bcast:192.168.0.255  Mask:255.255.255.0
        eth2(int) Link encap:Ethernet  HWaddr 00:90:0B:21:43:F5
                  inet addr:172.16.17.81  Bcast:172.16.17.255  Mask:255.255.255.0

        3582 ?  Ss  0:00  /usr/sbin/dhcrelay -i eth2 192.168.0.1

    Machine running dhcpd:

        eth1 Link encap:Ethernet  HWaddr 00:90:0B:23:97:D1
             inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0

        option domain-name "test.com";
        option subnet-mask 255.255.255.0;
        authoritative;
        ignore client-updates;
        ddns-update-style ad-hoc;
        default-lease-time 86400;
        max-lease-time 86400;

        subnet 192.168.0.0 netmask 255.255.255.0 {
            range 192.168.0.135 192.168.0.169;
            option broadcast-address 192.168.0.255;
            option domain-name-servers 192.168.0.1;
            option domain-name "test.com";
            option routers 192.168.0.1;
        }

        subnet 172.16.17.0 netmask 255.255.255.0 {
            local-address 192.168.0.1;
            server-identifier 192.168.0.1;
            range 172.16.17.10 172.16.17.11;
            option broadcast-address 172.16.17.255;
            option routers 172.16.17.81;
        }

    (I added local-address and server-identifier, but this does not help.)

    Regards,
    -- Oguz YILMAZ

    UPDATE: The first problem is found. I had configured dhcrelay to listen only on the internal interface; it seems (of course) it should also listen on the external interface for replies. It appears it is not important where the packet is destined: dhcrelay will forward it to the internal net. HOWEVER, I have deleted the route on the dhcpd server for reaching the 172.16.17.x subnet, and it again tries to send the reply to 172.16.17.81. Because it does not know the route, it sends it out the default gateway, towards the internet:

        eth0: IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto: UDP (17), length: 328) 192.168.1.2.67 > 172.16.17.81.67: BOOTP/DHCP, Reply, length: 300, hops:1, xid:0x32830125, secs:3, flags: [none] (0x0000)
        eth0:   Your IP: 172.16.17.11
        eth0:   Gateway IP: 172.16.17.81
        eth0:   Client Ethernet Address: 00:1e:68:06:eb:37 [|bootp]

    How can I force dhcpd to send replies to the requesting IP? It is not very meaningful to have to add routes for the subnets we distribute IPs for:

        Internet - dhcpd - 192.168.0.1 - SOMENET - 192.168.0.81 - dhcrelay - 172.16.17.0/24

    192.168.0.1 has no route for 172.16.17.0 and no interface directly attached to that net.
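    A hedged note on that final question: this behaviour is baked into DHCP. Per RFC 2131, the server unicasts its reply to the giaddr field, and dhcrelay fills giaddr with the address of the interface the request arrived on (172.16.17.81 here), so the destination cannot be steered from the server side. The practical fix is exactly the route that was deleted, i.e. on 192.168.0.1:

        # give dhcpd a path back to the relay's giaddr
        ip route add 172.16.17.0/24 via 192.168.0.81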


  • LDAP Bind request failing

    - by Madhur Ahuja
    I have a Windows Server 2008 R2 Active Directory domain controller with the domain madhurmoss.com. I have a Linux box which is trying to connect to LDAP (389) on the above box, and the connection is failing. Upon inspection in Wireshark, I see a bind request with the following name:

        sAMAccountName=Administrator,DC=madhurmoss,DC=com

    and a result of invalid credentials:

        80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db0

    I want it to connect as the Administrator account, which lies at CN=Administrator,CN=Users,DC=madhurmoss,DC=com. The supplied credentials are correct. I believe the bind name sAMAccountName=Administrator,DC=madhurmoss,DC=com is wrong. Can anyone guide me as to what could be wrong?
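    A hedged note: for simple binds, AD accepts either a full DN or a userPrincipalName, but not an sAMAccountName-style RDN like the one captured above (data 52e is AD's generic "invalid credentials", which a malformed bind DN also produces). Something along these lines, using standard ldapsearch flags, should succeed from the Linux box:

        # bind with the full DN ...
        ldapsearch -x -H ldap://madhurmoss.com -W \
            -D "CN=Administrator,CN=Users,DC=madhurmoss,DC=com" \
            -b "DC=madhurmoss,DC=com" "(sAMAccountName=Administrator)"

        # ... or with the UPN form
        ldapsearch -x -H ldap://madhurmoss.com -W \
            -D "Administrator@madhurmoss.com" \
            -b "DC=madhurmoss,DC=com" "(sAMAccountName=Administrator)"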


  • "Request not supported" in IPCONFIG (WinXP SP3)

    - by pablog
    On a customer's PC (Windows XP SP3), the network suddenly went down: the network adapter appears with an error mark. I replaced the network card, but the new one does the same thing. When I enter IPCONFIG, XP shows this error (in both standard and safe mode):

        Internal error occurred
        Request not supported
        Unable to query host name

    If I start the system with a boot CD, the PC runs fine, so the problem seems to be in the XP installation. I tried:

        - uninstalling and reinstalling the network card in the Device Manager
        - disabling and re-enabling the card
        - netsh int ip reset
        - netsh winsock reset catalog
        - a couple of "reset" programs (WinsockxpFix.exe, etc.)

    with no luck. Is there any way to fix it without reinstalling XP? TIA, Pablo


  • SharePoint site access request denied permissions

    - by Nat
    Here is a good catch-22. When a user without any permissions on a site requests access from the _layouts/AccessDenied.aspx page, it takes them to the Request Access page (_layouts/ecm_reqacc.aspx). When the user fills out the form with a simple message, it is supposed to send an email to the address specified in the site collection and take them to _layouts/confirmation.aspx. Unfortunately, the users are getting another access denied error instead.

    I have tried going to _layouts/accessdenied.aspx on a site I am the administrator of, and the email is sent fine, so it is not a problem with sending the emails. What should I check and/or grant access to, in order to give authenticated but unpermissioned users the ability to send access requests?


  • Modify request querystring parameters to build a new link without resorting to string manipulation

    - by Andrew M
    Hi all,

    I want to dynamically populate a link with the URI of the current request, but set one specific query string parameter. All other querystring parameters (if there are any) should be left untouched, and I don't know in advance what they might be.

    For example, imagine I want to build a link back to the current page, but with the querystring parameter "valueOfInterest" always set to "wibble" (I'm doing this from the code-behind of an aspx page, .NET 3.5 in C#, FWIW). A request for either of these two:

        /somepage.aspx
        /somepage.aspx?valueOfInterest=sausages

    would become:

        /somepage.aspx?valueOfInterest=wibble

    And most importantly (perhaps), a request for:

        /somepage.aspx?boring=something
        /somepage.aspx?boring=something&valueOfInterest=sausages

    would preserve the boring params to become:

        /somepage.aspx?boring=something&valueOfInterest=wibble

    Caveats: I'd like to avoid string manipulation if there's something more elegant in ASP.NET that is more robust. However, if there isn't anything more elegant, so be it.

    I've done (a little) homework: I found a blog post which suggested copying the request into a local HttpRequest object, but that still has a read-only collection for the querystring params. I've also had a look at using a Uri object, but that doesn't seem to have a modifiable querystring collection either.
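    A sketch of the non-string-manipulation route: HttpUtility.ParseQueryString returns a writable NameValueCollection (unlike Request.QueryString), and its ToString() re-encodes the collection back into a query string:

        // in the code-behind; ParseQueryString gives a writable copy
        var qs = HttpUtility.ParseQueryString(Request.Url.Query);
        qs["valueOfInterest"] = "wibble";
        string link = Request.Url.AbsolutePath + "?" + qs.ToString();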


  • IIS6 cannot start additional processes when the page request comes from anywhere other than localhost

    - by awe
    I have a web application DLL that runs under IIS 6. It starts a number of processes to be able to handle more than one request at a time. This works perfectly in a number of installations. In this particular installation, it runs perfectly when initiated by a call in IE on the server itself, using http://localhost/apppath. The problem arises when the call comes from another location, i.e. another computer on the same network initiating the call through http://servername/apppath. In this case, the initial DLL running under IIS executes (proved by logging), but it fails to start the additional processes. If the additional processes have already been started by a call from localhost, it also works when called from another machine (in that case, it just attaches to the existing processes).

    Read the article
