Search Results

Search found 92198 results on 3688 pages for 'http error'.


  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages themselves load just fine, but in IE 7 and 8 a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. the CSS and JS files). When I manually go to the missing files from the address bar, they come back from the local cache as empty. If I hit F5 on these source files, they magically come down properly; after loading a few files this way and refreshing the site, the cache seems to hold. The problem has only been reproduced (again, sporadically) in IE 7 and 8 running on XP; Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images, and we have also set content in the App_Themes and Scripts directories to expire immediately. One initial thought was that a SWF loading an FLV on page load was to blame. None of these changes has remedied the problem. We had no problems on our staging server, which runs Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem (http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images), but we do have the Static Content module installed.
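
    A quick way to see what the server actually returns for one of the affected static files, outside any browser cache, is to request it directly and dump the relevant headers. This is only a diagnostic sketch; the URL below is hypothetical and would need to point at one of the real CSS or JS files on the production server.

        import urllib.request

        # Hypothetical URL; substitute one of the affected CSS/JS files.
        url = "http://www.example.com/App_Themes/Default/site.css"

        with urllib.request.urlopen(url) as resp:
            body = resp.read()
            print(resp.status, resp.reason)
            # Headers most relevant to the empty-from-cache symptom.
            for name in ("Content-Length", "Content-Type", "Cache-Control",
                         "Expires", "ETag", "Last-Modified"):
                print(f"{name}: {resp.headers.get(name)}")
            print("bytes received:", len(body))

    If Content-Length comes back as 0 here as well, the problem is on the server side rather than in IE's cache, which would line up with the Server Fault question linked above.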

    Read the article

  • Custom Errors "Could not load type" error

    - by anil
    I am creating an ASP.NET MVC application and want to handle errors globally. I have set the web.config customErrors value to mode="On" and defaultRedirect="Error.aspx". I have an Error.aspx in two places: one in the Views/Shared folder and one at the project root level. But each time an unhandled error occurs, the error page at the root level gets called. How can I make it redirect to the one in the Shared folder? I want to do this because, currently, I am getting the error "Could not load type System.Web.Mvc.ViewPage" whenever any error occurs. Regards, Anil

    Read the article

  • the HTTP request is unauthorized with client authentication scheme 'Anonymous'

    - by user1246429
    I use the First Data web service API, calling it from a C# client with WCF. But the call fails with this error:

        System.ServiceModel.Security.MessageSecurityException: The HTTP request is unauthorized with client
        authentication scheme 'Anonymous'. The authentication header received from the server was ''.
        --- System.Net.WebException: The remote server returned an error: (401) Unauthorized.
           at System.Net.HttpWebRequest.GetResponse()
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
           --- End of inner exception stack trace ---
        Server stack trace:
           at System.ServiceModel.Channels.HttpChannelUtilities.ValidateAuthentication(HttpWebRequest request, HttpWebResponse response, WebException responseException, HttpChannelFactory factory)
           at System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
           at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
           at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit(SendAndCommitRequest request)
           at com.firstdata.globalgatewaye4.api.ServiceSoapClient.com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit(SendAndCommitRequest request)
           at com.firstdata.globalgatewaye4.api.ServiceSoapClient.SendAndCommit(Transaction SendAndCommitSource)

    The relevant parts of my web.config are:

        <behaviors>
          <endpointBehaviors>
            <behavior name="FDGGBehavior">
              <clientCredentials>
                <clientCertificate findValue="WS1909642825._.1" storeLocation="LocalMachine"
                                   x509FindType="FindBySubjectName" storeName="TrustedPeople" />
                <serviceCertificate>
                  <authentication certificateValidationMode="PeerTrust" />
                </serviceCertificate>
              </clientCredentials>
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <binding name="ServiceSoap" closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                 bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="TransportWithMessageCredential">
            <transport clientCredentialType="Certificate" proxyCredentialType="Ntlm" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
        <endpoint address="https://api.globalgatewaye4.firstdata.com/transaction/v11"
                  binding="basicHttpBinding" bindingConfiguration="ServiceSoap"
                  contract="com.firstdata.globalgatewaye4.api.ServiceSoap"
                  name="ServiceSoap" behaviorConfiguration="FDGGBehavior" />

    How can I resolve this?
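
    Since the exception says the server sent back an empty authentication header, one thing that can help narrow this down is looking at what the endpoint advertises independently of WCF. A rough diagnostic sketch in Python; it only makes a plain GET to the endpoint URL from the config above, so it does not exercise the certificate or message credentials at all, it just shows the transport-level status and any WWW-Authenticate challenge.

        import urllib.request
        import urllib.error

        # Endpoint taken from the config above.
        url = "https://api.globalgatewaye4.firstdata.com/transaction/v11"

        try:
            with urllib.request.urlopen(url) as resp:
                print(resp.status, resp.reason)
                print("WWW-Authenticate:", resp.headers.get("WWW-Authenticate"))
        except urllib.error.HTTPError as e:
            # A 401 with an empty WWW-Authenticate header matches the WCF exception text.
            print(e.code, e.reason)
            print("WWW-Authenticate:", e.headers.get("WWW-Authenticate"))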

    Read the article

  • Why is there an invalid context error?

    - by Tattat
    Here is the code I use to draw:

        - (void) drawSomething {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetRGBStrokeColor(context, 1, 0, 0, 1);
            CGContextSetLineWidth(context, 6.0);
            CGContextMoveToPoint(context, 100.0f, 100.0f);
            CGContextAddLineToPoint(context, 200.0f, 200.0f);
            CGContextStrokePath(context);
            NSLog(@"draw");
        }

    But I get errors like this:

        [Session started at 2010-04-03 17:51:07 +0800.]
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetRGBStrokeColor: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetLineWidth: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextMoveToPoint: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextAddLineToPoint: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextDrawPath: invalid context

    Why does it tell me that the context is invalid?

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    I just transferred my site from shared hosting to Linode's VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^ I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and Firefox (@font-face failed cross-origin request. Resource access is restricted.) I've googled for hours and tried adding all of the following to my config files:

        location ~* ^.+\.(eot|otf|ttf|woff)$ {
            add_header Access-Control-Allow-Origin *;
        }

        location ^/fonts/ {
            add_header Access-Control-Allow-Origin *;
        }

        location / {
            if ($request_filename ~* ^.*?/([^/]*?)$) {
                set $filename $1;
            }
            if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$) {
                add_header 'Access-Control-Allow-Origin' '*';
            }
        }

    with all of the following combinations:

        add_header Access-Control-Allow-Origin *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header Access-Control-Allow-Origin '*';
        add_header 'Access-Control-Allow-Origin' '*';

    Of course, I've restarted nginx after every change. The headers just don't get sent at all, no matter what I do. I have the default Ubuntu apt-get build of nginx, which should include the headers module by default. How do I check what modules are installed, or what else could be causing this error?
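
    Independent of which location block finally takes effect, it is worth checking from outside the browser whether the header is present on a font response at all. A minimal check, assuming a hypothetical font URL on the CDN host:

        import urllib.request

        # Hypothetical font URL on the CDN; substitute a real .woff/.ttf path.
        url = "http://cdn.domain.com/fonts/example.woff"

        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.reason)
            print("Access-Control-Allow-Origin:",
                  resp.headers.get("Access-Control-Allow-Origin"))

    If the header is missing here too, the problem is in the nginx/CDN configuration rather than anything browser-specific; and if the CDN is pulling from the origin, the origin also has to emit the header so the CDN can pass it through.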

    Read the article

  • ASP.NET POST and default page

    - by diamandiev
    Scenario: I have a regular .aspx page with a form. When someone clicks a button, the form is submitted via POST as normal. HOWEVER, the page where the form resides is the default page (Default.aspx). So when someone goes to the site at http://site.com/ and submits the form, he gets redirected to http://site.com/default.aspx. I tried setting the action of the form to http://site.com/, but ASP.NET does not allow using root URLs with a POST. So is there any workaround? Ajax is not an option.

    Read the article

  • Why does IIS not support chunked transfer encoding?

    - by Graeme Perrow
    I am making an HTTP connection to an IIS web server and sending a POST request with the data encoded using Transfer-Encoding: chunked. When I do this, IIS simply closes the connection, with no error message or status code. According to the HTTP 1.1 spec, "All HTTP/1.1 applications MUST be able to receive and decode the 'chunked' transfer-coding", so I don't understand why it is (a) not handling that encoding and (b) not sending back a status code. If I change the request to send a Content-Length rather than Transfer-Encoding, the request succeeds, but that's not always possible. When I try the same thing against Apache, I get a "411 Length Required" status and a message saying "chunked Transfer-Encoding forbidden". Why do these servers not support this encoding?
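
    For anyone who wants to reproduce the request side of this, a chunked POST is easy to generate from Python: the third-party requests library switches to Transfer-Encoding: chunked whenever the body is a generator, because the total length is not known up front. A minimal sketch against a hypothetical endpoint:

        import requests

        def chunks():
            # Yielding the body in pieces means no Content-Length can be set,
            # so requests sends Transfer-Encoding: chunked instead.
            yield b"first chunk of data\n"
            yield b"second chunk of data\n"

        # Hypothetical endpoint; point this at the IIS or Apache server under test.
        resp = requests.post("http://server.example.com/upload", data=chunks())
        print(resp.status_code, resp.reason)
        print(resp.headers)

    Running the same request against both servers makes it easy to compare IIS's dropped connection with Apache's explicit 411 response.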

    Read the article

  • How to prevent HTTP 304 in Django test server

    - by Augusto Men
    I have a couple of projects in Django and alternate between one and another every now and then. All of them have a /media/ path, which is served by django.views.static.serve, and they all have a /media/css/base.css file. The problem is, whenever I run one project, the requests to base.css return an HTTP 304 (Not Modified), probably because the timestamp hasn't changed. But when I run the other project, the same 304 is returned, making the browser use the file cached by the previous project (and therefore using the wrong stylesheet). Just for the record, here are the middleware classes:

        MIDDLEWARE_CLASSES = (
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.transaction.TransactionMiddleware',
        )

    I always use the default address http://localhost:8000. Is there another solution (other than using different ports - 8001, 8002, etc.)?
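
    One approach that should work during development, sketched here as an assumption rather than a tested recipe, is to wrap the static view so the browser is told not to cache media at all; without a cached copy there is no If-Modified-Since request and therefore no 304 pointing at the other project's file.

        # urls.py (development only). URL-conf syntax varies across Django
        # versions; this sketch uses the old-style patterns() of that era.
        from django.conf import settings
        from django.conf.urls.defaults import patterns
        from django.views.static import serve
        from django.views.decorators.cache import never_cache

        # never_cache adds Cache-Control headers so the browser does not keep
        # /media/ responses around (and stops sending If-Modified-Since),
        # which avoids the cross-project 304 described above.
        nocache_serve = never_cache(serve)

        urlpatterns = patterns('',
            (r'^media/(?P<path>.*)$', nocache_serve,
             {'document_root': settings.MEDIA_ROOT}),
        )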

    Read the article

  • How do you redirect https to http

    - by mauriciopastrana
    That is, the opposite of what (seemingly) everyone teaches. I have a server on https for which I paid for an SSL cert, and a mirror for which I haven't, which I keep around just for emergencies, so it doesn't merit getting a cert. On my clients' desktops I have SOME shortcuts which point to http://production_server and some to https://production_server (both work). However, I know that if my production server goes down, DNS forwarding kicks in, and those clients whose shortcuts use https will be staring at https://mirror_server (which doesn't work) and a big fat IE7 red screen of uneasiness for my company. Unfortunately, I can't just switch this around at the client level. These users are very computer illiterate and are very likely to freak out on seeing https "insecurity" errors (especially the way FFX3 and IE7 handle them nowadays: FULL STOP, kind of thankfully, but not helping me here LOL). It's very easy to find Apache solutions for http-to-https redirection, but for the life of me I can't find how to do the opposite. Ideas? Cheers, /mp

    Read the article

  • Web SMTP server (foo.com) will not send mail to Exchange server which is also (foo.com)

    - by Atom
    I think I understand this problem fully, but I do not know how to approach it or where to go in terms of troubleshooting. I've got my one domain http://foo.com that runs a Zen Cart installation which needs to be able to send emails to users (order confirmation, password reset). This works fine for sending to any other domain BUT foo.com. I'm running a locally hosted Exchange server that is also foo.com, and we can send and receive email on it just fine. If I run tail -f /usr/local/psa/var/log/maillog I receive this error:

        Apr 1 10:08:51 foo qmail-local-handlers[25824]: Handlers Filter before-local for qmail started ...
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: from=
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: [email protected]
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: cannot reinject message to '[email protected]'
        Apr 1 10:08:51 foo qmail: 1270141731.583139 delivery 32410: failure: This_address_no_longer_accepts_mail./
        Apr 1 10:08:51 foo qmail: 1270141731.584098 status: local 0/10 remote 0/20

    I understand that the web server's SMTP service doesn't have any mailboxes for foo.com other than the one used to authenticate outgoing mail, so of course it says 'this address no longer accepts mail'. My question is, how can I get the web server's SMTP service to hand off mail addressed to my Exchange server, or how do I handle the mail that needs to be sent to our Exchange server? Is this something to do with MX records? Thanks in advance, A
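
    As a troubleshooting side note, it is easy to confirm from the web server itself whether the Exchange box accepts mail for foo.com when contacted directly, which separates the routing question from anything Zen Cart or qmail is doing. A small sketch using Python's standard smtplib; the Exchange hostname and both mailbox addresses below are placeholders.

        import smtplib

        # Placeholders: the internal hostname/IP of the Exchange server and
        # a real mailbox hosted on it.
        exchange_host = "exchange.foo.com"
        sender = "webshop@foo.com"
        recipient = "someone@foo.com"

        msg = "Subject: delivery test\r\n\r\nTest message sent directly to Exchange.\r\n"

        with smtplib.SMTP(exchange_host, 25, timeout=10) as smtp:
            smtp.set_debuglevel(1)   # print the SMTP conversation
            smtp.sendmail(sender, recipient, msg)

    If this delivers, the Exchange side is fine and the question really is how the web server decides that foo.com is a local domain instead of relaying it out.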

    Read the article

  • SQL Server Reporting Services proxy timeout (ASP.NET)

    - by Philip
    Morning. We are using SSRS (2005) and have an ASP.NET frontend using the SSRS WebControl. I've boiled the problem down to this: the time it takes for one particular report to be generated is greater than the timeout on the proxy server. It looks like the SSRS web control works by performing an HTTP request for the report, and the problem is that this request can time out before the report has been generated. Looking at the HTTP traffic, the response is a 504 (Gateway Timeout). Is there a way to increase the timeout, or to change the SSRS WebControl to use a more robust polling mechanism (one which isn't dependent on the timeout of a single HTTP request)? I could be wrong, but I don't think the ServerReport.Timeout property would resolve the issue we are seeing. Any thoughts? Philip

    Read the article

  • winsock compile error

    - by ioil
    The following errors come from a file with just windows.h and winsock2.h included.

        C:\Users\ioil\Desktop\dm\bin>dmc sockit.c
        typedef struct fd_set {
        ^
        C:\Users\ioil\Desktop\dm\bin\..\include\win32\WINSOCK2.H(85) : Error: 'fd_set' is already defined
        } fd_set;
        ^
        C:\Users\ioil\Desktop\dm\bin\..\include\win32\WINSOCK2.H(88) : Error: identifier or '( declarator )' expected
        struct timeval {
        ^
        C:\Users\ioil\Desktop\dm\bin\..\include\win32\WINSOCK2.H(129) : Error: 'timeval' is already defined
        };
        ^
        C:\Users\ioil\Desktop\dm\bin\..\include\win32\WINSOCK2.H(132) : Error: identifier or '( declarator )' expected
        struct hostent {
        ^
        C:\Users\ioil\Desktop\dm\bin\..\include\win32\WINSOCK2.H(185) : Error: 'hostent' is already defined
        Fatal error: too many errors
        --- errorlevel 1

        C:\Users\ioil\Desktop\dm\bin>

    What has already been tried: placing the winsock.dll file in the same directory as the compiler and the program to be compiled, placing it in the system32 directory, and registering it with the regsvr32 command. I don't really know where to go from here, and would appreciate any advice.

    Read the article

  • Can't access web service when connected to the network :: HTTP 407

    - by Ian
    Hi all. I have a console application that communicates with a web service. Both of them are on the same machine. When I access the web service with the LAN disabled, it connects without a problem. But if the LAN is enabled and connected to our office network, I receive this error: "HTTP 407 Proxy Authentication Required - The ISA Server requires authorization to fulfill the request. Access to the Web Proxy service is denied." We've been hunting the source of the problem for three days now and have tried everything we can think of. Any ideas what's causing it? Additional notes:

    - The machine is in a workgroup setup but with a DNS suffix (computer.local). When accessing the web service, we type the address as "http://machine.computer.local/service.asmx".
    - I talked to the IT guys and they said that we don't have an ISA Server installed.
    - There is no proxy set in IE.
    - The machine is in mint condition.
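
    Since the 407 is a proxy response, a useful first check is whether the same request succeeds when the client is forced to bypass any configured proxy. A quick sketch in Python (the service URL is the one from the question):

        import urllib.request

        url = "http://machine.computer.local/service.asmx"

        # An empty ProxyHandler disables any proxy picked up from the
        # environment or system settings for this opener.
        opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))

        with opener.open(url) as resp:
            print(resp.status, resp.reason)

    If this works while the default opener (or the .NET client) fails, then something on the machine - WPAD auto-detection, a proxy.pac, or a machine-level setting such as the one shown by `netsh winhttp show proxy` - is routing local traffic through a proxy that the console application then has to authenticate against.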

    Read the article

  • Why is wget so much faster than Firefox at some downloads?

    - by Earlz
    Recently I needed to do an update of Xilinx WebPack, and mind you, this is one hefty piece of software: it weighs in at 6 GB, which definitely isn't "quick" on any internet connection I've ever had available to me. So when I went to download it (using Firefox, of course), I was very... unsettled by the fact that the download was only going at 110 kByte/s. My internet connection is capable of about 2200 kByte/s download, so what gives!? My workaround in the past for this issue has been to take the link to my Linode Linux server and download it there with wget, where the download will zip along at 14 MByte/s, and then either copy it to my website directory and download it that way over HTTP, or use sftp. Both ways work about as well and will sufficiently max out my connection. However, I recently figured out the missing variable: I tried doing the download locally with wget and was able to max out my connection! TL;DR: Why is wget so much faster than Firefox at downloading this file? I hardly ever see such a difference in download speeds except with this one file.

    Read the article

  • post commit hook fail

    - by jarad mayers
    I have a master/slave setup on Win2k8R using SVN 1.6.9 and TortoiseSVN 1.6.7. Access is through Apache over http. Everything works, but when I commit I get the following message:

        Error: post-commit hook failed (exit code 1) with output:
        Error: The process cannot access the file because it is being used by another process.

    This happens when using multiple TortoiseSVN dialogs to commit files in rapid succession. If I use one TortoiseSVN dialog and wait until the commit reply comes back, I won't see the problem; in other words, committing one at a time causes no issue. The post-commit script output is logged. Even though I get the above error, when I check the master and slave repositories the files have been replicated fine, with no issue. I am wondering how this issue can be solved.

    Read the article

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that attempts to read a file hosted on a share and import its contents into a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error:

        Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit
        Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
        Started: 10:14:17 AM
        Error: 2010-05-03 10:14:17.75  Code: 0xC001401E  Source: DataImport Connection manager "Data File Local"
        Description: The file name "\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid.
        End Error
        Error: 2010-05-03 10:14:17.75  Code: 0xC001401D  Source: DataAnimalImport
        Description: Connection "Data File Local" failed validation.
        End Error
        DTExec: The package execution returned DTSER_FAILURE (1).
        Started: 10:14:17 AM  Finished: 10:14:17 AM  Elapsed: 0.594 seconds.
        The package execution failed. The step failed.

    This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:

    - Running as the SQL Agent account (DOMAIN\SqlAgent) - yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
    - Setting up a proxy account with a different account's credentials (DOMAIN\Account) - yields the same error. As above, "Full Control" permissions were given over the share to that account.
    - Giving "Everyone" full control permissions over the share (temporarily!) - yielded the same error.
    - Manually copying the file to a local path and testing with the SQL Agent account - worked properly.
    - Adding an ActiveX script task that would first copy the remotely hosted file to a local path, and then having the DTS package reference the local file - gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
    - Setting up a proxy account using my own personal account's credentials - worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, and it is a bad practice to set things up this way in general.

    Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says that giving the executing account permissions on the share should work, and that is not the case here (unless I'm missing something obscure when setting up permissions on the share).

    Read the article

  • Difficulty screen scraping http://www.momondo.com using nokogiri

    - by Khai Kiong
    I am having some difficulty extracting the total price (CSS selector '.total') from the flight results at http://www.momondo.com/multicity/?Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false#Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false

    I get the error "undefined method `text' for nil:NilClass" from Nokogiri. My code:

        desc "Fetch product prices"
        task :fetch_details => :environment do
          require 'nokogiri'
          require 'open-uri'
          include ERB::Util

          OneWayFlight.find_all_by_money(nil).each do |flight|
            url = "http://www.momondo.com/multicity/Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false#Search=true&TripType=oneway&SegNo=1&SO0=KUL&SD0=KBR&SDP0=31-12-2012&AD=2&CA=0,0&DO=false&NA=false"
            doc = Nokogiri::HTML(open(url))
            price = doc.at_css(".total").text[/[0-9\.]+/]
            flight.update_attribute(:price, price)
          end
        end

    Read the article

  • How to determine what the Seemingly Unrelated Regression error means in R

    - by user2154571
    I'm using the systemfit() function to run a seemingly unrelated regression and am getting the following error:

        Error in solve(sigma, tol = solvetol) :
          Lapack routine dsptrf returned error code 1

    Yet I'm unable to find a meaningful interpretation of what the error suggests is going on. Below is some simulated code that shows which functions I'm using (the simulated code does not produce the error). Thanks for any thoughts on this.

        y  <- sample(seq(1:4), 100, replace = TRUE)
        x1 <- sample(seq(0:1), 100, replace = TRUE) - 1
        x2 <- sample(seq(0:1), 100, replace = TRUE) - 1
        x3 <- sample(seq(1:4), 100, replace = TRUE)
        frame <- as.data.frame(cbind(y, x1, x2, x3))

        mod_1 <- y ~ x1 + x3 + x1:x3
        mod_2 <- y ~ x2 + x3 + x2:x3

        output <- systemfit(list(mod_1, mod_2), data = frame, method = "SUR")

    Read the article

  • Delphi 7 compile error - “Duplicate resource(s)” between .res and .dfm

    - by Robo
    I got a very similar error to the one described here: http://stackoverflow.com/questions/97800/how-can-i-fix-this-delphi-7-compile-error-duplicate-resources However, the error I got is this:

        [Error] WARNING. Duplicate resource(s):
        [Error]  Type 10 (RCDATA), ID TFMMAINTQUOTE:
        [Error]  File P:\[PATH SNIPPED]\Manufacturing.RES resource kept; file FMaintQuote.DFM resource discarded.

    Manufacturing.res is the default resource file (the application is called Manufacturing.exe), and FMaintQuote is one of the forms. .dfm files are plain text files, so I'm not sure what resource is being duplicated, or how to find and fix it. If I compile the project again, it works OK, but the exe's icon is different from the one I've set in Project Options using the "Load Icon" button; the icon on the app is some sort of bell image that I don't recognize.

    Read the article

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem. When I open a URL ( http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg ) in a browser, it works fine, but when I try to access it by telnet from bash, I get a 404 Not Found. My exact terminal session:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        [enter]
        [enter]
        HTTP/1.1 404 Not Found
        Server: nginx
        Date: Sun, 23 May 2010 21:36:05 GMT
        Content-Type: text/html; charset=windows-1251
        Content-Length: 166
        Connection: close

    Please help me with this, as I am trying to write a C batch-downloader that behaves almost exactly the same as the telnet session.
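
    For comparison, here is a minimal request for the same resource made with Python's http.client, which (unlike the bare HTTP/1.0 request line typed into telnet above) always includes a Host header. Comparing the two responses is a quick way to see whether the server relies on that header to pick the right virtual host.

        import http.client

        conn = http.client.HTTPConnection("celebs.widewallpapers.net", 80, timeout=10)
        # http.client adds "Host: celebs.widewallpapers.net" automatically.
        conn.request("HEAD", "/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg")
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        for name, value in resp.getheaders():
            print(f"{name}: {value}")
        conn.close()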

    Read the article

  • How to get the HTTP header in asynchronous request mode using ASIHTTPRequest

    - by user262325
    Hello everyone. I want to display the download file size by reading the HTTP headers. I know there is a way to do this:

        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
        [request startSynchronous];
        NSString *poweredBy = [[request responseHeaders] objectForKey:@"X-Powered-By"];
        NSString *contentType = [[request responseHeaders] objectForKey:@"Content-Type"];

    but this is synchronous mode. In asynchronous mode it can be done as below:

        - (void)requestFinished:(ASIHTTPRequest *)request
        {
            unsigned long long contentLength = [request contentLength];
        }

    but requestFinished: is only called at the end of the download. Is there an event to get the HTTP header info at the beginning of the download? Thanks, interdev

    Read the article

  • How to run a website domain without redirecting if IP is already used for another website? [duplicate]

    - by SSpoke
    This question already has an answer here: Hosting multiple distinct folders for distinct domains 1 answer I bought a VPS Host that gave me only 1 IP Address which I used on my first domain name and it works without any problems. Now my second domain name I can't use the same ip address as it points to the first domain name. So I figured my only option was to use a GoDaddy hosted iframe redirection which redirects to a sub folder on my first domain which worked so far. Now I'm trying to load paypal from <?php headers() ?> and I get a permission error because of that iframe Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'. How do I avoid the Iframe solution for my second domain while not messing up my first domain? Somebody I forgot once told me it doesn't matter if you have 1 IP Address you could host multiple websites on it? how it that possible the DNS doesn't seem to work off ports afaik, yes I could host multiple websites on different folders but that's not what I call hosting a real website it has to be pointed by a domain name, so this iframe issue doesn't happen My server configuration is httpd (apache) that comes with CentOS 6 (Linux) operating system

    Read the article

  • Custom realm/starting Tomcat 6.0 from Netbeans 6.8/first HTTP request

    - by Drew
    I'm using NetBeans 6.8 and Tomcat 6.0.xx. I've created a custom realm and updated the NetBeans project build.xml to deploy the realm to Tomcat. When I debug the project, NetBeans starts the Tomcat server and makes an initial HTTP GET request for 'manager/list'. Tomcat graciously hands this request off to my custom realm for authentication. The request gets denied, and NetBeans displays the following error in the output window (note: the error is displayed after NetBeans gets access denied):

        Access to Tomcat server has not been authorized. Set the correct username and password with the
        "manager" role in the Tomcat customizer in the Server Manager.

    Do I have something incorrectly configured? How do I prevent NetBeans from issuing this initial request? Thanks, Drew

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and responses in as close as possible to the raw, on-the-wire format (i.e. including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can reconstitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc., and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations?

    Edit: I note that HttpRequest has a SaveAs method which can save the request to disk, but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan.

    Edit 2: Please note that I said the whole request, including method, path, headers, etc. The current responses only look at the body streams, which does not include this information.

    Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: "I can reconstitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire." If you know that the complete raw data (including headers, URL, HTTP method, etc.) simply cannot be retrieved, then that would be useful to know. Similarly, if you know how to get it all in the raw format (yes, I still mean including headers, URL, HTTP method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it.

    Please note: before anybody starts saying this is a bad idea, will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea; I'm looking for how it can be done.

    Read the article

  • jQuery ajax request error callback is called instead of success even after response received from server

    - by Muhammad Tahir Butt
    I am using the jQuery ajax function to get some content from my web service. The response from the server is received, but every time the error callback is called instead of the success callback, and this is returned in xhr.error:

        function (){if(l){var t=l.length;(function i(t){x.each(t,function(t,n){var r=x.type(n);"function"===r?e.unique&&p.has(n)||l.push(n):n&&n.length&&"string"!==r&&i(n)})})(arguments),n?o=l.length:r&&(s=t,c(r))}return this}

    (The screenshot of the response from the server is not reproduced here.) This is the code I am using to make the request:

        function abcdef() {
            $.ajax({
                url: "http://192.168.61.129:8000/get-yt-access-token/",
                type: "GET",
                contentType: "application/json",
                error: function(xhr, textStatus, errorThrown) {
                    alert("its error! " + xhr.error);
                },
                success: function(data) {
                    alert(data);
                }
            });
        }
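
    One common reason for this exact symptom (the server responds, but jQuery still runs the error callback) is a cross-origin request whose response carries no CORS headers, in which case the browser discards the body before jQuery sees it. Whether the endpoint sends an Access-Control-Allow-Origin header can be checked outside the browser; a small sketch using the URL from the question:

        import urllib.request

        url = "http://192.168.61.129:8000/get-yt-access-token/"

        with urllib.request.urlopen(url) as resp:
            print(resp.status, resp.reason)
            # If this header is missing and the calling page lives on a different
            # origin, browsers block the response and jQuery reports an error.
            print("Access-Control-Allow-Origin:",
                  resp.headers.get("Access-Control-Allow-Origin"))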

    Read the article
