Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 13 of 683

  • How do I handle a POST request in Perl and FastCGI?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so asking here. Actually I'm using FCGI with Perl. I need to: 1. accept a POST request, 2. send it via POST to another URL, 3. get the results, 4. return those results to the first POST request (4 steps). To accept a POST request (step 1) I use the following code (found it somewhere on the Internet): $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/; if ($ENV{'REQUEST_METHOD'} eq "POST") { read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'}); } else { print ("some error"); } @pairs = split(/&/, $buffer); foreach $pair (@pairs) { ($name, $value) = split(/=/, $pair); $value =~ tr/+/ /; $value =~ s/%(..)/pack("C", hex($1))/eg; $FORM{$name} = $value; } The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2), which returns another result (step 3), which I have to return as the result of the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.
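
    A minimal sketch of steps 2-4 (not from the original post), assuming the standard libwww-perl module LWP::UserAgent is available, that %FORM has been filled in by the parsing code above, and that the target URL is a placeholder:

      use strict;
      use warnings;
      use LWP::UserAgent;    # ships with the libwww-perl distribution

      # Forward the parsed value to a second URL (step 2), read the reply
      # (step 3) and print it back to the original client (step 4).
      # $target_url is a placeholder for the real endpoint.
      sub forward_and_reply {
          my ($name, $target_url) = @_;
          my $ua  = LWP::UserAgent->new(timeout => 30);
          my $res = $ua->post($target_url, { name => $name });    # step 2

          print "Content-type: text/plain\r\n\r\n";               # CGI/FastCGI response header
          print $res->is_success
              ? $res->decoded_content                             # steps 3 and 4
              : "Upstream error: " . $res->status_line;
      }

      # Example call from the script above:
      # forward_and_reply($FORM{'name'}, 'http://example.com/endpoint');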

    Read the article

  • What is actually happening to this cancelled HTTP request?

    - by Brian Schroth
    When a user takes a particular action on a page, an AJAX call is made to save their data. Unfortunately, this call is synchronous, as they need to wait to see if the data is valid before being allowed to continue. Obviously, this eliminates a lot of the benefit of using Asynchronous JavaScript and XML, but that's a subject for another post. That's the design I'm working with. The request is made using the dojo.xhrPost function, with a 60s timeout parameter, and the error handler redirects to an error page. What I am finding in testing is that in Firefox, if I initiate the AJAX request and then press ESC, the page hangs waiting for a response, and then eventually, after exactly 90s (not 60s, the function's timeout), the error handler kicks in and redirects to the error page. I expected this to happen, but either immediately when the request was cancelled, or after 60s, given the 60s timeout value. What I don't understand is: why 90s? What is actually happening under the hood when the user cancels their request in Firefox, and how does it differ from IE, where everything proceeds exactly as if the request had not been cancelled? Is the 90s related to any user-configurable browser settings?

    Read the article

  • Rails: how can I access the request object outside a helper or controller?

    - by rlandster
    In my application_helper.rb file I have a function like this: def internal_request? server_name = request.env['SERVER_NAME'] [plus more code...] end This function is needed in controllers, models, and views. So, I put this code in a utility function file in the lib/ directory. However, this did not work: I got complaints about request not being defined. How can I access the request object in a file in the lib/ directory?
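
    One common workaround (a sketch, not from the original question) is to stop reaching for the per-request request object inside lib/ and instead pass in the value the helper needs, so it can be called from controllers, views, and models alike; the module name and host list below are hypothetical:

      # lib/internal_request_check.rb
      module InternalRequestCheck
        INTERNAL_HOSTS = %w[localhost intranet.example.com].freeze  # hypothetical list

        # Takes the server name as an argument instead of reading the
        # request object, which only exists in controllers, views and helpers.
        def self.internal_request?(server_name)
          INTERNAL_HOSTS.include?(server_name)
        end
      end

      # From a controller, view, or helper where `request` is in scope:
      #   InternalRequestCheck.internal_request?(request.env['SERVER_NAME'])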

    Read the article

  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: The server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are sent in the HTTP request; the server validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that should validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered. Specify validation on the controller (not on each action) The server-side problem is that it would be ideal to declare [ValidateAntiForgeryToken] on the controller, but it actually has to be declared on each POST action. Because POST actions usually far outnumber controllers, this is a little crazy. Problem Usually a controller contains actions for HTTP GET requests and actions for HTTP POST requests, and usually validation is expected only for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link, http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.
Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on the controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now a single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement. Submit token via AJAX The browser-side problem is that if the server turns on anti-forgery validation for POST, then AJAX POST requests will fail by default. Problem For AJAX scenarios, when the request is sent by jQuery instead of a form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST request will always be invalid, because the server-side code cannot see a token in the posted data. Solution The tokens are printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere. Now the browser has the token in the HTML and in the cookie. Then jQuery must find the printed token in the HTML and append the token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified.
// var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace the $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be scenarios with a custom token; for those, $.appendAntiForgeryToken() is provided:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent from an iframe, while the token is in the parent window. Here the window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.
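
    For completeness, a usage sketch of the $.ajaxAntiForgery() wrapper defined above (the URL and fields are placeholders):

      // Same idea as $.postAntiForgery(), but for a full $.ajax() settings object;
      // the plugin appends the anti-forgery token to settings.data before sending.
      $.ajaxAntiForgery({
          type: "POST",
          url: "/Some/PostAction1",                       // placeholder URL
          data: { productName: "Tofu", categoryId: 1 },
          success: function (result) {
              // Handle the response here.
          }
      });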

    Read the article

  • server will not reply (SYN-ACK)

    - by Brent
    I like to use the following commands to manage 'TIME_WAIT', in the hope of freeing up resources. echo 20 > /proc/sys/net/ipv4/tcp_fin_timeout sysctl -w net.ipv4.tcp_tw_reuse=1 sysctl -w net.ipv4.tcp_tw_recycle=1 I found something interesting while doing a tcpdump. Sometimes if a client makes a connection (SYN), the server will not reply (SYN-ACK). My question is: could it be because of the three commands above?

    Read the article

  • Exchange out of office, sending an OoO reply for every email

    - by rodey
    We are running Exchange 2007 with a mix of Outlook 2003 and Outlook 2007. Is it possible to set the Out of Office Assistant so that when OoO is enabled, it sends a reply to every single email the user receives? Also, is there something I can reference that covers the options for OoO such as length of time between replies, etc.? Thanks!

    Read the article

  • Thunderbird 3 "Reply to all", also replies to myself

    - by Jj
    I have my job IMAP account set up in TB 3.0.3. It's becoming very annoying that in group emails where I'm in the CC or To list, when I hit "Reply To All" I find myself in the recipients list. So when I send the email I also get a copy of my own email. I haven't found where to disable or modify this. This doesn't happen with my Gmail account (also set up in TB).

    Read the article

  • Exchange 2010 global auto reply - hub transport rules?

    - by RodH257
    Our office is closing for the holidays, and I want to set up an auto reply if anyone attempts to email. Rather than get everyone to do it individually, I want to set a blanket message on the Exchange 2010 server. Looking around here I found that hub transport rules can be used, but I don't want to send a rejection message like in this post; I want to keep the message but just say that we won't get back to you for a couple of weeks. Can anyone point me in the right direction?

    Read the article

  • Always set "reply-to" for certain recipients?

    - by Benjamin Oakes
    Is it possible to always have Thunderbird set "reply-to" for a certain set of recipients? I sometimes email my significant other at work (about upcoming office parties, events, etc. that she would need to know about), but I'd like to handle the rest of the discussion from my personal email account.

    Read the article

  • Notes: No reply or respond option

    - by Eqbal
    One of my colleagues got an invite that looks like an email but there is no "Reply" button available. If it's a meeting invite (which I can't tell), it has no "Respond" option available. Anyone seen this before?

    Read the article

  • GMail appearing to ignore Reply-To.

    - by Samuurai
    I'm using a gmail account to send emails from my website. I'm using the same account to pick up emails which are generated by the contact facility on my site. I'm using the Reply-To field to attempt to make it easier to hit reply and easily get back to people. The message comes up with the 'from' address and ignores the 'reply-to' address. Here's my header: Return-Path: <[email protected]> Received: from svr1 (ec2-79-125-266-266.eu-west-1.compute.amazonaws.com [79.125.266.266]) by mx.google.com with ESMTPS id u14sm23273123gvf.17.2010.03.10.14.33.24 (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 10 Mar 2010 14:33:25 -0800 (PST) Received: from localhost ([127.0.0.1] helo=www.rds.com) by aquacouture with esmtp (Exim 4.69) (envelope-from <[email protected]>) id 1NpUSx-0001dK-JM for [email protected]; Wed, 10 Mar 2010 22:33:23 +0000 User-Agent: CodeIgniter Date: Wed, 10 Mar 2010 22:33:23 +0000 From: "New Inquiry" <[email protected]> Reply-To: "Beren" <[email protected]> To: [email protected] Subject: =?utf-8?Q?Test?= X-Sender: [email protected] X-Mailer: CodeIgniter X-Priority: 3 (Normal) Message-ID: <[email protected]> Mime-Version: 1.0 Content-Type: multipart/alternative; boundary="B_ALT_4b981e3390ccd" This is a multi-part message in MIME format. Your email application may not support this format. --B_ALT_4b981e3390ccd Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit test --B_ALT_4b981e3390ccd Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: quoted-printable test --B_ALT_4b981e3390ccd--

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: The server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are sent with the HTTP request; the server validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes the token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. The [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that should validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it to each POST action, which is a little crazy; after anti-forgery validation is turned on on the server side, AJAX POST requests will consistently fail. Specify validation on the controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validation is expected only for HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests always become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If a user sends an HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ...
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for each HTTP POST action), I created a wrapper class around ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on the controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now a single attribute on the controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when the request is sent by JavaScript instead of a form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST request will always be invalid, because the server-side code cannot see a token in the posted data. Solution The token must be printed to the browser and then submitted back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace the $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard-coded and stupid. If you have a more elegant solution, please do tell me.

    Read the article

  • Pylons error "No object (name: request) has been registered for this thread" with debug = false

    - by Evgeny
    I'm unable to access the request object in my Pylons 0.9.7 controller when I set debug = false in the .ini file. I have the following code: def run_something(self): print('!!! request = %r' % request) print('!!! request.params = %r' % request.params) yield 'Stuff' With debugging enabled this works fine and prints out: !!! request = <Request at 0x9571190 POST http://my_url> !!! request.params = UnicodeMultiDict([... lots of stuff ...]) If I set debug = false I get the following: !!! request = <paste.registry.StackedObjectProxy object at 0x4093790> Error - <type 'exceptions.TypeError'>: No object (name: request) has been registered for this thread The stack trace confirms that the error is on the print('!!! request.params = %r' % request.params) line. I'm running it using the Paste server and these two lines are the very first lines in my controller method. This only occurs if I have yield statements in the method (even though the statements aren't reached). I'm guessing Pylons sees that it's a generator method and runs it on some other thread. My questions are: How do I make it work with debug = false ? Why does it work with debug = true ? Obviously this is quite a dangerous bug, since I normally develop with debug = true, so it can go unnoticed during development.
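
    A common workaround (a sketch, not from the original question): because the action is a generator, its body runs while the response is being streamed, after Pylons has already torn down the thread-local registry behind the request proxy; a plausible explanation for the debug = true case is that the error-handling middleware consumes the response while the registry is still active. Copying what you need out of request before any yield, and returning an inner generator, avoids touching the proxy too late:

      from pylons import request
      from myapp.lib.base import BaseController  # the app's usual base controller; path is hypothetical

      class SomethingController(BaseController):

          def run_something(self):
              # Copy everything needed out of the proxy while it is still registered.
              params = dict(request.params)

              def generate():
                  # Only plain local data is used here, so it is safe to run after
                  # the request registry has been torn down.
                  yield 'Stuff'
                  yield 'params were: %r' % params

              return generate()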

    Read the article

  • Splitting big request in multiple small ajax requests

    - by Ionut
    I am unsure regarding the scalability of the following model. I have no experience at all with large systems, a big number of requests and so on, but I'm trying to build some features considering scalability first. In my scenario there is a user page which contains data for: User's details (name, location, workplace ...) User's activity (blog posts, comments...) User statistics (rating, number of friends...) In order to show all this on the same page, for a request there will be at least 3 different database queries on the back-end. In some cases, I imagine that those queries will be running quite a while, therefore the user experience may suffer while waiting between requests. This is why I decided to run only step 1 (User's details) as a normal request. After the response is received, two AJAX requests are sent for steps 2 and 3. When those responses are received, I simply place the data in its designated wrappers. For me at least this makes more sense. However, there are 3 requests instead of one for every user page view. Will this affect the system in the long term? I'm assuming that this kind of approach requires more resources, but is this trade of UX for resources a good deal, or should I stick to one plain big request?
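
    A sketch of the pattern described above, assuming jQuery; the endpoint URLs and the render helpers are hypothetical:

      // The page is rendered with the user's details (step 1); the two slower
      // sections are then fetched in parallel and dropped into their wrappers
      // whenever each response arrives.
      $(function () {
          $.getJSON('/users/42/activity', function (activity) {    // step 2
              $('#activity').html(renderActivity(activity));       // renderActivity is hypothetical
          });
          $.getJSON('/users/42/stats', function (stats) {          // step 3
              $('#stats').html(renderStats(stats));                // renderStats is hypothetical
          });
      });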

    Read the article

  • 401 Unauthorized returned on GET request (https) with correct credentials

    - by Johnny Grass
    I am trying to login to my web app using HttpWebRequest but I keep getting the following error: System.Net.WebException: The remote server returned an error: (401) Unauthorized. Fiddler has the following output: Result Protocol Host URL 200 HTTP CONNECT mysite.com:443 302 HTTPS mysite.com /auth 401 HTTP mysite.com /auth This is what I'm doing: // to ignore SSL certificate errors public bool AcceptAllCertifications(object sender, System.Security.Cryptography.X509Certificates.X509Certificate certification, System.Security.Cryptography.X509Certificates.X509Chain chain, System.Net.Security.SslPolicyErrors sslPolicyErrors) { return true; } try { // request Uri uri = new Uri("https://mysite.com/auth"); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri) as HttpWebRequest; request.Accept = "application/xml"; // authentication string user = "user"; string pwd = "secret"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); request.Headers.Add("Authorization", auth); ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications); // response. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); // Display Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); Console.WriteLine(responseFromServer); // Cleanup reader.Close(); dataStream.Close(); response.Close(); } catch (WebException webEx) { Console.Write(webEx.ToString()); } I am able to log in to the same site with no problem using ASIHTTPRequest in a Mac app like this: NSURL *login_url = [NSURL URLWithString:@"https://mysite.com/auth"]; ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:login_url]; [request setDelegate:self]; [request setUsername:name]; [request setPassword:pwd]; [request setRequestMethod:@"GET"]; [request addRequestHeader:@"Accept" value:@"application/xml"]; [request startAsynchronous];
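
    One thing worth checking (a hedged sketch, not a confirmed fix): the Fiddler trace shows a 302 before the 401, and HttpWebRequest does not re-send a manually added Authorization header when it follows a redirect, which could explain the 401. Letting the framework manage the credentials may behave differently:

      // Attach Basic credentials via a CredentialCache instead of a raw header,
      // and pre-authenticate so the header is sent with the first request.
      Uri uri = new Uri("https://mysite.com/auth");
      HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
      request.Accept = "application/xml";

      CredentialCache credentials = new CredentialCache();
      credentials.Add(uri, "Basic", new NetworkCredential("user", "secret"));
      request.Credentials = credentials;
      request.PreAuthenticate = true;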

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com www.example.com/about_us www.example.com/products/something ... and example.com example.com/about_us example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked domain URLs now redirect to their www equivalents?

    Read the article

  • Apache gives empty reply

    - by Jorge Bernal
    It happens randomly, and only on Moodle installations. Apache doesn't add a line to the logs when this happens, and I don't know where to look. koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001 curl: (52) Empty reply from server koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001 HTTP/1.1 200 OK The Apache conf is quite straightforward and works perfectly in the other vhosts: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /srv/apache/training.ebox-technologies.com/htdocs ServerName training.eboxhq.com ErrorLog /var/log/apache2/training.ebox-technologies.com-error.log CustomLog /var/log/apache2/training.ebox-technologies.com-access.log combined <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$"> ExpiresActive On ExpiresDefault "access plus 1 week" Header add Cache-Control public </FilesMatch> </VirtualHost> Using Apache 2.2.9, PHP 5.2.6 and Moodle 1.9.5+ (Build: 20090722). Any ideas welcome :)

    Read the article

  • Domain of sender address does not resolve (in reply to MAIL FROM command)

    - by horen
    When sending out emails with postfix I sometimes get this error: 451 #4.1.8 Domain of sender address <[email protected]> does not resolve (in reply to MAIL FROM command) The domain mydomain.tld is resolvable though, meaning A, MX, PTR records are set properly. However, the sending server does have a different domain anotherdomain.tld but it is allowed to send emails from mydomain.tld since I set the MX records of mydomain.tld to anotherdomain.tld. The envelope from of the problematic emails is [email protected]. Is there some other dns entry I have to set? Or how else could I solve the problem? (I would like to keep the server structure though)

    Read the article

  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form: -sender_ip: VM IP -sender_mac: VM virtual MAC -target_ip: 0.0.0.0 -target_mac: Switch interface MAC The specific addresses aren't really a concern -- they're all legitimate and we're not having any problems with communications (most of the questions surrounding GARP and VMWare have to do with ping issues, a problem we don't have). I'm looking for an explanation of the traffic pattern in an environment that functions as expected. So the question is why would I see a high number of unsolicited ARP replies? Is this a mechanism VMWare uses for some purpose? What is it? Is there an alternative? EDIT: Quick diagram: [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network) The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or for ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points: -The IDS grabs the offending packets -The FW can see the same ones -A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface]. I can't share the captures, unfortunately. Really I'm interested in finding out if this is normal for an ESXi deployment.

    Read the article

  • NSD reply from unexpected source

    - by Ximik
    I have a server with NSD. It has MAIN_IP and ADD_IP. When I try to get the IP of my site from the server, I get the right output: dig @localhost my_site.com But when I try this from my PC, I get: dig @my_ns_server.com my_site.com ;; reply from unexpected source: MAIN_IP#53, expected ADD_IP#53 (ADD_IP is the IP of my_ns_server.com) What should I do? UPD: My interfaces conf: auto eth2 allow-hotplug eth2 iface eth2 inet static address xxx.xxx.xxx.234 netmask 255.255.255.252 network xxx.xxx.xxx.232 broadcast xxx.xxx.xxx.235 gateway xxx.xxx.xxx.233 dns-nameservers MY_ISP_IP dns-search MY_ISP_DOMAIN auto eth2:0 iface eth2:0 inet static address xxx.xxx.xxx.124 netmask 255.255.255.0 xxx.xxx.xxx is the same for all IPs

    Read the article

  • Making Thunderbird auto-add SMTP identities whenever I reply

    - by 0xC0000022L
    How can I teach Thunderbird to automatically add an SMTP identity whenever I reply to an email directed to <whatever>@<mydomain>? So if an SMTP is configured for <mydomain> but no identity exists for <whatever>@<mydomain>, how can I make Thunderbird dynamically recognize this and add it. Currently I have to manually add the identity every single time, but I would prefer it to to be added ad-hoc. As long as Thunderbird was configured to know about the SMTP serving <mydomain> this should be trivial, but I couldn't find an option. An add-on or something like a catch-all/wildcard identity would also do as long as it doesn't require manually setting up a new identity every time.

    Read the article

  • Change HttpContext.Request.InputStream

    - by user320478
    I am getting a lot of errors for HttpRequestValidationException in my event log. Is it possible to HTMLEncode all the inputs from an override of ProcessRequest on the web page? I have tried this, but context.Request.InputStream.CanWrite is always false. Is there any way to HTMLEncode all the fields when a request is made? public override void ProcessRequest(HttpContext context) { if (context.Request.InputStream.CanRead) { IEnumerator en = HttpContext.Current.Request.Form.GetEnumerator(); while (en.MoveNext()) { //Response.Write(Server.HtmlEncode(en.Current + " = " + //HttpContext.Current.Request.Form[(string)en.Current])); } long nLen = context.Request.InputStream.Length; if (nLen > 0) { string strInputStream = string.Empty; context.Request.InputStream.Position = 0; byte[] bytes = new byte[nLen]; context.Request.InputStream.Read(bytes, 0, Convert.ToInt32(nLen)); strInputStream = Encoding.Default.GetString(bytes); if (!string.IsNullOrEmpty(strInputStream)) { List<string> stream = strInputStream.Split('&').ToList<string>(); Dictionary<int, string> data = new Dictionary<int, string>(); if (stream != null && stream.Count > 0) { int index = 0; foreach (string str in stream) { if (str.Length > 3 && str.Substring(0, 3) == "txt") { string textBoxData = str; string temp = Server.HtmlEncode(str); //stream[index] = temp; data.Add(index, temp); index++; } } if (data.Count > 0) { List<string> streamNew = stream; foreach (KeyValuePair<int, string> kvp in data) { streamNew[kvp.Key] = kvp.Value; } string newStream = string.Join("", streamNew.ToArray()); byte[] bytesNew = Encoding.Default.GetBytes(newStream); if (context.Request.InputStream.CanWrite) { context.Request.InputStream.Flush(); context.Request.InputStream.Position = 0; context.Request.InputStream.Write(bytesNew, 0, bytesNew.Length); //Request.InputStream.Close(); //Request.InputStream.Dispose(); } } } } } } base.ProcessRequest(context); }

    Read the article

  • Best place to request Ubuntu for a minor improvement (In Unity dash search)

    - by mac
    Where is the best place to request a minor improvement to Ubuntu? My feature request is this: In the Ubuntu dash, when I search for "Upd" it gives me Update Manager and some other files. When I press Enter, the first entry is selected by default. Can we make this a slightly better experience by highlighting the first item in the search results, which will be selected by default if we press Enter - just like in GNOME Shell? (Screenshots: "Search for upd in unity dash" and "Search for upd in gnome-shell".) If you notice, Update Manager is highlighted by default in GNOME Shell and appears more intuitive. Can we implement the same in Unity? Sorry for posting this on Ask Ubuntu; I just wanted to know the best place to discuss this. Thanks

    Read the article
