Search Results

Search found 15035 results on 602 pages for 'request'.


  • How can I maintain a sorted hash in Perl?

    - by srk
    @aoh = (
        { 3 => 15, 4 => 8,  5 => 9,  },
        { 3 => 11, 4 => 25, 5 => 6,  },
        { 3 => 5,  4 => 18, 5 => 5,  },
        { 0 => 16, 1 => 11, 2 => 7,  },
        { 0 => 21, 1 => 13, 2 => 31, },
        { 0 => 11, 1 => 14, 2 => 31, },
    );

    I want the hashes in each array index sorted in reverse order based on values.

        @sorted = sort { ........... please fill this .......... } @aoh;

    Expected output:

        @aoh = (
            { 4 => 8,  5 => 9,  3 => 15, },
            { 5 => 6,  3 => 11, 4 => 25, },
            { 5 => 5,  3 => 5,  4 => 18, },
            { 2 => 7,  1 => 11, 0 => 16, },
            { 1 => 13, 0 => 21, 2 => 31, },
            { 0 => 11, 1 => 14, 2 => 31, },
        );

    Please help. Thanks in advance. Stating my request again: I only want the hashes in each array index to be sorted by values; I don't want the array itself to be sorted.
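
    No answer is included in this result, so here is a minimal, hedged sketch of the usual approach. A hash has no intrinsic key order, so you sort the keys by value at the point of use rather than storing a "sorted hash". Shown in Python for a self-contained example (dicts stand in for the Perl hashes above; data is from the question):

        # Sketch only: iterate keys sorted by value; the mapping itself stays
        # unordered, which matches the expected output above (ascending values).
        aoh = [
            {3: 15, 4: 8, 5: 9},
            {3: 11, 4: 25, 5: 6},
        ]
        for h in aoh:
            for key in sorted(h, key=h.get):
                print(key, "=>", h[key])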


  • Optimal template for changing content via XMLHttpRequest with jQuery, PHP, SQL [closed]

    - by B.F.
    This is my method for handling XMLHttpRequests. It avoids needless MySQL requests, foreign access, impatient users, and double requests.

    jquery:

        var allow = true;
        var is_loaded = "";
        $(document).ready(function(){
            ....
            $(".xx").on("click", function(){
                if(allow){
                    allow = false;
                    if(is_loaded != "that"){
                        $.post("job.php", {job: "that", word: "aaa", number: "123"}, function(data){
                            $(".aaa").html(data);
                            is_loaded = "that";
                        });
                    }
                    setTimeout(function(){ allow = true; }, 500);
                }
            });
            ....
        });

    job.php:

        <?php
        ob_start('ob_gzhandler');
        if(!isset($_SERVER['HTTP_X_REQUESTED_WITH']) or strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) != 'xmlhttprequest') exit("bad boy!");
        if($_POST['job'] == "that"){
            include "includes/that.inc";
        }
        elseif($_POST['job'] == ....
        ob_end_flush();
        ?>

    that.inc:

        if(!preg_match("/\w/", $_POST['word'])) exit("bad boy!");
        if(!is_numeric($_POST['number'])) exit("bad boy!");
        // exclude more.
        $path = "temp/that_".$_POST['word'].".txt";
        if(file_exists($path) and filemtime("includes/that.inc") < filemtime($path)){
            readfile($path);   // cache hit: no database round trip
        }
        else{
            include "includes/openSql.inc";
            $call = sql_query("SELECT * FROM that WHERE name='".mysql_real_escape_string($_POST['word'])."'");
            if(!$call) exit("ups");
            $out = "";
            while($row = mysql_fetch_assoc($call)){
                $out .= $_POST['word']." loves the color ".$row['color'].".<br/>";
            }
            echo $out;
            $fn = fopen($path, "wb");
            fputs($fn, $out);
            fclose($fn);
        }

    If something changes in the database, you just have to delete the involved files. I hope my English was OK.
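
    The mtime comparison above is the core of the idea: serve the cached file while it is newer than the code that generates it, otherwise rebuild and store it. A hedged sketch of the same pattern in Python (file names are illustrative, not from the post):

        import os

        CACHE = "temp/that_aaa.txt"   # hypothetical per-query cache file
        SOURCE = __file__             # the generator, like includes/that.inc above

        def build_from_database():
            # stand-in for the real SELECT query
            return "aaa loves the color blue.<br/>"

        def fetch():
            # cache hit: the cache is newer than the code that produced it
            if os.path.exists(CACHE) and os.path.getmtime(SOURCE) < os.path.getmtime(CACHE):
                with open(CACHE) as f:
                    return f.read()
            out = build_from_database()
            os.makedirs(os.path.dirname(CACHE), exist_ok=True)
            with open(CACHE, "w") as f:   # store for subsequent requests
                f.write(out)
            return out

        print(fetch())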


  • curl problems in c++ class

    - by Danilo
    I read a few articles on C++ / curl here on Stack Overflow and assembled the following. The main goal is to handle the whole request in an instance of a class, and maybe later in a secondary thread. My problem is: content_ seems to stay empty, even though it is the same address.

    HttpFetch.h:

        #include <string>
        #include <curl/curl.h>

        class HttpFetch {
        private:
            CURL *curl;
            static size_t handle(char * data, size_t size, size_t nmemb, void * p);
            size_t handle_impl(char * data, size_t size, size_t nmemb);
        public:
            std::string content_;
            static std::string url_;
            HttpFetch(std::string url);
            void start();
            std::string data();
        };

    HttpFetch.cpp:

        HttpFetch::HttpFetch(std::string url)
        {
            curl_global_init(CURL_GLOBAL_ALL); // pretty obvious
            curl = curl_easy_init();
            content_.append("Test");
            std::cout << &content_ << "\n";
            curl_easy_setopt(curl, CURLOPT_URL, &url); // caution: CURLOPT_URL expects a C string (url.c_str()); this passes a std::string*
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &content_);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &HttpFetch::handle);
            //curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); // tell curl to output its progress
            curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
            //std::cout << &content_ << "\n";
        }

        void HttpFetch::start()
        {
            curl_easy_perform(curl);
            curl_easy_cleanup(curl);
        }

        size_t HttpFetch::handle(char * data, size_t size, size_t nmemb, void * p)
        {
            std::string *stuff = reinterpret_cast<std::string*>(p);
            stuff->append(data, size * nmemb);
            std::cout << stuff << "\n"; // has content from data in it! (note: this prints the pointer value, not the string)
            return size * nmemb;
        }

    main.cpp:

        #include "HttpFetch.h"
        #include <iostream>

        int main(int argc, const char * argv[])
        {
            HttpFetch call = *new HttpFetch("http://www.example.com"); // copies the object and leaks the heap allocation
            call.start();
            ::std::cout << call.content_ << "\n";
        }

    Thanks in advance.


  • Why is the page still caching even after the no-cache headers have been sent?

    - by Matthew Grasinger
    I've done a ton of research on this and asked many people for help, and still no success. Here are the details...

    I'm involved in developing a website that pulls data from various data files, combines them in a temp .csv file, and then graphs the data using a popular graphing library: dygraphs. The bulk of the website is written in PHP. The parameters that determine the data that is graphed are stored in the user's session, the .csv is named after the user's session and available for download, and the .csv file is written in a script that passes it to the dygraphs object. And we've found, even with the no-cache headers sent:

        header("Cache-Control: no-cache, must-revalidate");
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");

    many users experience, in the middle of a session (if enough different graphs are generated), the page displaying an older, static rendering of the page (data they had graphed earlier in the session), as if it were cached and loaded instead of a new request being made. It only gets weirder though: I've checked using developer tools in both Firefox and Chrome, and both browsers are receiving the no-cache headers just fine. Even when the problem occurs, if you view the page source, the source is the correct content (a table/legend is also dynamically created using PHP; the source shows the correct table, but what is rendered is older content). The page begins to render correctly until the graph is about to be displayed, and then shows the older content. The older content displays as if it were a completely static overlay: the cached graph does not have the same dynamic features (roll-over data point display, zoom and pan, etc.), and it is as if the correct page were somewhere beneath it (the download button for the .csv file moves depending on how large the table is; the older, static page does nothing if you click the download .csv button, but if you can manage to find the one in the page beneath it you can click it and still download the .csv. The data in the .csv is correct).

    It is one of the strangest things I've seen in development thus far. Some other relevant facts: all the problems I've personally experienced occurred while I was using Chrome; none of these symptoms have been reported by Firefox users; IE users have had the same problems (IE users are forced to use Chrome Frame). I'm at my wits' end at this point. We've sent the PHP headers; we've tried setting the cache profile for PHP on IIS as "DisableCache" (or whatever); we've tried sending a random query string to the results page; we've tried all the appropriate meta tags--all with no success.
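
    For reference, the belt-and-braces header set usually recommended when a page must never be cached adds "no-store" (and a Pragma header for old HTTP/1.0 intermediaries) to the two headers above. A minimal, hedged sketch of serving such headers, in Python so the example is self-contained:

        # Sketch only: a tiny WSGI app emitting the commonly recommended
        # anti-cache header set; "no-store" is the addition over the
        # headers shown in the question.
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            start_response("200 OK", [
                ("Content-Type", "text/html; charset=utf-8"),
                ("Cache-Control", "no-cache, no-store, must-revalidate"),
                ("Pragma", "no-cache"),
                ("Expires", "0"),
            ])
            return [b"<p>always fresh</p>"]

        if __name__ == "__main__":
            make_server("127.0.0.1", 8000, app).serve_forever()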


  • Powershell script to change screen Orientation

    - by user161964
    I wrote a script to change the primary screen orientation to portrait; my screen is 1920x1200. It runs and no error is reported, but the screen does not rotate as I expected. The code was modified from Set-ScreenResolution (Andy Schneider). Can anybody help me take a look? Some reference sites:

        1. Set-ScreenResolution: http://gallery.technet.microsoft.com/ScriptCenter/2a631d72-206d-4036-a3f2-2e150f297515/
        2. C code for changing orientation (MSDN), "Changing Screen Orientation Programmatically": http://msdn.microsoft.com/en-us/library/ms812499.aspx

    My code is below:

        Function Set-ScreenOrientation {
        $pinvokeCode = @"
        using System;
        using System.Runtime.InteropServices;

        namespace Resolution
        {
            [StructLayout(LayoutKind.Sequential)]
            public struct DEVMODE1
            {
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmDeviceName;
                public short dmSpecVersion;
                public short dmDriverVersion;
                public short dmSize;
                public short dmDriverExtra;
                public int dmFields;
                public short dmOrientation;
                public short dmPaperSize;
                public short dmPaperLength;
                public short dmPaperWidth;
                public short dmScale;
                public short dmCopies;
                public short dmDefaultSource;
                public short dmPrintQuality;
                public short dmColor;
                public short dmDuplex;
                public short dmYResolution;
                public short dmTTOption;
                public short dmCollate;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
                public string dmFormName;
                [MarshalAs(UnmanagedType.U4)]
                public short dmDisplayOrientation // note: the semicolon is missing here, so this C# does not compile
                public short dmLogPixels;
                public short dmBitsPerPel;
                public int dmPelsWidth;
                public int dmPelsHeight;
                public int dmDisplayFlags;
                public int dmDisplayFrequency;
                public int dmICMMethod;
                public int dmICMIntent;
                public int dmMediaType;
                public int dmDitherType;
                public int dmReserved1;
                public int dmReserved2;
                public int dmPanningWidth;
                public int dmPanningHeight;
            };

            class User_32
            {
                [DllImport("user32.dll")]
                public static extern int EnumDisplaySettings(string deviceName, int modeNum, ref DEVMODE1 devMode);
                [DllImport("user32.dll")]
                public static extern int ChangeDisplaySettings(ref DEVMODE1 devMode, int flags);

                public const int ENUM_CURRENT_SETTINGS = -1;
                public const int CDS_UPDATEREGISTRY = 0x01;
                public const int CDS_TEST = 0x02;
                public const int DISP_CHANGE_SUCCESSFUL = 0;
                public const int DISP_CHANGE_RESTART = 1;
                public const int DISP_CHANGE_FAILED = -1;
            }

            public class PrmaryScreenOrientation
            {
                static public string ChangeOrientation()
                {
                    DEVMODE1 dm = GetDevMode1();
                    if (0 != User_32.EnumDisplaySettings(null, User_32.ENUM_CURRENT_SETTINGS, ref dm))
                    {
                        dm.dmDisplayOrientation = DMDO_90 // note: DMDO_90 is never defined anywhere, and this semicolon is missing too
                        dm.dmPelsWidth = 1200;
                        dm.dmPelsHeight = 1920;
                        int iRet = User_32.ChangeDisplaySettings(ref dm, User_32.CDS_TEST);
                        if (iRet == User_32.DISP_CHANGE_FAILED)
                        {
                            return "Unable To Process Your Request. Sorry For This Inconvenience.";
                        }
                        else
                        {
                            iRet = User_32.ChangeDisplaySettings(ref dm, User_32.CDS_UPDATEREGISTRY);
                            switch (iRet)
                            {
                                case User_32.DISP_CHANGE_SUCCESSFUL:
                                {
                                    return "Success";
                                }
                                case User_32.DISP_CHANGE_RESTART:
                                {
                                    return "You Need To Reboot For The Change To Happen.\n If You Feel Any Problem After Rebooting Your Machine\nThen Try To Change Resolution In Safe Mode.";
                                }
                                default:
                                {
                                    return "Failed";
                                }
                            }
                        }
                    }
                    else
                    {
                        return "Failed To Change.";
                    }
                }

                private static DEVMODE1 GetDevMode1()
                {
                    DEVMODE1 dm = new DEVMODE1();
                    dm.dmDeviceName = new String(new char[32]);
                    dm.dmFormName = new String(new char[32]);
                    dm.dmSize = (short)Marshal.SizeOf(dm);
                    return dm;
                }
            }
        }
        "@

        # SilentlyContinue swallows the compile errors flagged above, which is why the script appears to run with no error reported
        Add-Type $pinvokeCode -ErrorAction SilentlyContinue
        [Resolution.PrmaryScreenOrientation]::ChangeOrientation()
        }


  • Apache 2.2 and FastCGI stops responding, warnings, crashes

    - by Brett
    I've seen this question posted a few times via a Google search, with no real answers. I have a multi-threaded FastCGI application running with Apache 2.2 on FreeBSD 7.2. There are a few issues with it, and I am unable to really figure out the source of the problem even after poking through a bunch of the mod_fastcgi source code.

    My FastCGI application gets anywhere from 2 to 15 or so hits per second, and mostly services a back-end API (the majority of web server usage is for this, and not actually serving content). Everything seems to work OK under normal conditions, but recently this problem has been getting worse. It starts out with the FastCGI process manager apparently trying to close unneeded processes, sending them a SIGTERM signal. I catch the signal, clean up some stuff, and exit (by calling exit()) with status code 0. This process seems to result in three log messages in my httpd error log:

        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) termination signaled
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" (pid 98182) terminated by calling exit with status '0'
        [Tue Jun 01 14:03:31 2010] [warn] FastCGI: (dynamic) server "/home/program/wwwroot/domains/www.mydomain.com/cgi-bin/program.cgi" restarted (pid 98294)

    I am not sure why it says it is restarting the process, but in any case no core dump is ever generated, so I do believe it is the FastCGI process manager doing its thing. This makes sense because it begins to happen after the initial load increase from restarting Apache. Since it's down for a few seconds, it gets hit with a couple of hundred requests over the first few seconds it's running again (sometimes even hitting the upper limit of MaxClients in Apache), and this seems to be the process manager doing the work of spawning more processes to handle the increased load.

    So this all seems fine, but here is where things get weird. There are really two problems that I see.

    First, my multithreaded FastCGI process spawns 25 worker threads, and all seem to be used according to my internal log files (multiple processes are clearly using multiple threads to do work). However, it seems that 3 or 4 FastCGI processes are not enough to handle the 5 to 15 hit-per-second load, even though the requests take about .02s or so to process internally. To be at all responsive, it seems I need 50 or more FastCGI processes, leading me to believe that FastCGI does not realize that my program is multithreaded. I've read the documentation at http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html and do not see any option pertaining to multithreading, and my internal code is more or less set up just like the examples provided by the FastCGI library.

    The second problem is that once process termination has happened a bunch of times as above (and seemingly at random), I begin getting a lot of these messages in my error log:

        [Tue Jun 01 14:06:22 2010] [warn] (32)Broken pipe: FastCGI: write() to PM failed (ignore if a restart or shutdown is pending)

    The messages occur for about half the hits I get to the server, and they completely kill the responsiveness of my application: it seems FastCGI will look for a working "pipe" until it finds one, and fails to realize that whatever application it is trying to contact is dead. It does still work, it's just incredibly unresponsive, sometimes taking up to 40 or so seconds to process a request. I recompiled mod_fastcgi with some extra debugging around the point of the error message, and it appears that the error happens when it tries to write() to the application. The call to write() fails with a -1 return code and sets errno to EPIPE.

    I am noticing that the issue happens mostly when either a crash occurs in one of the FastCGI processes, or a bunch of them are seemingly terminated by the process manager. I haven't had any core dumps though, except for one, where the backtrace output by gdb is just a single call to free() at address 0x0000000000000000 with nothing else in the stack trace, so I don't really know what to make of that. I'm thinking it happens sometime after the SIGTERM signal is caught, maybe some global variable not being cleaned up properly or something.


  • ImgBurn fails to burn data CD-R disk due to "Layouts do not match" error

    - by 0xAether
    I have a recurring problem with the program ImgBurn. Whenever I burn anything to a CD-R using ImgBurn, it burns just fine, except when I go to verify the disc: it tells me that the "Layouts do not match". Windows 7 shows the disc as completely blank, although I can see on the bottom of the disc that it has been written to. I can burn ISO files to DVD-Rs just fine; this only seems to happen with CD-Rs. The CD-Rs I'm using are Memorex Cool Colors 52x CD-Rs. I have looked on Google, and it seems like I'm not the only one this happens to. Unfortunately, no one is able to provide an explanation. I have included the log file from the last CD I just burnt. If you need anything else to better diagnose this problem, I will gladly provide it.

        ; //****************************************\\
        ; ImgBurn Version 2.5.7.0 - Log
        ; Monday, 19 November 2012, 16:11:57
        ; \\****************************************//
        ;
        ;
        I 16:04:55 ImgBurn Version 2.5.7.0 started!
        I 16:04:55 Microsoft Windows 7 Ultimate x64 Edition (6.1, Build 7601 : Service Pack 1)
        I 16:04:55 Total Physical Memory: 4,156,380 KB - Available: 3,317,144 KB
        I 16:04:55 Initialising SPTI...
        I 16:04:55 Searching for SCSI / ATAPI devices...
        I 16:04:56 -> Drive 1 - Info: Optiarc DVD RW AD-7560S SH03 (D:) (SATA)
        I 16:04:56 Found 1 DVD±RW/RAM!
        I 16:05:37 Operation Started!
        I 16:05:37 Source File: C:\Users\Aaron\Desktop\VMware Workstation 9.iso
        I 16:05:37 Source File Sectors: 223,057 (MODE1/2048)
        I 16:05:37 Source File Size: 456,820,736 bytes
        I 16:05:37 Source File Volume Identifier: VMwareWorksta9
        I 16:05:37 Source File Volume Set Identifier: 20121119_2102
        I 16:05:37 Source File File System(s): ISO9660, Joliet
        I 16:05:37 Destination Device: [1:0:0] Optiarc DVD RW AD-7560S SH03 (D:) (SATA)
        I 16:05:37 Destination Media Type: CD-R (Disc ID: 97m17s06f, Moser Baer India)
        I 16:05:37 Destination Media Supported Write Speeds: 10x, 16x, 20x, 24x
        I 16:05:37 Destination Media Sectors: 359,847
        I 16:05:37 Write Mode: CD
        I 16:05:37 Write Type: SAO
        I 16:05:37 Write Speed: 6x
        I 16:05:37 Lock Volume: Yes
        I 16:05:37 Test Mode: No
        I 16:05:37 OPC: No
        I 16:05:37 BURN-Proof: Enabled
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - Wanted: 1,058 KB/s (6x), Got: 1,764 KB/s (10x) / 11,080 KB/s (63x)
        W 16:05:37 The drive only supports writing these discs at 10x, 16x, 20x, 24x.
        I 16:05:38 Filling Buffer... (80 MB)
        I 16:05:40 Writing LeadIn...
        I 16:06:07 Writing Session 1 of 1... (1 Track, LBA: 0 - 223056)
        I 16:06:07 Writing Track 1 of 1... (MODE1/2048, LBA: 0 - 223056)
        I 16:11:00 Synchronising Cache...
        I 16:11:18 Exporting Graph Data...
        I 16:11:18 Graph Data File: C:\Users\Aaron\AppData\Roaming\ImgBurn\Graph Data Files\Optiarc_DVD_RW_AD-7560S_SH03_MONDAY-NOVEMBER-19-2012_4-05_PM_97m17s06f_6x.ibg
        I 16:11:18 Export Successfully Completed!
        I 16:11:18 Operation Successfully Completed! - Duration: 00:05:41
        I 16:11:18 Average Write Rate: 1,522 KB/s (10.1x) - Maximum Write Rate: 1,544 KB/s (10.3x)
        I 16:11:18 Cycling Tray before Verify...
        W 16:11:23 Waiting for device to become ready...
        I 16:11:47 Device Ready!
        E 16:11:47 CompareImageFileLayouts Failed! - Session Count Not Equal (1/0)
        E 16:11:47 Verify Failed! - Reason: Layouts do not match.
        I 16:11:57 Close Request Acknowledged
        I 16:11:57 Closing Down...
        I 16:11:57 Shutting down SPTI...
        I 16:11:57 ImgBurn closed!


  • DNS lookup failures while accessing my website (proxy errors)

    - by Bond
    Here is the situation: until this morning, everything had been working perfectly. For the past 6 months many of my domains were accessible as:

        http://site1.myserver.com
        http://site2.myserver.com
        http://site3.myserver.com
        http://site4.myserver.com

    All of these were reverse-proxy configurations, and I have some applications on each of them. Then this morning some people reported that http://site1.myserver.com/app1 is not working (though http://site1.myserver.com itself is accessible), http://site2.myserver.com is accessible, http://site3.myserver.com is accessible, but http://site4.myserver.com is not accessible. In the past 6 months I have not changed any of these Apache configurations (things were working perfectly as they were).

    The error which can be seen in the browser while accessing http://site1.myserver.com/app1 is:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /app1.
        Reason: DNS lookup failure for: myserver.com

    and the same is the error for http://site4.myserver.com. So what should I check? I have checked all the Apache logs to the extent that I could see, and found:

        192.168.1.25 - - [10/Jan/2011:14:50:48 +0530] "GET /app1 HTTP/1.1" 502 531 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"
        [Mon Jan 10 14:27:42 2011] [error] (113)No route to host: proxy: HTTP: attempt to connect to 192.168.1.3:80 (192.168.1.3) failed
        [Mon Jan 10 14:27:42 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.1.3)
        [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:46 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:47 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3)
        [Mon Jan 10 14:35:29 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
        [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
        [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
        [Mon Jan 10 14:50:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1
        [Mon Jan 10 14:50:48 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1

    and for site4.myserver.com I get:

        [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 14:57:43 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 15:02:38 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /
        [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
        [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
        [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico
        [Mon Jan 10 15:03:10 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
        [Mon Jan 10 15:06:21 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /
        [Mon Jan 10 15:06:31 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/
        [Mon Jan 10 15:26:03 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /
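
    Since mod_proxy is reporting "DNS lookup failure", the first thing worth confirming is whether the web server box itself can still resolve the backend names. A quick hedged check using the system resolver (Python; the hostnames are taken from the errors above):

        import socket

        for name in ("myserver.com", "site4.myserver.com"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as exc:
                # this failure mode is what mod_proxy surfaces as "DNS lookup failure"
                print(name, "-> lookup failed:", exc)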


  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    I need to share some information regarding SSL certificates and their types: which SSL certificates are widely used, etc. There are several SSL certificates available in the market today in order to secure your domains, multiple subdomains, your applications, and code too. A few of the details are mentioned below.

    Cheap SSL certificates available today are the standard RapidSSL certificate, Thawte SSL123, etc., which are basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

    EV certificates: I found the GeoTrust True BusinessID with EV certificate, which is one of the cheapest EV certificates available in the market today; you can also find Thawte and VeriSign EV versions of certificates. EV is designed to prevent phishing attacks better than normal SSL certificates. What makes an EV certificate so special? An SSL certificate provider has to do some extensive validation to give you one, including: verifying that your organization is legally registered and active; verifying the address and phone number of your organization; verifying that your organization has the exclusive right to use the domain specified in the EV certificate; verifying that the person ordering the certificate has been authorized by the organization; and verifying that your organization is not on any government blacklists.

    SSL wildcard certificates: SSL wildcard certificates are big money-savers. An SSL wildcard certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name (a short sketch of this rule follows at the end of this entry). For example, if you need to secure the following websites:

        * www.yourdomain.com
        * secure.yourdomain.com
        * product.yourdomain.com
        * info.yourdomain.com
        * download.yourdomain.com
        * anything.yourdomain.com

    and all of these websites are hosted on multiple server boxes, you can purchase and install one wildcard certificate issued to *.yourdomain.com to secure all these sites.

    SAN certificates are interesting certificates and are helpful if you want to secure multiple domains by generating a single CSR; you can install the same certificate on your additional sites without generating new CSRs for all the additional domains.

    Code signing certificates: a code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you will be held legally accountable for it.

    You can sign many different types of code. The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first.

    I also found the below URL, which provides good suggestions regarding purchasing the best SSL certificates for securing your site, whether for a financial institution, bank, hosting provider, ISP, retail merchant, etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.
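
    The "first-level sub-domains" restriction on wildcard certificates is worth pinning down, since it trips people up: *.yourdomain.com covers exactly one extra label. A small illustrative sketch (Python; the function is hypothetical, not from any SSL library):

        def wildcard_covers(wildcard: str, host: str) -> bool:
            # *.yourdomain.com matches one leading label, nothing more
            base = wildcard.removeprefix("*.")
            first, sep, rest = host.partition(".")
            return bool(sep) and rest == base

        assert wildcard_covers("*.yourdomain.com", "secure.yourdomain.com")
        assert not wildcard_covers("*.yourdomain.com", "yourdomain.com")      # bare domain: not covered
        assert not wildcard_covers("*.yourdomain.com", "a.b.yourdomain.com")  # second level: not covered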


  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (the Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. E.g. if I dig apache.org 3 times, none of the lookups hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can watch the hits, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        # logfile <file>
        # debug-level <level>
        # threads <initial #threads to use>
        # max-threads <maximum #threads to use>
        # server-user <user to run server as instead of root>
        # server-user is ignored if nscd is started with -S parameters
        # stat-user <user who is allowed to request statistics>
        # reload-count unlimited|<number>
        # paranoia <yes|no>
        # restart-interval <time in seconds>
        #
        # enable-cache <service> <yes|no>
        # positive-time-to-live <service> <time in seconds>
        # negative-time-to-live <service> <time in seconds>
        # suggested-size <service> <prime number>
        # check-files <service> <yes|no>
        # persistent <service> <yes|no>
        # shared <service> <yes|no>
        # max-db-size <service> <number bytes>
        # auto-propagate <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As a side note, I'm using Arch Linux.
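
    One detail that may explain the dig observation: dig is a DNS client that talks straight to the nameservers in resolv.conf and never goes through the NSS stack, so it can never hit nscd. Only libc resolver calls such as getaddrinfo(3) consult /etc/nsswitch.conf. A quick way to generate lookups that should show up in the nscd -g hosts statistics (sketched in Python, which resolves via the system's NSS):

        import socket

        # getaddrinfo goes through nsswitch.conf, so repeated runs of this
        # should increment nscd's hosts-cache counters, unlike repeated digs
        for _ in range(3):
            print(socket.getaddrinfo("apache.org", None)[0][4][0])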


  • Weird networking problem ( Linksys, Windows 7 )

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start from, but here is the basic summary of the issue: during general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work and I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 HTTP requests. Restarting the router seems to fix the issue, but it can crop up hours or days later. I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here.

    Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced this on the following systems:

    - Windows 7 64-bit on my desktop
    - Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
    - Windows Vista 32-bit on my laptop (both wireless and wired)
    - Windows XP SP3 on another laptop (both wireless and wired)

    Using Wireshark to sniff packets seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to the HTTP request. However, other network apps continued to work, i.e. I would still receive IMs on Windows Live Messenger. Disabling IPv6 had no effect. Updating the router firmware to the latest stock firmware by Linksys had no effect. Switching to dd-wrt firmware had no effect. By "no effect" I mean that although the restart required by firmware updates fixed the problem at the time, it still came back.

    A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router slowdown link, and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem. Currently the computer I am using is running Windows 7 64-bit.

    I would just like to reiterate some of the reasons that I was confused by the issue:

    - Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
    - For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
    - Updating the router firmware, even switching to open-source firmware, did nothing. So it seemed to be a computer issue.
    - On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
    - Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?

    Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

    Router: Linksys WRT54G2 1.0 running dd-wrt firmware
    Wireless Card: Alfa AWUS036H
    OS: Windows 7 64-bit

    EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted.

    Signed, a very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.


  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS:

        'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba

    As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch:

    1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there have been reports that this solution is prohibitively expensive, with labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn't find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash).

    2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack

    3. Disconnect the laptop battery for an extended period of time. Doesn't work: the laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for a while. The poor thing.

    4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks. Various sources for various Satellite models:

        - Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
        - Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
        - Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
        - Satellite A215: "Short the B500 solder pads on the system board." -fixya

    Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper.

    [Pictures of the system board accompanied the original question.] Where is the jumper (or solder pads) to short-circuit and wipe the CMOS on this board? Possibly related questions: "Remove Toshiba laptop BIOS password?", "Password Problem Toshiba Satellite.."


  • Can't run PHPUnit tests in bash on Ubuntu 11.10

    - by Mohamad Elbialy
    I'm working with Ubuntu 11.10 as root on my local machine, I've installed XAMPP 1.7.7, and I'm a newbie to Ubuntu. While following a tutorial on SitePoint (http://www.sitepoint.com/getting-started-with-pear/) on how to install PEAR in order to use PHPUnit, I didn't notice it then, but it seems that I installed or used an existing PHP version 5.3.6 on the command line to do that, and the PEAR installation was built on that version. With XAMPP installed, I now have two versions of PHP: XAMPP's 5.3.8 and the 5.3.6. Anyway, what I want to do is use the existing XAMPP PHP version and build PEAR on that, so all my work goes through XAMPP. So my questions are:

    1. How do I uninstall PHP 5.3.6 and its PEAR installation?
    2. How do I link the command line with the XAMPP version of PHP?
    3. How do I build the next PEAR installation on the XAMPP version of PHP? I want all my web dev work to go through XAMPP; is there anything else I need to uninstall to avoid this confusion?
    4. I did the following in an attempt to solve the problem. I wrote this in bash:

        gedit ~/.bashrc

    and added this to the end of the ~/.bashrc file in an attempt to change the environment path:

        export PATH=/opt/lampp/bin:$PATH
        export PATH=/opt/lampp/lib/php:$PATH
        export PATH=/opt/lampp/lib/php/PHPUnit/pearcmd.php:$PATH

    I checked the PHP and PEAR versions using php -v and pear list and got an output of:

        PHP 5.3.8 (cli) (built: Sep 19 2011 13:29:27)
        Copyright (c) 1997-2011 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies

    and for PEAR:

        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.9   stable
        Console_Getopt   1.3.1   stable
        PEAR             1.9.4   stable
        PHPUnit          1.3.2   stable
        Structures_Graph 1.0.4   stable
        XML_Util         1.2.1   stable

    When I run phpunit MessageTest.php, I get:

        PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    5. I ran the following commands, as reported in other questions as a solution to that error:

        sudo apt-get remove phpunit
        sudo pear channel-discover pear.phpunit.de
        sudo pear channel-discover pear.symfony-project.com
        sudo pear channel-discover components.ez.no
        sudo pear update-channels
        sudo pear upgrade-all
        sudo pear install --alldeps phpunit/PHPUnit
        sudo apt-get install phpunit

    and updated the include path in php.ini to be:

        include_path = ".:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR"

    The PHP file MessageTest.php:

        <?php
        require 'PHPUnit/Autoload.php';
        $path = '/opt/lampp/lib/php/PEAR';
        set_include_path(get_include_path() . PATH_SEPARATOR . $path);
        require_once 'PHPUnit/Framework/TestCase.php';
        require_once 'Message/Controller/MessageController.php';

        class MessageTest extends PHPUnit_Framework_TestCase
        {
            private $message;

            public function setUp()
            {
                $this->message = new MessageController();
            }

            public function tearDown()
            {
            }

            public function testRepeat()
            {
                $yell = "Hello, Any One Out There?";
                $this->message->repeat($yell); // sending a request
                $returnedMessage = $this->message->repeat($yell); // get a response
                $this->assertEquals($returnedMessage, $yell);
            }
        }
        ?>

    The MessageController class from MessageController.php that I'm trying to test:

        <?php
        class MessageController
        {
            public function actionHelloWorld()
            {
                echo 'helloWorld';
            }

            public function repeat($inputString)
            {
                return $inputString;
            }
        }

        $msg = new MessageController;
        ?>

    I'm not using any PHP framework; I just made the files and classes, that's all. And still I get the same error:

        PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    Sure, I'm getting demanding here; I've wasted a lot of time and got really frustrated over this. I hope you guys don't get bored reading through my questions. I appreciate your help. Thanks in advance, Mohamad Elbialy


  • Apache logs 200 instead of 404

    - by elle
    I've been getting the following in my Apache access log:

        "GET /work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000 HTTP/1.1" 200 5187 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)"

    If I try the URL, I get a 404 instead of the 200 that the above request received. Is there a way I can confirm that the 200 was real and not spoofed? Where is the long client info coming from?
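
    Two things can be pinned down mechanically here. The status field in the access log is written by Apache itself, so a logged 200 is what the server actually returned at the time (the site's behaviour may simply have changed since). And the "long info" is the final quoted field of the combined log format: the User-Agent request header, which the client can set to anything it likes, including the stuffed list above. A hedged sketch of splitting the fields apart (Python; the sample line is shortened and illustrative):

        import re

        # combined log format:
        # host ident user [time] "request" status bytes "referer" "user-agent"
        LOG_RE = re.compile(
            r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
            r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
            r'"(?P<referer>[^"]*)" "(?P<agent>.*)"$'
        )

        line = ('10.0.0.1 - - [01/Jan/2011:00:00:00 +0000] "GET /work// HTTP/1.1" '
                '200 5187 "-" "Mozilla/5.0 ... K-Meleon/1.5.4 ... Opera/9.99"')
        m = LOG_RE.match(line)
        print(m.group("status"))  # 200: the server's own record of what it sent
        print(m.group("agent"))   # everything the client claimed to be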


  • Hosting a subversion working copy in an remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: what's this about?

    I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted, and a development server where I can work on the page. To ease things, I intended not to have a local copy of the project on my disk, but to actually work directly on the files that the development server hosts. For that I set up a WebDAV directory, under devserver.com/workspace, that actually mapped to files served under devserver.com/. So I could connect to devserver.com/workspace, change something, and view the results live at devserver.com/. So far this worked perfectly.

    The next step was to create an SVN repository to take care of my version control. I intended to be able to check in to the repository from my development server and, at any time, with a small shell script, deploy any revision from the SVN to the live server by checking out a copy of that revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part, though, is where problems arose:

    My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well. I can create, move, delete, and edit files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. On the first try I used the Eclipse plugin Subversive. The actual checkout worked fine, and I can update and commit stuff to the repository; however, I cannot add any files to the ignore list. It always gives me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN, and again it failed with the same errors. Here is what it says when trying to add files to svn:ignore:

        Some of selected resources were not added to ignore.
        svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props'

    This is what apache2 tells me when I try to add a file to svn:ignore:

        [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated).
        [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]

    Actually, both messages are repeated several times. The first one occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create a regular file, or delete, rename or modify one, none of those messages appear in my error.log. While writing this question I was able to add files to svn:ignore using TortoiseSVN; however, after that, Eclipse would not let me commit anymore. The error that used to pop up when adding files to svn:ignore now also shows up while committing.

    While searching the web I found some people getting this same message because they had files differing only in upper/lower case; I checked my repository and did not find such files. I also read somewhere about people having trouble with WebDAV and file locking, because WebDAV's file-locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. That error, though, has not appeared anymore since I set up a completely fresh repository and working copy.

    I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions, feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel


  • nginx: problem configuring a proxy_pass

    - by Ofer Bar
    I'm converting a web app from Apache to nginx. In Apache's httpd.conf I have:

        ProxyPass /proxy/ http://
        ProxyPassReverse /proxy/ http://

    The idea is that the client sends this URL:

        http://web-server-domain/proxy/login-server-addr/loginUrl.php?user=xxx&pass=yyy

    and the web server calls:

        http://login-server-addr/loginUrl.php?user=xxx&pass=yyy

    My nginx.conf is attached below, and it is not working. At the moment it looks like it is calling the server but returning an application error. This seems promising, but any attempt to debug it has failed! I can't trace any of the calls, as nginx refuses to place them in the error file. Also, placing echo statements on the login server did not help either, which is weird. The nginx documentation isn't very helpful about this. Any suggestion on how to configure a proxy_pass? Thanks!

        user nginx;
        worker_processes 1;

        #error_log /var/log/nginx/error.log;
        error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;

        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            #
            # The default server
            #
            server {
                rewrite_log on;
                listen 80;
                server_name _;

                #charset koi8-r;
                #access_log logs/host.access.log main;
                #root /var/www/live/html;
                index index.php index.html index.htm;

                location ~ ^/proxy/(.*$) {
                    #location /proxy/ {
                    # rewrite ^/proxy(.*) http://$1 break;
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_buffering off;
                    proxy_pass http://$1;
                    #proxy_pass "http://173.231.134.36/messages_2.7.0/loginUser.php?userID=ofer.fly%40gmail.com&password=y4HTD93vrshMNcy2Qr5ka7ia0xcaa389f4885f59c9";
                    break;
                }

                location / {
                    root /var/www/live/html;
                    #if ( $uri ~ ^/proxy/(.*) ) {
                    # proxy_pass http://$1;
                    # break;
                    #}
                    #try_files $uri $uri/ /index.php;
                }

                error_page 404 /404.html;
                #location = /404.html {
                # root /usr/share/nginx/html;
                #}

                # redirect server error pages to the static page /50x.html
                #
                #error_page 500 502 503 504 /50x.html;
                #location = /50x.html {
                # root /usr/share/nginx/html;
                #}

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                # proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    #root html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    #fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
                    fastcgi_param SCRIPT_FILENAME /var/www/live/html$fastcgi_script_name;
                    include fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                #location ~ /\.ht {
                # deny all;
                #}
            }

            # Load config files from the /etc/nginx/conf.d directory
            include /etc/nginx/conf.d/*.conf;
        }


  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008.

    UPDATE: more details from testing. This Sharepoint is in an AD child domain (clients.mycompany.local), which is subordinate to the root of the AD tree (mycompany.local). The user is in the parent tree, as are half of the other functional users. I have elevated the user's rights to Domain. In looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works, and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be OK?

    I have around 50 users. Over the past 2 months, I have had a handful of them suddenly become unable to log in to Sharepoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been in use for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP).

    In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed, too, just to be sure, to "smithj"). At first this did not work; then, some 30 minutes later, it inexplicably started working. This morning, the user changed passwords, which immediately broke the login on Sharepoint again.

    Logs (by request from matt b):

        Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 7888
        Task Category: Office Server General
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: A runtime exception was detected. Details follow.
        Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

        Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex)
        at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML)
        at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed)
        at Microsoft.SharePoint.SPField.Update()
        at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn)
        at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull()
        at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch()
        at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

        Log Name: Application
        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 5553
        Task Category: User Profiles
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API implemented with ServiceStack, hosted in IIS. While load testing the API we discovered that response times are good but deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers behind a load balancer; hitting them with 7,000 users total (3,500 concurrent users per server) keeps the average response time below 500ms for all endpoints. As soon as we increase the total number of concurrent users, however, response times rise sharply: at 5,000 users per server the average response time per endpoint is around 7 seconds.

    Memory and CPU on the servers stay quite low, both while response times are good and after they deteriorate. At peak, with 10,000 concurrent users, CPU averages just below 50% and RAM sits around 3-4 GB out of 16. This leaves us thinking we are hitting some kind of limit somewhere. The screenshot below shows some key perfmon counters during a load test with a total of 10,000 concurrent users; the highlighted counter is requests/second. To the right of the screenshot you can see the requests-per-second graph becoming really erratic. This is the main indicator for slow response times: as soon as we see this pattern, the load test shows slow response times.

    How do we go about troubleshooting this performance issue? We are trying to identify whether it is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool runs .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is raising the application pool Queue Length from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

    <system.web>
        <applicationPool
            maxConcurrentRequestsPerCPU="5000"
            maxConcurrentThreadsPerCPU="0"
            requestQueueLimit="5000" />
    </system.web>

    More details: the purpose of the API is to combine data from various external sources and return JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer: the first request for a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. A "cache runner", implemented as a background process, updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch data from the external sources in an asynchronous fashion, using the System.Threading.Tasks.Task class, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). Could we be hitting a limitation in the number of threads available to the process? As a first check, see the sketch below.
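    A cheap way to find the knee point independently of the load-test tool is to ramp concurrency in steps and watch where latency departs from the baseline. Below is a minimal Python sketch of that idea; the URL is a placeholder and the thread counts are illustrative, not a statement about how the actual test was run:

    import statistics
    import threading
    import time
    import urllib.request

    URL = "http://localhost/api/health"  # hypothetical endpoint, adjust as needed

    def worker(latencies, stop):
        # Each worker issues requests back-to-back until told to stop.
        while not stop.is_set():
            start = time.monotonic()
            try:
                urllib.request.urlopen(URL, timeout=30).read()
                latencies.append(time.monotonic() - start)
            except Exception:
                latencies.append(None)  # record failures separately

    def ramp(steps=(100, 500, 1000, 2000), duration=30):
        # For each concurrency level, run for `duration` seconds and report.
        for n in steps:
            latencies, stop = [], threading.Event()
            threads = [threading.Thread(target=worker, args=(latencies, stop))
                       for _ in range(n)]
            for t in threads:
                t.start()
            time.sleep(duration)
            stop.set()
            for t in threads:
                t.join()
            ok = [l for l in latencies if l is not None]
            failed = len(latencies) - len(ok)
            if ok:
                print(f"{n:5d} threads: median {statistics.median(ok):.3f}s "
                      f"max {max(ok):.3f}s failed {failed}")

    ramp()

    If median latency is flat up to some step and then jumps by an order of magnitude while CPU stays low, that points at a queue or thread-pool ceiling rather than the application code itself.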

    Read the article

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to setting up a RADIUS server to authenticate WPA2 against an LDAP. On Ubuntu. I have a working LDAP, but as it is not yet in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We have a separate physical network just for WiFi, so there are not too many worries about security on that front. Our APs are HP's low-end enterprise gear; they seem to support whatever you can think of. All Ubuntu Server, baby!

    And the bad news: I know somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with the exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure" if avoidable.

    UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle:

    Ignoring EAP-Type/tls because we do not have OpenSSL support.
    Ignoring EAP-Type/ttls because we do not have OpenSSL support.
    Ignoring EAP-Type/peap because we do not have OpenSSL support.

    Basically, the Ubuntu build of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP types useless. Bummer. But some useful documentation for anybody interested:

    http://vuksan.com/linux/dot1x/802-1x-LDAP.html
    http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius

    UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (see the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible). Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works.

    UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP. I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry of roughly:

    uid: testuser
    sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05
    sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90
    userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja

    When using radtest, I can connect fine:

    > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password
    Sending Access-Request of id 215 to 130.225.235.6 port 1812
            User-Name = "msiebuhr"
            User-Password = "mr2Yx36N"
            NAS-IP-Address = 127.0.1.1
            NAS-Port = 0
    rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20
    >

    But when I try through the AP, it doesn't fly - though it does confirm that it figures out the NT and LM passwords:

    ...
    rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930
    rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035
    [ldap] looking for reply items in directory...
    WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly?
    [ldap] user testuser authorized to use remote access
    rlm_ldap: ldap_release_conn: Release Id: 0
    ++[ldap] returns ok
    ++[expiration] returns noop
    ++[logintime] returns noop
    [pap] Normalizing NT-Password from hex encoding
    [pap] Normalizing LM-Password from hex encoding
    ...

    It is clear that the NT and LM passwords differ from the above, yet the message [ldap] user testuser authorized to use remote access appears - and the user is later rejected...
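    One thing worth ruling out is a mismatch between the password being typed and the hash stored in sambaNTPassword, since the NT hash can be recomputed independently: it is just MD4 over the UTF-16LE encoding of the password. A small Python sketch (note the caveat that hashlib only exposes MD4 where the underlying OpenSSL build still ships it as a legacy algorithm, so this may fail on newer systems; the LM hash needs DES and is omitted here):

    import hashlib

    def nt_hash(password):
        # NT hash = MD4 over the little-endian UTF-16 encoding of the password.
        return hashlib.new("md4", password.encode("utf-16-le")).hexdigest().upper()

    # Compare against the sambaNTPassword attribute in the LDAP entry.
    print(nt_hash("mr2Yx36N"))

    If the output does not equal the sambaNTPassword attribute, the directory holds a hash of some other password, and the WPA2 (MSCHAPv2) exchange will fail even though the cleartext userPassword still satisfies radtest/PAP.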

    Read the article

  • nginx php-fpm empty site

    - by Katafalkas
    I am writing this after being stuck trying to fix it all night. We have recently migrated to a new server. Before the migration we ran our site on nginx + CGI (script); after the migration we decided to try Apache + mod_php. It was rather terrible, and I would like to migrate back to nginx, but this time with php-fpm (as people say it's cool).

    So I followed a number of guides and I think I've done everything correctly. I also reviewed our old config files for the "server" section and placed them into the new config. Yet I end up with an empty site when I enter our URL. By empty I mean a blank page: no text, no errors, nothing at all. In the access log there are some weird entries like:

    123.242.148.54 - - [22/Mar/2012:06:08:11 +0200] "-" 400 0 "-" "-"

    My guess is that php-fpm is not working, but I am not sure how to confirm it. Maybe someone could give some help? Would much appreciate it. My nginx config:

    user nginx;
    worker_processes 12;
    error_log /var/log/nginx/error.log;
    #error_log /var/log/nginx/error.log notice;
    #error_log /var/log/nginx/error.log info;
    pid /var/run/nginx.pid;

    events {
        worker_connections 4096;
    }

    http {
        include /etc/nginx/mime.types;
        #default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        fastcgi_buffer_size 256k;
        fastcgi_buffers 4 256k;

        server {
            listen 80;
            server_name www.example.com;
            access_log /var/log/nginx/example.access.log;
            error_log /var/log/nginx/error.log;
            root /home/www/example;
            index index.php;
            client_max_body_size 50M;
            #error_page 404 /404.html;

            # phpMyAdmin
            location /phpmyadmin {
                root /usr/share/;
                error_log /var/log/nginx/phpmyadmin.log;
                try_files $uri $uri/ /index.php;
            }
            location ~ ^/phpmyadmin/.*\.php$ {
                root /usr/share/;
                error_log /var/log/nginx/phpmyadmin.log;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Munin
            location /monitoring {
                root /var/www/;
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/conf.d/monitoring_users;
                error_log /var/log/nginx/monitoring.log;
                index index.html;
            }

            # The site
            location / {
                try_files $uri $uri/ /index.php/?$uri&$args;
            }

            # PHP interpreter
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }
        }
    }
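    One quick way to confirm whether php-fpm is up at all is to check that something accepts TCP connections on the address nginx proxies to (127.0.0.1:9000 in the config above). A minimal Python sketch of that check; running `ss -lnt` on the server and looking for port 9000 would of course tell you the same thing:

    import socket

    def check_fastcgi(host="127.0.0.1", port=9000, timeout=2):
        # Attempt a plain TCP connect to the FastCGI address nginx uses.
        # A refused connection means php-fpm is not running, or listens
        # on a different address/socket than nginx expects.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as e:
            print(f"cannot connect to {host}:{port}: {e}")
            return False

    print("php-fpm reachable" if check_fastcgi() else "php-fpm NOT reachable")

    If the connect succeeds but pages are still blank, the next suspects are the fastcgi_params include (SCRIPT_FILENAME in particular) rather than the daemon itself.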

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying PHP requests via FastCGI to PHP-FPM for processing. I randomly receive 502 Bad Gateway error pages - I can reproduce this issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. Here is my setup:

    nginx/0.7.64
    PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
    Ubuntu 9.10 (Latest 2.6 Paravirt)

    I compiled PHP-FPM using this ./configure directive:

    ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl

    My php-fpm.conf looks like this (the relevant parts):

    ...
    <value name="pm">
    <value name="max_children">3</value>
    ...
    <value name="request_terminate_timeout">60s</value>
    <value name="request_slowlog_timeout">30s</value>
    <value name="slowlog">/var/log/php-fpm.log.slow</value>
    <value name="rlimit_files">1024</value>
    <value name="rlimit_core">0</value>
    <value name="chroot"></value>
    <value name="chdir"></value>
    <value name="catch_workers_output">yes</value>
    <value name="max_requests">500</value>
    ...

    I've tried increasing max_children to 10 and it makes no difference. I've also tried setting it to 'dynamic' with max_children at 50 and start_server at 5, without any difference. I have tried using both 1 and 5 nginx worker processes.
    My fastcgi_params conf looks like:

    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 180;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:

    [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:

    [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
    [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
    [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
    [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue, and free memory (free -m) is 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?
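    Since nginx records the failure as an upstream connect() refusal, one way to narrow it down is to tail the error log and see whether those events come in bursts (suggesting all children are busy or being recycled by max_requests) or in isolation. A rough Python sketch; the log path is an assumption and should be whatever error_log points at, and the regex assumes nginx's standard "YYYY/MM/DD HH:MM:SS" error-log timestamp prefix:

    import collections
    import re
    import time

    LOG = "/var/log/nginx/error.log"  # adjust to the configured error_log
    # Capture the timestamp down to the minute on "connect() failed" lines.
    pattern = re.compile(r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}):\d{2}.*connect\(\) failed")

    def follow(path):
        # Generator that yields new lines appended to the file (like tail -f).
        with open(path) as f:
            f.seek(0, 2)  # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    counts = collections.Counter()
    for line in follow(LOG):
        m = pattern.match(line)
        if m:
            counts[m.group(1)] += 1
            print(dict(counts))

    If the refusals cluster in the same minute, that is consistent with the pool (max_children 3) being saturated or respawning, rather than php-fpm being down outright.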

    Read the article

  • How to publish an ASP.NET MVC application to a free host

    - by Lirik
    Hi, I'm using a free web host (0000free) which supports ASP.NET MVC, but it uses Mono. This is the first time I've deployed an MVC application, so I'm a little confused as to where I need to deploy it. I have Visual Studio 2010 and I used its Publish feature (i.e. right-click on the project name and click Publish) and tried several things:

    1. Publish method: FTP to the root folder.
    2. Publish method: FTP to the public_html folder.
    3. Publish method: File System to the root folder.
    4. Publish method: File System to the public_html folder.
    5. Publish method: File System to a local directory on my computer, then FTP to root and also to the public_html folder.

    I went into the cPanel (control panel) to see if ASP.NET has to be added/enabled for my web site, but I didn't see anything there. I can't browse to Index.aspx, nor can I redirect to it from index.html (as suggested in other posts on the host's forum); right now I have a link from index.html to Index.aspx, but it's not working either (see http://www.mydevarmy.com). I've also tried renaming Index.aspx to Default.aspx, but that doesn't work either.

    The search utility of the host's forum is somewhat weak, so I use Google to search it instead: http://www.google.com/search?q=publish+asp.net+site%3A0000free.com%2Fforum%2F&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

    I've been reading Pro ASP.NET MVC Framework, which has a chapter about publishing, but it doesn't provide any specific information with respect to the location of publishing; this is all they say (and it's not very helpful in my case):

    Where Should I Put My Application? You can deploy your application to any folder on the server. When IIS first installs, it automatically creates a folder for a web site called Default Web Site at c:\Inetpub\wwwroot\, but you shouldn't feel any obligation to put your application files there. It's very common to host applications on a different physical drive from the operating system (e.g., in e:\websites\example.com). It's entirely up to you, and may be influenced by concerns such as how you plan to back up the server.

    Here is the exception I get when I try to view my Index.aspx page:

    Unrecognized attribute 'targetFramework'. (/home/devarmy/public_html/Web.config line 1)
    Description: HTTP 500. Error processing request.

    Stack Trace:

    System.Configuration.ConfigurationErrorsException: Unrecognized attribute 'targetFramework'. (/home/devarmy/public_html/Web.config line 1)
      at System.Configuration.ConfigurationElement.DeserializeElement (System.Xml.XmlReader reader, Boolean serializeCollectionKey) [0x00000] in <filename unknown>:0
      at System.Configuration.ConfigurationSection.DoDeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0
      at System.Configuration.ConfigurationSection.DeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0
      at System.Configuration.Configuration.GetSectionInstance (System.Configuration.SectionInfo config, Boolean createDefaultInstance) [0x00000] in <filename unknown>:0
      at System.Configuration.ConfigurationSectionCollection.get_Item (System.String name) [0x00000] in <filename unknown>:0
      at System.Configuration.Configuration.GetSection (System.String path) [0x00000] in <filename unknown>:0
      at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path, System.Web.HttpContext context) [0x00000] in <filename unknown>:0
      at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path) [0x00000] in <filename unknown>:0
      at System.Web.Configuration.WebConfigurationManager.GetWebApplicationSection (System.String sectionName) [0x00000] in <filename unknown>:0
      at System.Web.Compilation.BuildManager.get_CompilationConfig () [0x00000] in <filename unknown>:0
      at System.Web.Compilation.BuildManager.Build (System.Web.VirtualPath vp) [0x00000] in <filename unknown>:0
      at System.Web.Compilation.BuildManager.GetCompiledType (System.Web.VirtualPath virtualPath) [0x00000] in <filename unknown>:0
      at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000] in <filename unknown>:0
      at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000] in <filename unknown>:0

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 box (I know, I'm going to upgrade it ASAP, but there are dependencies). We run some websites on it. A few days ago exim4 wasn't able to send mail to SOME people. I'll use live.com as an example domain. So some of these people got mail and some didn't; some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space.

    Then I wanted to install dig to see how the box resolves the DNS request, and this Debian tells me it doesn't know dig. Long story short, Debian is able to download sites by exact IP, or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right, and the routing too! Some examples of my attempts below:

    wget live.com            # downloads the site
    ping live.com
    ping http://www.live.com
    ping http://live.com     # returns: ping: unknown host live.com

    EDIT: I now use heise.de instead of live.com, and I found out I can ping the heise.de server by using its IP address:

    myserver:~# ping 193.99.144.85
    PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data.
    64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms
    64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms
    64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms
    64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms
    64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms

    --- 193.99.144.85 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4001ms
    rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms

    EDIT 2:

    myserver:/etc/apt# dig heise.de

    ; <<>> DiG 9.3.4-P1.2 <<>> heise.de
    ;; global options: printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3

    ;; QUESTION SECTION:
    ;heise.de. IN A

    ;; ANSWER SECTION:
    heise.de. 2266 IN A 193.99.144.80

    ;; AUTHORITY SECTION:
    heise.de. 1622 IN NS ns.pop-hannover.de.
    heise.de. 1622 IN NS ns.s.plusline.de.
    heise.de. 1622 IN NS ns.plusline.de.
    heise.de. 1622 IN NS ns2.pop-hannover.net.
    heise.de. 1622 IN NS ns.heise.de.

    ;; ADDITIONAL SECTION:
    ns.plusline.de. 265 IN A 212.19.48.14
    ns.pop-hannover.de. 5113 IN A 193.98.1.200
    ns2.pop-hannover.net. 15150 IN A 62.48.67.66

    ;; Query time: 2 msec
    ;; SERVER: 193.200.112.80#53(193.200.112.80)
    ;; WHEN: Tue Oct 9 13:03:50 2012
    ;; MSG SIZE rcvd: 216
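    Since dig succeeds but ping reports "unknown host", one way to narrow it down is to exercise the libc resolver path (the one ping uses, configured via /etc/nsswitch.conf, /etc/hosts and /etc/resolv.conf) separately from raw DNS. A tiny Python sketch of that triage; the hostname is just the one used above:

    import socket

    HOST = "heise.de"

    # gethostbyname does an IPv4-only lookup through the libc resolver,
    # which is the same path ping takes when resolving a hostname.
    try:
        print("gethostbyname:", socket.gethostbyname(HOST))
    except socket.gaierror as e:
        print("gethostbyname failed:", e)

    # getaddrinfo exercises the full NSS lookup (IPv4 + IPv6).
    try:
        infos = socket.getaddrinfo(HOST, None)
        print("getaddrinfo:", sorted({i[4][0] for i in infos}))
    except socket.gaierror as e:
        print("getaddrinfo failed:", e)

    # Show which sources NSS consults for host lookups.
    with open("/etc/nsswitch.conf") as f:
        for line in f:
            if line.startswith("hosts:"):
                print(line.strip())

    If these calls fail while dig against the same name succeeds, the DNS server is fine and the problem sits in the local resolver configuration rather than the network.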

    Read the article

  • Varnish POST problem "9 FetchError c backend write error: 11" for application/x-www-form-urlencoded content

    - by ompap
    Cutting a longish story short, we have managed to get a more precise error out of varnishlog. Varnishlog tells us that we are sending:

    31 TxRequest  - POST
    31 TxHeader   - Content-Type: application/x-www-form-urlencoded

    but we are getting:

    9 FetchError  c backend write error: 11
    31 BackendClose - [backend name]
    9 VCL_call    c error
    9 VCL_return  c deliver
    9 Length      c 488
    9 VCL_call    c deliver
    9 VCL_return  c deliver
    9 TxProtocol  c HTTP/1.1
    9 TxStatus    c 503

    We still do not know what this is exactly, but apparently Content-Type: application/x-www-form-urlencoded is not getting through as it should. Help still needed, please!

    Original message below. The title was "Varnish not letting Joomla users log in - 503 guru meditation error", but I changed it to draw more attention to the problem rather than to the symptoms.

    Hello. We have a production site for a local newspaper which is currently behind an Apache reverse proxy: the site on one server, with the other reserved as a reverse proxy only (well, there is more, but that has no relevance here). Apache as a reverse proxy works, but could be faster. We want to change the reverse proxy to use Varnish instead of Apache, on an Ubuntu 10.04 server. Varnish is version 2.10, installed directly from the Ubuntu repos. Ubuntu 10.04 uses PHP 5.3.2.

    For anonymous surfers the site works wonderfully with Varnish; so far we can get very good speed out of it. We just have a few problems with logging in or out. The big one is that users cannot log in: they get a Varnish 503 error page every time. The logs do not reveal the cause. It feels as if the request never leaves Varnish - but so far we are merely guessing, which is not a strong starting point.

    We have gone through what has been suggested in various places on the web. We have increased the backend timeouts:

    backend xxx {
        .host = "xxx.xx";
        .port = "http";
        .connect_timeout = 60s;
        .first_byte_timeout = 60s;
        .between_bytes_timeout = 60s;
    }

    but we seem to get the 503 guru error page much faster than that, in approx. 5 seconds. We have increased the Varnish header size to 128 in the daemon options. In vcl_recv we have:

    if (req.http.Authenticate || req.http.Authorization) {
        return(pass);
    }

    and in vcl_fetch:

    ## authentication handling
    if (req.http.Authenticate || req.http.Authorization) {
        return(pass);
    }

    We do not strip cookies. We have tried to make sure that error pages are not cached. As said above, we cannot see anything in the backend Apache logs; apparently it never gets asked for the Joomla user authentication. Varnish does not seem to get much mention in connection with Joomla. (We cannot dump Joomla; that selection has been done and we just have to live with what we have been given.)

    Has anyone got a working Varnish - Joomla combination? Thanks for reading. Please help - we need some hints, desperately. Any suggestions?

    ompap
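    To confirm the failure is introduced by Varnish rather than the backend, it helps to replay an identical urlencoded POST once through Varnish and once straight at Apache, then compare the results. A minimal Python sketch; both URLs and the form fields are placeholders to be swapped for the real login endpoint and backend address:

    import urllib.parse
    import urllib.request

    # A small application/x-www-form-urlencoded body, like a login form post.
    DATA = urllib.parse.urlencode({"username": "test", "password": "test"}).encode()

    TARGETS = [
        ("via varnish",    "http://www.example.com/index.php"),       # placeholder
        ("direct backend", "http://backend.example.com:8080/index.php"),  # placeholder
    ]

    for name, url in TARGETS:
        req = urllib.request.Request(
            url,
            data=DATA,
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        try:
            resp = urllib.request.urlopen(req, timeout=30)
            print(name, "->", resp.getcode())
        except Exception as e:
            print(name, "-> failed:", e)

    If the direct request returns 200/303 while the proxied one returns 503, the POST body is being lost or the backend connection is being dropped inside Varnish, which matches the "backend write error: 11" (EAGAIN) seen in varnishlog.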

    Read the article

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500GB Seagate drive and is the boot disk; /dev/sd{b,c,d,e} are 2TB drives in a RAID 6 configuration. smartctl is showing significantly higher temperatures (like ~140 degrees Celsius) on the RAID drives than on the boot drive. Do I need to be worried?

    /dev/sdb and /dev/sde are new Western Digital Black drives (new = 1 week); /dev/sdc and /dev/sdd are 5-year-old Hitachi drives.

    /dev/sda [SAT], Temperature_Celsius changed from 40 to 39
    /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142
    /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146
    /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62
    /dev/sda [SAT], Temperature_Celsius changed from 39 to 38
    /dev/sde [SAT], Temperature_Celsius changed from 107 to 108
    /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
    /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61
    /dev/sda [SAT], Temperature_Celsius changed from 38 to 39

    Update: adding detailed drive information as per request:

    /dev/sda ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family: Seagate Pipeline HD 5900.2
    Device Model: ST3500312CS
    Serial Number: 5VV47HXA
    LU WWN Device Id: 5 000c50 02aad5ad6
    Firmware Version: SC13
    User Capacity: 500,107,862,016 bytes [500 GB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 5900 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdb ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Device Model: WDC WD2003FZEX-00Z4SA0
    Serial Number: WD-WMC1F1398726
    LU WWN Device Id: 5 0014ee 003b8bd25
    Firmware Version: 01.01A01
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Rotation Rate: 7200 rpm
    Device is: Not in smartctl database [for details use: -P showall]
    ATA Version is: ACS-2 (minor revision not indicated)
    SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdc ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family: Hitachi Deskstar 7K3000
    Device Model: Hitachi HDS723020BLA642
    Serial Number: MN1220F30WSTUD
    LU WWN Device Id: 5 000cca 369cc9f5d
    Firmware Version: MN6OA580
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 7200 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdd ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family: Hitachi Deskstar 7K3000
    Device Model: Hitachi HDS723020BLA642
    Serial Number: MN1220F30WST4D
    LU WWN Device Id: 5 000cca 369cc9f48
    Firmware Version: MN6OA580
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 7200 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sde ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Device Model: WDC WD2003FZEX-00Z4SA0
    Serial Number: WD-WMC1F1483782
    LU WWN Device Id: 5 0014ee 3002d235c
    Firmware Version: 01.01A01
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Rotation Rate: 7200 rpm
    Device is: Not in smartctl database [for details use: -P showall]
    ATA Version is: ACS-2 (minor revision not indicated)
    SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
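    Most likely nothing is on fire: the number smartd logs for Temperature_Celsius (e.g. 142) is the vendor-normalized attribute VALUE, not a temperature in degrees, and Hitachi drives in particular normalize this attribute in a way that produces three-digit values. The actual temperature is in the RAW_VALUE column of `smartctl -A`. A Python sketch that pulls it out (assumes smartctl is on PATH and the script runs as root):

    import re
    import subprocess

    def raw_temperature(device):
        # Run `smartctl -A` and return the RAW_VALUE of the first temperature
        # attribute found (Temperature_Celsius or Airflow_Temperature_Cel).
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line or "Airflow_Temperature_Cel" in line:
                fields = line.split()
                # RAW_VALUE is the 10th column; it may carry a "(Min/Max ...)"
                # suffix, so keep only the leading integer.
                m = re.match(r"\d+", fields[9])
                if m:
                    return int(m.group())
        return None

    for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]:
        print(dev, raw_temperature(dev), "C")

    If the raw values come back in the 30-45 C range, the drives are at normal operating temperature and only the normalized reporting differs between vendors.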

    Read the article
