Search Results

Search found 21758 results on 871 pages for 'mime message'.


  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30 MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes. On the client side it looks like it was uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged to the error log. An entry is logged to the access log as if everything was OK (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r($_FILES), I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since that is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP executes? Some limit on size or on upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client and everything else) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
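
    The directives below are the ones that most often truncate large multipart uploads before PHP ever sees them; whether mod_fcgid or mod_security is actually loaded on this server is an assumption, and the paths are the stock Debian/Red Hat locations. A quick check from a shell:

        # Look for a request-body cap in the Apache configuration
        # (LimitRequestBody, FcgidMaxRequestLen and SecRequestBodyLimit all reject or truncate big bodies)
        grep -RniE "LimitRequestBody|FcgidMaxRequestLen|SecRequestBodyLimit" /etc/apache2/ /etc/httpd/ 2>/dev/null

        # Confirm the limits the PHP that Apache runs actually sees (mod_php and CLI can read different php.ini files)
        php -r 'echo ini_get("upload_max_filesize"), " ", ini_get("post_max_size"), " ", ini_get("max_input_time"), PHP_EOL;'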


  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. After reading the log file, the error message is: "The log scan number passed to log scan in database ‘master’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted, so I did the following: copied the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates to D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA. Note: the same error occurs when I copy all the DB files (Master, MasterLog, MSDBData, MSDBlog, Model and ModelLog). When I run my MSSQLSERVER instance, a different problem occurs. My server has only C: and D: drives; there is no E: drive. How can I override the paths shown in the errors below? Error log: 2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1. 2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.). 2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101. 2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)". 2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1. 2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation. 2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.


  • GRUB 2 freezing at OS selection screen, what could be the cause?

    - by Michael Kjörling
    Mains power is somewhat unreliable where I live, so every now and then, the computer gets rebooted when the PSU can't maintain proper voltage during a brown-out or momentary black-out. It's happened a few times recently that when power is restored, the BIOS POST completes successfully, GRUB starts to load and then freezes. I've seen this at the Welcome to GRUB! message, but it seems to happen more often just past the switch to the graphical OS list. At this point, the computer will not respond to anything (arrow keys, control commands, Ctrl+Alt+Del, ...) - it simply sits there displaying this image, seemingly doing nothing more. At that point, turning the computer off using the power button and letting it sit for a while (cooling down?) has allowed it to boot successfully. Turning the computer off and immediately back on seems to give the same result (successful POST then freeze in GRUB). This behavior began recently, although does not seem to be directly correlated with my hard disk woes (although it may be relevant that GRUB resides on that physical disk, I don't know). Once the computer has booted, it runs without a hitch. I know that a "proper" solution would be to invest in a UPS, but what might be causing behavior like this? I was thinking in terms of perhaps the CPU shutting down as a thermal control measure, but if that was the cause then wouldn't I see similar freezes during use (which I do not)? What else could cause freezes apparently closely but not perfectly related to the BIOS handover from POST to OS bootloader? The BIOS settings are to reset to previous power status after a power loss. Since the PC in question is almost always turned on, this means restore to full power status. I have no expansion cards installed that make any BIOS extensions known by screen output during the boot process, at least, but I do have a few expansion cards installed. Haven't made any changes in that regard in a long time, now. I haven't touched GRUB itself for a long time, whether configuration or binaries, so I don't think that's the problem. Also, it doesn't really make sense that a bug in GRUB would manifest itself only once in a blue moon but significantly more often after a power failure.
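
    If the repeated power cuts have damaged either the boot disk or GRUB's core image, the two checks below should surface it; /dev/sda is an assumption for the drive GRUB was installed to.

        # SMART health summary and error log for the boot disk
        sudo smartctl -H -l error /dev/sda

        # Reinstall GRUB 2 and regenerate its configuration in case the core image or grub.cfg was corrupted
        sudo grub-install /dev/sda
        sudo update-grub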


  • WD1000FYPS harddrive is marked 0 mb in 3ware (and no SMART)

    - by osgx
    After reboot my SATA 1TB WD1000FYPS (previously is was "Drive error") is marked 0 mb in 3ware web gui. Complete message: Available Drives (Controller ID 0) Port 1 WDC WD1000FYPS-01ZKB0 0.00 MB NOT SUPPORTED [Remove Drive] SMART gives me only Device Model and ATA protocol version 1 (not 7-8 as it must be for SATA) What does it mean? Just before reboot, when is was marked only with "Device Error", smart was: Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: WD-WCASJ1130*** Firmware Version: 02.01B01 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Mar 7 18:47:35 2010 MSK SMART support is: Available - device has SMART capability. SMART support is: Enabled SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0003 188 186 021 Pre-fail Always - 7591 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 229 5 Reallocated_Sector_Ct 0x0033 199 199 140 Pre-fail Always - 3 7 Seek_Error_Rate 0x000e 193 193 000 Old_age Always - 125 9 Power_On_Hours 0x0032 078 078 000 Old_age Always - 16615 10 Spin_Retry_Count 0x0012 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0012 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 77 192 Power-Off_Retract_Count 0x0032 198 198 000 Old_age Always - 1564 193 Load_Cycle_Count 0x0032 146 146 000 Old_age Always - 164824 194 Temperature_Celsius 0x0022 117 100 000 Old_age Always - 35 196 Reallocated_Event_Count 0x0032 199 199 000 Old_age Always - 1 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 What can be wrong with he? Can it be restored? PS new smart is === START OF INFORMATION SECTION === Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: [No Information Found] Firmware Version: [No Information Found] Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 1 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Mar 8 00:29:44 2010 MSK SMART is only available in ATA Version 3 Revision 3 or greater. We will try to proceed in spite of this. SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported. Checking for SMART support by trying SMART ENABLE command. Command failed, ata.status=(0x00), ata.command=(0x51), ata.flags=(0x01) Error SMART Enable failed: Input/output error SMART ENABLE failed - this establishes that this device lacks SMART functionality. A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. PPS There was a rapid grow of " 192 Power-Off_Retract_Count " before dying. The hard was used in raid, with several hards from the same fabric packaging box (close id's). The hard drives were placed identically. Rapid means almost linear grow from 300 to 1700 in 6-7 hours. Maximal temperature was 41C. (thanks to munin's smart monitoring)
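
    When a drive sits behind a 3ware controller, smartctl usually has to be pointed at the controller device and port rather than at an exported unit, otherwise the identify data can come back as empty as the output above; the device node is an assumption, the port number is taken from the web GUI ("Port 1").

        # Query the disk through the 3ware controller (newer drivers expose /dev/twa0, older ones /dev/twe0)
        sudo smartctl -a -d 3ware,1 /dev/twa0
        sudo smartctl -a -d 3ware,1 /dev/twe0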


  • Error with postgres and Rails in Bundle Install on Ubuntu 12.10

    - by jason328
    I'm trying to install postgres onto Ubuntu. When running the Bundle Install in terminal I'm receiving this message at the end of the running code. How do I get pg to install properly? Libpq-dev is installed as well. Using coffee-rails (3.2.2) Using diff-lcs (1.1.3) Using jquery-rails (2.0.2) Installing pg (0.12.2) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9.1 extconf.rb checking for pg_config... yes Using config values from /usr/bin/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for PQconnectdb() in -lpq... yes checking for PQconnectionUsedPassword()... yes checking for PQisthreadsafe()... yes checking for PQprepare()... yes checking for PQexecParams()... yes checking for PQescapeString()... yes checking for PQescapeStringConn()... yes checking for PQgetCancel()... yes checking for lo_create()... yes checking for pg_encoding_to_char()... yes checking for PQsetClientEncoding()... yes checking for rb_encdb_alias()... yes checking for rb_enc_alias()... no checking for struct pgNotify.extra in libpq-fe.h... yes checking for unistd.h... yes checking for ruby/st.h... yes creating extconf.h creating Makefile make compiling compat.c compiling pg.c pg.c: In function ‘pgconn_wait_for_notify’: pg.c:2117:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby- 1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c: In function ‘pgconn_block’: pg.c:2592:3: error: format not a string literal and no format arguments [-Werror=format- security] pg.c:2598:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby- 1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations] pg.c:2607:4: error: format not a string literal and no format arguments [-Werror=format- security] pg.c: In function ‘pgconn_locreate’: pg.c:2866:11: warning: variable ‘lo_oid’ set but not used [-Wunused-but-set-variable] pg.c: In function ‘find_or_create_johab’: pg.c:3947:3: warning: implicit declaration of function ‘rb_encdb_alias’ [-Wimplicit- function-declaration] cc1: some warnings being treated as errors make: *** [pg.o] Error 1 Gem files will remain installed in /home/jason/.bundler/tmp/10083/gems/pg-0.12.2 for inspection. Results logged to /home/jason/.bundler/tmp/10083/gems/pg-0.12.2/ext/gem_make.out An error occurred while installing pg (0.12.2), and Bundler cannot continue. Make sure that `gem install pg -v '0.12.2'` succeeds before bundling.
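
    The build is failing on the -Werror=format-security flag that Ubuntu 12.10's toolchain enables; later pg releases build cleanly against it, so loosening the version pin is usually the least painful route. The version constraint below is an assumption - anything recent enough to drop the offending calls will do.

        # In the Gemfile, relax the pin, e.g.:  gem 'pg', '~> 0.14'
        bundle update pg

        # The native extension also needs the PostgreSQL client headers (already present here)
        sudo apt-get install libpq-dev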


  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is: VMWare ESXi environemt CPanel installed CentOS release 6.5 (Final) 4 CPUs 2G RAM 2x VM disks 100G each LVM system This was my previous storage settings (the server was working fine at this time): # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 95G 1.4G 88G 2% / tmpfs 939M 0 939M 0% /dev/shm /dev/sdb1 99G 188M 94G 1% /tmp /dev/sda1 485M 54M 407M 12% /boot My web developer asked me to merge /tmp and / disks so this is what I did: Delete /dev/sdb1 partition using fdisk Create a new partition as LVM on /dev/sdb1 using fdisk Create a new physical volume -- pvcreate /dev/sdb1 Extend volume group -- vgextend /dev/sdb1 vg_test01 Extend logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root Resize filesystem -- resize2fs /dev/vg_test01/lv_root This is the new configuration: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 213G 105G 97G 52% / tmpfs 939M 0 939M 0% /dev/shm /dev/sda1 485M 54M 407M 12% /boot /usr/tmpDSK 4.0G 145M 3.6G 4% /tmp Since I have the new settings my web server is throwing kernel panics quite often (around every 2 days). The message says: INFO: task <taskName>:<pid> blocked for more than 120 seconds. The list of process affected that I can see from the console are: mysqld queueprocd httpd suphp vmtoolsd loop0 auditd The only way I can fix this is reseting (cold reboot) the VM. I don't think it is a hardware issue as sar is not showing any bottleneck: Linux 2.6.32-431.3.1.el6.x86_64 (test01) 08/22/2014 _x86_64_ (4 CPU) 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 26.86 0.01 0.98 0.57 0.00 71.57 12:20:01 AM all 1.78 0.02 1.03 0.08 0.00 97.09 12:30:01 AM all 26.34 0.02 0.85 0.05 0.00 72.74 12:40:01 AM all 27.12 0.01 1.11 1.22 0.00 70.54 12:50:01 AM all 1.59 0.02 0.94 0.13 0.00 97.32 01:00:01 AM all 26.10 0.01 0.77 0.04 0.00 73.07 01:10:01 AM all 27.51 0.01 1.16 0.14 0.00 71.18 01:20:01 AM all 1.80 0.07 1.06 0.08 0.00 96.99 01:30:01 AM all 26.19 0.01 0.78 0.05 0.00 72.96 01:40:01 AM all 26.62 0.02 0.87 0.05 0.00 72.45 01:50:02 AM all 1.35 0.01 0.87 0.02 0.00 97.75 02:00:01 AM all 26.11 0.02 0.69 0.02 0.00 73.17 02:10:01 AM all 26.73 0.02 0.89 0.14 0.00 72.21 02:20:01 AM all 1.45 0.01 0.92 0.04 0.00 97.58 02:30:01 AM all 26.59 0.01 1.06 0.03 0.00 72.31 02:40:01 AM all 26.27 0.01 0.72 0.05 0.00 72.95 02:50:01 AM all 0.86 0.01 0.50 0.09 0.00 98.53 03:00:01 AM all 25.61 0.02 0.39 0.03 0.00 73.96 03:10:01 AM all 26.30 0.08 0.66 0.14 0.00 72.82 03:20:01 AM all 0.81 0.01 0.51 0.04 0.00 98.63 03:30:02 AM all 26.15 0.02 0.53 0.07 0.00 73.24 03:40:01 AM all 26.06 0.01 0.47 0.04 0.00 73.42 03:50:01 AM all 0.96 0.02 0.51 0.03 0.00 98.48 Average: all 17.69 0.02 0.79 0.14 0.00 81.36 06:58:14 AM LINUX RESTART 07:00:01 AM CPU %user %nice %system %iowait %steal %idle 07:10:01 AM all 1.04 0.02 0.57 0.95 0.00 97.42 07:20:02 AM all 0.66 0.01 0.39 0.06 0.00 98.87 07:30:01 AM all 25.71 0.01 0.45 0.16 0.00 73.67 07:40:01 AM all 25.88 0.01 0.35 0.08 0.00 73.68 07:50:01 AM all 1.13 0.02 0.55 0.11 0.00 98.19 As you can see the server became unresponsive at 03.50 AM and I had to reset the VM at 06.58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. thank you very much
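
    Hung-task warnings like these are usually I/O stalls rather than CPU pressure, so it is worth confirming that the grown volume and filesystem are actually healthy; a rough sketch, with device names taken from the question:

        # Sanity-check the physical volumes, volume group and logical volume
        sudo pvs; sudo vgs; sudo lvs

        # Look for disk errors or earlier hung-task traces around the stalls
        dmesg | grep -iE "i/o error|blocked for more than"

        # If the online resize is suspect, force a filesystem check from rescue media (the root fs must be unmounted)
        sudo e2fsck -f /dev/vg_test01/lv_root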


  • Can RPM packages be installed into Cygwin?

    - by Dejian Zhao
    I noticed that there is a command - rpm - under Cygwin 1.7. Does that mean RPM packages can be installed into Cygwin? I tried to install ncbi-blast-2.2.26+-3.i686.rpm (see: ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ ) into Cygwin 1.7.13 with the command "install -i ncbi-blast-2.2.26+-3.i686.rpm". However, error message appeared as below. I tried to search for the missing libs using the setup.exe of Cygwin. It seems that some of them were not present, such as libc.so.6, libdl.so.2, libm.so.6, libnsl.so.1, and libz.so.1. Where can I get these libs? Thanks! $ rpm -i ncbi-blast-2.2.26+-3.i686.rpm error: Failed dependencies: /usr/bin/perl is needed by ncbi-blast-2.2.26+-3 libbz2.so.1 is needed by ncbi-blast-2.2.26+-3 libc.so.6 is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.3) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.3) is needed by ncbi-blast-2.2.26+-3 libdl.so.2 is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1 is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GCC_3.0) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libm.so.6 is needed by ncbi-blast-2.2.26+-3 libnsl.so.1 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.3.2) is needed by ncbi-blast-2.2.26+-3 librt.so.1 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(CXXABI_1.3) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4.5) is needed by ncbi-blast-2.2.26+-3 libz.so.1 is needed by ncbi-blast-2.2.26+-3 perl(Archive::Tar) is needed by ncbi-blast-2.2.26+-3 perl(Digest::MD5) is needed by ncbi-blast-2.2.26+-3 perl(File::Temp) is needed by ncbi-blast-2.2.26+-3 perl(File::stat) is needed by ncbi-blast-2.2.26+-3 perl(Getopt::Long) is needed by ncbi-blast-2.2.26+-3 perl(Net::FTP) is needed by ncbi-blast-2.2.26+-3 perl(Pod::Usage) is needed by ncbi-blast-2.2.26+-3 perl(constant) is needed by ncbi-blast-2.2.26+-3 perl(strict) is needed by ncbi-blast-2.2.26+-3 perl(warnings) is needed by ncbi-blast-2.2.26+-3
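
    Those missing libraries (libc.so.6, libdl.so.2 and friends) are Linux glibc components, which points at the real problem: the RPM contains Linux ELF binaries that will not run under Cygwin even if the dependency check were satisfied. You can still unpack the archive to inspect it (rpm2cpio and cpio as Cygwin packages is an assumption), but for actually running BLAST the native Windows BLAST+ installer from NCBI is the practical route.

        # Extract the RPM contents into the current directory without installing it
        rpm2cpio ncbi-blast-2.2.26+-3.i686.rpm | cpio -idmv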


  • Cannot install g++ on ubuntu

    - by Erel Segal
    I don't have g++: erelsgl@ubuntu:/etc/apt$ which g++ erelsgl@ubuntu:/etc/apt$ erelsgl@ubuntu:/etc/apt$ g++ The program 'g++' can be found in the following packages: * g++ * pentium-builder Try: sudo apt-get install <selected package> So I try to install it: erelsgl@ubuntu:~/srilm$ sudo apt-get install g++ Reading package lists... Done Building dependency tree Reading state information... Done g++ is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. 2 not fully installed or removed. After this operation, 0B of additional disk space will be used. Setting up g++ (4:4.4.3-1ubuntu1) ... update-alternatives: error: alternative path /usr/bin/g++ doesn't exist. dpkg: error processing g++ (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of build-essential: build-essential depends on g++ (>= 4:4.3.1); however: Package g++ is not configured yet. dpkg: error processing build-essential (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: g++ build-essential E: Sub-process /usr/bin/dpkg returned an error code (1) I also try to install build-essential, and get same results. I also tried "sudo apt-get update" - didn't help. This is my apt-cache: erelsgl@ubuntu:/etc/apt$ apt-cache policy g++ build-essential g++: Installed: 4:4.4.3-1ubuntu1 Candidate: 4:4.4.3-1ubuntu1 Version table: *** 4:4.4.3-1ubuntu1 0 500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages 100 /var/lib/dpkg/status build-essential: Installed: 11.4build1 Candidate: 11.4build1 Version table: *** 11.4build1 0 500 http://il.archive.ubuntu.com/ubuntu/ lucid/main Packages 100 /var/lib/dpkg/status erelsgl@ubuntu:/etc/apt$ I also tried this and got the same error: erelsgl@ubuntu:~/Ace/Files/corpus$ sudo dpkg --configure -a Setting up g++ (4:4.4.3-1ubuntu1) ... update-alternatives: error: alternative path /usr/bin/g++ doesn't exist. dpkg: error processing g++ (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of build-essential: build-essential depends on g++ (>= 4:4.3.1); however: Package g++ is not configured yet. dpkg: error processing build-essential (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: g++ build-essential
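
    The update-alternatives complaint means /usr/bin/g++-4.4 itself is missing while the package database still thinks g++ is installed; one commonly suggested recovery is to reinstall the compiler packages and then let dpkg finish the half-configured ones (package versions are the ones from the output above).

        sudo apt-get install --reinstall gcc-4.4 g++-4.4
        sudo dpkg --configure -a
        sudo apt-get install -f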


  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on Redhat6 and Tomcat6 (using these instructions), but get this error when browsing to the admin section, http://localhost:8080/solr-example/admin: HTTP Status 404 - missing core name in path type Status report message missing core name in path description The requested resource (missing core name in path) is not available. http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows: tomcat6: /etc/tomcat6 Solr: /app/solr/example I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads: <?xml version="1.0" encoding="utf-8"?> <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/> </Context> I don't see anything in the logs (/var/log/tomcat6) ... only entires in catalina.out are regarding the starting and stopping of tomcat6. My questions are: 1.What else do I need to do to get "Solr Admin" to work under Tomcat? 2.Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml ? <solr persistent="false"> adminPath: RequestHandler path to manage cores. If 'null' (or absent), cores will not be manageable via request handler <cores adminPath="/admin/cores" defaultCoreName="collection1"> <core name="collection1" instanceDir="." /> </cores> </solr> 3.How do I got about ensuring that logs are working correctly? I can't find logs that contain mention of the 404 above. Update in response to @quanta's comment: Downloaded former (apache-solr-3.4.0.tgz) dataDir was not set, now set to: <dataDir>${solr.data.dir:../solr/data}</dataDir> JAVA_OPTS: /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start catalina.out contains no indication of the above error
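
    The 404 is Solr saying it expects a core name in the URL; with the cores block above, the admin pages live under the core, not under the webapp root. A quick way to see what the container actually loaded (host and port assumed from the question):

        # Ask the CoreAdmin handler which cores are registered and where their instance dirs point
        curl "http://localhost:8080/solr-example/admin/cores?action=STATUS"

        # With a named core, the admin UI is per-core, e.g.:
        #   http://localhost:8080/solr-example/collection1/admin/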


  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error: W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en Unable to connect to 192.168.1.70:8118: Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70, and for years it has been 192.168.1.70 - the IP apt-get is trying to use right now. I use Privoxy and Tor, hence the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.) The setup for Tor/Privoxy requires that you point Privoxy at Tor via 127.0.0.1:9050, then point curl etc. at Privoxy via $HOME/.bashrc. Typically you would set the listen IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set it to the server's LAN IP, which I did a long time ago, and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc. What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine. The real irony here is that I tried to open up Privoxy to go to Ubuntu's servers directly, without going through Tor, to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70, and 192.168.1.70 is no longer valid. Thanks for the help.
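
    One relevant detail: apt never reads $HOME/.bashrc (especially not under sudo); it takes its proxy from its own configuration, which is almost certainly where 192.168.1.70:8118 is still recorded. Two places to look:

        # Show the proxy apt believes it should use
        apt-config dump | grep -i proxy

        # Find the old address anywhere in apt's configuration fragments
        grep -rn "192.168.1.70" /etc/apt/ 2>/dev/null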


  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently i setup a Ubuntu 12.04 VPS with 512mb/1ghz CPU, Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog get about 3~4k daily hits, use about 180MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload a image, WordPress don't resize it, so all we can do it insert the full image in the post. If the imagem have, let's say, 30kb, it resize fine... but if the image have 100kb+, it won't... In nginx error logs i see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related with the issue, but i dunno. When that timeout happens, i started to get it when i'm trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" And only a restart of php5-fpm fix it. I tryed increasing some timeouts and stuffs but it did not worked, so i guess it's some kind of limitation i did not figured yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # 
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41 I already modifed my APC settings, but no sucess. So... could anyone help me with those issuees, please? Ooohh... if it helps, i instaled PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl And Nginx from the official PPA. Sorry for my bad english and thanks for your time people! (:
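
    With pm = ondemand, pm.max_children = 5 and 180-second FastCGI timeouts, a single slow image resize can tie up the whole pool and produce exactly these upstream timeouts; the values below are illustrative starting points rather than recommendations, and the file paths are the ones quoted above.

        # See whether the pool has been hitting its child limit (error_log path from php-fpm.conf above)
        sudo grep "max_children" /var/log/php5-fpm.log

        # Candidate knobs to raise while testing a large upload, then restart/reload the services:
        #   /etc/php5/fpm/php.ini          max_execution_time = 120, memory_limit = 256M
        #   /etc/php5/fpm/pool.d/www.conf  pm.max_children = 10, request_terminate_timeout = 120
        sudo service php5-fpm restart && sudo service nginx reload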


  • [Windows 7] Certain Programs cannot access internet

    - by Cindy
    Operating System: Windows 7 (x64) Problem: Certain programs are unable to access the internet. They claim that there is no connection when you are already connected. Hello, before we start, just letting you know I'm new here and I'm very new to Windows 7 - I installed it two days ago on my laptop, and I have a few problems. I play World of Warcraft, as well as a variety of other games, and when I first attempt to log into the game I get a Windows error message, but it doesn't stop there. I thought World of Warcraft had gotten corrupted during the upgrade, but it seems that I am unable to access the internet from other online games as well. Most say something along the lines of "Cannot connect to patch server, try again later." I cannot use a downloader either. Also, I have Internet Explorer: the x32 version of the browser cannot connect to the internet, and when I try to enter "google.com", it says the same thing. I'm only accessing this site through Internet Explorer x64, which I would have been fine with if it were compatible with Adobe Flash. The only things that seem to connect to the internet are Internet Explorer x64 and Windows Live Messenger. Here are the steps I have taken, but none worked: 1.) Disabled Windows Firewall. 2.) Enabled Windows Firewall, but allowed the specific programs to access the internet, and allowed all incoming access. 3.) Disabled UAC, ran the programs as an admin, and set compatibility to Vista. 4.) Uninstalled an anti-virus program (McAfee Security Suite 2010). 5.) Reinstalled the programs. 6.) Reinstalled Windows 7. 7.) Repeated the steps on the Administrator account. Please assist me with this problem. I need to get back into the game. Thanks so much in advance.


  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C Michael Pilato of the core subversion team posted a mail to the subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feedback a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, its pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targetting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates Repository-dictated Configuration If anyone has suggestions to add, or comments on these, the subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.


  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
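
    For experimenting with these knobs it helps to change them at runtime first and only persist what demonstrably helps; the numbers below are simply the values discussed above, not a recommendation.

        # Apply at runtime
        sudo sysctl -w net.ipv4.route.gc_elasticity=4
        sudo sysctl -w net.ipv4.tcp_mem="183936 245248 367872"

        # Persist the survivors across reboots
        echo "net.ipv4.route.gc_elasticity = 4" | sudo tee -a /etc/sysctl.conf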


  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server I can create VM's (Virtual Machine) and network interfaces fine but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces it gets stuck on startup at the starting SeaBIOS page with this message: Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer) So far I've verified this with two VM's, a Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VM's doesn't seem to matter. I'm trying to have one bridged interface and three private networks using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces? Edit: Additionally the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging: /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode): 14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed I've also tried disabling app armor but that doesn't seem to make a difference.
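
    A frequently reported workaround when SeaBIOS stalls with several emulated rtl8139 NICs is to switch the interface model to virtio, reducing the option ROMs SeaBIOS has to map; treat the change below as an experiment rather than a known fix. The guest name is the one from the question, and libvirt managing the guest is assumed.

        # See which model each interface currently uses
        virsh dumpxml vyatta | grep -A2 "<interface"

        # Then, in 'virsh edit vyatta', change <model type='rtl8139'/> to <model type='virtio'/>
        # for each interface and start the guest again.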


  • Missing Dependency Errors when Installing OpenVas Server

    - by David
    I'm trying to install OpenVAS on Red Hat Enterprise Linux 5.5. I've successfully run yum install openvas-client, but yum install openvas-server prints the following errors: --> Finished Dependency Resolution openvas-client-3.0.1-1.el5.art.i386 from installed has depsolving problems --> Missing Dependency: libopenvas_hg.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) openvas-client-3.0.1-1.el5.art.i386 from installed has depsolving problems --> Missing Dependency: libopenvas_nasl.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) openvas-client-3.0.1-1.el5.art.i386 from installed has depsolving problems --> Missing Dependency: libopenvas_omp.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) openvas-scanner-3.2-0.2.el5.art.i386 from atomic has depsolving problems --> Missing Dependency: net-snmp-utils is needed by package openvas-scanner-3.2-0.2.el5.art.i386 (atomic) openvas-client-3.0.1-1.el5.art.i386 from installed has depsolving problems --> Missing Dependency: libopenvas_misc.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) openvas-scanner-3.2-0.2.el5.art.i386 from atomic has depsolving problems --> Missing Dependency: openldap-clients is needed by package openvas-scanner-3.2-0.2.el5.art.i386 (atomic) openvas-client-3.0.1-1.el5.art.i386 from installed has depsolving problems --> Missing Dependency: libopenvas_base.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) Error: Missing Dependency: net-snmp-utils is needed by package openvas-scanner-3.2-0.2.el5.art.i386 (atomic) Error: Missing Dependency: libopenvas_base.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) Error: Missing Dependency: libopenvas_hg.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) Error: Missing Dependency: libopenvas_nasl.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) Error: Missing Dependency: openldap-clients is needed by package openvas-scanner-3.2-0.2.el5.art.i386 (atomic) Error: Missing Dependency: libopenvas_omp.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) Error: Missing Dependency: libopenvas_misc.so.3 is needed by package openvas-client-3.0.1-1.el5.art.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Notice that each of the missing dependencies is followed by the words (installed) or the words (atomic) - for the name of the repository. When I try to install any of these sub-dependencies, the installation fails (either due to missing dependencies or since the rpm is already installed). For example, if I try to install a rpm for "libopenvas_hg.so.3", I get an error message indicating that it is already installed. Yet "libopenvas_hg.so.3" is listed as a missing dependency. Why? Do I need to uninstall all of the "missing" dependences first?
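
    Two of the missing dependencies are ordinary packages named in the error itself; the libc.so.6 / libdl.so.2 / libstdc++.so.6 lines are 32-bit library requirements, which on an x86_64 host come from the 32-bit compatibility packages. The exact .i386/.i686 package names below are an assumption for RHEL 5.

        sudo yum install net-snmp-utils openldap-clients
        sudo yum install glibc.i686 libstdc++.i386 zlib.i386 bzip2-libs.i386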


  • Client A can ping server S, but client B cannot

    - by Soundar Rajan
    I moved the question to here from stackoverflow.com http://stackoverflow.com/questions/2917569/unable-to-ping-server-from-client-b-but-able-to-ping-from-client-a-please-help I am trying to configure a IIS 6.0/Windows Server 2003 web server with a ASP.net application. When I try to ping the server from client computer A I get the following: PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping succeeds! From client computer B, BOTH the pings fail. PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping fails with a message "Ping request could not find host 74.208.192.xxx:80" Both clients A and B are on the same subnet. The server is outside (a virtual server hosted by an ISP) I have an ASP.NET application in a virtual directory on the server. In IE or firefox, if I enter http://74.208.192.xxx/subdir/subdir/../Default.aspx, it works from both the clients! The server has default firewall settings but web server enabled (Port 80 is open). From client A (note the different 'reply to' address when the ping succeeds. C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx Pinging 74.208.192.xx with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.xx: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx:80 Pinging 74.208.192.xx:80 [208.67.216.xxx] with 32 bytes of data: Reply from 208.67.216.xxx: bytes=32 time=35ms TTL=54 ... Reply from 208.67.216.xxx: bytes=32 time=33ms TTL=54 Ping statistics for 208.67.216.xxx: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 32ms, Maximum = 54ms, Average = 38ms From client B C:\Documents and Settings\user>ping 74.208.192.81 Pinging 74.208.192.81 with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.81: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Documents and Settings\user>ping 74.208.192.81:80 Ping request could not find host 74.208.192.81:80. Please check the name and try again. My main problem is I have a web service (asmx) file and the web service client program is not able to access it from client B, but able to access it from client A. I am trying to find out why and thought this ping issue may shed some light. I can ping yahoo.com both the computers.
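
    One thing worth noting: ping speaks ICMP and has no concept of TCP ports, so "ping 74.208.192.xx:80" does not test port 80 at all - the whole string is treated as a hostname, and the 208.67.216.xxx replies look like an OpenDNS "no such domain" landing address rather than the server. To test the port the web-service client actually uses, probe TCP 80 directly (substitute the real address for the elided one):

        # Either of these exercises TCP port 80 the way the web-service client will
        nc -vz 74.208.192.xx 80
        curl -I "http://74.208.192.xx/"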


  • IIS can't load Oracle.Web assembly (for ASP.NET membership provider)

    - by Konamiman
    I am trying to configure an IIS web site to use an Oracle database for ASP.NET membership, but I can't get it to work. IIS doesn't seem to be able to load the assembly containing the Oracle membership provider. That's what I have so far: An Oracle 10g database online and with all the tables for ASP.NET membership created. Windows 2008 R2 Standard with the web server role installed, including support for ASP.NET. Oracle 11g Release 2 ODAC 11.2.0.1.2 installed. The installed components are: Oracle data provider for .NET, Oracle providers for ASP.NET, Oracle instant client. The default web site on IIS (I am using that for testing) has the following web.config file: <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <membership defaultProvider="OracleMembershipProvider"> <providers> <remove name="SqlMembershipProvider" /> <add name="OracleMembershipProvider" type="Oracle.Web.Security.OracleMembershipProvider, Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342" connectionStringName="OracleServer" /> </providers> </membership> </system.web> </configuration> (Additional attributes on the "add" element omitted for brevity. Also, the connection string is defined for the whole server.) The Oracle.Web.dll file is on the GAC. That's the relevant part of the C:\Windows\Assembly folder: The web site application pool is configured for .NET 2.0, and has 32-bit applications enabled. I have allowed untrusted providers in the IIS' administration.config file (just for the sake of testing, I'll explicitly add the assembly to the trusted providers list later). With all of this setup in place, when I click on the ".NET Users" icon on the IIS manager, I get a warning about the provider having too much privileges, and when I accept I get the following message: There was an error while performing this operation. Details: Could not load file or assembly 'Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. The system cannot find the file specified. So, what am I missing? How can I get the Oracle membership provider to work? Thank you! UPDATE: It seems that the problem is not with IIS itself, but with the IIS administrator only. When using the web site configuration tool provided by Visual Studio, everything works fine.

    Read the article

  • Nameserver not resolving or domain not pingable [closed]

    - by Ricky
    Sorry, if anyone can think of a better title please change it! I want to host my own websites from home. For testing purposes, I have a virtual machine running a trial version of Windows Server 2008 Enterprise. Note that I currently run a VPS and host my own websites there, but due to a nice speed upgrade on our line I now want to host from home. I have several domains, but I want to test with just one: rickyoleary.com. Our ISP does not provide static IP addresses unless we have a business account, so I've been looking at no-ip.com. I admit my networking isn't the best, hence this question, but I've been bashing my head against this all day. I created a host name, muffinbubble.no-ip.org, which points to IP 86.148.124.15. I've set up IIS on the server with a simple test page. I've then forwarded port 80 traffic from the router, and from what I can see it's working. If I access my website (I was unable to link to this for some reason, so please copy and paste it) - http://86.148.124.15/ - I see my test page. So the next step was to create my nameservers. This domain is with namecheap.com, so I created my nameservers, ns1.rickyoleary.com and ns2.rickyoleary.com. Both of these point to the same IP (and yes, that will be changed after testing), the same IP as above: 86.148.124.15. On the server itself I have set up DNS entries (screenshot omitted) which I believe to be correct, and added rickyoleary.com and www.rickyoleary.com in the host headers (or bindings) in IIS 7.0. If I look up my domain, rickyoleary.com, it shows ns1.rickyoleary.com and ns2.rickyoleary.com as the nameservers. I then tried to use just-ping.com on my nameserver ns1.rickyoleary.com. I get 100% packets lost, but the correct IP address is returned (I'm guessing the router does not allow pings but is still accessible...). I get no response when pinging rickyoleary.com. Here are the problems: I cannot ping ns1.rickyoleary.com or ns2.rickyoleary.com from a command prompt. I'm not sure if this is an issue. When I added the nameservers in Windows Server 2008 and clicked 'resolve', a message box appeared stating "No such host is known". I cannot ping rickyoleary.com. rickyoleary.com is not showing my test page on my server. Now - please note, I've waited around 6 hours for propagation. From my experience, although you're told to wait 24 - 48 hours, the changes are normally pretty quick, so perhaps I'm being impatient or naive to think it should all be working before then. I would really appreciate some help here. Thanks.
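    As a quick sanity check that is independent of ping (which the router may simply drop), the operating system's resolver can be asked directly whether the names resolve at all. The Python sketch below is only illustrative: it queries whatever resolver the local machine is configured with, not the authoritative nameserver, so authoritative answers still require nslookup or dig against the nameserver's IP.

    import socket

    def resolve(name):
        # Ask the system resolver for IPv4 addresses; gaierror means the name did not resolve.
        try:
            infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
            return sorted({info[4][0] for info in infos})
        except socket.gaierror as exc:
            print("%s did not resolve: %s" % (name, exc))
            return []

    for host in ("rickyoleary.com", "ns1.rickyoleary.com", "muffinbubble.no-ip.org"):
        print(host, resolve(host))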

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got: C:\Windows\system32>chkdsk /R E: The type of the file system is NTFS. Volume label is Desktop Backup. CHKDSK is verifying files (stage 1 of 5)... 822016 file records processed. File verification completed. 1 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 848938 index entries processed. Index verification completed. 0 unindexed files processed. CHKDSK is verifying security descriptors (stage 3 of 5)... 822016 security descriptors processed. Security descriptor verification completed. 13461 data files processed. CHKDSK is verifying file data (stage 4 of 5)... The disk does not have enough space to replace bad clusters detected in file 239649 of name . The disk does not have enough space to replace bad clusters detected in file 239650 of name . The disk does not have enough space to replace bad clusters detected in file 239651 of name . An unspecified error occurred. (822000 files processed) Yet, when I ran the SeaTools short & long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again, but I'd prefer to avoid waiting 4 hours in the hope that the problem has magically fixed itself. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use and they may claim that the drive is just fine. What should I try to do next? Side frustration: There is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

  • Problem installing Cardbus/PCMCIA drivers (USB 2.0 2-port)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port Cardbus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry, and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (w/drivers from the properties): [Universal Serial Bus controllers] Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512) NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512) NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512) USB Mass Storage Device USB Root Hub (7/1/2001 5.1.2600.5512) (5 more USB Root Hubs - same driver) [Universal Serial Bus controllers] USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1) USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1) When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that Cardbus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device still in the first set. Any help appreciated. (Windows XP Pro SP3 w/all current updates.)

    Read the article

  • Android emulator performance on linux

    - by Rado
    I installed the Android SDK and Eclipse plugin on my laptop, but I was surprised to find out that the emulator eats up 100% of one of my CPU cores. I have exactly the same setup on a desktop machine that does not have this issue. Both computers are running Arch Linux and both were updated yesterday. Granted, the desktop has better hardware than the laptop, but I was expecting to get closer to 50% CPU usage than 100% on the laptop. Both Android virtual devices have the same specs: CPU: ARM Target: Android 2.3.3 - API Level 10 Skin: WVGA800 SD Card: 512M hw.lcd.density: 240 vm.heapSize: 24 hw.ramSize: 256 Laptop host has an Intel Core 2 T7200 @ 2GHz CPU with 2GB RAM. Desktop host has an AMD Phenom II X4 940 @ 3GHz CPU with 8GB RAM. The Android emulator uses only one core, and here are the CPU usage results: Laptop: Cpu0 : 22.8%us, 76.5%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu1 : 11.2%us, 2.4%sy, 0.0%ni, 86.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2055484k total, 1860304k used, 195180k free, 5276k buffers Swap: 2000088k total, 106872k used, 1893216k free, 350780k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2026 xyz 20 0 396m 207m 7192 R 100 10.3 4:11.58 emulator-arm Desktop: Cpu0 : 0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 1.3%us, 0.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 5.0%us, 1.3%sy, 0.0%ni, 91.9%id, 1.7%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 7666324k total, 6506808k used, 1159516k free, 1650960k buffers Swap: 8988348k total, 0k used, 8988348k free, 2867300k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2811 xyz 20 0 392m 220m 6276 S 8 2.9 0:33.58 emulator-arm Is there any way I can improve the emulator performance on the laptop? [UPDATE] I ran the emulator with the same settings on the same laptop under Win7, and after starting up it didn't use 100% of a CPU core, unlike under Linux. Also, when I run the emulator from a terminal in Linux I get this message, which I don't get on the desktop Linux host: Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a fatal error, but for better emulation accuracy type: 'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root. I'm not really familiar with rtc or hpet, but the max-user-freq setting doesn't seem to do anything; I still get the same warning.

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We are running very high-volume traffic on these servers, configured with Django, gunicorn, supervisor and nginx, but I often see 502 errors. So I checked the nginx logs to see what the error was, and this is what is recorded: [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream Can anyone help debug what might be causing this to happen? This is our nginx configuration: sendfile on; tcp_nopush on; tcp_nodelay off; listen 80 default_server; server_name imp.ourapp.com; access_log /mnt/ebs/nginx-log/ourapp-access.log; error_log /mnt/ebs/nginx-log/ourapp-error.log; charset utf-8; keepalive_timeout 60; client_max_body_size 8m; gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json; location / { proxy_pass http://unix:/tmp/gunicorn-ourapp.socket; proxy_pass_request_headers on; proxy_read_timeout 600s; proxy_connect_timeout 600s; proxy_redirect http://localhost/ http://imp.ourapp.com/; #proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-Forwarded-Proto $my_scheme; #proxy_set_header X-Forwarded-Ssl $my_ssl; } We have configured Django to run in gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers: /home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application This is what the gunicorn.conf.py looks like: import multiprocessing bind = 'unix:/tmp/gunicorn-ourapp.socket' workers = multiprocessing.cpu_count() * 3 + 1 timeout = 600 graceful_timeout = 40 Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 59481 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 50000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
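    For what it's worth, EAGAIN (11: Resource temporarily unavailable) on the upstream socket usually means the socket's listen queue was full at the moment nginx tried to connect. One knob often involved is gunicorn's backlog setting, together with the kernel's net.core.somaxconn. The sketch below is illustrative only - it is not the poster's actual file, and the values are assumptions to be tuned against real traffic.

    # gunicorn.conf.py - illustrative sketch only; values are assumptions
    import multiprocessing

    bind = 'unix:/tmp/gunicorn-ourapp.socket'
    workers = multiprocessing.cpu_count() * 3 + 1
    # gunicorn's default backlog is 2048, but the kernel silently caps it at
    # net.core.somaxconn (often 128), so raising that sysctl can matter more
    # than this setting. Bursts queue here instead of failing immediately.
    backlog = 2048
    timeout = 600
    graceful_timeout = 40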

    Read the article

  • Trailing dots in url result in empty 404 page on IIS

    - by Peter Hahndorf
    I have an ASP.NET site on IIS8, but IIS7.5 behaves exactly the same. When I enter a URL like mysite.com/foo/bar.. I get an error page (screenshot omitted) with a '500 Internal Server Error' status code, even though I have custom error pages set up for 500 and 404 and I don't see anything wrong with my custom error page. In my web.config system.web node I have the following: <customErrors mode="On"> <error statusCode="404" redirect="/404.aspx" /> </customErrors> If I remove that section, I get a 404.0 response back, but the page itself is blank. In web.config system.webServer I have: <httpErrors errorMode="DetailedLocalOnly"> <remove statusCode="404" subStatusCode="-1" /> <error statusCode="404" prefixLanguageFilePath="" path="404.html" responseMode="File" /> </httpErrors> But whether that is there or not, I get the same blank 404.0 page rather than my expected custom error page, or at least an internal IIS message. So, first of all, why is the ASP.NET handler picking up a request for '..'? (The same happens with one or more trailing dots.) If I remove the following handler from applicationHost.config: <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" responseBufferLimit="0" /> I get my expected custom 404 page, but of course removing that handler breaks routing in ASP.NET, among other things. Looking at the failure trace (screenshot omitted): Windows Authentication is disabled for the site, so why is that module even in the request pipeline? For now my fix is to use the URL Rewrite module with the following rule: <rewrite> <rules> <rule name="Trailing Dots" stopProcessing="true"> <match url="\.+$" /> <action type="Rewrite" url="/404.html" appendQueryString="false" /> </rule> </rules> </rewrite> This works okay, but I wonder why IIS/ASP.NET behaves this way?
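    One way to reproduce this outside a browser (browsers sometimes normalise trailing dots before sending the request) is to issue the raw request yourself. The Python sketch below is purely illustrative; 'mysite.com' is the placeholder host name from the question, and http.client is used because it sends the path verbatim.

    # Illustrative repro only - substitute the real host name.
    import http.client

    conn = http.client.HTTPConnection("mysite.com", 80, timeout=10)
    conn.request("GET", "/foo/bar..")    # trailing dots are sent as-is
    resp = conn.getresponse()
    print(resp.status, resp.reason)      # e.g. 404 in the case described
    print(len(resp.read()))              # 0 would indicate the blank 404.0 page
    conn.close()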

    Read the article

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + passenger to Ruby 1.9.1 + Nginx + passenger. I have put together the following all-in-one install script, and it raises an error. Here is the installation script (a basic one, with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box): Nginx sources cd /tmp wget http://nginx.org/download/nginx-0.7.66.tar.gz tar xzf nginx-0.7.66.tar.gz cd nginx-0.7.66 openssl required for SSL/TLS sudo apt-get install openssl sudo apt-get install libssl-dev Compilation stuff sudo apt-get install zlib1g-dev Ruby interpreter 1.9.1 sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1 Make sure default ruby uses version 1.9.1 sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1 sudo update-alternatives --config ruby Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14) sudo gem install passenger Activate Passenger in nginx, select option 2 to use nginx sources downloaded above cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin sudo ./passenger-install-nginx-module And this is the error message I got: /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx/StaticContentHandler.o \ /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler': /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: 'ngx_http_request_t' has no member named 'zero_in_uri' make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1 make[1]: Leaving directory `/tmp/nginx-0.7.66' make: *** [build] Error 2 -------------------------------------------- It looks like something went wrong Please read our Users guide for troubleshooting tips: /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html I do not understand the reason for this error. Is this a compatibility problem? I hope you have some clues :) Thanks a lot, Luc

    Read the article

< Previous Page | 787 788 789 790 791 792 793 794 795 796 797 798  | Next Page >