Search Results

Search found 62712 results on 2509 pages for 'memory error'.


  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080. The proxy server received an invalid response from an upstream server; the proxy server could not handle the request GET /. My dilemma: everything works fine normally (fast requests, and requests lasting a few seconds or a few tens of seconds are processed OK). Problems occur when request processing takes a long time (a few minutes?). If I issue the request directly to Jetty at port :8080, it is processed OK, so the problem most likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration, any suggestions?

        #keepalive Off ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1 ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1 ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1 ## I have tried this, does not help
        KeepAlive 20 ## I have tried this, does not help
        KeepAliveTimeout 600 ## I have tried this, does not help
        ProxyTimeout 600 ## I have tried this, does not help
        NameVirtualHost *:80
        <VirtualHost _default_:80>
          ServerAdmin [email protected]
          ServerName www.mydomain.fi
          ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com
          ProxyRequests On
          ProxyVia On
          <Proxy *>
            Order deny,allow
            Allow from all
          </Proxy>
          ProxyRequests Off
          ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
          ProxyPassReverse / http://www.mydomain.fi:8080/
          RewriteEngine On
          RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
          RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
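
    Worth noting from the log excerpt: the request is logged at 20:15:40 and the proxy error fires at 20:20:40, exactly five minutes later, so something in the chain is cutting the connection at 300 seconds despite the 600-second settings above. A minimal sketch of the directives usually involved (values illustrative, not a confirmed fix for this setup; Jetty 6's connector maxIdleTime sits at the other end of the same chain and is worth checking too):

        # Apache 2.2 side: raise the global and per-worker read timeouts
        # past the longest expected request, and don't retry slow workers
        Timeout 1800
        ProxyTimeout 1800
        ProxyPass / http://www.mydomain.fi:8080/ timeout=1800 retry=0
        ProxyPassReverse / http://www.mydomain.fi:8080/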

  • Error when adding to the domain: the specified server cannot perform the requested operation

    - by James
    When we add computers to the domain in Windows 7, we get the error:

        Changing the Primary Domain DNS name of this computer to "" failed. The name will remain "domain.com". The error was: The specified server cannot perform the requested operation.

    This happens on multiple computers, and retrying yields the same result. Despite the error, the computer is still able to log in to the domain OK. The DCs are Windows 2003. Has anyone found a way to get rid of this error? Any help is appreciated.

  • cURL connection error (60): SSL certificate pr0blem on Windows

    - by Cheeso
    When I try to use curl on Windows to retrieve an https URL, I get the dreaded "connection error (60)". The exact error message is:

        curl: (60) SSL certificate problem, verify that the CA cert is OK.
        Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

    How do I resolve this? PS: I spelled the word "Problem" wrong in the title purposefully, because superuser refuses to accept the question with the correct spelling.
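
    A sketch of the two standard ways out, per the sslcerts page curl itself links to (paths are placeholders): either hand curl a CA bundle, or, for testing only, turn verification off.

        # point curl at a CA bundle (curl's site provides an extracted
        # Mozilla bundle as cacert.pem)
        curl --cacert C:\certs\cacert.pem https://example.com/
        # or, for quick testing ONLY -- this disables verification entirely
        curl -k https://example.com/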

  • PHPMyAdmin - Error 500

    - by christian.thomas
    Have scoured the board but can't seem to find anything that's helped yet. If I go to http://localhost/ it's fine; if I go to http://localhost/phpmyadmin I get an "Error 500: Internal Server Error". There doesn't seem to be anything that shows up in the log files either. I've tried the RewriteLog as mentioned in "PHPMyAdmin 500 Internal Server Error", but that doesn't really seem to help either; nothing gets written to it when I've got:

        # Logfiles
        ErrorLog /home/www/beta.**.com/logs/error.log
        CustomLog /home/www/beta.**.com/logs/access.log combined
        RewriteLog /home/www/beta.**.com/logs/rewrite.log
        RewriteLogLevel 9

    I've tried uninstalling the package and re-installing it, but that's not helped either. Anyone got any other suggestions? I'm running Debian and Apache 2.
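
    A generic first step when a 500 leaves no trace, sketched for a stock Debian package install (the paths below are the Debian defaults, an assumption about this box): make PHP report what died instead of letting Apache swallow it.

        # watch the global log while reproducing -- vhost logs can miss
        # errors thrown before the vhost config applies
        tail -f /var/log/apache2/error.log
        # quick syntax check of the entry point
        php -l /usr/share/phpmyadmin/index.php
        # temporarily, in /etc/php5/apache2/php.ini, then restart Apache:
        #   display_errors = On
        #   error_reporting = E_ALL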

  • end_request: I/O error, dev sda, sector xxxxxxxxx

    - by muruga
    I have an IBM server. This server contains three hard disks in RAID 5. It was working fine earlier; unfortunately the machine then produced the following error messages. I rebooted the system, and since then I am getting these messages in kern.log and dmesg:

        kernel: [65896.678870] end_request: I/O error, dev sda, sector 17430271
        kernel: [69263.783957] sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK
        kernel: [69263.783957] sd 0:0:0:0: [sda] Sense Key : Hardware Error [current]
        kernel: [69263.783957] sd 0:0:0:0: [sda] Add. Sense: Internal target failure

    Is this a kernel problem, a hard disk problem, or a RAID problem?
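
    A sense key of "Hardware Error / Internal target failure" is reported by the device itself, which argues against a kernel problem. A diagnostic sketch, assuming smartmontools is installed and the RAID controller exposes the member disks (ServeRAID-type controllers may need a -d controller option or the vendor tool instead):

        # overall health plus the drive's error and self-test logs
        smartctl -H -a /dev/sda
        # kick off a short self-test without taking the array down
        smartctl -t short /dev/sda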

  • MySQL "max_allowed_packet" error on a GoDaddy VPS

    - by focusmantra
    I am getting the error:

        Serious session error detected. Please notify administrator, this problem is most probably caused by small value in max_allowed_packet MySQL setting.

    This error generally comes every 20-25 minutes, and when it does it logs the user out and in again; after some time the same issue occurs. I tried changing the max_allowed_packet setting but got "access denied; You need SUPER privilege for this operation". I even tried SET SESSION, but got "SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value". I have hosted the website on a GoDaddy VPS (CentOS) and access it via PuTTY or cPanel. The website is built on Moodle 2.0.3, i.e. PHP. My developers used to fix this but warned it would recur when the server restarts. As the GoDaddy people say, I could move to a dedicated server, but I have no money for that at present. I am trying to find out how the developers applied the temporary fix, the one that lasts until a server restart.
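
    A sketch of both routes, which matches the "until restart" behaviour described: SET GLOBAL takes effect immediately but is lost on restart (and needs SUPER, i.e. the MySQL root user), while the my.cnf route survives restarts but needs root on the VPS and a mysqld restart. The 64M value is illustrative.

        -- temporary, as the MySQL root user; lost on mysqld restart
        SET GLOBAL max_allowed_packet = 67108864;

        # persistent: in /etc/my.cnf, then restart mysqld
        # [mysqld]
        # max_allowed_packet = 64M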

  • Codecs, Premiere Pro & Quicktime: Import or Play Error

    - by Nchpmn
    Original Question
    I've been using a FS-H200 (not the Pro variant) recorder with a JVC ProHD camera. I have been shooting with the DTE FORMAT to QuickTime (.mov). I copied the files to an external hard drive and am now trying to edit. The files play back in VLC, as they would be expected to. However, they will not import into Adobe Premiere CS5.5, instead giving an error:

        Unsupported format or damaged file.

    QuickTime gives the following error when attempting to play the files:

        Error -2002: a bad public movie atom was found in the movie (Filename)

    To try and fix this, I have installed the following codec packs:

        K-Lite Codec Pack 64-bit Full (version 5.9, latest)
        K-Lite Codec Pack 32-bit Full (version 8.4, latest)
        MainConcept Codec Suite (Broadcast) v5.1 for Adobe CS5
        Reinstalled QuickTime with a new download from Apple

    The same errors and problems still exist. From this I can assume that there is an issue with QuickTime, which is what Premiere uses as an encoder/decoder for this codec. Is there any way to fix this? From looking at the "Codec Information" in VLC:

        Stream 0: Type: Video; Codec: MPEG-1/2 (mpgv); Language: English; Resolution: 1280 x 720; Frame Rate: 25
        Stream 1: Type: Audio; Codec: PCM S16 BE (twos); Language: English; Channels: Stereo; Sample Rate: 48000 Hz; Bits per sample: 16

    Other computer specs: Windows 7 Professional 64-bit (SP1), Gigabyte Z68X-UD3-B3, Intel i7-2600K, 16GB DDR3, 2TB WD 7200RPM SATA 6Gb/s, LaCie d2 Quadra 2TB v3 7200RPM (external HDD), NVIDIA GeForce GTX 560 Ti Golden Sample

    Updates
    2012-03-11 @ 2050 AEDT: MPEG Streamclip doesn't recognise, play or convert the footage. "File open error: unrecognised file type. [Open Anyway]" "File open error: can't find video or audio tracks." 
    2012-03-24 @ 1920 AEDT: Had to transcode the footage. :(
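
    Since the streams themselves (MPEG-2 video, PCM audio) are apparently intact and only the container's movie atom is bad, a rewrap is worth trying before a full transcode. A sketch with ffmpeg, which is not what the poster used and only works if ffmpeg can parse the file at all; filenames are placeholders:

        # rewrap only: copies the streams untouched and writes a fresh,
        # valid MOV header around them
        ffmpeg -i clip.mov -c copy rewrapped.mov
        # if Premiere still refuses the rewrap, re-encode into a fresh
        # file (-q:v 2 is near-transparent quality for MPEG-2)
        ffmpeg -i clip.mov -c:v mpeg2video -q:v 2 -c:a pcm_s16le fixed.mov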

  • Windows 7 - Windows 8 dual boot installation error

    - by Nikhil
    I am trying to dual boot Windows 8 with Windows 7, but I keep getting an error while selecting the drive to install on. The error is:

        Setup was unable to create a new system partition or locate an existing system partition. See the Setup log files for more information.

    What might be the problem? Previously I tried to install only Windows 7 on my HDD. I downloaded the ISO from Digital River and made a bootable USB; I got the same error as above. But when I tried to install via another USB, which had a pirated Windows 7 downloaded from torrents, it gave no error. My system config: Motherboard - Gigabyte G41 MT S2P; HDD - 160 GB SATA; RAM - 8 GB. Help!
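
    A common workaround for this particular Setup message is to recreate the partition layout from inside the installer. This wipes the selected disk, so it only fits if nothing on it needs to survive; a DiskPart script sketch (press Shift+F10 at the drive-selection screen for a command prompt):

        rem prep.txt -- run with: diskpart /s prep.txt
        rem WARNING: "clean" destroys everything on the selected disk
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit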

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I did:

        Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
        Created a new php5-fpm pool and a server block on nginx
        Changed the owner of all the files inside phpmyadmin/
        Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new PHP file in the same directory to produce an error and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;
            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }
            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them to be on?
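
    A couple of chroot-specific things to check, sketched against the pool above (none confirmed as the cause here): with chroot set, every path in the pool is resolved inside /var/vhosts/phpmyadmin/ once the worker starts, so the relative error.log may be unwritable and /tmp must exist inside the chroot for sessions.

        ; sketch, in the [phpmyadmin] pool -- paths are inside the chroot
        php_admin_value[error_log] = /tmp/php-error.log
        catch_workers_output = yes    ; forward worker stderr to the master log

    Then make sure tmp/ exists inside the chroot and is writable by www-data; phpMyAdmin exits very early if session_start() fails, which would match a blank 500 with nothing logged.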

  • Application runs for an admin-privileged user but produces an error for Administrator

    - by tough
    I have an application which an admin-privileged user can run without error, but when Administrator runs it, it produces an error: "The input string was not in a correct format". I have tried to figure it out without success. Most people want admin privileges in order to run something; I am asking why an admin can't run this program without errors. The program is related to MS SQL Server 2008 R2, and the SQL login settings are the same for both users.

  • Error while loading shared libraries: DICOM Store SCU / Echo SCU

    - by David Just
    I am running a DICOM receiver on a CentOS 6 box on top of a Xen server. If I attempt to send data to it from a remote server I get the following error:

        storescp: relocation error: /lib/libresolv.so.2: symbol memcpy, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

    If I send data to the server locally it works, but sending to it from remote gives the above error. I do not think this is a problem specific to the storescp server.
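
    A relocation error like this usually means the process is mixing libraries from two different glibc builds, often via LD_LIBRARY_PATH or a bundled lib directory; that would also explain why only remote sends fail, since hostname resolution is what drags libresolv in. A first diagnostic sketch, assuming storescp is on the PATH:

        # see which libc/libresolv the binary actually resolves to
        ldd $(which storescp)
        # check for an environment override
        echo $LD_LIBRARY_PATH
        # confirm both libraries come from the same glibc package
        rpm -qf /lib/libresolv.so.2 /lib/libc.so.6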

  • Failing RAM, or something else?

    - by Thanatos
    I have an IBM Thinkpad T43, currently running Windows XP. Programs were crashing, XP was blue-screen-of-deathing (more than usual) - it was basically unusable, but I couldn't get any informative error out of XP. I booted Ubuntu off a thumbdrive, which made it to the desktop, but as soon as I started to try to do anything, X segfaulted, along with several other services, followed quickly by kernel warnings and a kernel panic. I'm currently running Memtest86+ on this machine, which is spitting out numerous errors (16k over 3 passes, and counting). The failing areas are numerous, and look something like this: 0001055da4 - XX.X MB, etc. The addresses that fail seem to cluster around 0-20 MB, 250 MB, and, more rarely, 750 MB, 1000 MB, and 1200 MB. However, a lot (but not all) of the failing addresses I've seen end in XXXXXXX?da4, where the ? is a 1 or a 5. The machine has two sticks of RAM, one 512 MB and one 1024 MB, with the 512 MB stick mapped to the lower addresses and the 1024 MB stick following. Is this indeed RAM failure, or should I consider other things before purchasing more RAM?

  • Is there a way to enable 4 GB RAM in a 32-bit Windows OS?

    - by Wahid Bitar
    I upgraded my PC to 4 GB RAM and get only 3 GB: Windows 7 32-bit sees that I have 4 GB of RAM but won't use more than 3 GB. Someone told me that 32-bit Windows doesn't support more than 3 GB of RAM. So, is there any way to make my OS (Windows 7 32-bit) support more than 3 GB of RAM? Note: I can't move to 64-bit because I have many programs that don't work on a 64-bit OS. Edit: I tried what Mr. Wonsungi advised, but whenever I check the option "Enable support for 4 GB of RAM" I get the following error: "Cannot access the registry key HKEY_CLASSES_ROOT\CLSID\{E88DCCE0-11d1-A9F0-00AA0060FA31}." There is no such CLSID entry in my registry; I don't know why!

  • Configuring Corporate Windows Error Reporting On Windows 7

    - by Clément
    Is there any good documentation out there explaining how to set up Corporate Error Reporting (CER) on Windows 7? I found some information in Advanced Windows Debugging, but the book targets Windows XP and things have changed quite a bit since then. I could not find any tutorials on the Internet/MSDN either. To give a bit of background, I work for a company with 25 employees, and I would like to send crash reports to a local server so that I can analyze what causes our tools to crash. I think I need to know two things:

        1. Setting up a Corporate Error Reporting server.
        2. Setting up computers to send error reports to our Corporate Error Reporting server.
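
    On Windows 7 the client side is plain WER policy rather than the XP-era CER client, normally pushed through Group Policy (Windows Components > Windows Error Reporting > Advanced Error Reporting Settings). A sketch of the underlying registry values; the server name and port here are placeholders, not defaults:

        Windows Registry Editor Version 5.00

        ; placeholders: werserver.example.local, SSL on, port 443
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting]
        "CorporateWerServer"="werserver.example.local"
        "CorporateWerUseSSL"=dword:00000001
        "CorporateWerPortNumber"=dword:000001bb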

  • libkeybinder build error on Arch Linux

    - by Nihar Sarangi
    I am trying to install a package that depends on python-keybinder and hence on libkeybinder. When I run makepkg for libkeybinder, it starts, and after some time I get the following error:

        checking for XkbQueryExtension... no
        configure: error: Could not find XKB
        ==> ERROR: A failure occurred in build().
        Aborting...
        ==> ERROR: Makepkg was unable to build libkeybinder.

    I tried checking the PKGBUILD but wasn't able to find anything. :( Is there a workaround?
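
    A guess, not verified against this PKGBUILD: XkbQueryExtension lives in libX11's XKB support, so this configure check usually fails when the X11 client library and headers aren't present. A sketch:

        # install the X11 client library (headers included on Arch) and retry
        pacman -S --needed libx11
        makepkg -s    # -s also pulls in declared makedepends via pacman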

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager - gparted - and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago. This is what htop says about it now (2012-11-06_1900):

        PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
        3704 root 39  19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 , where they write that it's a good idea to check whether the disk is just that slow, because maybe it's damaged. I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
        Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
        Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
        multcount = 0 (off)
        readonly = 0 (off)
        readahead = 256 (on)
        geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                 14,24 47,81  14,63    0,95   0,00 22,37
        Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
        sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    Around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3-terabyte disk, that would give me 134 days to go, if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the lseek offsets before the reads, like 1419860619264, are already a lot bigger - standing for 1.29 terabytes if the numbers are bytes - so it doesn't seem to be linear progress on a big scale. Maybe there are only some areas that need work, with big gaps in between. (Times are in CET.)
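
    If the check has to be restarted anyway, a progress indicator answers the "will it ever finish" question far more directly than strace offsets. A sketch, using the same flags gparted passed plus e2fsck's own progress option:

        # -C 0 prints a per-pass percentage bar to stdout;
        # time(1) records how long the run actually takes
        sudo time e2fsck -C 0 -f -y -v /dev/sdb1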

  • File copy error from system to CIFS mount

    - by dwpriest
    When copying a file greater than 64 kB from an Ubuntu server to a CIFS-mounted Windows share, most of the data is copied, but it seems the last chunk doesn't get copied: the sizes don't match, and the MD5 checksums don't match. I have plenty of file space, but when I use cp, I get the following:

        cp: closing `cloudBackup/asdf.txt': No space left on device

    Using rsync, I get the following:

        rsync: close failed on "/home/fluffy/cloudBackup/.asdf.txt.qrBWe6": No space left on device (28)
        rsync error: error in file IO (code 11) at receiver.c(752) [receiver=3.0.8]
        rsync: connection unexpectedly closed (29 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.8]

    I have full read/write permissions on the mounted share. I can copy via SSH just fine. Any ideas? Thank you
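
    Worth ruling out first: on a CIFS mount, ENOSPC at close() normally reflects the server side - the share being full or a per-user quota on the Windows box - rather than the local disk, which would also explain why SSH (writing elsewhere) works. A quick check, path taken from the error above:

        # df on the mount point reports the *server's* free space as the
        # client sees it; compare with the Windows-side quota for that user
        df -h /home/fluffy/cloudBackup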

  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial: HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says:

        error: The requested URL returned error: 500 while accessing

    The Apache error.log shows this entry:

        [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs

    It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module:

        <VirtualHost *:80>
            ServerName my.server.at
            DocumentRoot "/var/www/my.server.at/public"
            PerlLoadModule Apache::Redmine
            <Directory "/var/www/my.server.at/public">
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER"
            SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend
            <Location />
                Order allow,deny
                Allow from all
                AuthType Basic
                AuthName Git
                Require valid-user
                AuthBasicAuthoritative Off
                AuthUserFile /dev/null
                AuthGroupFile /dev/null
                PerlAccessHandler Apache::Authn::Redmine::access_handler
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler
                RedmineDSN "DBI:mysql:database=redmine;host=localhost"
                RedmineDbUser "user"
                RedmineDbPass "password"
                RedmineGitSmartHttp yes
            </Location>
        </VirtualHost>

    Can someone help me please and explain the error and what I can do to solve my problem?
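
    A safe first check, not a confirmed fix: "couldn't check user. Check your authn provider!" is Apache saying the Basic-auth provider chain declined to handle the user, which can happen when the Perl authen handler never actually registers. Worth confirming mod_perl is loaded and retrying the clone at debug log level:

        apache2ctl -M | grep perl     # perl_module should be listed
        a2enmod perl                  # Debian: enable mod_perl if missing
        # temporarily set "LogLevel debug" in the vhost, then:
        tail -f /var/log/apache2/error.log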

  • Error when I try to connect to a SQL Server 2005 from the internet

    - by Manish
    My SQL Server is on a local machine, and I want to access it over the internet. I created a website through which I want to connect to the local SQL Server 2005. This is the error message:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Thanks for a reply!
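
    Error 26 is specifically about resolving a named instance, so the usual checklist is: enable TCP/IP for the instance in SQL Server Configuration Manager, start the SQL Server Browser service, and open the ports. A sketch using the 2005-era netsh syntax; service and port names are the defaults, an assumption about this machine:

        rem let clients resolve the named instance
        sc config SQLBrowser start= auto
        net start SQLBrowser
        rem open the engine's TCP port and the Browser's UDP port
        netsh firewall add portopening TCP 1433 SQLServer
        netsh firewall add portopening UDP 1434 SQLBrowser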

  • Session cache and tmp folder error on cPanel CentOS

    - by Danialzo
    One of my clients has come across multiple breakdowns in their websites with the following errors:

        Warning: session_start() [function.session-start]: open(/tmp/sess_1d6616afe1b8a0d91a8d9ec29254b453, O_RDWR) failed: No space left on device (28) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/index.php on line 177
        Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/currency.php on line 45
        Notice: Error: Can't create/write to file '/tmp/#sql_4c5_0.MYI' (Errcode: 28) Error No: 1

    They are experiencing this problem while: nothing has been recently changed on the server; /tmp and the other folders are no more than 10% full; and the error comes and goes. I would really appreciate it if anyone could guide me through. Thanks in advance
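
    Errno 28 (ENOSPC) can fire on inodes as well as blocks, and on cPanel servers /tmp is commonly a small loopback filesystem (/usr/tmpDSK) that session files can fill even when the main disks look nearly empty. A quick sketch:

        df -h /tmp        # block usage of the (possibly loopback) /tmp
        df -i /tmp        # inode usage -- thousands of sess_* files add up
        ls /tmp | wc -l
        # if it is the cPanel tmpDSK, it can be rebuilt larger with cPanel's
        # securetmp script after cleaning out stale session files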

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server:

        Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I cannot find the cause: most of the time it works, but every week, for some hours, I get this error. I googled, but there seem to be only users who have this error permanently; in this case, it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. I also sometimes get the error:

        File './database/table.MYD' not found (Errcode: 24)

    It occurs very rarely, but it occurs for some hours once a week, sometimes on multiple days, and then the problem suddenly disappears again. I have checked the open files limit; it's 2048 and should be absolutely enough. I also tried increasing the number of open files anyway, but with no effect. I thought perhaps the process does not close some tables, but this is impossible, because after a while everything is OK again and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 opened files. I cannot explain that; after a while it's 129. I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host via the command-line tool "mysql" and via the MySQL C connector. The MySQL server is version 5.0.67.
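
    Errcode 24 is EMFILE, "too many open files", and the limit that matters is the one mysqld itself runs with, not the shell's; mysqld also holds file descriptors per table-cache entry, so bursts of concurrent tables can exhaust 2048 even when only ~100 tables are logically open. A sketch of how to check and raise it:

        # what limit is mysqld actually running with?
        cat /proc/$(pidof mysqld)/limits | grep 'open files'
        # decode the errno, just to confirm
        perror 24
        # raise it persistently in my.cnf, then restart mysqld:
        #   [mysqld]
        #   open_files_limit = 8192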

  • XP stop error 0x50: page fault in non-paged area

    - by Tony
    I have a laptop that fails to boot with the BSOD error:

        page_fault_in_non_paged_area
        STOP: 0x00000050 (0xEC6B738D, 0x00000000, 0x8649308C, 0x00000000)

    The laptop has two memory DIMMs. I removed each DIMM one at a time, and the error remained with just one DIMM installed. I have run SpinRite 6.0 on the hard drive; no errors found. I booted to recovery mode and ran CHKDSK /R; it found and fixed errors, but I still get the stop error. Any other suggestions to try?

  • ssl_error_rx_record_too_long error on IIS - site was working, suddenly stopped

    - by JK01
    I am suddenly getting this error connecting to localhost IIS on my development machine. It had been working fine for ages, and now suddenly produces this error in Firefox:

        Secure Connection Failed
        An error occurred during a connection to localhost. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)

    I have googled and found no clear explanation. In IE it says: "Internet Explorer cannot display the webpage". In Chrome it says: "Oops! This link appears to be broken."
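
    This Firefox code almost always means the port answered with something other than TLS - typically plain HTTP on 443, e.g. after an IIS binding or certificate change. Two quick checks, sketched and not a confirmed diagnosis for this machine (the first assumes an openssl binary is available):

        rem talk TLS to the port directly; a plain-HTTP answer shows up
        rem immediately as a handshake failure with readable HTML in it
        openssl s_client -connect localhost:443
        rem list the certificate bindings IIS has registered on port 443
        netsh http show sslcert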

  • Error 18446744073709551615 when running iptables in OpenVZ container

    - by xsaero00
    This is related to a question I asked before. Now I am getting a different error:

        iptables: Unknown error 18446744073709551615

    when trying to apply a simple rule in a VZ container:

        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    I have done everything that was suggested on the hardware node and in the container, but the error persists. On the hardware node, /etc/sysconfig/iptables-config:

        IPTABLES_MODULES="ip_conntrack_netbios_ns ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

    /etc/vz/vz.conf:

        IPTABLES="ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

    /etc/rc.local:

        modprobe xt_tcpudp; modprobe ip_conntrack; modprobe xt_state

    Container config:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ipt_state iptable_nat ip_nat_ftp"

    I have restarted the HN and the container numerous times, but the error is still there. It seems like all the config is in place, but something - like a lack of some resource - is preventing the rule from being applied. Thanks for any help.
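
    One observation and a guess: 18446744073709551615 is just -1 printed as an unsigned 64-bit value, and none of the module lists above include the REDIRECT target (ipt_REDIRECT on RHEL5-era OpenVZ kernels; named xt_REDIRECT on newer ones). A sketch, assuming that module exists on the hardware node:

        # on the hardware node
        modprobe ipt_REDIRECT
        lsmod | grep -i redirect
        # then add ipt_REDIRECT to IPTABLES= in /etc/vz/vz.conf and in the
        # container's config, and restart the container so it picks it up
        vzctl restart <CTID>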
