Search Results

Search found 8959 results on 359 pages for 'bad decisions'.


  • Bash script: bad interpreter

    - by Quandary
    Question: I get the error message "export: bad interpreter: No such file or directory" when I execute this bash script:

        #!/bin/bash
        MONO_PREFIX=/opt/mono-2.6
        GNOME_PREFIX=/opt/gnome-2.6
        export DYLD_LIBRARY_PATH=$MONO_PREFIX/lib:$DYLD_LIBRARY_PATH
        export LD_LIBRARY_PATH=$MONO_PREFIX/lib:$LD_LIBRARY_PATH
        export C_INCLUDE_PATH=$MONO_PREFIX/include:$GNOME_PREFIX/include
        export ACLOCAL_PATH=$MONO_PREFIX/share/aclocal
        export PKG_CONFIG_PATH=$MONO_PREFIX/lib/pkgconfig:$GNOME_PREFIX/lib/pkgconfig
        PATH=$MONO_PREFIX/bin:$PATH
        PS1="[mono-2.6] \w @ "

    But the bash path seems to be correct:

        asshat@IS1300:~/sources/mono-2.6# which bash
        /bin/bash
        asshat@IS1300:~# cd sources/
        asshat@IS1300:~/sources# cd mono-2.6/
        asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment
        export: bad interpreter: No such file or directory
        asshat@IS1300:~/sources/mono-2.6# ls
        download  mono-2.4  mono-2.4-environment  mono-2.6  mono-2.6-environment
        asshat@IS1300:~/sources/mono-2.6# cp mono-2.6-environment mono-2.6-environment.sh
        asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment.sh
        export: bad interpreter: No such file or directory
        asshat@IS1300:~/sources/mono-2.6# ls
        download  mono-2.4  mono-2.4-environment  mono-2.6  mono-2.6-environment  mono-2.6-environment.sh
        asshat@IS1300:~/sources/mono-2.6# bash mono-2.6-environment
        asshat@IS1300:~/sources/mono-2.6#

    What am I doing wrong? Or is this a Lucid bug? (I did chmod +x.)
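
    This particular "bad interpreter" message almost always means the script file carries DOS (CRLF) line endings, so the kernel looks for an interpreter whose name ends in a carriage return. A quick check and fix, as a sketch (assuming the file sits in the current directory):

        file mono-2.6-environment          # reports "with CRLF line terminators" if affected
        sed -i 's/\r$//' mono-2.6-environment   # strip the carriage returns in place

    Note also that a script which only sets environment variables must be sourced (". ./mono-2.6-environment") to affect the current shell; executing it runs a subshell whose variables vanish on exit.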

    Read the article

  • Bad value for type timestamp on production server

    - by Juan Javaloyes
    I'm working with seam 2.2.2 + hibernate + richfaces + jboss 5.1 + postgres. I have a module which needs to load some data from the database. Easy. The problem is, in development it works fine, 100%, but when I deploy on my production server and try to get the data, an error arises:

        could not read column value from result set: fechahor9_504_; Bad value for type timestamp : [C@122e5cf
        SQL Error: 0, SQLState: 22007
        Bad value for type timestamp : [C@122e5cf
        javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not execute query
        [more errors]
        Caused by: org.postgresql.util.PSQLException: Bad value for type timestamp : [C@122e5cf
            at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:232)
        [more errors]
        Caused by: java.lang.NumberFormatException: Trailing junk on timestamp: ''
            at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:226)

    I can't understand why it works on my machine (development) and not in production. Any clues? Has anyone run into the same problem? It is exactly the same compilation.
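
    The "[C@122e5cf" in the message is Java's default toString() for a char[] array: the PostgreSQL JDBC driver received a character array where it expected a timestamp string. A mismatch between the driver jar bundled on the production JBoss and the one used in development is a common culprit; a first check, as a sketch (the jar locations shown are assumptions):

        # compare the postgresql JDBC jars the two environments actually load
        find /opt/jboss-5.1 -name 'postgresql*.jar'
        unzip -p /path/to/postgresql-X.Y.jar META-INF/MANIFEST.MF | grep -i version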

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying php requests via FastCGI to PHP-FPM for processing. I will randomly receive 502 Bad Gateway error pages; I can reproduce the issue by clicking around my PHP websites very rapidly or refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. Here is my setup:

        nginx/0.7.64
        PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04)
        Ubuntu 9.10 (Latest 2.6 Paravirt)

    I compiled PHP-FPM using this ./configure directive:

        ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d \
            --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl \
            --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex \
            --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers \
            --with-gd --with-mcrypt --enable-memcache --with-mhash \
            --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql \
            --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql \
            --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy \
            --with-xmlrpc --with-xsl

    My php-fpm.conf looks like this (the relevant parts):

        ...
        <value name="pm">
        <value name="max_children">3</value>
        ...
        <value name="request_terminate_timeout">60s</value>
        <value name="request_slowlog_timeout">30s</value>
        <value name="slowlog">/var/log/php-fpm.log.slow</value>
        <value name="rlimit_files">1024</value>
        <value name="rlimit_core">0</value>
        <value name="chroot"></value>
        <value name="chdir"></value>
        <value name="catch_workers_output">yes</value>
        <value name="max_requests">500</value>
        ...

    I've tried increasing max_children to 10 and it makes no difference. I've also tried setting pm to 'dynamic' with max_children at 50 and start_servers at 5, without any difference. I have tried using both 1 and 5 nginx worker processes.
    My fastcgi_params conf looks like:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:

        [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:

        [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
        [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
        [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
        [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
        [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue, and free memory (free -m) is 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my php requests as well. Does anyone know how to fix this?
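
    The "(111: Connection refused)" is the telling part: at that instant nothing was accepting connections on 127.0.0.1:9000. That usually means the FPM master was mid-restart or respawning children; note that the log excerpt above shows full startup notices, and max_requests 500 recycles workers under exactly this kind of rapid clicking. A first diagnostic pass, as a sketch (assuming the paths from the config above):

        watch -n1 'netstat -lnp | grep :9000'   # does the listener disappear when a 502 occurs?
        tail -f /var/log/php-fpm.log            # look for children or the master restarting around each 502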

    Read the article

  • Bad interpreter: No such file or directory

    - by user1497462
    I'm working through Michael Hartl's tutorial trying to learn Rails for the first time, and I've run into some issues. I recently reinstalled the whole Rails Installer because I had apparently inadvertently deleted some important files. Now, when I try running a test, I get the following error:

        sh.exe": /c/Program Files (x86)/ruby-1.9.3/bin/bundle: "c:/Program: bad interpreter: No such file or directory

    I checked my PATH and attempted to use the solution outlined here: Bundle command not found. Bad Interpreter. But putting quotation marks around "C:\Program Files (x86)\ruby-1.9.3\bin" didn't do anything for me. I ran rails -v and got the following output:

        $ rails -v
        Could not find multi_json-1.3.6 in any of the sources
        Run `bundle install` to install missing gems.

    So then I tried running bundle install and hit the same issue again:

        Tom@TOM-PC /c/sample_app (updating-users)
        $ bundle install
        sh.exe": /c/Program Files (x86)/ruby-1.9.3/bin/bundle: "c:/Program: bad interpreter: No such file or directory

    I'd really appreciate any help -- I've spent 5+ hours today trying to get back on track and am still at a loss. Thanks!
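
    The truncation in the message ("c:/Program: bad interpreter") is the clue: the bundle launcher's shebang points at a Ruby path containing a space, and sh.exe splits it at "c:/Program". A way to confirm, as a sketch (the usual workaround is reinstalling Ruby to a space-free path such as C:\Ruby193):

        head -1 "/c/Program Files (x86)/ruby-1.9.3/bin/bundle"
        # a shebang like "#!c:/Program Files (x86)/ruby-1.9.3/bin/ruby.exe" cannot survive the space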

    Read the article

  • What's so bad about in-line CSS?

    - by ChessWhiz
    When I see website starter code and examples, the CSS is always in a separate file, named something like "main.css", "default.css", or "Site.css". However, when I'm coding a page, I'm often tempted to throw the CSS in-line with a DOM element, such as by setting "float: right" on an image. I get the feeling this is "bad coding", since it's so rarely done in examples.

    I understand that if the style will be applied to multiple elements, it's wise to follow "Don't Repeat Yourself" (DRY) and assign it to a CSS class referenced by each element. But if I won't be repeating the CSS on another element, why not in-line it as I write the HTML?

    The question: is using in-line CSS considered bad, even if it will only be used on that element? If so, why? Example (is this bad?):

        <img src="myimage.gif" style="float:right" />

    Read the article

  • C++ file bad bit

    - by user230911
    Hi, when I run this code, the open, seekg, and tellg operations all succeed, but the read fails: afterwards the eof, bad, and fail bits are 0, 1, 1 respectively. What can set a stream's bad bit? Thanks.

        int readriblock(int blockid, char* buffer)
        {
            ifstream rifile("./ri/reverseindex.bin", ios::in | ios::binary);
            rifile.seekg(blockid * RI_BLOCK_SIZE, ios::beg);
            if (!rifile.good()) { cout << "block not exist" << endl; return -1; }

            cout << rifile.tellg() << endl;
            rifile.read(buffer, RI_BLOCK_SIZE);
            cout << rifile.eof() << rifile.bad() << rifile.fail() << endl;   // prints 0, 1, 1
            if (!rifile.good()) { cout << "error reading block " << blockid << endl; return -1; }

            rifile.close();
            return 0;
        }

    Read the article

  • heartbeat: Bad nodename in /etc/ha.d//haresources [node1]

    - by Richard
    I'm trying to start heartbeat on Ubuntu 10.04 with service heartbeat start, but I get the following errors:

        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Bad nodename in /etc/ha.d//haresources [node1]
        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Configuration error, heartbeat not started.

    On one server uname -n produces loadb1; on the second server uname -n produces loadb2. The two servers can ping each other okay with those names. This is /etc/ha.d/ha.cf on both servers:

        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility local0
        keepalive 2
        deadtime 10
        udpport 694
        bcast eth1
        ucast eth0 my.external.ip
        ucast eth0 my.external.ip
        ucast eth1 10.0.0.5
        ucast eth1 10.0.0.6
        #udp eth0
        node loadb1
        node loadb2
        auto_failback off

    And this is /etc/ha.d/haresources on both servers:

        node1 IPaddr::46.20.121.113 httpd smb dhcpd

    Authkeys is also set up. What am I doing wrong? The part where I'm least clear is the ucast/bcast lines.
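
    The error message points directly at the mismatch: haresources names a node "node1", but heartbeat only accepts node names that match what uname -n reports on a machine listed in ha.cf (here loadb1 and loadb2). A minimal sketch of the fix, assuming loadb1 should own the resources:

        # /etc/ha.d/haresources on both servers
        loadb1 IPaddr::46.20.121.113 httpd smb dhcpd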

    Read the article

  • rpmbuild on CentOS 6: "cpio: bad magic"

    - by djhaskin987
    When I try to run this command:

        rpmbuild -bb SPECS/software.spec

    I get an error when the WAR file (a Tomcat Java web archive) is being added to the rpm:

        error: create archive failed on file /<filepath>/<filename>.war: cpio: Bad magic

    This didn't happen before; the only thing that has changed since it worked is an upgrade. Furthermore, no such problem occurs on my CentOS 5 box. I compile and build the exact same code set on both machines, but CentOS 6 won't create an rpm. How do I troubleshoot this? I have already googled it and received few (if any) useful links. This appears nowhere in the user guide for RPM as far as I can see, and Maximum RPM has no section on it.

    Read the article

  • Hard disk with bad clusters

    - by Dan
    I have been trying to back up some files to DVD recently, and the burn process failed, saying the CRC check failed for certain files. I then tried to browse to these files in Windows Explorer, and my whole machine locks up, forcing a reboot. I ran chkdsk without the /F /R arguments and it told me I had bad sectors. So I re-ran it with the arguments, and chkdsk fails during the 'Chkdsk is verifying Usn Journal' stage with this error:

        Insufficient disk space to fix the usn journal $j data stream

    The hard disk is a 300GB partition on a 400GB disk, and there is 160GB of free space on the partition. My OS (Windows 7) is installed on the other partition and is running fine. Any idea how I fix this, or repair it enough to copy my files off it?
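
    One workaround often suggested for this exact chkdsk failure is to delete the USN change journal so chkdsk no longer has to repair it; NTFS recreates the journal automatically. A sketch, assuming the damaged partition is D: (run from an elevated prompt, and ideally after imaging the disk first, since it already has bad sectors):

        fsutil usn deletejournal /d D:
        chkdsk D: /f /r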

    Read the article

  • “BAD” partition showing up in Partition Manager

    - by Quintin Par
    I tried to partition my primary hard disk (NTFS partitions) with QTParted and got stuck in the process. Consequently I had to kill the process and exit my Knoppix live CD boot. Even though I was expecting XP to get corrupted, it booted fine and all the drives were accessible. But when I open the disk with Partition Manager 8, it shows up as "BAD". I ran chkdsk /f without any success. My objective with QTParted and Partition Magic was to resize my existing partitions and add some space to C:. How do I fix this problem and resize my partitions?

    Edit: here is how my primary drive appears in Windows.

    Read the article

  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can check out, I can update, I can commit, BUT I cannot execute an SVN COPY for branch/tag operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.

    Versions:

        Server OS: Debian 6.0.7 (Squeeze)
        Apache 2.2.16-6+squeeze11
        Server Subversion 1.6.12dfsg-7
        Clients are running Windows; clients tried:
            TortoiseSVN 1.8.2 Build 24708 64bit
            SVN CLI client 1.8.3 (r1516576)

    Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On

            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>

            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) contains no externals and consists of the following directory structure (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    TortoiseSVN produces the following error in its tag result window during a tag operation:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following (IP addresses and usernames changed, obviously; the user-agent string "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708" is identical on every line and trimmed here):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956

    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication of why. Please note that, while yes, this is a test repository on a test server, I am experiencing this same issue in a setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.

    Read the article

  • Bad performance issue on dedicated server

    - by Pierre Espenan
    I just subscribed to a dedicated server offer and am encountering poor PHP execution performance: execution times can be twice what they were on my old shared server! I'm definitely not an expert in server management, so I'm wondering what I've missed. Here is some material that may help you understand what's wrong:

        My server (in French, but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc
        phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/
        PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/
        PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/

    Is it normal to get such poor performance after a kernel update and a basic "apt-get install" of apache2 and php? Thanks!
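
    Two things worth ruling out before blaming the hardware, since both routinely halve synthetic PHP benchmark numbers (a sketch; package details assume Debian/Ubuntu):

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # "powersave"/"ondemand" throttles an idle box
        php -m | grep -i -e apc -e opcache                          # is an opcode cache even installed?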

    Read the article

  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
    I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon, and I'm seeing sporadic "Bad File Descriptor" and "Piece #xxx is corrupt" errors. After I recheck the torrent, everything seems fine. This happens only while downloading, never once a torrent is in seeding mode, and only after the torrent client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running the transmission-daemon from the Ubuntu repository, but I've since switched to building the latest transmission from source with the latest libevent (all stable): same thing. I've seen bug reports for this issue (https://trac.transmissionbt.com/ticket/4147), but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?
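
    One avenue worth checking: Transmission preallocates files before writing, and full preallocation does not behave on a copy-on-write filesystem like ZFS the way it does on ext4. A hedged sketch of a workaround, assuming the Debian/Ubuntu settings path; "preallocation" accepts 0 (off), 1 (fast/sparse), and 2 (full):

        service transmission-daemon stop
        sed -i 's/"preallocation": 2/"preallocation": 1/' /etc/transmission-daemon/settings.json
        service transmission-daemon start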

    Read the article

  • HTTP 400 'Bad Request' and win32status 1450 when larger messages are sent to a WCF service

    - by Tim Mahy
    Hi all, we sometimes receive HTTP 400 Bad Request result codes when posting a large file (10MB) to a WCF service hosted in IIS 6. We can reproduce this using SOAP UI, and it seems unpredictable when it happens. The call is not received in our WCF log, so we believe the request reaches neither the ASP.NET nor the WCF runtime. This happens on multiple websites on the same machine, each having its own application pool. All IIS settings are default; only in ASP.NET and WCF do we allow bigger readerQuotas etc. The win32status logged in the IIS log is 1450, which we think means "error no system resources". So now the question: a) how can we solve this, and b) (when a is not applicable :) ) which performance counters or logs are useful for learning more about this problem? Greetings, Tim
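
    Since the request dies before ASP.NET/WCF sees it, http.sys is the component rejecting it, and http.sys keeps its own log separate from the IIS site logs. A first place to look, as a sketch (assuming the default log location):

        type %windir%\System32\LogFiles\HTTPERR\httperr1.log | findstr /i "400"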

    Read the article

  • Amazon EC2 SSH fails to connect: "Bad file number"

    - by Mark McCook
    This is the command I am told to use by clicking Connect in the control panel:

        ssh -i private_key.pem root@instancePublicDNS

    Well, that one failed, so I wanted to know what happened and ran ssh -vvv private_key.pem root@instancePublicDNS:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to private_key.pem [...] port 22.
        debug1: connect to address ... port 22: Attempt to connect timed out without establishing a connection
        ssh: connect to host private_key.pem port 22: Bad file number

    Any ideas? I have searched for the answer on Google and Server Fault and found a few possible solutions that did not work. Info about the instance: AMI-ID ami-688c7801 (Ubuntu 10.10 Server).
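
    Two things stand out in that transcript. First, the verbose run dropped the -i flag, so ssh treated the key file as the hostname ("Connecting to private_key.pem"); the debug command should have been ssh -vvv -i private_key.pem root@instancePublicDNS. Second, a timeout ending in "Bad file number" means nothing answered on port 22 at all, which on EC2 most often points at the instance's security group. A sketch of the checks (the group name is an assumption):

        ssh -vvv -i private_key.pem ubuntu@instancePublicDNS   # official Ubuntu AMIs log in as "ubuntu", not root
        ec2-describe-group mygroup                             # confirm an inbound rule for TCP 22 exists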

    Read the article

  • Bad certificate error with RabbitMQ using SSL

    - by David Tinker
    I am trying to get RabbitMQ working with SSL on a couple of Gentoo servers. When I try to connect to the management console using https, I get the following error in /var/log/rabbitmq/[email protected]:

        SSL: certify: ssl_connection.erl:1641:Fatal error: bad certificate

    I followed the instructions here: http://www.rabbitmq.com/ssl.html. The annoying thing is that I have two cloned servers, and it is working on one and not the other. As far as I can tell, the machines are configured identically. I wrote a script to generate the certs etc. and have run it on both. I am not using client certificates. Does anyone know how I can figure out what's wrong with my certificate(s)? I am using Erlang 15.2, RabbitMQ 2.7.9, OpenSSL 0.9.8k.
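
    Since the servers are clones, comparing what each node actually presents is the quickest differential; a hostname baked into a cloned certificate is a common culprit. A sketch, assuming the cert/CA paths configured in each rabbitmq.config and the broker's default SSL listener on port 5671 (adjust the port if the failing listener is the management one):

        openssl s_client -connect localhost:5671 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
        openssl verify -CAfile /path/to/testca/cacert.pem /path/to/server/cert.pem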

    Read the article

  • RewriteCond in .htaccess file gives me bad flag delimiters

    - by Steven
    I'm upgrading a website, and I use this .htaccess file to show a maintenance page:

        #MAINTENANCE-PAGE REDIRECT
        RewriteEngine on
        RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0 # Bogus IP address for posting here
        RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0 # Bogus IP address for posting here
        RewriteCond %{REQUEST_URI} !^/maintenance\.html$
        RewriteRule ^(.*)$ http://www.mysite.no/maintenance.html [R=307,L]

    This opens the maintenance page for all users except the two IP addresses I've added; those get an Internal Server Error. I've used the same script on another site, and that worked fine. Looking at the error log, I see the following:

        /var/www/vhosts/mysite.no/httpdocs/.htaccess: RewriteCond: bad flag delimiters

    If I remove my .htaccess file, I can work with my site just fine. My site is hosted on a VPS running CentOS 5. How can I fix this problem?
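
    "RewriteCond: bad flag delimiters" is Apache's way of saying a RewriteCond line has extra arguments: trailing "# comment" text on a directive line is not stripped, so the words after the pattern are parsed as flags. Moving the comments onto their own lines is the usual fix, e.g.:

        RewriteEngine on
        # Bogus IP addresses for posting here
        RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0
        RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0
        RewriteCond %{REQUEST_URI} !^/maintenance\.html$
        RewriteRule ^(.*)$ http://www.mysite.no/maintenance.html [R=307,L]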

    Read the article

  • Data Recovery needed with a Sector Zero problem on a HDD

    - by Jay Robins
    I left my computer (running XP) copying files to an external drive. I came back a few hours later, and the laptop was pretty hot and had frozen (ironic, ain't it). I forced a reboot, and the laptop HDD hasn't worked since. I set it up in an external enclosure, but XP can't mount the HDD to a drive letter, though it is able to recognize the manufacturer and drive size (Hitachi, 320GB). There is no noise or rattling when the drive is spinning, but I can't get anything off of it, since I can't mount it or see much of anything. A computer repair shop ran some software tests, said the drive came back with a "Zero Sector bad" message, and told me I need to send it to a professional data recovery service. Any other options or ideas before I have to spend thousands of dollars to recover my data? Any help would be GREATLY appreciated!!! I'm desperate and a poor student! Thanks in advance, -Jay

    Read the article

  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I get a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

        SSLProxyEngine On
        <Location />
            ProxyPass https://localhost:8443/
            ProxyPassReverse https://localhost:8443/
        </Location>

    I have more than enough RAM, CPU and HDD (and I have checked); there are no spikes. Also, the posted information does save; it just errors instead of showing me the green/red "This information has been saved." block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

        2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"

    Read the article

  • Bad sound quality of 3.5mm headphone with mic on laptop

    - by Isaac
    I have a set of headphones with a built-in mic for hands-free calling. They work great on my Sony Ericsson Cedar cellphone. The problem is that when I connect the headphones to my Dell N5010 laptop to listen to music, the quality is horrible, with very weak or no vocals. The funny part is that when I hold down the talk button on the headset mic, it sounds great, but it goes back to bad quality as soon as I release the button. Also, when I pull the jack out a little, at some point the sound is great, but I have to hold the jack there. I looked for any relevant configuration in the sound card driver but found nothing. Besides using glue to hold down the talk button :), is there any other solution?

    Read the article

  • Bad request - Invalid Hostname Error when using ARR IP address

    - by syloc
    I'm trying to set up a simple ARR system: one ARR machine load balancing between two APP servers. I can reach the app sites if I use the server name of the ARR machine (http://arrserver/app), but I can't reach them via its IP address (http://10.7.10.25/app); that gives "Bad Request - Invalid Hostname". On the ARR machine I configured the default site's bindings to "All Unassigned", port 80 (the default values). Do I need to change the binding rule or add additional URL rewrite rules? Also, on the ARR server itself http://127.0.0.1/app doesn't work, but http://localhost/app works fine. Thanks in advance.
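
    "Bad Request - Invalid Hostname" comes from http.sys when no registered binding matches the Host header of the request; browsing by IP sends "Host: 10.7.10.25", which evidently nothing answers to. A sketch of what to inspect (paths are the IIS defaults):

        %windir%\system32\inetsrv\appcmd list sites    # check each site's bindings for host-header restrictions
        netsh http show servicestate view=requestq     # see the URLs http.sys has actually registered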

    Read the article

  • MediaTemple Django Bad Gateway

    - by Eeyore
    I have a site running on a GS server on MediaTemple; it's a Django/PostgreSQL setup. For some reason, from time to time I get a Bad Gateway error, and I can't figure out what's causing it. What can cause this error? What else can I do to find the cause of the problem? My lighttpd configuration:

        url.access-deny = ( "~", ".inc" )

        fastcgi.server = (
            "/main.fcgi" => (
                "main" => (
                    "socket" => "/var/tmp/" + appname + ".sock", # don't change this
                    "check-local" => "disable",
                )
            )
        )

        alias.url = (
            "/media/" => "/home/xxx/data/python/django/django/contrib/admin/media/",
            "/static/" => "/home/xxx/containers/django/site/static/",
        )

        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^(/static.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/main.fcgi$1",
        )

        server.error-handler-404 = "/main.fcgi"
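
    A Bad Gateway here means the FastCGI backend behind that socket stopped answering; on (mt) GS containers a commonly reported cause is the container's memory ceiling killing the Django FCGI process, which then never respawns on its own. When the error next appears, checking whether the process and socket are still alive narrows it down quickly (a sketch; the socket path follows the config above):

        ps aux | grep '[m]ain.fcgi'     # is the Django FastCGI process still running?
        ls -l /var/tmp/*.sock           # does the socket still exist, and when was it created?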

    Read the article

  • Does a bad Internet connection increase bandwidth usage?

    - by Synetech
    My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy—it’s analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and have to reestablish a connection a couple of times since it started. The poor connection of course means higher corruption (not necessarily dropped per se) which causes the TCP/IP stack to have to retransmit packets more often. Reduction of bandwidth throughput aside, I got to wondering if it increases the actual bandwidth usage. That is, if there is a high error rate on the line causing packets to have to be retransmitted: Does this increase a bandwidth monitoring program’s numbers? Does the ISP count the retransmitted packets toward the monthly cap? Based on what I remember from my university networking courses and common sense, I have a feeling that the answer to both questions is yes, but I cannot reliably measure the first, and have no authoritative answer for the second. I’m wondering if maybe the retransmitted packets are acknowledged as being duplicates and thus not counted somewhere along the line.

    Read the article
