Search Results

Search found 19256 results on 771 pages for 'boost log'.

  • Pip install on Arch Linux fails to build egg

    - by stmfunk
    I was trying to install nltk on my Arch Linux server, but it repeatedly fails with the following error output:

        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'entry_points'
          warnings.warn(msg)
        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'zip_safe'
          warnings.warn(msg)
        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'test_suite'
          warnings.warn(msg)
        usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
           or: setup.py --help [cmd1 cmd2 ...]
           or: setup.py --help-commands
           or: setup.py cmd --help

        error: invalid command 'bdist_egg'
        /tmp/pip_build_root/nltk/distribute-0.6.21-py3.3.egg
        Traceback (most recent call last):
          File "./distribute_setup.py", line 143, in use_setuptools
            raise ImportError
        ImportError

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "<string>", line 16, in <module>
          File "/tmp/pip_build_root/nltk/setup.py", line 23, in <module>
            distribute_setup.use_setuptools()
          File "./distribute_setup.py", line 145, in use_setuptools
            return _do_download(version, download_base, to_dir, download_delay)
          File "./distribute_setup.py", line 125, in _do_download
            _build_egg(egg, tarball, to_dir)
          File "./distribute_setup.py", line 116, in _build_egg
            raise IOError('Could not build the egg.')
        OSError: Could not build the egg.
        ----------------------------------------
        Cleaning up...
        Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/nltk
        Storing complete log in /root/.pip/pip.log

    The same error also occurs for matplotlib, but that is the only other library I have found it to fail on so far; pyyaml installs fine. The install works perfectly under virtualenv on my Mac, which uses Python 2.7, but the server is using Python 3.3. Any help is appreciated.
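    The "invalid command 'bdist_egg'" error appears to come from the distribute/setuptools bootstrap rather than from nltk itself, so one hedged thing to try (package and command names assumed for Arch; a sketch, not a confirmed fix) is making a system setuptools available before retrying:

        # Sketch only: ensure a setuptools that provides bdist_egg is installed
        pacman -S python-setuptools
        pip install --upgrade setuptools
        pip install nltk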

  • Issue installing PostgreSQL 9.3.4 on Windows Server 2003 x64

    - by randydom
    Hello, I have tried everything I know to install PostgreSQL 9.3.4 on my Windows Server 2003 x64 machine, but the installer always stops with this error: http://oi57.tinypic.com/s4tb8i.jpg I really don't know what to do. If I click OK and then go to the Windows services list, there is no PostgreSQL service, so I can't start the service. Can anyone please help me install it correctly? PS: I've followed all the steps in wiki.postgresql.org/wiki/Troubleshooting_Installation. Many thanks. Here is the installer log, where I get "Failed to initialise the database cluster with initdb":

        Called IsVistaOrNewer()...
        'winmgmts' object initialized...
        Version:5.2 MajorVersion:5
        Ensuring we can write to the data directory (using cacls):
        Executing batch file 'rad22ADE.bat'...
        processed dir: C:\Program Files\PostgreSQL\9.2\data
        Executing batch file 'rad22ADE.bat'...
        The files belonging to this database system will be owned by user "Administrator".
        This user must also own the server process.
        The database cluster will be initialized with locale "English_United States.1252".
        The default text search configuration will be set to "english".
        fixing permissions on existing directory C:/Program Files/PostgreSQL/9.2/data ...
        initdb: could not change permissions of directory "C:/Program Files/PostgreSQL/9.2/data": Permission denied
        Called Die(Failed to initialise the database cluster with initdb)...
        Failed to initialise the database cluster with initdb
        Script stderr: Program ended with an error exit code
        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs" "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2" "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT" 0 : Program ended with an error exit code
        Problem running post-install step. Installation may not complete correctly
        The database cluster initialisation failed.
        Creating Uninstaller
        Creating uninstaller 25%
        Creating uninstaller 50%
        Creating uninstaller 75%
        Creating uninstaller 100%
        Installation completed
        Log finished 05/02/2014 at 04:04:04
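    The log shows initdb failing with "Permission denied" while fixing permissions on the data directory, so one hedged thing to check is whether the service account can actually write there. A sketch only (path and account taken from the log above) using cacls on Server 2003:

        rem Sketch: grant the service account full control on the data directory
        cacls "C:\Program Files\PostgreSQL\9.2\data" /E /T /G "NT AUTHORITY\NetworkService":F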

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them to my environment. The two important files are:

    /etc/apache/site-available/example.com

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common
          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>
          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>
          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]
        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>
        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files: the test site loses all styles, and Firebug doesn't find any content for the CSS files. Although if I call the assets path directly I get some gibberish that looks like binary data. If I move the .htaccess file, everything is back to normal. How can I find out where/what went wrong, or do you have any suggestions about what error I made?

        System:
        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Mar 5 2012 16:42:17
        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
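    A hedged way to see what the browser actually receives is to fetch one asset with and without gzip negotiation and compare the headers and body; a sketch (URL and asset name assumed):

        # Headers when gzip is offered; look for Content-Encoding and Content-Type
        curl -sI -H 'Accept-Encoding: gzip' http://example.com/assets/application.css
        # Body as the browser would see it after decompression
        curl -s --compressed http://example.com/assets/application.css | head -c 200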

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
          <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
          </Directory>
        </VirtualHost>

        <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName domain.tld
          ServerAlias *.domain.tld
          DocumentRoot /var/www/domain.tld
          <Directory /var/www/domain.tld>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld? I've been able to resolve this, but I still don't understand why. In my original configs, I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
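    For name-based virtual host puzzles like this, a hedged first step is to ask Apache itself which vhost each name maps to; a sketch:

        # Dump the parsed virtual host configuration (name, port, config file, line)
        apache2ctl -S
        # or, on systems without the apache2ctl wrapper
        apachectl -S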

  • Heavy load on MySQL

    - by payal
    I have a dedicated server with a very good configuration (16 GB of RAM, etc.), but I am facing heavy load from MySQL. I am running a music website, yet only one database is in use and only 5-10 pages are running. When I click on "Show Processlist" in WHM, it shows only 2-3 processes. The WHM load is always less than one, but when I click on WHM load it shows 20% CPU usage by MySQL, and after some time it starts saying "can not connect to mysql / mysql server has gone away":

        1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid

    I have tested static pages and they come up blazing fast, but all dynamic pages that use MySQL are extremely slow and take ages to open. My my.cnf file is:

        [mysqld]
        key_buffer = 1536M
        max_allowed_packet = 1M
        max_connections = 250
        max_user_connections = 15
        wait_timeout=40
        connect_timeout=10
        table_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        server-id = 14
        old-passwords = 1

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash

        [myisamchk]
        key_buffer = 256M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    I have checked the error log file and it says nothing. I have also increased the maximum connections to 1000, but the same problem is still there. If I disconnect that one database (just by changing the database name), I can see that within half an hour the load on the server and on MySQL goes down to negligible. I have tested everything. If there are queries that can cause heavy load on a server, can you please list which kinds of queries can do that? Even so, for 5-10 pages it should never cause that much load; I have seen servers with 500 websites that work just fine.
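    Since the description above suggests a few queries are pinning the CPU, one hedged way to find them is the slow query log; a sketch of my.cnf additions using the older option names (names assumed for a 5.0/5.1-era server):

        [mysqld]
        # Sketch only: log statements slower than 2 seconds, plus unindexed queries
        log-slow-queries = /var/lib/mysql/slow-queries.log
        long_query_time = 2
        log-queries-not-using-indexes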

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).
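    A hedged way to confirm what each side records is to make one bogus request and read both the client-visible status and the corresponding access-log line; a sketch (URL taken from the question, log path assumed):

        # Client side: print only the status code WordPress returns
        curl -s -o /dev/null -w "%{http_code}\n" http://www.borngeek.com/nothere/
        # Server side: see how Apache logged that same request (log path assumed)
        tail -n 1 /var/log/apache2/access.log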

  • Upstart can't determine my process' pid

    - by sirlark
    I'm writing an upstart script for a small service I've written for my colleagues. My upstart job can start the service, but when it does it only outputs "queryqueue start/running"; note the lack of a pid, as given for other services.

        #/etc/init/queryqueue.conf
        description "Query Queue Daemon"
        author "---"
        start on started mysql
        stop on stopping mysql
        expect fork
        env DEFAULTFILE=/etc/default/queryqueue
        umask 007
        kill timeout 30
        pre-start script
            #if [ -f "$DEFAULTFILE" ]; then
            #  . "$DEFAULTFILE"
            #fi
            [ ! -f /var/run/queryqueue.sock ] || rm -rf /var/run/queryqueue.sock
            #exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script
        script
            #Originally this stanza didn't exist at all
            if [ -f "$DEFAULTFILE" ]; then
              . "$DEFAULTFILE"
            fi
            exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script
        post-start script
            for i in `seq 1 5` ; do
                [ -S /var/run/queryqueue.sock ] && exit 0
                sleep 1
            done
            exit 1
        end script

    The service in question is a Python script which, when run without error, forks using the code below right after checking command-line options and basic environmental sanity, so I tell upstart to expect fork.

        pid = os.fork()
        if pid != 0:
            sys.exit(0)

    The script is executable and has a Python shebang. I can send the TERM signal to the process manually and it quits gracefully. But running "stop queryqueue" claims "queryqueue stop/waiting", yet the process is still alive and well. Also, its logs indicate it never received the kill signal. I'm guessing this is because upstart doesn't know which pid it has. I've also tried "expect daemon" and leaving the expect clause out entirely, but there's no change in behaviour. How can I get upstart to determine the pid of the exec'd process?
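    For what it's worth, upstart's fork counting is strict: "expect fork" assumes the exec'd process forks exactly once, and "expect daemon" assumes exactly two forks. A hedged sketch of the conventional double fork that "expect daemon" would count (illustrative only, not the author's actual daemon code):

        import os, sys

        # First fork: detach from the process upstart exec'd
        if os.fork() != 0:
            os._exit(0)

        os.setsid()  # become a session leader, drop the controlling terminal

        # Second fork: ensure the daemon can never reacquire a terminal
        if os.fork() != 0:
            os._exit(0)

        # ... daemon work continues here ...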

  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers with Windows Server 2003. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter whether it's an administrator or a domain user; as soon as somebody logs in, the computer connects all the printers. The printers are either installed on local computers or installed on the server and shared. There is no log-on script connecting the printers; I install them manually, and none of the other computers shows such behaviour. We have a printer which is installed on two computers and both of them share it (I'm moving it to the server from a small PC which shared it up to now, but some computers still use the old connection), meaning this specific computer connects to that one printer twice and can't use either of the connections. How do I prevent this self-connecting to all printers (none of the other computers has this problem)? If I delete them from the "Printers" folder, everything works fine until I reconnect, and the folder is once again full of all the printers we have. I solved the smaller problem: the computer is now capable of printing on all of the printers (it seems there had been some registry issues); after cleaning the registry and reinstalling the printer it seems to work just fine. But the second problem remains: the computer connects to all the printers on the network, and when I remove one or more, they are reconnected right after the next log-in by any user.

  • Redeploy using Active Directory

    - by Noam Gal
    I am trying to use Group Policy to deploy our MSI through AD. For some strange reason, when I overwrite the MSI with a newer version and then go to the policy and click on "Redeploy Application", the application gets uninstalled on the users' machines, and all registry keys, binaries and shortcuts are gone from them. The "Add/Remove Programs" list still contains the application entry. I have managed to create a minimal vdproj that does nothing but write its current Product Version to a registry key, and created two versions of it (1.0.0 and 1.1.0). I still face the same problems when using this MSI in my AD environment. I did check that the Package Codes and Product Codes are different for both versions and that the Upgrade Codes are identical. I also set RemovePreviousVersions to true. Checking with some other MSIs (Firefox 3.0.0 and 3.6.3) downloaded from a site specifically meant for AD deployment, everything worked just as expected: I first installed 3.0.0, then overwrote the MSI, clicked on "Redeploy", and the users got 3.6.3 after the next log-off/log-on. What am I missing here?

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.
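    One hedged note: nginx in this configuration opens a new connection to a backend for every proxied request, and keepalive_timeout 0 also disables client-side keep-alive, so some of the gap may simply be connection setup overhead. A sketch of the comparison runs described above (hostnames assumed):

        # Same load against the balancer and against one backend directly
        ab -n 1000 -c 100 http://loadbalancer.example/
        ab -n 1000 -c 100 http://backend1.example/
        # Optionally repeat with HTTP keep-alive enabled in ab
        ab -k -n 1000 -c 100 http://loadbalancer.example/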

  • Windows Server 2008 can't start postgresql-x64-9.0 service: could not create any TCP/IP sockets

    - by Rob
    After rebooting a Windows Server 2008 machine to apply system updates, we recently began having some issues running PostgreSQL 9.0. When we noticed the problem, we reverted the Windows updates, but the issue persists. From services.msc, attempting to start the postgresql-x64-9.0 service fails: halfway through starting, the progress bar becomes very slow and eventually responds with error 1053, "the service did not respond in a timely fashion." Interestingly enough, bringing up the task manager shows that multiple instances of postgres.exe have been started, and looking at the log file shows:

        2011-02-10 14:44:02 ESTLOG: database system is ready to accept connections

    I then tried killing the processes and starting via the command line (as the user postgres), but I receive a different error:

        C:/Program Files/PostgreSQL/9.0/bin/pg_ctl.exe start -N "postgresql-x64-9.0" -D "F:/SHARE/postgres" -w
        waiting for server to start...............................................................
        pg_ctl: could not start server
        ESTWARNING: could not create listen socket for "192.168.0.101"
        ESTFATAL: could not create any TCP/IP sockets

    The log file again indicates that the database is ready to accept connections. Also, netstat indicates that no other processes are using port 5432; I can't think of any other obvious reason that opening the listen socket might fail. Any help would be greatly appreciated.
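    The "could not create listen socket for 192.168.0.101" line hints that the server is configured to bind a specific address that may no longer be assigned to the machine after the reboot; a hedged postgresql.conf sketch of the setting worth checking (values illustrative, not a confirmed fix):

        # postgresql.conf (sketch): bind everywhere, or list only addresses
        # that actually exist on the host
        listen_addresses = '*'          # or e.g. 'localhost, 192.168.0.101'
        port = 5432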

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine and still will when I stick that disc in, but 9.10 doesn't see my hard drive (a Western Digital 250 GB). If I boot from the disc, I can install gparted and it does recognize the drive, but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:

        Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
        Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
        Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.

    I checked my BIOS: SATA is enabled and is set to IDE mode, so there shouldn't be software RAID. Nonetheless, I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:

        Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
        Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
        Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option

    Any ideas on things to try? I've tried all of the various BIOS settings for SATA (IDE, RAID, etc.). Nothing seems to work.
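    The log names a leftover nvidia_* set, so one hedged possibility is stale fakeRAID metadata on the disk from a previous setup. A sketch of inspecting (and, destructively, erasing) that metadata from the live CD, assuming the disk is /dev/sda; treat the erase step as a last resort and back up first:

        # List any RAID metadata dmraid can see on the disks
        sudo dmraid -r
        # If a stale set is reported and you are certain it is unwanted,
        # erasing the metadata is destructive (sketch only)
        sudo dmraid -r -E /dev/sda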

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing out some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked, and I don't have any trojan.py in my cgi-bin folder. All my Apache logs are clean, and the Windows event log seems to have no trace of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and then deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!

  • Loads of memory in "standby" on Windows Server 2008 R2

    - by Jaap
    In our SharePoint farm, our web front end servers all have loads of memory in "standby" mode, meaning very little is available for our IIS worker process. We have 32 GB of RAM in each of the boxes, and standby memory will creep up to about 28 GB, whereas the IIS worker process only seems to be using about 2 GB. Also, we've seen the machine use the swap file extensively while this memory was in standby, so I am starting to think that this memory in standby mode is stopping IIS from using it, forcing it to swap to disk and causing more performance problems. I used SysInternals RamMap to identify what is being kept in memory, and it was able to tell me that almost everything in standby memory is of type "Mapped File". When I sort the files listed under the file summary tab in RamMap by file size, the largest files (around a few hundred MB each) are IIS log files and SharePoint log files. I would like to understand which process is loading these files into standby memory and why they are not being released. When I do an iisreset, it does not release the memory. Any ideas? Thanks!

  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using nginx as a proxy in front of 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example: http://www.webpagetest.org/result/101020_8JXS/1/details/ Here is my nginx conf:

        user www-data;
        worker_processes 4;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_proxied any;
            server_names_hash_bucket_size 128;
        }

        upstream abc {
            server 1.1.1.1 weight=1;
            server 1.1.1.2 weight=1;
            server 1.1.1.3 weight=1;
        }

        server {
            listen 443;
            server_name blah;
            keepalive_timeout 5;
            ssl on;
            ssl_certificate /blah.crt;
            ssl_certificate_key /blah.key;
            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            ssl_prefer_server_ciphers on;

            location / {
                proxy_pass http://abc;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
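    To separate raw handshake cost from session reuse, a hedged measurement sketch with openssl (hostname assumed) that times new handshakes versus resumed ones:

        # New session for every connection (full handshake each time)
        openssl s_time -connect www.example.com:443 -new -time 10
        # Reuse the same session (tests whether ssl_session_cache is helping)
        openssl s_time -connect www.example.com:443 -reuse -time 10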

  • Flash drive suddenly died. Why? Can I recover it?

    - by mg
    Hi, I have a flash drive that I did not use very much, but after a few months of inactivity it died. I know that flash drives have a limited number of write cycles, but I am sure that is not the problem. I tried to create a new partition table and format the drive; nothing worked. This is the output of mkfs.ext2:

        marco@pinguina:~$ sudo LANG=en.UTF-8 mkfs.ext2 -v -c /dev/sdc1
        [sudo] password for marco:
        mke2fs 1.41.11 (14-Mar-2010)
        fs_types for mke2fs.conf resolution: 'ext2', 'default'
        Calling BLKDISCARD from 0 to 4001431552 failed.
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=0 blocks, Stripe width=0 blocks
        244320 inodes, 976912 blocks
        48845 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=1002438656
        30 block groups
        32768 blocks per group, 32768 fragments per group
        8144 inodes per group
        Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736
        Running command: badblocks -b 4096 -X -s /dev/sdc1 976911
        badblocks: Input/output error during ext2fs_sync_device
        Checking for bad blocks (read-only test): done
        Block 0 in primary superblock/group descriptor area bad.
        Blocks 0 through 2 must be good in order to build a filesystem.
        Aborting....

    Is there something I can do to recover it?
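    A hedged way to tell whether the device still responds to plain reads at all, before worrying about partition tables or filesystems (device name taken from the output above):

        # Kernel messages often show USB resets or I/O errors for a failing stick
        dmesg | tail -n 30
        # Read-only probe of the first 100 MB; errors here point at the hardware,
        # not at the partition table or filesystem
        sudo dd if=/dev/sdc of=/dev/null bs=1M count=100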

  • Switchover in PostgreSQL

    - by user1010280
    I am using PostgreSQL 9.0 with streaming replication. During a switchover I follow these steps:

    1. Get the server timestamp on the primary.
    2. Get the current log position on the primary.
    3. Set/verify the log location.
    4. Verify the transaction received location.
    5. Shut down the DB on production.
    6. Synchronize the transaction logs from PR to DR.
    7. Trigger a failover on the DR database by creating the trigger file specified in recovery.conf.
    8. Verify the DB mode on DR.
    9. Copy the control file from DR to the primary.
    10. Copy the temporary stats file from DR to the primary.
    11. Copy the history file from DR to the primary.
    12. Create the recovery.conf file (a sketch follows below).
    13. Start the database in standby mode on the primary.
    14. Verify the DB mode on PR.

    At step 6, I have to copy the last WAL generated on the primary to the standby and sync both PR and standby, but this takes time because the copy is remote. As a result, Postgres keeps searching for the WAL for a long time, and after that it stops the server. So I want to know: is there any way to ask Postgres to stop searching for or locating the WAL after shutdown? Postgres tries to locate this WAL every 5 seconds. Please reply as soon as possible; it's urgent.
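    Purely as an illustration of step 12 above (the parameter names are standard for 9.0 streaming replication, but the hostnames, paths and trigger file here are assumptions), a recovery.conf sketch for the node being demoted to standby:

        # recovery.conf (sketch; values are placeholders)
        standby_mode = 'on'
        primary_conninfo = 'host=dr-server port=5432 user=replication'
        trigger_file = '/var/lib/pgsql/9.0/data/failover.trigger'
        restore_command = 'cp /path/to/archive/%f "%p"'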

  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                                   D        0  Fri Jan  7 10:08:55 2011
          ..                                  D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?
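    A hedged pair of checks that sometimes narrows this kind of thing down: confirm how Samba actually parsed the share, and rerun the listing with client-side debugging turned up (share and user names taken from the question):

        # Show the effective, parsed smb.conf for the share
        testparm -s
        # Repeat the listing with a higher debug level to see what the server returns
        smbclient //server/seanm -U seanm -W WORKGROUP -d 3 -c 'ls'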

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to the way it was at the time you froze it. This is a bit different from restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • CentOS 5 VPN Server won't work

    - by Miro Markarian
    I have a CentOS 5 server configured to be both an L2TP server and a PPTP server, plus a RADIUS server hosting the AAA. My problem is that L2TP works great and I can connect to it, but I can't connect with PPTP: every time it ends with error #619 when it gets to the "verifying username and password" stage. Here is the log I got from /var/log/messages:

        Dec 17 07:40:02 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection started
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radius.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADIUS plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radattr.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADATTR plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: pptpd-logwtmp: $Version$
        Dec 17 07:40:03 serverdl pppd[8571]: pppd 2.4.4 started by root, uid 0
        Dec 17 07:40:03 serverdl pppd[8571]: Using interface ppp0
        Dec 17 07:40:03 serverdl pppd[8571]: Connect: ppp0 <--> /dev/pts/2
        Dec 17 07:40:03 serverdl pptpd[8570]: GRE: read(fd=7,buffer=80515e0,len=8260) from network failed: status = -1 error = Protocol not available
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: GRE read or PTY write failed (gre,pty)=(7,6)
        Dec 17 07:40:03 serverdl pppd[8571]: Modem hangup
        Dec 17 07:40:03 serverdl pppd[8571]: Connection terminated.
        Dec 17 07:40:03 serverdl pppd[8571]: Exit.
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection finished

    Just yesterday, before I had set up L2TP, PPTP was working great. Then I uninstalled it, removed all of its config from /etc/*, installed L2TP first and PPTP after it, and it stopped working. I believe it must be a radiusclient issue, because both the PPTP and L2TP services use RADIUS to authenticate. Another thing I wonder about is the IP assignment for the PPP interfaces; I have the following config. Is that right?

    For L2TP:

        localip 10.10.10.1
        remoteip 10.10.10.2-254

    For PPTP:

        localip 10.10.9.1
        remoteip 10.10.9.2-254
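    The "GRE: read ... error = Protocol not available" line suggests the GRE side of PPTP is failing rather than the RADIUS authentication itself; a hedged sketch of things to check (module names vary between kernels, so treat these as assumptions rather than a confirmed fix):

        # Make sure GRE (IP protocol 47) is allowed through the firewall
        iptables -A INPUT -p gre -j ACCEPT
        # PPTP connection-tracking helpers, if the kernel ships them
        modprobe ip_conntrack_pptp 2>/dev/null || modprobe nf_conntrack_pptp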

  • Nginx - Serve blank page on "Bad Gateway" error

    - by TheLittleCheeseburger
    Hello all. I want to use nginx as a simple reverse proxy, but if the server behind nginx is down I just want to display a blank page. For some reason this configuration isn't displaying a blank page on error 502, and I can't figure out why. Thanks for your help!

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            # multi_accept on;
        }

        http {
            keepalive_timeout 65;
            proxy_read_timeout 200;

            upstream tornado {
                server 127.0.0.1:8001;
            }

            server {
                listen 80;
                server_name www.something.com;

                location / {
                    error_page 502 = @blank;
                    proxy_pass http://tornado;
                }

                location @blank {
                    index index.html;
                    root /web/blank;
                }
            }
        }
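    One hedged configuration detail that may matter here: by default nginx passes an upstream's error responses straight through, and error_page only applies to responses nginx generates itself unless proxy_intercept_errors is enabled. A sketch of the location with that directive added (otherwise unchanged from the question; not a confirmed fix):

        location / {
            proxy_intercept_errors on;   # let error_page handle upstream 502s too
            error_page 502 = @blank;
            proxy_pass http://tornado;
        }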

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night from Debian etch to lenny. So far I've encountered a problem with my Postfix installation, mainly that I somehow managed to break IMAP access. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:

        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>

    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose, like can be seen for example here. Edit: Sorry about the mess; I found that I had been funneling the log through "grep imap", which naturally didn't display the entries for authdaemond. The verbose logging setting is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)

  • Making it Easier for Older Users to Login to Multiple Accounts

    - by Mike Hagstrom
    I currently do consulting for a small business that has multiple applications they need to log in to. I'm trying to get them to start using Basecamp and Zendesk to make all of our lives easier when it comes to collaboration on big projects and quick helpdesk ticket items. However, I have recently been informed that it is difficult for them to remember all of these websites to log in to, even though the login information is the same. Right now they have to log in to Windows and Gmail; I want them additionally to log in to Basecamp and Zendesk. This is just a generation or two gap between myself and them, so I'm wondering what others do to solve these problems. Is there some way we could configure USB thumb drives (with LastPass or something similar on them) so that, when plugged into the computer, they automatically log the user into their Windows account, and then, when the user visits say the Basecamp site, automatically log them into that as well? I think the security risk (of a lost thumb drive) is well worth the ability to use these extra applications. Or does anyone else have other ways to make it easier for users to log in to multiple sites?

  • Using Gmail as an email relay for sendmail

    - by Nikita
    I used to be able to send emails using a Gmail account and sendmail configured following one of the guides on the Internet, for example: http://appgirl.net/blog/configuring-sendmail-to-relay-through-gmail-smtp/ This is a small server and I've recently moved it to a different house. Sendmail has stopped working, and the only thing different in the network setup is a new router. What is happening: in the log files, I see the following error:

        ...stat=Deferred: smtp.gmail.com: No route to host

    When I run this from the command line:

        strace sendmail -f A -t B -u "Subject" -m "Message" -tls=yes ssl=yes -s smtp.gmail.com:587 -xu A -xp XYZ

    it hangs on this call:

        recvfrom(3, "m0\201\203\0\1\0\0\0\0\0\0\4ares\3lan\0\0\34\0\1", 8192, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 26
        close(3) = 0
        time(NULL) = 1339997943
        open("/etc/localtime", O_RDONLY) = 3
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ff000
        read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 3477
        _llseek(3, -24, [3453], SEEK_CUR) = 0
        read(3, "\nEST5EDT,M3.2.0,M11.1.0\n", 4096) = 24
        close(3) = 0
        munmap(0xb76ff000, 4096) = 0
        socket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3
        connect(3, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0
        send(3, "<18>Jun 18 01:39:03 sendmail[268"..., 96, MSG_NOSIGNAL) = 96
        nanosleep({60, 0},

    So it looks like at some point it tries to resolve the DNS name, but I don't have anything running on port 53, so it dies out and then just hangs. The other interesting thing is that msmtp works just fine on the same server. Update: "ares" in the strace output is actually the name of my server, but the .254 IP address is the address of the router. Could anyone tell me why this is happening, or what further steps I can take to investigate the issue? Thanks!
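    Since the strace shows the lookup going to the router at 192.168.1.254, a hedged first check is whether that address actually answers DNS queries for the name sendmail needs (values taken from the question):

        # Does the router answer DNS queries at all?
        dig smtp.gmail.com @192.168.1.254
        # Which resolver is the system configured to use?
        cat /etc/resolv.conf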
