Search Results

Search found 12909 results on 517 pages for 'clustered index'.


  • apache vhost not working consistently

    - by petrus
    I have a vhost on my webserver whose sole and unique goal is to return the client IP adress: petrus@bzn:~$ cat /home/vhosts/domain.org/index.php <?php echo $_SERVER['REMOTE_ADDR']; echo "\n" ?> This helps me troubleshoot networking issues, especially when NAT is involved. As such, I don't always have domain name resolution and this service needs to work even if queried by its IP address. I'm using it this way: petrus@hive:~$ echo "GET /" | nc 88.191.124.41 80 191.51.4.55 petrus@hive:~$ echo "GET /" | nc domain.org 80 191.51.4.55 router#more http://88.191.124.41/index.php 88.191.124.254 However I found that it wasn't working from at least a computer: petrus@seth:~$ echo "GET /" | nc domain.org 80 petrus@seth:~$ petrus@seth:~$ echo "GET /" | nc 88.191.124.41 80 petrus@seth:~$ What I checked: This is not related to ipv6: petrus@seth:~$ echo "GET /" | nc -4 ydct.org 80 petrus@seth:~$ petrus@hive:~$ echo "GET /" | nc ydct.org 80 2a01:e35:ee8c:180:21c:77ff:fe30:9e36 netcat version is the same (except platform, i386 vs x64): petrus@seth:~$ type nc nc est haché (/bin/nc) petrus@seth:~$ file /bin/nc /bin/nc: symbolic link to `/etc/alternatives/nc' petrus@seth:~$ ls -l /etc/alternatives/nc lrwxrwxrwx 1 root root 15 2010-06-26 14:01 /etc/alternatives/nc -> /bin/nc.openbsd petrus@hive:~$ type nc nc est haché (/bin/nc) petrus@hive:~$ file /bin/nc /bin/nc: symbolic link to `/etc/alternatives/nc' petrus@hive:~$ ls -l /etc/alternatives/nc lrwxrwxrwx 1 root root 15 2011-05-26 01:23 /etc/alternatives/nc -> /bin/nc.openbsd It works when used without the pipe: petrus@seth:~$ nc domain.org 80 GET / 2a01:e35:ee8c:180:221:85ff:fe96:e485 And the piping works at least with a test service (netcat listening on 1234/tcp and output to stdout) petrus@bzn:~$ nc -l -p 1234 GET / petrus@bzn:~$ petrus@seth:~$ echo "GET /" | nc domain.org 1234 petrus@seth:~$ I don't know if this issue is more related to netcat or Apache, but I'd appreciate any pointers to troubleshoot this issue ! The IP addresses have been modified but kept consistent for easy reading. bzn is the server, hive is a working client and seth is the client on which I have the issue.
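
    One way to take the two nc builds' EOF handling out of the equation (a guess at the culprit, not something confirmed by the output above) is to send a fully formed HTTP/1.0 request instead of a bare "GET /"; the IP and domain below are the placeholders from the question:

      printf 'GET /index.php HTTP/1.0\r\nHost: domain.org\r\n\r\n' | nc 88.191.124.41 80

    If this returns the address on seth as well, the difference lies in how each nc handles a closed stdin before the response arrives, rather than in Apache or the vhost.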


  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features. It contains tables, data and stored procedures. Here is what I have tried so far: I tried "tasks" - "generate scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring the 2012 as source machine and the 2008 as target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+), and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'". I also tried the "Copy Database Wizard", which claimed to be able to copy "from 2000 or later to 2005 or later". It has two modes: 1) "detach and attach", which failed with error: Message: Index was outside the bounds of the array. StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value) ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName() 2) SQL Management Object Method which failed with error "Cannot read property IsFileStream. This property is not available on SQL Server 7.0."
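
    The "read-only column" failures from the Export Data wizard often point at identity (or computed) columns, so one hedged workaround (database, table and host names below are placeholders, not taken from the question) is to script the schema with Generate Scripts and then move the data per table with bcp in character format, keeping identity values on import:

      bcp MyDb.dbo.MyTable out MyTable.dat -S sql2012host -T -c
      bcp MyDb.dbo.MyTable in  MyTable.dat -S sql2008host -T -c -E

    The -E switch keeps the identity values from the file instead of letting the target regenerate them; looping these two commands over the 300+ tables is straightforward to script.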


  • Drupal with clean urls turned on is putting question marks in URL

    - by aussiegeek
    I have a drupal site with clean urls, the pages load correctly, but then the URL is rewritten with a question mark in it, which I don't want the user to see. My .htaccess is: <IfModule mod_rewrite.c> RewriteEngine on # If your site can be accessed both with and without the 'www.' prefix, you # can use one of the following settings to redirect users to your preferred # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option: # # To redirect all users to access the site WITH the 'www.' prefix, # (http://example.com/... will be redirected to http://www.example.com/...) # adapt and uncomment the following: # RewriteCond %{HTTP_HOST} ^example\.com$ [NC] # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301] # # To redirect all users to access the site WITHOUT the 'www.' prefix, # (http://www.example.com/... will be redirected to http://example.com/...) # uncomment and adapt the following: # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC] # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301] # Modify the RewriteBase if you are using Drupal in a subdirectory or in a # VirtualDocumentRoot and the rewrite rules are not working properly. # For example if your site is at http://example.com/drupal uncomment and # modify the following line: # RewriteBase /drupal # # If your site is running in a VirtualDocumentRoot at http://example.com/, # uncomment the following line: RewriteBase / # Rewrite URLs of the form 'x' to the form 'index.php?q=x'. RewriteCond %{REQUEST_URI} !(connect|administration) RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] </IfModule>
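
    A quick way to tell whether the visible ?q= comes from an external redirect rather than from the internal rewrite above is to inspect the response headers for a clean path (example.com and the node path are placeholders):

      curl -sI http://example.com/node/1 | grep -iE '^(HTTP|Location)'

    If that shows a 301/302 whose Location contains index.php?q=, some other rule or module is issuing a real redirect, because the [L,QSA] rule above only rewrites internally and never exposes the query string to the browser.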


  • webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
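
    One thing the pasted /proc/mdstat output suggests is that both RAID1 arrays are running degraded ([2/1] with [_U]/[U_]), so a failing or missing member is worth ruling out before tuning nginx, php-fpm or MySQL any further; a hedged first check, using the device names from the output above:

      cat /proc/mdstat
      sudo mdadm --detail /dev/md1
      sudo mdadm --detail /dev/md3
      dmesg | grep -iE 'blocked for more than|I/O error|ata[0-9]'

    If the second disk really has dropped out of both arrays, re-adding or replacing it would also explain hdparm and scp stalling for minutes at a time.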


  • NGINX SSI Not working

    - by Mike Kelly
    I'm having trouble getting SSI to work on NGINX. You can see the problem if you hit http://www.bakerycamp.com/test.shtml. Here is the contents of that file: <!--# echo hi --> If you hit this in a browser, you see the SSI directive in the content - so apparently NGINX is not interpreting the SSI directive. My NGINX config file looks like this: server { listen 80; server_name bakerycamp.com www.bakerycamp.com; access_log /var/log/nginx/bakerycamp.access.log; index index.html; root /home/bakerycamp.com; location / { ssi on; } # Deny access to all hidden files and folders location ~ /\. { access_log off; log_not_found off; deny all; } } I did not build NGINX from sources but installed it using apt-get. I assume it has the SSI module (since that is default) but perhaps not? Should I just bite the bullet and rebuild from sources? Is there anyway to tell if the installed NGINX supports SSI and my config is just wrong?
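
    Whether the packaged nginx includes SSI can be read off its configure arguments (the module is compiled in unless it was explicitly disabled), and a directive that echoes a known variable makes a clearer test than a bare echo; this is a sketch, reusing the paths from the config above:

      nginx -V 2>&1 | grep -c 'without-http_ssi_module'    # 0 means the SSI module is compiled in
      printf '<!--# echo var="date_local" -->\n' | sudo tee /home/bakerycamp.com/test2.shtml
      curl -s http://www.bakerycamp.com/test2.shtml

    If the variable comes back expanded, the module and the ssi on; block are fine and the problem is the directive in test.shtml itself; if not, rebuilding or reinstalling is back on the table.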


  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a macbook pro i5. my understanding is that by default it should be able to serve php5. i have uncommented the relevant line in /etc/apache2/httpd.conf LoadModule php5_module libexec/apache2/libphp5.so I have restarted apache with sudo apachectl -k restart and when i try to access a file with a php extension, Apache prompts me to download the file. i.e. instead of processing the php and sending me html, it thinks i want to download the file.... when i look in apache error log i see this [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations so it looks like php5 is loading properly. I'd like to know either: How do i fix this? or How do I reinstall apache2 so that it's like i just installed the os? thanks in advance update @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and i have a file /etc/apache2/other/php.conf with the contents <IfModule php5_module> AddType application/x-httpd-php .php AddType application/x-httpd-php-source .phps <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> </IfModule> @Zayne I've already copied php.ini.default to php.ini in the same folder. when i run sudo apachectl configtest i get /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName Syntax OK furthermore i decided to try apachectl -M which shows all loaded modules Most importantly in the list of loaded modules i got Loaded Modules: php5_module (shared) Since the module is being loaded, it seems like the issue has more to do with making apache use php engine to process the php files.... so something wrong with the ifmodule directive?


  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per this step-by-step instruction: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/ I am using Ubuntu 10.04.1, Apache 2.2.14, Mongrel 1.1.5. On the VirtualHost configuration stage, I am using this: <VirtualHost *:80> ServerName myserver.lv ProxyPass /redmine/ http://localhost:8000/ ProxyPassReverse /redmine/ http://localhost:8000 ProxyPreserveHost on <Proxy *> Order allow,deny Allow from all </Proxy> </VirtualHost> But, when I direct my browser to http://<my-server's-ip>/redmine/ what I see is not the redmine web application but "Index of /redmine" with, well, index of the files from the root directory of Redmine. Any idea how to fix that? P.S. Tried removing the VirtualHost stuff alltogether and instead adding the following simple clauses to apache2.conf: <Proxy *> Order allow,deny Allow from all </Proxy> ProxyPass /redmine/ http://localhost:8000/ ProxyPassReverse /redmine/ http://localhost:8000/ ProxyPreserveHost on As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of the Redmine's start page, so it is served, but apparently not rendered. At the same time, still, http://<my-server's-ip>:8000/ works perfectly fine, so Mongrel is serving the Redmine application as it should, it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.


  • MySQL table does not exist

    - by Phanindra
    I am getting the following error in the err file. 110803 6:51:26 InnoDB: Error: table `ims`.`temp_discoveryjobdetails` already exists in InnoDB internal InnoDB: data dictionary. Have you deleted the .frm file InnoDB: and not used DROP TABLE? Have you used DROP DATABASE InnoDB: for InnoDB tables in MySQL version <= 3.23.43? InnoDB: See the Restrictions section of the InnoDB manual. InnoDB: You can drop the orphaned table inside InnoDB by InnoDB: creating an InnoDB table with the same name in another InnoDB: database and copying the .frm file to the current database. InnoDB: Then MySQL thinks the table exists, and DROP TABLE will InnoDB: succeed. InnoDB: You can look for further help from InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html And when I do exactly that (copy the .frm file from another database into this one and drop the table), I get the following error: InnoDB: Error: trying to load index PRIMARY for table ims/temp_discoveryjobdetails InnoDB: but the index tree has been freed! 110803 6:50:26 InnoDB: Error: table `ims`.`temp_discoveryjobdetails` does not exist in the InnoDB internal InnoDB: data dictionary though MySQL is trying to drop it. InnoDB: Have you copied the .frm file of the table to the InnoDB: MySQL database directory from another database? InnoDB: You can look for further help from InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html Can anyone help me out of this, and tell me why this error occurs? EDIT: The issue occurs only when the disk is full and we use TRUNCATE TABLE. It also occurs only in version 5.1, not in 5.0.
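
    Since the .frm-copy trick from the message has already been tried, one way to see what InnoDB itself still holds in its data dictionary for ims/temp_discoveryjobdetails is the table monitor available in 5.1 (a sketch; read the dump from the same err file quoted above):

      mysql -u root -p -e "CREATE TABLE test.innodb_table_monitor (a INT) ENGINE=InnoDB;"
      # wait a minute or two, then read the dictionary dump InnoDB writes to the error log
      mysql -u root -p -e "DROP TABLE test.innodb_table_monitor;"

    The dump shows whether an entry for the orphan is still present, which narrows down whether the TRUNCATE-on-a-full-disk scenario described in the edit left the dictionary and the tablespace out of sync.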


  • Nagios 403 forbidden, indexes?

    - by Georgi
    I installed Nagios under FreeBSD 9, but I can't get it to be reachable in a browser from other PCs. I think that the problem is with the indexes, or that there is no index file (only main.php). Apache says that the syntax is ok. The permissions of the dir are 777. The logs print Directory index forbidden by Options directive: /usr/local/www/nagios/. This is my configuration: ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/ Alias /nagios /usr/local/www/nagios/ <Directory /usr/local/www/nagios> Options +Indexes FollowSymLinks +ExecCGI AllowOverride Indexes AuthConfig FileInfo Order allow,deny Allow from all AuthName "Nagios Access" AuthType Basic AuthUSerFile /usr/local/etc/nagios/htpasswd.users Require valid-user </Directory> <Directory /usr/local/www/nagios/cgi-bin> Options +ExecCGI AllowOverride None Order allow,deny Allow from all AuthName "Nagios Access" AuthType Basic AuthUSerFile /usr/local/etc/nagios/htpasswd.users Require valid-user </Directory> I think that the problem is in the indexes, maybe? When I remove the options it's public and available, but it lists the files and says that indexes are forbidden.


  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both high level and low level version, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html When I uploaded files of size less than 4 GB, the upload processes completed without any problem. When I uploaded a file of size 13 GB, the code started to show IO exception, broken pipes. After retries, it still failed. Here is the way to repeat the scenario. Take 1.1.7.1 release, create a new bucket in US standard region create a large EC2 instance as the client to upload file create a file of 13GB in size on the EC2 instance. run the sample code on either one of the high-level or low-level API S3 documentation pages from the EC2 instance test either one of the three part size: default part size (5 MB) or set the part size to 100,000,000 or 200,000,000 bytes. So far the problem shows up consistently. I attached here a tcpdump file for you to compare. In there, the host on the S3 side kept resetting the socket.


  • MySQL binlogs seems incomplete?

    - by warl0ck
    I created a Database, a table and inserted some data, and found this binlog.0000001 in my log folder, but when I do mysqlbinlog binlog.0000001, it only shows stuff below, seems incomplete: (There's only two files in the log dir: binlog.000001 binlog.index) /*!40019 SET @@session.max_insert_delayed_threads=0*/; /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/; DELIMITER /*!*/; # at 4 #120924 21:12:56 server id 1 end_log_pos 107 Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup # Warning: this binlog is either in use or was not closed properly. ROLLBACK/*!*/; BINLOG ' GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA== '/*!*/; DELIMITER ; # End of log file ROLLBACK /* added by mysqlbinlog */; /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/; If this warning was the cause: Warning: this binlog is either in use or was not closed properly.. How do I force close the log? EDIT After flush logs command, I see "0 rows" affected, and a few new files, binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index, the contents are nearly the same as binlog.000001. Now I dropped the database, and try restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered. EDIT 2 [mysqld] user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking log-bin=/var/log/mysql/binlog binlog-do-db=mydb bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP query_cache_limit = 1M query_cache_size = 16M expire_logs_days = 10 max_binlog_size = 100M P.S /var/log/mysql{.err,.log} are both empty
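
    With log-bin=/var/log/mysql/binlog and binlog-do-db=mydb as configured above, a hedged way to check whether the INSERTs ever made it into the binlog, and to replay only that database, is:

      mysqladmin -u root -p flush-logs
      mysqlbinlog --database=mydb --verbose /var/log/mysql/binlog.0* > /tmp/replay.sql
      grep -ci 'insert' /tmp/replay.sql
      mysql -u root -p < /tmp/replay.sql

    If the grep count is 0, the statements were filtered out before they were written: with statement-based logging, binlog-do-db filters on the default database, so inserts issued without USE mydb are never logged and no amount of replaying will bring them back.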


  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, My company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following: In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are: Increasing the available memory available to Fisheye (see the document above). Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process) (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks! Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.


  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    I have a set of commands in a text file that need to be executed. My shell script has to read each command, execute it and store the results in a separate file. Here is the snippet that does this: while read command do echo 'Command :' $command >> "$OUTPUT_FILE" redirect_pos=`expr index "$command" '>>'` if [ `expr index "$command" '>>'` != 0 ];then redirect_fn "$redirect_pos" "$command"; else $command state=$? if [ $state != 0 ];then echo "command failed." >> "$OUTPUT_FILE" else echo "executed successfully." >> "$OUTPUT_FILE" fi fi echo >> "$OUTPUT_FILE" done < "$INPUT_FILE" A sample Commands.txt looks like this ... tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt gzip /var/tmp/logs.tar rm -f /var/tmp/list.txt This works fine for commands that need to be executed on the local machine. But when I try to execute the following ssh commands, only the 1st command gets executed. Here are some of the ssh commands added in my text file: ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt ssh uname@hostname2 gzip /var/tmp/logs.tar ssh .. etc When I execute these on the CLI they work fine. Could anybody help me with this?
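
    One frequent cause of "only the first command runs" in a while read loop is that ssh itself reads the rest of $INPUT_FILE from stdin, leaving the loop nothing to read on the next iteration. A minimal sketch of the usual workarounds (not tested against the exact script above):

      # either tell ssh not to touch stdin ...
      ssh -n uname@hostname1 "tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt"
      # ... or detach stdin when running the command read from the file
      $command < /dev/null

    With -n (or < /dev/null) in place, each iteration sees the next line of Commands.txt instead of an empty stream, which would explain why the same commands work when typed interactively.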


  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m wget -r and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. Turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up any other dependencies because a lot of this crap is handled not on web pages. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
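
    wget indeed cannot execute JavaScript, so pages reachable only through Include.js will never be discovered by recursion alone; the usual compromise is to grab everything statically linked with a fuller flag set, then feed wget an explicit list of the JS-only URLs (the extra domain below is a placeholder):

      wget --mirror --page-requisites --convert-links --adjust-extension \
           --span-hosts --domains=site.com,assets.site.com -e robots=off http://site.com/
      # URLs pulled out of JLSSiteMap.js by hand or with grep, one per line
      wget --page-requisites --convert-links --adjust-extension -i js-urls.txt

    For fully script-driven sites the alternative is a tool that drives a real browser; wget and curl on their own stop at whatever the HTML and CSS reference.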


  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large query in memory, the speed increases by almost 10 times: "insert ignore into xxx(col1, col2) values('a',1), values('b',1), values('c',1)..." Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay the index until the end? Original: I'm using the following to batch insert 10 million rows into a MySQL db (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all the 10 million rows, then load that into the db. Are there better ways? PreparedStatement st=con.prepareStatement("insert ignore into xxx (col1, col2) "+ " values (?, 1)"); Iterator d=data.iterator(); while(d.hasNext()){ st.clearParameters(); st.setString(1, (d.next()).toLowerCase()); st.addBatch(); } int[]updateCounts=st.executeBatch();
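
    A hedged sketch of the LOAD DATA route from the shell, writing the 10 million values to a file first and letting the SET clause supply the constant second column (the table and column names are the placeholders from the question, the file name is hypothetical):

      mysql mydb -e "ALTER TABLE xxx DISABLE KEYS"   # MyISAM only; defers non-unique index maintenance
      mysql mydb --local-infile=1 \
        -e "LOAD DATA LOCAL INFILE 'values.txt' IGNORE INTO TABLE xxx (col1) SET col2 = 1"
      mysql mydb -e "ALTER TABLE xxx ENABLE KEYS"    # rebuilds the deferred indexes in one pass

    DISABLE KEYS is one answer to the "can I delay the index" edit for MyISAM tables; for InnoDB the usual equivalent is to load into a table without secondary indexes and add them afterwards with ALTER TABLE.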


  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted my index.php files from all my domains and put his own index.php files in their place, with the following message: Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br] Site desfigurado por Z4i0n Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly 2009 My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked, the rest weren't affected. I assumed that someone got in through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the login history using the last command, and it didn't show any activity through SSH or FTP on the day of the attack or in the seven days before. Does anybody have an idea? I have already changed my passwords. What do you recommend I do? UPDATE My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking into how they got into my server, I found weird things. In one of my subdomains, there were many scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been there since last December!! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions. One of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
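
    A starting point for sweeping the rest of the account for backdoors (assuming a GNU userland, as on a typical Linux shared host) is to list PHP files by modification date and to look for the functions most web-shell droppers rely on:

      find . -type f -name '*.php' -printf '%TY-%Tm-%Td %p\n' | sort
      grep -rlE 'base64_decode|eval\(|gzinflate|shell_exec|passthru' --include='*.php' .

    Since one dropper already sat in the upload directory of the old application, anything writable by the web user deserves the same pass, and rotating every FTP/shell credential for that user is cheaper than guessing which one leaked.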


  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something: Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = { NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache"; NSSQLiteErrorDomain = 11; } 2 messages repeated several times Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed] This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those. blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp blackl% sqlite3 /tmp/Calendar\ Cache SQLite version 3.7.12 2012-04-03 19:43:07 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite> pragma integrity_check ; *** in database main *** Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21) On page 21 at right child: 2nd reference to page 863 This is followed by a few dozen lines like these: rowid <number> missing from index <name> and then: wrong # of entries in index <name> I'm at a bit of a loss as to what to do now—I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable Calendar Agent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it.) How do I disable CalendarAgent and fix its cache?
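
    A hedged recovery sketch (the LaunchAgent path is an assumption for a 10.8-era system; quit Calendar first): stop the agent, rebuild whatever sqlite can still read with a dump/reload, and swap the file back in:

      launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist
      cd ~/Library/Calendars
      sqlite3 "Calendar Cache" ".dump" | sqlite3 "Calendar Cache.rebuilt"
      mv "Calendar Cache" "Calendar Cache.corrupt"      # also move any -wal / -shm siblings if present
      mv "Calendar Cache.rebuilt" "Calendar Cache"
      launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

    Since the file is only a cache of the configured calendar sources, simply moving it aside and letting CalendarAgent regenerate it is also commonly reported to work, which makes the dump/reload mostly a belt-and-braces step.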


  • Apache2 with lighttpd as proxy

    - by andrzejp
    Hi, I am using apache2 as my web server and would like lighttpd to assist it as a proxy for static content. Unfortunately I cannot get lighttpd and apache2 set up properly together. (OS: Debian) Important things from lighttpd.config: server.modules = ( "mod_access", "mod_alias", "mod_accesslog", "mod_proxy", "mod_status", ) server.document-root = "/www/" server.port = 82 server.bind = "localhost" $HTTP["remoteip"] =~ "127.0.0.1" { alias.url += ( "/doc/" => "/usr/share/doc/", "/images/" => "/usr/share/images/" ) $HTTP["url"] =~ "^/doc/|^/images/" { dir-listing.activate = "enable" } } I would like to use lighttpd on only one site, operating as a virtual directory on apache2. Configuration of this virtual directory: ProxyRequests Off ProxyPreserveHost On ProxyPass /images http://0.0.0.0:82/ ProxyPass /imagehosting http://0.0.0.0:82/ ProxyPass /pictures http://0.0.0.0:82/ ProxyPassReverse / http://0.0.0.0:82/ ServerName MY_VALUES ServerAlias www.MY_VALUES UseCanonicalName Off DocumentRoot /www/MYAPP/forum <Directory "/www/MYAPP/forum"> DirectoryIndex index.htm index.php AllowOverride None ... As you can see (or not ;)) my service is physically located at /www/myapp/forum, and I would like lighttpd to handle the folders /www/myapp/forum/images, /www/myapp/forum/imagehosting and /www/myapp/forum/pictures, leaving the rest (the PHP scripts) to apache. Both lighttpd and apache2 are running, but no images are served from these locations. What is wrong?


  • Apache subdomain not working

    - by tandu
    I'm running apache on my local machine and I'm trying to create a subdomain, but it's not working. Here is what I have (stripped down): <VirtualHost *:80> DocumentRoot /var/www/one ServerName one.localhost </VirtualHost> <VirtualHost *:80> DocumentRoot /var/www/two ServerName two.localhost </VirtualHost> I recently added one. The two entry has been around for a while, and it still works fine (displays the webpage when I go to two.localhost). In fact, I copied the entire two.localhost entry and simply changed two to one, but it's not working. I have tried each of the following: * `apachectl -k graceful` * `apachectl -k restart` * `/etc/init.d/apache2 restart` * `/etc/init.d/apache2 stop && !#:0 start` Apache will complain if /var/www/one does not exist, so I know it's doing something, but when I visit one.localhost in my browser, the browser complains that nothing is there. I put an index.html file there and also tried going to one.localhost/index.html directly, and the browser still won't find it. This is very perplexing since the entry I copied from two.localhost is exactly the same. Not only that, but if something were wrong I would expect to get a 500 rather than the browser not being able to find anything. The error_log also has nothing extra.
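
    The symptom (the browser "can't find anything" while Apache logs nothing) fits the name simply not resolving rather than a vhost problem, since subdomains of localhost do not resolve automatically on most setups; a quick sketch to rule that out:

      ping -c1 one.localhost                          # does the name resolve at all?
      grep localhost /etc/hosts
      echo "127.0.0.1 one.localhost" | sudo tee -a /etc/hosts
      apachectl -S                                    # confirm both vhosts are actually loaded

    If two.localhost already has a hosts entry, that would explain why the copied vhost behaves differently even though its configuration is identical.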


  • Spotlight can't see anything in Applications

    - by mix
    There have been other threads on this but none of the solutions mentioned have helped me. Spotlight has stopped showing any results for my Applications. I've tried reindexing and removing the index so it rebuilds it. No change. I've tried adding Applications to the Privacy tab and removing it, no change. I tried repairing disk permissions and redoing the above, no change. I've tried removing everything from the index except Applications and then I just get nothing for any search at all (except dictionary entries). I tried adding a symlink in my homedir to Applications and reindexing, but no change. Any ideas on what to do? I'm running Snow Leopard. This is driving me crazy! Update: I've noticed that when I start a reindex with sudo mdutil -E / and then immediately do a spotlight search for an app that the app shows up temporarily until spotlight gets disabled due to active indexing. After the indexing is done the app entries go away.
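
    A hedged sketch for checking the index state of the boot volume and pushing the Applications folder back through the importer (mdutil, mdimport and mdfind all ship with Snow Leopard):

      sudo mdutil -s /            # report indexing status for the boot volume
      sudo mdutil -i on /         # make sure indexing is actually enabled
      mdimport /Applications      # re-import the bundles Spotlight is missing
      mdfind -onlyin /Applications 'kMDItemKind == "Application"' | head

    If mdfind still returns nothing once the import finishes, the problem is in the index itself rather than in the Spotlight menu, which at least narrows down where to dig next.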


  • Why Is Web Sharing Broken on My Mac?

    - by Sam Murray-Sutton
    Background: I use my Mac for web development, running copies of web sites locally. I recently installed the Snow Leopard update, which to all intents and purposes seems to have gone fine, except... What's not working? Web-sharing; more specifically I can't turn it on via preferences. The preference pane just hangs when I try to. So Apache doesn't start on reboot. I can start Apache by hand, but I don't know enough to either setup apache to start with the computer, or to properly fix web sharing. Further details My Apache error log shows nothing on when the system boots up (as I would expect). This is the error message when I try to start web sharing from the sharing preference pane. 28/09/2009 10:58:05 System Preferences[834] setInetDServiceEnabled failed with 1 for org.apache.httpd Here's the messages given when I start apache from the command line. [Mon Sep 28 10:35:53 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Mon Sep 28 10:35:54 2009] [warn] mod_bonjour: Skipping user 'sams' - index file /Users/sams/Sites/index.html has zero length. [Mon Sep 28 10:35:54 2009] [notice] Digest: generating secret for digest authentication ... [Mon Sep 28 10:35:54 2009] [notice] Digest: done [Mon Sep 28 10:35:54 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 Phusion_Passenger/2.2.5 configured -- resuming normal operations Please let me know if you need any further details on this. Any help would be greatly appreciated. UPDATE I have added an answer of my own below - I was able to solve it thanks to being pointed in the right direction by the comments below, so thanks very much. But I'm still not totally clear as to what caused the problem or how my solution addressed it, so I'm leaving the question open for now.
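
    The Sharing pane is essentially a front end for the org.apache.httpd launchd job, so a hedged workaround while the pane itself hangs is to manage that job directly (the path below is the stock Snow Leopard one):

      sudo launchctl load -w /System/Library/LaunchDaemons/org.apache.httpd.plist
      sudo launchctl list | grep -i httpd

    The -w flag clears the Disabled override, so Apache comes back after a reboot as well; sudo launchctl unload -w reverses it later once the preference pane is behaving again.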


  • Fix Corrupted Ruby in Mac OS X Lion

    - by luckyb56
    I screwed up my ruby by executing the command sudo easy_install pip> /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)" It showed this error: Couldn't find index page for '-e' (maybe misspelled?) No local packages or download links found for -e error: Could not find suitable distribution for Requirement.parse('-e') After that, when I tried to install Brew with: /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)" it showed errors I don't understand: /usr/bin/ruby: line 1: Searching: command not found /usr/bin/ruby: line 2: Best: command not found /usr/bin/ruby: line 3: Processing: command not found Usage: pip COMMAND [OPTIONS] pip: error: No command by the name pip 1.1 (maybe you meant "pip install 1.1") /usr/bin/ruby: line 5: Installing: command not found /usr/bin/ruby: line 6: Installing: command not found /usr/bin/ruby: line 8: Using: command not found /usr/bin/ruby: line 9: Processing: command not found /usr/bin/ruby: line 10: Finished: command not found /usr/bin/ruby: line 11: Searching: command not found /usr/bin/ruby: line 12: Reading: command not found /usr/bin/ruby: line 13: syntax error near unexpected token `(' /usr/bin/ruby: line 13: `Scanning index of all packages (this may take a while)' Can this be fixed?
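
    The second batch of errors shows the shell trying to execute easy_install's progress messages, which strongly suggests the > in the first command overwrote /usr/bin/ruby with pip's installer output; a read-only check to confirm:

      file /usr/bin/ruby
      ls -l /usr/bin/ruby /System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby

    If file reports a text file instead of a Mach-O executable, /usr/bin/ruby needs to be restored, e.g. from a Time Machine backup or another Lion machine; as a stopgap (assuming the framework copy listed above was untouched) scripts can be pointed at the Ruby.framework binary until the original is put back.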


  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all 3 of my browsers (FF/Chrome/IE, OS = Win7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PC's in the house (on the same network), so it is definitely related to this one PC. The browser won't load the main image on the page correctly (even though the source code looks good), however if I direct the browser to the exact location of that image, then it displays fine. So obviously I can get the HTML index (which locates the resource) and I can get to the resource. So why heck isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad, on all 3 browsers at once. I've browsed to a bunch of other sites (including sites very heavy on JS, with HTML much more complex than the one in question here) and am seeing nothing funny. Only thing wonky I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since then restored the files to their previous state, and I don't know how Magnifier would be related to this even if I hadn't restored it. Any ideas? I'm stumped! EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.


  • ServerName not working in Apache2 and Ubuntu

    - by CreativeNotice
    I'm setting up a dev LAMP server and wish to allow dynamic subdomains, e.g. ted.servername.com, bob.servername.com. Here's my sites-active file <VirtualHost *:80> # Admin Email, Server Name, Aliases ServerAdmin [email protected] ServerName happyslice.net ServerAlias *.happyslice.net # Index file and Document Root DirectoryIndex index.html DocumentRoot /home/sysadmin/public_html/happyslice.net/public # Custom Log file locations LogLevel warn ErrorLog /home/sysadmin/public_html/happyslice.net/log/error.log CustomLog /home/sysadmin/public_html/happyslice.net/log/access.log combined And here's the output from sudo apache2ctl -S VirtualHost configuration: wildcard NameVirtualHosts and default servers: *:80 is a NameVirtualHost default server happyslice.net (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/happyslice.net:5) Syntax OK The server hostname is srv.happyslice.net. As you can see from apache2ctl, when I use happyslice.net I get the default virtual host; when I use a subdomain, I get the happyslice.net host. So the latter is working how I want, but the main URL does not. I've tried all kinds of variations here, but it appears that ServerName just isn't being tied to the correct location. Thoughts? I'm stumped. FYI, I'm running Apache2.1 and Ubuntu 10.04 LTS
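
    In the apache2ctl -S output above, the default server for *:80 comes from /etc/apache2/sites-enabled/000-default, so bare happyslice.net is being answered by that stock vhost rather than the one shown here; a hedged next step on Ubuntu is to disable (or deliberately reorder) that site and reload:

      sudo a2dissite 000-default
      sudo apache2ctl -t && sudo service apache2 reload
      sudo apache2ctl -S        # the default server should now be the happyslice.net vhost

    Keeping 000-default but renaming files so the happyslice.net site sorts first would achieve the same thing, since Apache uses the first vhost it loads for an address as that address's default.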


  • IIS 7 - 403 Access Denied error on wwwroot trying to redirect to /owa

    - by cparker4486
    I'm trying to setup a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I've been unsuccessful in doing this by using IIS's HTTP Redirect so I looked to other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it: <script runat="server"> private void Page_Load(object sender, System.EventArgs e) { Response.Status = "301 Moved Permanently"; Response.AddHeader("Location","https://mail.mydomain.com/owa"); } </script> Instead of getting a redirect I get: 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are but maybe knowing this will help you.) Thanks!

