Search Results

Search found 21 results on 1 page for 'htf'.

Page 1/1

  • Weirdness with cabal, HTF, and HUnit assertions

    - by rampion
    So I'm trying to use HTF to run some HUnit-style assertions:

        % cat tests/TestDemo.hs
        {-# OPTIONS_GHC -Wall -F -pgmF htfpp #-}
        module Main where

        import Test.Framework
        import Test.HUnit.Base ((@?=))
        import System.Environment (getArgs)

        -- just run some tests
        main :: IO ()
        main = getArgs >>= flip runTestWithArgs Main.allHTFTests

        -- all these tests should fail
        test_fail_int1 :: Assertion
        test_fail_int1 = (0::Int) @?= (1::Int)

        test_fail_bool1 :: Assertion
        test_fail_bool1 = True @?= False

        test_fail_string1 :: Assertion
        test_fail_string1 = "0" @?= "1"

        test_fail_int2 :: Assertion
        test_fail_int2 = [0::Int] @?= [1::Int]

        test_fail_string2 :: Assertion
        test_fail_string2 = "true" @?= "false"

        test_fail_bool2 :: Assertion
        test_fail_bool2 = [True] @?= [False]

    And when I use ghc --make, it seems to work correctly - all six tests fail, as they should:

        % ghc --make tests/TestDemo.hs
        [1 of 1] Compiling Main ( tests/TestDemo.hs, tests/TestDemo.o )
        Linking tests/TestDemo ...
        % tests/TestDemo
        ...
        * Tests: 6
        * Passed: 0
        * Failures: 6
        * Errors: 0

        Failures:
        * Main:fail_int1 (tests/TestDemo.hs:9)
        * Main:fail_bool1 (tests/TestDemo.hs:12)
        * Main:fail_string1 (tests/TestDemo.hs:15)
        * Main:fail_int2 (tests/TestDemo.hs:19)
        * Main:fail_string2 (tests/TestDemo.hs:22)
        * Main:fail_bool2 (tests/TestDemo.hs:25)

    But when I use cabal to build it, not all of the tests that should fail actually fail:

        % cat Demo.cabal
        ...
        executable test-demo
          build-depends:  base >= 4, HUnit, HTF
          main-is:        TestDemo.hs
          hs-source-dirs: tests
        % cabal configure
        Resolving dependencies...
        Configuring Demo-0.0.0...
        % cabal build
        Preprocessing executables for Demo-0.0.0...
        Building Demo-0.0.0...
        [1 of 1] Compiling Main ( tests/TestDemo.hs, dist/build/test-demo/test-demo-tmp/Main.o )
        Linking dist/build/test-demo/test-demo ...
        % dist/build/test-demo/test-demo
        ...
        * Tests: 6
        * Passed: 3
        * Failures: 3
        * Errors: 0

        Failures:
        * Main:fail_int2 (tests/TestDemo.hs:23)
        * Main:fail_string2 (tests/TestDemo.hs:26)
        * Main:fail_bool2 (tests/TestDemo.hs:29)

    What's going wrong and how can I fix it?
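    One thing worth checking is whether cabal actually ran the htfpp preprocessor on the file it compiled - the shifted source line numbers in the cabal failure report (tests/TestDemo.hs:23 vs :19) hint that a differently preprocessed version was built. A hedged sketch of forcing the flags through the .cabal file instead of relying only on the OPTIONS_GHC pragma (assuming htfpp is on the PATH at build time):

        executable test-demo
          build-depends:  base >= 4, HUnit, HTF
          main-is:        TestDemo.hs
          hs-source-dirs: tests
          ghc-options:    -Wall -F -pgmF htfpp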

    Read the article

  • Nginx RegEx to match a directory and file

    - by HTF
    I'm wondering if it's possible to match the Wordpress admin directory and a specific file in the same location block. At the moment I've got a rule that matches only the wp-admin directory:

        ## Restricted Access directory
        location ^~ /wp-admin/ {
            auth_basic "Access Denied!";
            auth_basic_user_file .users;

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/www.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would like to also match the wp-login.php file, but I can't get it to work. I've tried the following:

        location ^~ /(wp-admin/|wp-login.php) { ...
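    One detail stands out in the attempted rule: the ^~ modifier takes a literal prefix string, not a regular expression, so the parenthesised alternation never matches. A hedged sketch using a regex location instead, with the dot escaped (the nested PHP handling would need to move inside it):

        location ~ ^/(wp-admin/|wp-login\.php) {
            auth_basic           "Access Denied!";
            auth_basic_user_file .users;
            # php handling as in the wp-admin block above
        }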

    Read the article

  • CentOS 7: PHP high CPU usage

    - by HTF
    I've migrated the Observium monitoring platform from CentOS 6.5 to CentOS 7 and I've noticed high CPU usage, mostly caused by PHP; the CPU load increases when the polling script (poller-wrapper.py) is running. Both VMs are running on the same physical host (KVM hypervisor) with exactly the same spec. I also tested this with a simple PHP benchmark script, and CentOS 7 is slower - is this strictly related to the PHP version (5.4.30 vs 5.4.16)?

    CentOS 6.5

        [root@centos6:~]# php -f bench.php
        --------------------------------------
        |       PHP BENCHMARK SCRIPT        |
        --------------------------------------
        Start       : 2014-08-19 22:26:34
        PHP version : 5.4.30
        Platform    : Linux
        --------------------------------------
        test_math               : 1.610 sec.
        test_stringmanipulation : 1.416 sec.
        test_loops              : 0.822 sec.
        test_ifelse             : 0.729 sec.
        --------------------------------------
        Total time:             : 4.577 sec.

    CentOS 7

        [root@centos7:~]# php -f bench.php
        --------------------------------------
        |       PHP BENCHMARK SCRIPT        |
        --------------------------------------
        Start       : 2014-08-19 22:27:58
        PHP version : 5.4.16
        Platform    : Linux
        --------------------------------------
        test_math               : 2.117 sec.
        test_stringmanipulation : 1.246 sec.
        test_loops              : 1.174 sec.
        test_ifelse             : 0.752 sec.
        --------------------------------------
        Total time:             : 5.289 sec.

    CPU usage increased right after the migration:
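    If it helps to separate the interpreter from the application, a tiny stand-in for the kind of loop such bench scripts time (a hypothetical snippet, not the actual bench.php) can be compared across both hosts:

        <?php
        // time a tight arithmetic loop on each host
        $start = microtime(true);
        $x = 0;
        for ($i = 0; $i < 5000000; $i++) {
            $x += $i * 2;
        }
        printf("loop: %.3f sec.\n", microtime(true) - $start);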

    Read the article

  • Postfix TLS issue

    - by HTF
    I'm trying to enable TLS on Postfix but the daemon is crashing:

        Sep 16 16:00:38 core postfix/master[1689]: warning: process /usr/libexec/postfix/smtpd pid 1694 killed by signal 11
        Sep 16 16:00:38 core postfix/master[1689]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    CentOS 6.3 x86_64

        # postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        local_recipient_maps =
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_cache.db
        smtp_use_tls = yes
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 1s
        smtpd_hard_error_limit = 20
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_pipelining, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_destination reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_soft_error_limit = 10
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550
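    Since the smtpd child dies at startup, reproducing the crash on demand while watching the log makes it easier to catch; a hedged sketch with stock tools, using the hostname and cert paths from the config above:

        # watch the mail log in one terminal
        tail -f /var/log/maillog

        # trigger an smtpd child and a STARTTLS handshake from another
        openssl s_client -connect mail.domain.com:25 -starttls smtp

        # also worth verifying that the certificate and key actually match
        openssl x509 -noout -modulus -in /etc/postfix/ssl/smtpd.crt | openssl md5
        openssl rsa  -noout -modulus -in /etc/postfix/ssl/smtpd.key | openssl md5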

    Read the article

  • High memory usage on the server - can't determine the process

    - by HTF
    I've noticed high memory usage on the server. Details:

    OS: CentOS 6.3 - x86_64
    Web server: Nginx with PHP-FPM

    The server is generating PDF documents, so the traffic is minimal.

    top:

        # top -b -n 1 -a
        top - 10:04:51 up 21 days, 18:57, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 92 total, 1 running, 91 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.3%us, 0.2%sy, 0.0%ni, 99.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3923092k total, 3720380k used, 202712k free, 133904k buffers
        Swap: 4194296k total, 12k used, 4194284k free, 147404k cached

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        15855 www-data 20  0 199m  4952 2128 S 0.0  0.1  0:00.06 php-fpm
        15853 www-data 20  0 199m  4940 2028 S 0.0  0.1  0:00.06 php-fpm
        15850 www-data 20  0 199m  4928 2020 S 0.0  0.1  0:00.05 php-fpm
        15851 www-data 20  0 199m  4888 2020 S 0.0  0.1  0:00.06 php-fpm
        15852 www-data 20  0 199m  4852 2020 S 0.0  0.1  0:00.06 php-fpm
        15857 www-data 20  0 198m  4716 2020 S 0.0  0.1  0:00.06 php-fpm
        17553 root     20  0 97816 3860 2924 S 0.0  0.1  0:00.03 sshd
        15849 root     20  0 198m  3460 1072 S 0.0  0.1  0:00.12 php-fpm
        13441 nginx    20  0 65608 2968 1604 S 0.0  0.1  0:02.06 nginx
        13440 nginx    20  0 65608 2964 1600 S 0.0  0.1  0:01.87 nginx
        17561 root     20  0 105m  1944 1488 S 0.0  0.0  0:00.01 bash
        1150  xfs      20  0 20980 1784 704  S 0.0  0.0  0:00.13 xfs
        15863 root     20  0 179m  1424 1028 S 0.0  0.0  0:00.00 rsyslogd
        1     root     20  0 19224 1360 1088 S 0.0  0.0  0:17.96 init
        1201  nrpe     20  0 40928 1288 704  S 0.0  0.0  3:57.64 nrpe
        13226 root     20  0 114m  1216 612  S 0.0  0.0  0:00.01 crond
        6691  root     20  0 64068 1156 488  S 0.0  0.0  0:09.59 sshd
        13439 root     20  0 65104 1128 292  S 0.0  0.0  0:00.00 nginx
        19026 root     20  0 15040 1116 844  R 0.0  0.0  0:00.00 top
        451   root     16 -4 11052 1096 316  S 0.0  0.0  0:00.02 udevd
        1174  root     18 -2 11048 1064 288  S 0.0  0.0  0:00.00 udevd
        1175  root     18 -2 11048 1064 288  S 0.0  0.0  0:00.00 udevd
        1065  root     16 -4 93168 824  560  S 0.0  0.0  0:16.00 auditd
        1165  root     20  0 4056  564  480  S 0.0  0.0  0:00.00 mingetty
        1167  root     20  0 4056  564  480  S 0.0  0.0  0:00.00 mingetty
        1169  root     20  0 4056  564  480  S 0.0  0.0  0:00.00 mingetty
        1171  root     20  0 4056  564  480  S 0.0  0.0  0:00.00 mingetty
        1163  root     20  0 4056  560  480  S 0.0  0.0  0:00.00 mingetty
        1176  root     20  0 4056  560  480  S 0.0  0.0  0:00.00 mingetty
        [the remaining rows are kernel threads with zero RSS: kthreadd,
        migration/*, ksoftirqd/0 (44:30 CPU time), ksoftirqd/1 (11:35),
        watchdog/*, events/*, kblockd/*, kswapd0 (1:07), khugepaged, md/*,
        scsi_eh_*, virtio-blk, virtio-net, kdmflush, jbd2/dm-0-8 (0:20),
        ext4-dio-unwrit, kauditd, flush-253:0 (0:15), etc.]

    ps:

        # ps aux --sort -vsz | head
        USER     PID   %CPU %MEM VSZ    RSS  TTY STAT START TIME COMMAND
        www-data 13213 0.0  0.1  204416 4772 ?   S    08:28 0:00 php-fpm: pool default
        www-data 13214 0.0  0.1  204416 4776 ?   S    08:28 0:00 php-fpm: pool default
        www-data 13215 0.0  0.1  204416 4832 ?   S    08:28 0:00 php-fpm: pool default
        www-data 13216 0.0  0.1  204416 4776 ?   S    08:28 0:00 php-fpm: pool default
        www-data 13218 0.0  0.1  204416 4956 ?   S    08:28 0:00 php-fpm: pool default

    free:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          3831       3530        300          0        130        143
        -/+ buffers/cache:       3256        574
        Swap:         4095          0       4095

    When I stopped Nginx and PHP-FPM the memory usage was still the same. Could you help me investigate what is consuming the memory on the system?

    Regards
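    Since stopping Nginx and PHP-FPM changes nothing and no process owns the memory, the usual suspect is kernel-side allocation - most often the slab cache (dentries/inodes) on a box that churns through many files. A hedged sketch of where to look, all stock tools on CentOS 6:

        # where did the memory go? slab, page tables, shmem...
        grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|Shmem' /proc/meminfo

        # top slab consumers (dentry and *_inode_cache are common offenders)
        slabtop -o | head -20

        # if it is reclaimable slab, this frees it (harmless, but empties caches)
        sync; echo 2 > /proc/sys/vm/drop_caches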

    Read the article

  • Unidirectional synchronization and admin back-end

    - by HTF
    I have a Wordpress installation on two web nodes (load balancing/failover). There is unidirectional synchronization from server A to server B, so any updates must occur on the first web node. My problem is the Wordpress admin side. I'm using Nginx, and the initial plan was to create a rewrite rule from domain.com/wp-admin to wpadmin.domain.com, pointing to the first node. The problem is that the Wordpress installation can be accessed only via the main domain, and without an extra subdomain there is no way for the rewrite rule to distinguish between the two web servers. Could you please advise if there is any other solution in this case.

    Regards
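    One pattern that avoids touching Wordpress itself, sketched here under the assumption that some hostname or IP reaches node A directly (wpadmin.domain.com below is hypothetical): serve the admin URLs normally on server A, and have server B bounce them over:

        # on server B only: push all back-end traffic to node A
        location ~ ^/(wp-admin|wp-login\.php) {
            return 301 https://wpadmin.domain.com$request_uri;
        }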

    Read the article

  • Mirroring MySQL server with different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to a master later on. I previously used LVM snapshots and Percona Xtrabackup for this purpose, but this time I've optimized the MySQL configuration in a way that prevents me from using these methods:

    Old server (backup):

        innodb_log_file_size = 256M
        innodb_log_files_in_group = 3

    New server (restore):

        innodb_log_file_size = 512M
        innodb_log_files_in_group = 2

    The Xtrabackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB log files have a different size. Is there any solution that avoids downtime in this case? I can't use mysqldump as there are around 8000 databases, so I would have to take the server down for a couple of hours. I was also thinking of using the old settings with Xtrabackup and then changing them once the new server is promoted to a master - less downtime, but I'm not sure if this will work?

    Thank you

    Regards
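    For what it's worth, the InnoDB log size mismatch itself has a standard workaround: restore with the old settings, then recreate the logs at the new size. After a clean shutdown (innodb_fast_shutdown set to 1 or 0, not 2) the redo logs are fully checkpointed and can be deleted safely. A hedged sketch, assuming the default CentOS datadir:

        mysqladmin shutdown              # clean shutdown; innodb_fast_shutdown must not be 2
        rm /var/lib/mysql/ib_logfile*    # safe only after a clean shutdown
        # now switch my.cnf to the new values:
        #   innodb_log_file_size = 512M
        #   innodb_log_files_in_group = 2
        service mysqld start             # InnoDB recreates the log files at the new size

    (MySQL 5.6.8 and later resize the redo logs automatically on restart, which makes the manual step unnecessary.)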

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is acting as a master now. I used the following procedure to create it:

        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers.

    I've found this article where it says:

        The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database.

    Thank you
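    For reference, the usual way to get a consistent multi-database dump from a running master is to let mysqldump open a single repeatable-read transaction for the whole export; a hedged sketch (only reliable if all tables are InnoDB, since MyISAM tables ignore the transaction):

        mysqldump --single-transaction --master-data=2 --all-databases > dump.sql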

    Read the article

  • CentOS, CUPS - printer management

    - by HTF
    I'm using CentOS 6.3 and trying to get a Canon PIXMA iP4950 printer to work. The printer is attached via USB. I've downloaded and installed the drivers from the Canon website, and have the printer installed in CUPS. However, when I print anything (even the test page), the job completes successfully (according to the CUPS log), but the printer does not print a thing. I don't know how to debug this. I have tried to change logging to debug, but I don't see any errors in the error_log, and the access_log says:

        Returning IPP successful-ok for Get-Jobs (ipp://localhost:631/printers/Canon_iP4900_series) from localhost

    Please note that I was able to print from another CentOS machine, though that one had a GNOME desktop.
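    A few standard CUPS debugging steps may help here, sketched with stock commands (the queue name is taken from the access_log line above):

        # turn on debug logging and watch it while printing
        cupsctl --debug-logging
        tail -f /var/log/cups/error_log

        # queue details and pending jobs
        lpstat -p Canon_iP4900_series -l
        lpstat -o

        # send a test job from the command line
        echo "test" | lp -d Canon_iP4900_series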

    Read the article

  • Data replication between two web nodes

    - by HTF
    I have a Wordpress installation running on two web servers (Nginx). There is unidirectional synchronization from server A to server B, and I'm using lsyncd for this purpose. With this configuration I have to add blog posts from the first web server so that the data is replicated to the second one - how can I force access to the Wordpress back-end only from the first web server? Please note that both servers use the same domain for Wordpress.

    Regards
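    A minimal sketch at the Nginx level, assuming edits should simply be impossible on the replica: deny the back-end locations on server B so the only writable copy is server A (compare the redirect variant in the earlier question about this setup):

        # on server B (the lsyncd target): block the Wordpress back-end
        location ~ ^/(wp-admin|wp-login\.php) {
            deny all;
        }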

    Read the article

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vhost this log relates to? How are these commands invoked, and how/why are they printed to the log file - is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17--  http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K .... 100% 303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
        % Total % Received % Xferd Average Speed Time Time Time Current
        Dload Upload Total Spent Left Speed
        ^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0^M101 5055 101 5055 0 0 79686 0 --:--:-- --:--:-- --:--:-- 154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18--  http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
        0K .......... .......... .......... .......... .......... 24%  990K 0s
        50K .......... .......... .......... .......... .......... 49% 2.74M 0s
        100K .......... .......... .......... .......... .......... 74% 2.96M 0s
        150K .......... .......... .......... .......... .......... 99% 3.49M 0s
        200K 100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied
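    To tie the activity to a vhost, the error_log timestamps can be correlated against the per-vhost access logs, and the web user's footprint inspected; a hedged sketch with standard tools (paths assume the stock CentOS layout):

        # which vhost took requests at the moment of the downloads?
        grep '13:29:1[78]' /var/log/httpd/*access*log

        # anything still running as the web user?
        ps -u apache -f

        # recently modified scripts under the docroots (last 24h)
        find /var/www -name '*.php' -mmin -1440 -ls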

    Read the article

  • Lighttpd domain redirection

    - by HTF
    I would like to redirect domains on HTTP/HTTPS:

        http://old.com  -> https://new.com
        https://old.com -> https://new.com

    I have to specify the SSL key/certificate for the old domain, but I'm not sure where I have to place these directives:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"
        }

        $SERVER["socket"] == ":80" {
            $HTTP["host"] =~ "old.com|new.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
        }

    I was trying to add the code below, but Lighttpd reports configuration errors:

        $SERVER["socket"] == ":443" {
            $HTTP["host"] =~ "old.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
        }
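    A hedged sketch of one layout that lighttpd (1.4.24 and later, which added SNI) accepts: the socket block gets a default certificate first, and a per-host ssl.pemfile inside an $HTTP["host"] conditional serves the old domain's certificate, with the redirect in the same conditional:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"   # default certificate

            $HTTP["host"] =~ "^old\.com$" {
                ssl.pemfile  = "/etc/pki/tls/private/old.com.pem"  # served via SNI
                url.redirect = ( "^/(.*)" => "https://new.com/$1" )
            }
        }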

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a Wordpress website with Nginx and Memcached. I have simple DNS round-robin balancing with A records pointing to both web servers. I've noticed the following entries in the access logs of both web servers:

        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin for Wordpress to point to the loopback address (127.0.0.1:11211) on each Wordpress installation. Is this because one web server is trying to access content that is cached on the other web server? Shall I add the IPs of both web servers to the W3 plugin on each website (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure if this is related to Memcached or maybe a configuration issue on the server itself?

    Regards
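    Worth noting: memcached speaks its own protocol, not HTTP, so a misdirected cache client would not show up as well-formed 400 lines - zero-byte 400s in Nginx usually mean something opened a connection and sent no request (health checks and keep-alive probes are common causes). A hedged sketch for checking both angles from either node:

        # is memcached reachable and serving hits on this node?
        echo stats | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses|curr_connections'

        # who is holding connections to port 80 around the logged time?
        netstat -tn | grep ':80 '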

    Read the article

  • NFSv4 with idmap

    - by HTF
    The following errors appear on the NFS server, could you please advise how I can fix this? Details:

        System: CentOS release 6.4, NFS: nfs-utils-1.2.3-36

        # cat /etc/idmapd.conf
        [General]
        Domain = domain.com

        [Mapping]
        Nobody-User = nobody
        Nobody-Group = nobody

        [Translation]
        Method = nsswitch

        Sep 3 08:25:28 snode1 rpc.idmapd[1382]: nss_getpwnam: name '0' does not map into domain 'domain.com'
        Sep 3 08:25:29 snode1 rpc.idmapd[1382]: nss_getpwnam: name '500' does not map into domain 'domain.com'

    EDIT: 03 Sep 2013 10:41

    Please note that I'm using NFSv4 and these errors appear on the NFS server only (not on the NFS clients).

    Server:

        # cat /etc/sysconfig/nfs
        MOUNTD_NFS_V2="no"
        MOUNTD_NFS_V3="no"
        ...
        RPCNFSDARGS="-N 2 -N 3"

    Clients:

        # cat /etc/fstab
        server:/ /data nfs4 defaults,hard,intr,timeo=15,_netdev,noatime,nodiratime,nosuid 0 0
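    The messages mean the server-side rpc.idmapd is being handed bare numeric IDs ('0', '500') where it expects user@domain strings, which usually points at the clients not mapping names. A hedged sketch of the usual checks on each CentOS 6 client:

        # Domain must match the server's /etc/idmapd.conf exactly
        grep -i '^Domain' /etc/idmapd.conf

        # rpcidmapd must be running on clients too for NFSv4 name mapping
        service rpcidmapd status

        # if this kernel parameter exists, 1 here makes the client send numeric IDs
        cat /sys/module/nfs/parameters/nfs4_disable_idmapping 2>/dev/null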

    Read the article

  • How to get base64 encoded contents for an ImageReader?

    - by htf
    Hi. How do I read an image into a base64-encoded string via its ImageReader? Here's example source code using HtmlUnit. I want to get the base64 String of img:

        WebClient wc = new WebClient();
        wc.setThrowExceptionOnFailingStatusCode(false);
        wc.setThrowExceptionOnScriptError(false);
        HtmlPage p = wc.getPage("http://flickr.com");
        HtmlImage img = (HtmlImage) p.getByXPath("//img").get(3);
        System.out.println(img.getImageReader().getFormatName());
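    A hedged sketch of one way to finish this: decode the first frame from the ImageReader, re-encode it with ImageIO into a byte buffer, and base64-encode the bytes (DatatypeConverter avoids external dependencies on Java 6+; note that re-encoding may not be byte-identical to the original file):

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayOutputStream;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReader;
        import javax.xml.bind.DatatypeConverter;

        // img is the HtmlImage from the snippet above; may throw IOException
        ImageReader reader = img.getImageReader();
        BufferedImage buffered = reader.read(0);           // first image in the stream

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(buffered, reader.getFormatName(), out);

        String base64 = DatatypeConverter.printBase64Binary(out.toByteArray());
        System.out.println(base64);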

    Read the article

  • Spotlight search with PHP

    - by htf
    Hi. I want to add spotlight-style search functionality to a site: search results displayed with rich content (thumbnails etc.) in a drop-down menu that changes on each keyup event, just like the search on apple.com. The data lives in MySQL InnoDB tables, spread across separate tables for categories, help pages, blog pages and so on, and the search script must take into account just a subset of columns. Since this seems to be a popular demand, I guess there are some PHP search engine projects (preferably open-source and with memcached support) which could be integrated into the existing system on the basis of regular exports of the relevant data from the working db/tables. Are there any solutions out there? Which one would you recommend? Or maybe it would be better to implement it the other way around?

    Thanks

    Read the article

  • Get a value from hashtable by a part of its key

    - by htf
    Hi. Say I have a Hashtable<String, Object> with keys and values like these:

        apple    => 1
        orange   => 2
        mossberg => 3

    I can use the standard get method to get 1 by "apple", but what I want is to get the same value (or a list of values) by a part of the key, for example "ppl". Of course it may yield several results; in this case I want to be able to process each key-value pair. So it's basically similar to the LIKE '%ppl%' SQL statement, but I don't want to use an (in-memory) database just because I don't want to add unnecessary complexity. What would you recommend?
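    Without an extra index structure, the straightforward answer is a linear scan over the entries; a minimal, self-contained sketch:

        import java.util.Hashtable;
        import java.util.Map;

        public class SubstringLookup {
            // O(n) scan per lookup: fine for small tables; build an
            // inverted index of substrings if the table grows large
            public static void findBySubstring(Hashtable<String, Object> table, String part) {
                for (Map.Entry<String, Object> e : table.entrySet()) {
                    if (e.getKey().contains(part)) {
                        System.out.println(e.getKey() + " => " + e.getValue());
                    }
                }
            }
        }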

    Read the article

  • Parsing response from the WSDL

    - by htf
    Hello. I've generated the web service client in Eclipse for the OpenCalais WSDL using the "develop" client type (actually I was following this post, so not really going into detail). Now when I get the results this way:

        new CalaisLocator().getcalaisSoap().enlighten(key, content, requestParams);

    I get a String object containing the response XML. Sure, it's possible to parse that XML, but I think there must be some way to do it automatically, e.g. getting the response in the form of some structured object or list?
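    Short of regenerating the client against a schema for the response payload, a baseline is to parse the returned string into a DOM and pull out the elements of interest; a hedged, generic sketch with the JDK's built-in parser (the "Entity" element name is a placeholder, not the actual OpenCalais schema):

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;

        String xml = new CalaisLocator().getcalaisSoap().enlighten(key, content, requestParams);

        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(xml)));

        // "Entity" is a hypothetical element name; adjust to the real response
        NodeList entities = doc.getElementsByTagName("Entity");
        for (int i = 0; i < entities.getLength(); i++) {
            System.out.println(entities.item(i).getTextContent());
        }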

    Read the article

  • Is it possible to have an enum field in a class persisted with OrmLite?

    - by htf
    Hello. I'm trying to persist the following class with OrmLite:

        public class Field {
            @DatabaseField(id = true)
            public String name;

            @DatabaseField(canBeNull = false)
            public FieldType type;

            public Field() {
            }
        }

    The FieldType is a public enum. The column corresponding to type is a string in SQLite (it doesn't support enums). When I try to use it, I get the following exception:

        INFO [main] (SingleConnectionDataSource.java:244) - Established shared JDBC Connection: org.sqlite.Conn@5224ee
        Exception in thread "main" org.springframework.beans.factory.BeanInitializationException: Initialization of DAO failed; nested exception is java.lang.IllegalArgumentException: Unknown field class class enums.FieldType for field FieldType:name=type,class=class orm.Field
            at org.springframework.dao.support.DaoSupport.afterPropertiesSet(DaoSupport.java:51)
            at orm.FieldDAO.getInstance(FieldDAO.java:17)
            at orm.Field.fromString(Field.java:23)
            at orm.Field.main(Field.java:38)
        Caused by: java.lang.IllegalArgumentException: Unknown field class class enums.FieldType for field FieldType:name=type,class=class orm.Field
            at com.j256.ormlite.field.FieldType.<init>(FieldType.java:54)
            at com.j256.ormlite.field.FieldType.createFieldType(FieldType.java:381)
            at com.j256.ormlite.table.DatabaseTableConfig.fromClass(DatabaseTableConfig.java:82)
            at com.j256.ormlite.dao.BaseJdbcDao.initDao(BaseJdbcDao.java:116)
            at org.springframework.dao.support.DaoSupport.afterPropertiesSet(DaoSupport.java:48)
            ... 3 more

    So how do I tell OrmLite that the values on the Java side come from an enum?
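    In current OrmLite versions the field can be mapped explicitly to an enum-as-string column via the dataType attribute; a hedged sketch (assuming a release that ships DataType.ENUM_STRING - very old releases may predate enum support, in which case upgrading is the fix):

        import com.j256.ormlite.field.DataType;
        import com.j256.ormlite.field.DatabaseField;

        public class Field {
            @DatabaseField(id = true)
            public String name;

            // stored as the enum constant's name() in a string column
            @DatabaseField(canBeNull = false, dataType = DataType.ENUM_STRING)
            public FieldType type;

            public Field() {
            }
        }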

    Read the article

  • Suggest Sphinx index scheme

    - by htf
    Hi. In a MySQL database I have documents of different types: some have text content, meta keys and descriptions; others have code, an SKU number, a size and a brand name; and so on. The problem is, I have to search across all of these documents and then display a single page where the results are grouped by document type, such as help page, blog post, item... It's not clear to me how to implement the Sphinx index: I want a single index to speed up queries, but since different docs have different structures - how can I group them? I was thinking about just concatenating them, but it doesn't feel right.
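    One common layout, sketched here with hypothetical table and field names: give every source the same column list (padding missing fields with empty strings), tag each with a numeric doc_type attribute, and combine them into one index:

        # connection settings (type, sql_host, sql_user, ...) omitted;
        # document IDs must be unique across all sources sharing an index
        source src_blog {
            sql_query     = SELECT id, 1 AS doc_type, title, body, '' AS sku FROM blog_posts
            sql_attr_uint = doc_type
        }

        source src_items {
            # offset the IDs to keep them unique across sources
            sql_query     = SELECT id + 1000000, 2 AS doc_type, name AS title, description AS body, sku FROM items
            sql_attr_uint = doc_type
        }

        index docs {
            source = src_blog
            source = src_items
            path   = /var/lib/sphinx/docs
        }

    At query time the grouping falls out of the attribute: SetGroupBy('doc_type', SPH_GROUPBY_ATTR) in the PHP API, or GROUP BY doc_type in SphinxQL, yields one bucket per document type for the results page.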

    Read the article
