Search Results

Search found 5290 results on 212 pages for 'refresh rate'.


  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with a DRBD storage for easy failover. Most of the time, immediatly after booting the DomU, I get an IO error: [ 3.153370] EXT3-fs (xvda2): using internal journal [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max) [ 3.515604] init: failsafe main process (397) killed by TERM signal [ 3.801589] blkfront: barrier: write xvda2 op failed [ 3.801597] blkfront: xvda2: barrier or flush: disabled [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396 [ 3.801652] lost page write due to I/O error on xvda2 [ 3.801755] Aborting journal on device xvda2. [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only [ 3.814754] journal commit I/O error [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1 [ 6.992267] init: plymouth-splash main process (546) terminated with status 1 The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f, meaning flush, the next method drbd chooses after barrier): 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 And on the other host: 3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 I also enabled the option disable_sendpage, as per the drbd docs: cat /sys/module/drbd/parameters/disable_sendpage Y I also tried adding barriers=0 to fstab as mount option. Still it sometimes says: [ 58.603896] blkfront: barrier: write xvda2 op failed [ 58.603903] blkfront: xvda2: barrier or flush: disabled I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still compain about barriers when I disabled that? Both host are: Debian: 6.0.4 uname -a: Linux 2.6.32-5-xen-amd64 drbd: 8.3.7 Xen: 4.0.1 Guest: Ubuntu 12.04 LTS uname -a: Linux 3.2.0-24-generic pvops drbd resource: resource drbdvm { meta-disk internal; device /dev/drbd3; startup { # The timeout value when the last known state of the other side was available. 0 means infinite. wfc-timeout 0; # Timeout value when the last known state was disconnected. 0 means infinite. degr-wfc-timeout 180; } syncer { # This is recommended only for low-bandwidth lines, to only send those # blocks which really have changed. #csums-alg md5; # Set to about half your net speed rate 60M; # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently) verify-alg md5; } net { # The manpage says this is recommended only in pre-production (because of its performance), to determine # if your LAN card has a TCP checksum offloading bug. #data-integrity-alg md5; } disk { # Detach causes the device to work over-the-network-only after the # underlying disk fails. Detach is not default for historical reasons, but is # recommended by the docs. # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event... 
on-io-error detach; # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors. no-disk-barrier; } on host1 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.1:7792; } on host2 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.2:7792; } } DomU cfg: bootloader = '/usr/lib/xen-default/bin/pygrub' vcpus = '2' memory = '512' # # Disk device(s). # root = '/dev/xvda2 ro' disk = [ 'phy:/dev/drbd3,xvda2,w', 'phy:/dev/universe/drbdvm-swap,xvda1,w', ] # # Hostname # name = 'drbdvm' # # Networking # # fake IP for posting vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' In my test setup: the primary host's storage is 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
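    A note on layering, offered as a hedged sketch rather than a confirmed fix: the "blkfront: barrier ... op failed" lines come from the Xen block frontend talking to dom0's blkback, so the DRBD-level no-disk-barrier setting alone may not silence them; barriers also have to be handled inside the guest filesystem, and for ext3 the mount option is barrier=0, not barriers=0. The snippet below restates the relevant pieces under those assumptions; no-disk-flushes is only safe where the write cache is battery backed on both nodes.

    ```
    # Hedged sketch: barrier handling at each layer (DRBD 8.3 on LVM, ext3 guest).
    # Names are taken from the question; verify on a test DomU first.

    # dom0: DRBD resource, disk section
    #   disk {
    #     on-io-error     detach;
    #     no-disk-barrier;        # fall back from barrier to flush (shows as wo:f)
    #     # no-disk-flushes;      # only with battery-backed write cache on BOTH nodes
    #   }

    # DomU: /etc/fstab -- ext3 uses "barrier=0"
    #   /dev/xvda2  /  ext3  errors=remount-ro,barrier=0  0  1

    # Verify the effective write-ordering method on both hosts:
    grep 'wo:' /proc/drbd
    ```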


  • nginx + php-fpm: help optimizing configs

    - by Dmitro
    I have 3 servers. First server (CPU - model name: 06/17, 2.66GHz, 4 cores, 8GB RAM) have nginx as load balancer with next config upstream lb_mydomain { server mydomain.ru:81 weight=2; server 66.0.0.18 weight=6; } server { listen 80; server_name ~(?!mydomain.ru)(.*); client_max_body_size 20m; location / { proxy_pass http://lb_mydomain; proxy_redirect off; proxy_set_header Connection close; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass_header Set-Cookie; proxy_pass_header P3P; proxy_pass_header Content-Type; proxy_pass_header Content-Disposition; proxy_pass_header Content-Length; } } And configs from nginx.conf: user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } And config php-fpm: listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 80 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 pm.status_path = /status ping.path = /ping ;ping.response = pong request_terminate_timeout = 30s request_slowlog_timeout = 10s slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M In top I see 20 php-fpm processes which use from 1% - 15% CPU. So it's have high load averadge: top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78 Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached Second server(CPU - model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8GB RAM). 
Nginx configs from nginx.conf: user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } And config of php-fpm: listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 50 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 ;pm.status_path = /status ;ping.path = /ping ;ping.response = pong ;request_terminate_timeout = 0 ;request_slowlog_timeout = 0 ;slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M In top I see 50 php-fpm processes which use from 10% - 25% CPU. So it's have high load averadge: top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61 Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached Third server is server with database postgresql. Also i try ab -n 50 -c 5 http://www.mydomain.ru/ And I get next info: Complete requests: 50 Failed requests: 48 (Connect: 0, Receive: 0, Length: 48, Exceptions: 0) Write errors: 0 Total transferred: 9271367 bytes HTML transferred: 9247767 bytes Requests per second: 1.02 [#/sec] (mean) Time per request: 4882.427 [ms] (mean) Time per request: 976.486 [ms] (mean, across all concurrent requests) Transfer rate: 185.44 [Kbytes/sec] received Please advise how can I make lower level of load average?
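    One hedged avenue (a sketch, not a tuned configuration): with ~96% user CPU and a load average around 40 on 8 cores, the PHP workers themselves are the bottleneck, so anything that lets nginx answer repeated anonymous requests without touching PHP helps. The snippet shows a one-second FastCGI microcache; the cache path and zone name are examples, and it only applies if the second server's vhost (not shown in the question) passes PHP to the php-fpm upstream with fastcgi_pass. Lowering pm.max_children closer to the core count is also worth testing, since 50 CPU-bound workers on 8 cores mostly produce queueing.

    ```
    # Hedged sketch -- nginx.conf, http{} context (zone name and path are examples):
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=microcache:64m
                       inactive=10m max_size=1g;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    # In the PHP location of the vhost that uses the php-fpm upstream:
    #   location ~ \.php$ {
    #       fastcgi_pass php-fpm;
    #       include fastcgi_params;
    #       fastcgi_cache microcache;
    #       fastcgi_cache_valid 200 301 1s;            # serve identical hits from cache for 1s
    #       fastcgi_cache_use_stale updating error timeout;
    #   }
    ```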


  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command in CentOS 6.4. I can't use yum since it's going to be an offline install so there's no access to the repository. The only MySQL package installed is mysql-libs as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum it correctly accounted for mysql-libs and uninstalled it at the end as MariaDB could handle the dependencies after it was installed: [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify Loading mirror speeds from cached hostfile * base: mirrors.kernel.org * extras: mirror.keystealth.org * updates: mirror.umd.edu Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted --> Finished Dependency Resolution Dependencies Resolved ==================================================================================================================================================================== Package Arch Version Repository Size ==================================================================================================================================================================== Installing: MariaDB-client x86_64 5.5.32-1 mariadb 10 M MariaDB-common x86_64 5.5.32-1 mariadb 23 k MariaDB-compat x86_64 5.5.32-1 mariadb 2.7 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 MariaDB-devel x86_64 5.5.32-1 mariadb 5.6 M MariaDB-server x86_64 5.5.32-1 mariadb 34 M MariaDB-shared x86_64 5.5.32-1 mariadb 1.1 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 Transaction Summary ==================================================================================================================================================================== Install 6 Package(s) Total download size: 53 M Is this ok [y/N]: y Downloading Packages: (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm | 10 MB 00:06 (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm | 23 kB 00:00 (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm | 2.7 MB 00:02 (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm | 5.6 MB 00:06 (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm | 34 MB 00:23 (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm | 1.1 MB 00:00 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 1.3 MB/s | 53 MB 00:40 warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Importing GPG key 0x1BB943DB: Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>" From : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Warning: RPMDB altered outside of yum. 
Installing : MariaDB-compat-5.5.32-1.x86_64 1/7 Installing : MariaDB-common-5.5.32-1.x86_64 2/7 Installing : MariaDB-server-5.5.32-1.x86_64 3/7 chown: cannot access `/var/lib/mysql': No such file or directory PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! To do so, start the server, then issue the following commands: '/usr/bin/mysqladmin' -u root password 'new-password' '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password' Alternatively you can run: '/usr/bin/mysql_secure_installation' which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers. See the MariaDB Knowledgebase at http://kb.askmonty.org or the MySQL manual for more instructions. Please report any problems with the '/usr/bin/mysqlbug' script! The latest information about MariaDB is available at http://mariadb.org/. You can find additional information about the MySQL part at: http://dev.mysql.com Support MariaDB development by buying support/new features from Monty Program Ab. You can contact us about this at [email protected]. Alternatively consider joining our community based development effort: http://kb.askmonty.org/en/contributing-to-the-mariadb-project/ Installing : MariaDB-devel-5.5.32-1.x86_64 4/7 Installing : MariaDB-client-5.5.32-1.x86_64 5/7 Installing : MariaDB-shared-5.5.32-1.x86_64 6/7 Erasing : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Verifying : MariaDB-common-5.5.32-1.x86_64 1/7 Verifying : MariaDB-server-5.5.32-1.x86_64 2/7 Verifying : MariaDB-devel-5.5.32-1.x86_64 3/7 Verifying : MariaDB-client-5.5.32-1.x86_64 4/7 Verifying : MariaDB-compat-5.5.32-1.x86_64 5/7 Verifying : MariaDB-shared-5.5.32-1.x86_64 6/7 Verifying : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Installed: MariaDB-client.x86_64 0:5.5.32-1 MariaDB-common.x86_64 0:5.5.32-1 MariaDB-compat.x86_64 0:5.5.32-1 MariaDB-devel.x86_64 0:5.5.32-1 MariaDB-server.x86_64 0:5.5.32-1 MariaDB-shared.x86_64 0:5.5.32-1 Replaced: mysql-libs.x86_64 0:5.1.66-2.el6_3 Complete! My question is, what is the equivalent way to install the MariaDB packages using the rpm command only as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I will get a ton of messages like the following about conflicts with mysql-libs: file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 I then used the --force option to install the MariaDB rpms and uninstalled mysql-lib, I didn't get any weird messages but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle mysql-libs conflicts/removal: rpm -ivh --force MariaDB*.rpm rpm -e mysql-libs Thanks for any input!
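    A hedged sketch of the rpm-only equivalent: the MariaDB-compat and MariaDB-shared packages carry Obsoletes: mysql-libs, and plain rpm honours Obsoletes during an upgrade transaction, so a single rpm -Uvh of all six packages should replace mysql-libs cleanly without --force. Verify on a scratch machine first; if the build at hand does not obsolete mysql-libs, the usual fallback is rpm -e --nodeps mysql-libs followed by rpm -ivh.

    ```
    # Hedged sketch: offline install with rpm only (path is an example).
    cd /path/to/MariaDB-rpms
    rpm --import RPM-GPG-KEY-MariaDB       # optional, if the key file was copied along
    rpm -Uvh MariaDB-common-*.rpm MariaDB-compat-*.rpm MariaDB-shared-*.rpm \
             MariaDB-client-*.rpm MariaDB-server-*.rpm MariaDB-devel-*.rpm
    rpm -q mysql-libs                      # expected afterwards: "package mysql-libs is not installed"
    ```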


  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to ASA 5505 from my Windows XP, after I click on "connect" button, the "Connecting ...." dialog box appears and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4) ASDM version 5.2(4) Windows XP SP3 Windows XP and ASA 5505 are on the same LAN for test purposes. Edit 1: There are two VLANs defined on the cisco device (the standard setup on cisco ASA5505). - port 0 is on VLAN2, outside; - and ports 1 to 7 on VLAN1, inside. I run a cable from my linksys home router (10.50.10.1) to the cisco ASA5505 router on port 0 (outside). Port 0 have IP 192.168.1.1 used internally by cisco and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to Cisco router on port 1 (inside). Port 1 is assigned an IP from Cisco router 192.168.1.2. The Windows XP is also connected to my linksys home router via wireless (10.50.10.141). Edit 2: When I try to establish vpn, the Cisco device real time Log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Decription: No crypto map bound to interface... dropping pkt Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end
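    A hedged reading of the log and config, offered as a sketch rather than a verified fix: the syslog line "No crypto map bound to interface... dropping pkt" matches the fact that the only crypto map is applied with "crypto map outside_map interface inside", while the Windows client arrives on the outside interface; remote-access L2TP/IPsec clients are normally caught by a dynamic crypto map bound to outside, using a transport-mode transform set. The sequence number and dynamic-map name below are examples.

    ```
    ! Hedged sketch (ASA 7.2 CLI) -- terminate IPsec on the outside interface
    crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
    crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_3DES_SHA
    no crypto map outside_map interface inside
    crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map
    crypto map outside_map interface outside
    crypto isakmp enable outside
    ```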


  • Bind: DNS not propagating

    - by realtebo
    I've elfoip.net with bind $ whois elfoip.net | grep 'Name Server' Name Server: NS.ELFOIP.NET I need elfoip.net be able to serve third levels domain, like mickymouse.elfoip.net, etc... Yes, I'm trying to create an other useless dyndns clone. i've added some third level as A RR. Eg: executing this from the server itself $ dig @localhost mattinauno.elfoip.net ;; ANSWER SECTION: mattinauno.elfoip.net. 60 IN A 192.81.221.113 I was expecting in one or two days, from my pc i can digit in browser mattinauno.elfoip.net and get page a 192.81.221.113 But this is not happening. Are there any prerequisites to satisfy to allow dns of my isp to be able to forward dns resolution of *.elfoip.net to MY dns ? (Or to ask to him and then cache ?) TTL of zone is set a 5m I've not AllowQuey directive, is it necessary for other dns to cache from mine ? I've cheched the zone with bind utility named-checkzone but no error detected. How to diagnose why other dns doesn't take in account RR from mine ? from my home pc dig @ns.elfoip.net mattinauno.elfoip.net ;; ANSWER SECTION: mattinauno.elfoip.net. 60 IN A 192.81.221.113 ;; AUTHORITY SECTION: elfoip.net. 300 IN NS ns.elfoip.net. but dig @8.8.8.8 mattinauno.elfoip.net give no answers Whole zone file: note I've used nsupdate, so this file has been re-edited and re-formatted from this utility ! root@mirko:/var/named# cat elfoip.net.db $ORIGIN . $TTL 300 ; 5 minutes elfoip.net IN SOA ns.elfoip.net. hostmaster.elfoip.net. ( 2013062314 ; serial 3600 ; refresh (1 hour) 600 ; retry (10 minutes) 86400 ; expire (1 day) 60 ; minimum (1 minute) ) NS ns.elfoip.net. A 109.168.99.6 $ORIGIN elfoip.net. $TTL 60 ; 1 minute google A 173.194.35.56 maiscai A 192.81.221.113 mattinadue A 192.81.221.113 mattinauno A 192.81.221.113 $TTL 300 ; 5 minutes ns A 109.168.99.6 $TTL 60 ; 1 minute prova A 208.67.222.222 prova2 A 13.23.34.45 A 13.23.34.46 www CNAME elfoip.net. EDIT: added named.conf.local zone "elfoip.net" { type master; // file "/etc/bind/elfoip.net.db"; file "/var/named/elfoip.net.db"; allow-update { key elfoip.net ; }; }; EDIT: I've no setup list-on directive *EDIT Added a TCPDUMP after [email protected] wwww.elfoip.net from a machine which uses my company internal dns, who allow recursive query. root@mirko:~# tcpdump -i eth0 'port 53' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 11:57:23.293611 IP host9-210-static.22-87-b.business.telecomitalia.it.45958 > mirko.elfoip.net.domain: 61337+ A? www.elfoip.net. (32) 11:57:23.294114 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.45958: 61337* 2/1/1 CNAME elfoip.net., A 109.168.99.6 (95) 11:57:23.294554 IP mirko.elfoip.net.59571 > google-public-dns-a.google.com.domain: 45851+ PTR? 9.210.22.87.in-addr.arpa. (42) 11:57:23.330444 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.59571: 45851 1/0/0 PTR host9-210-static.22-87-b.business.telecomitalia.it. (106) 11:57:23.331181 IP mirko.elfoip.net.44171 > google-public-dns-a.google.com.domain: 33339+ PTR? 8.8.8.8.in-addr.arpa. (38) 11:57:23.439405 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.44171: 33339 1/0/0 PTR google-public-dns-a.google.com. (82) 11:57:31.350654 IP host9-210-static.22-87-b.business.telecomitalia.it.30108 > mirko.elfoip.net.domain: 38269 [1au] A? ns.elfoip.net. 
(42) 11:57:31.351117 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.30108: 38269* 1/1/1 A 109.168.99.6 (72) If i dig @8.8.8.8 www.elfoip.net, NOTHING happens in dump log !
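    A hedged set of checks (assuming the zone itself is fine, which the direct dig suggests): when dig @ns.elfoip.net answers but dig @8.8.8.8 returns nothing and no packet from 8.8.8.8 ever shows up in the capture, the query is dying before it reaches the server, so the things to verify are the delegation/glue registered for elfoip.net and UDP/TCP port 53 reachability from arbitrary outside addresses, not just from the ISP resolver.

    ```
    # Hedged sketch: trace the delegation and test reachability from outside.
    dig +trace mattinauno.elfoip.net A                    # follow the chain from the root servers
    dig @a.gtld-servers.net elfoip.net NS +norecurse      # what .net actually delegates to
    dig @ns.elfoip.net mattinauno.elfoip.net A +norecurse # authoritative answer, no recursion
    dig @ns.elfoip.net mattinauno.elfoip.net A +tcp       # confirm TCP 53 works as well as UDP

    # On the server: confirm port 53 is open to the world (firewall/NAT included).
    netstat -lnup | grep ':53'
    iptables -L INPUT -n -v | grep 53
    ```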


  • Help setting up an IPsec VPN from my Linux box

    - by robthewolf
    I have an office with a router and a remote server (Linux - Ubuntu 10.10). Both locations need to connect to a data supplier through a VPN. The VPN is an IPSEC gateway. I was able to configure my Linksys rv42 router to create a VPN connection successfully and now I need to do the same for Linux server. I have been messing around with this for too long. First I tried OpenVPN, but that is SSL and not IPSEC. Then I tried Shrew. I think I have the settings correct but I haven't been able to create the connection. It maybe that I have to use something else like a direct IPSEC config or something like that. If someone knows of a way to turn the following settings that I have been given below into a working IPSEC VPN connection I would be very grateful. Here are the settings I was given that must be used to connect to my supplier: Local destination network: 192.168.4.0/24 Local destination hosts: 192.168.4.100 Remote destination network: 192.167.40.0/24 Remote destination hosts: 192.168.40.27 VPN peering point: xxx.xxx.xxx.xxx Then they have given me the following details: IPSEC/ISAKMP Phase 1 Parameters: Authentication method: pre shared secret Diffie Hellman group: group 2 Encryption Algorithm: 3DES Lifetime in seconds:28800 Phase 2 parameters: IPSEC security: ESP Encryption algortims: 3DES Authentication algorithms: MD5 lifetime in seconds: 28800 pfs: disabled Here are the settings from my attempt to use shrew: n:version:2 n:network-ike-port:500 n:network-mtu-size:1380 n:client-addr-auto:0 n:network-frag-size:540 n:network-dpd-enable:1 n:network-notify-enable:1 n:client-banner-enable:1 n:client-dns-used:1 b:auth-mutual-psk:YjJzN2QzdDhyN2EyZDNpNG42ZzQ= n:phase1-dhgroup:2 n:phase1-keylen:0 n:phase1-life-secs:28800 n:phase1-life-kbytes:0 n:vendor-chkpt-enable:0 n:phase2-keylen:0 n:phase2-pfsgroup:-1 n:phase2-life-secs:28800 n:phase2-life-kbytes:0 n:policy-nailed:0 n:policy-list-auto:1 n:client-dns-auto:1 n:network-natt-port:4500 n:network-natt-rate:15 s:client-dns-addr:0.0.0.0 s:client-dns-suffix: s:network-host:xxx.xxx.xxx.xxx s:client-auto-mode:pull s:client-iface:virtual s:client-ip-addr:192.168.4.0 s:client-ip-mask:255.255.255.0 s:network-natt-mode:enable s:network-frag-mode:disable s:auth-method:mutual-psk s:ident-client-type:address s:ident-client-data:192.168.4.0 s:ident-server-type:address s:ident-server-data:192.168.40.0 s:phase1-exchange:aggressive s:phase1-cipher:3des s:phase1-hash:md5 s:phase2-transform:3des s:phase2-hmac:md5 s:ipcomp-transform:disabled Finally here is the debug output from the shrew log: 10/12/22 17:22:18 ii : ipc client process thread begin ... 
10/12/22 17:22:18 < A : peer config add message 10/12/22 17:22:18 DB : peer added ( obj count = 1 ) 10/12/22 17:22:18 ii : local address 217.xxx.xxx.xxx selected for peer 10/12/22 17:22:18 DB : tunnel added ( obj count = 1 ) 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : client config message 10/12/22 17:22:18 < A : local id '192.168.4.0' message 10/12/22 17:22:18 < A : remote id '192.168.40.0' message 10/12/22 17:22:18 < A : preshared key message 10/12/22 17:22:18 < A : peer tunnel enable message 10/12/22 17:22:18 DB : new phase1 ( ISAKMP initiator ) 10/12/22 17:22:18 DB : exchange type is aggressive 10/12/22 17:22:18 DB : 217.xxx.xxx.xxx:500 <- 206.xxx.xxx.xxx:500 10/12/22 17:22:18 DB : c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 DB : phase1 added ( obj count = 1 ) 10/12/22 17:22:18 : security association payload 10/12/22 17:22:18 : - proposal #1 payload 10/12/22 17:22:18 : -- transform #1 payload 10/12/22 17:22:18 : key exchange payload 10/12/22 17:22:18 : nonce payload 10/12/22 17:22:18 : identification payload 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v00 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v01 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v02 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v03 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( rfc ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports DPDv1 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SHREW SOFT compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is NETSCREEN compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SIDEWINDER compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is CISCO UNITY compatible 10/12/22 17:22:18 = : cookies c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 = : message 00000000 10/12/22 17:22:18 - : send IKE packet 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 ( 484 bytes ) 10/12/22 17:22:18 DB : phase1 resend event scheduled ( ref count = 2 ) 10/12/22 17:22:18 ii : opened tap device tap0 10/12/22 17:22:28 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:38 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:48 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:58 ii : resend limit exceeded for phase1 exchange 10/12/22 17:22:58 ii : phase1 removal before expire time 10/12/22 17:22:58 DB : phase1 deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : closed tap device tap0 10/12/22 17:22:58 DB : tunnel stats event canceled ( ref count = 1 ) 10/12/22 17:22:58 DB : removing tunnel config references 10/12/22 17:22:58 DB : removing tunnel phase2 references 10/12/22 17:22:58 DB : removing tunnel phase1 references 10/12/22 17:22:58 DB : tunnel deleted ( obj count = 0 ) 10/12/22 17:22:58 DB : removing all peer tunnel refrences 10/12/22 17:22:58 DB : peer deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : ipc client process thread exit ...
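    The resend-limit messages mean the peer never answered phase 1 at all, so main mode vs. aggressive mode and the identity types are the first things worth changing. A hedged alternative to the Shrew client for a fixed site-to-site profile like this is strongSwan or Openswan driven by /etc/ipsec.conf; the sketch below maps the supplier's parameters, with the phase-1 hash assumed to be SHA1 (it was not specified) and the peer address and subnets copied from the question.

    ```
    # Hedged sketch: /etc/ipsec.conf (strongSwan/Openswan, IKEv1, main mode)
    conn supplier
        keyexchange=ikev1
        authby=secret
        left=%defaultroute
        leftsubnet=192.168.4.0/24
        right=xxx.xxx.xxx.xxx          # the VPN peering point
        rightsubnet=192.168.40.0/24
        ike=3des-sha1-modp1024         # phase 1: 3DES, DH group 2 (hash assumed SHA1)
        esp=3des-md5                   # phase 2: ESP 3DES/MD5; no DH group proposed => no PFS
        ikelifetime=28800s
        lifetime=28800s
        auto=start

    # /etc/ipsec.secrets
    #   %any xxx.xxx.xxx.xxx : PSK "the-pre-shared-secret"
    ```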


  • Server hung with "blocked for more than 120 seconds", diskless

    - by alterpub
    I have server which hung every 2-5 days. dmesg show following situation: kernel: [490894.231753] INFO: task munin-html:10187 blocked for more than 120 seconds. kernel: [490894.231799] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kernel: [490894.231843] munin-html D 0000000000000000 0 10187 9796 0x00000000 kernel: [490894.231878] ffff88063174b968 0000000000000082 ffff88063174b8d8 0000000000015b80 kernel: [490894.231930] ffff88063174bfd8 0000000000015b80 ffff88063174bfd8 ffff88062fe644d0 kernel: [490894.231982] 0000000000015b80 0000000000015b80 ffff88063174bfd8 0000000000015b80 kernel: [490894.232033] Call Trace: kernel: [490894.232059] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232089] [<ffffffff815a1f13>] io_schedule+0x73/0xc0 kernel: [490894.232115] [<ffffffff8117fd25>] sync_buffer+0x45/0x50 kernel: [490894.232143] [<ffffffff815a258f>] __wait_on_bit+0x5f/0x90 kernel: [490894.232170] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232197] [<ffffffff815a2638>] out_of_line_wait_on_bit+0x78/0x90 kernel: [490894.232227] [<ffffffff81080250>] ? wake_bit_function+0x0/0x40 kernel: [490894.232255] [<ffffffff8117fcd6>] __wait_on_buffer+0x26/0x30 kernel: [490894.232288] [<ffffffffa00131be>] squashfs_read_data+0x1be/0x520 [squashfs] kernel: [490894.232320] [<ffffffff8114f0f1>] ? __mem_cgroup_try_charge+0x71/0x450 kernel: [490894.232350] [<ffffffffa0013963>] squashfs_cache_get+0x1c3/0x320 [squashfs] kernel: [490894.232381] [<ffffffffa00136eb>] ? squashfs_copy_data+0x10b/0x130 [squashfs] kernel: [490894.232426] [<ffffffff815a3dbe>] ? _raw_spin_lock+0xe/0x20 kernel: [490894.232454] [<ffffffffa0013b68>] ? squashfs_read_metadata+0x48/0xf0 [squashfs] kernel: [490894.232499] [<ffffffffa0013ae1>] squashfs_get_datablock+0x21/0x30 [squashfs] kernel: [490894.232544] [<ffffffffa0015026>] squashfs_readpage+0x436/0x4a0 [squashfs] kernel: [490894.232575] [<ffffffff8111a375>] ? __inc_zone_page_state+0x35/0x40 kernel: [490894.232606] [<ffffffff8110d072>] __do_page_cache_readahead+0x172/0x210 kernel: [490894.232636] [<ffffffff8110d131>] ra_submit+0x21/0x30 kernel: [490894.232662] [<ffffffff811045f3>] filemap_fault+0x3f3/0x450 kernel: [490894.232691] [<ffffffff812bd156>] ? prio_tree_insert+0x256/0x2b0 kernel: [490894.232726] [<ffffffffa009225d>] aufs_fault+0x11d/0x170 [aufs] kernel: [490894.232755] [<ffffffff8111f6d4>] __do_fault+0x54/0x560 kernel: [490894.232782] [<ffffffff81122f39>] handle_mm_fault+0x1b9/0x440 kernel: [490894.232811] [<ffffffff811286f5>] ? do_mmap_pgoff+0x335/0x380 kernel: [490894.232840] [<ffffffff815a7af5>] do_page_fault+0x125/0x350 kernel: [490894.232867] [<ffffffff815a4675>] page_fault+0x25/0x30 Os info cat /etc/issue.net Ubuntu 10.04.2 LTS uname -a Linux Shard1Host3 2.6.35-32-server #68~lucid1-Ubuntu SMP Wed Mar 28 18:33:00 UTC 2012 x86_64 GNU/Linux This system load via ltsp and hasn't harddrives also it has a lot of memory 24Gb. 
free -m total used free shared buffers cached Mem: 24152 17090 7061 0 50 494 -/+ buffers/cache: 16545 7607 Swap: 0 0 0 I put vmstat info here but I think it won't give results after reboot vmstat 1 30 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 7231156 52196 506400 0 0 0 0 172 143 7 0 92 0 1 0 0 7231024 52196 506400 0 0 0 0 7859 16233 5 0 94 0 0 0 0 7231024 52196 506400 0 0 0 0 7870 16446 2 0 98 0 0 0 0 7230900 52196 506400 0 0 0 0 7308 15661 5 0 95 0 0 0 0 7231100 52196 506400 0 0 0 0 7960 16543 6 0 94 0 0 0 0 7231100 52196 506400 0 0 0 0 7542 16047 5 1 94 0 3 0 0 7231100 52196 506400 0 0 0 0 7709 16621 3 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7857 16552 4 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7192 15491 6 0 94 0 0 0 0 7231220 52196 506400 0 0 0 0 7423 15792 5 1 94 0 1 0 0 7231260 52196 506404 0 0 0 0 7686 16296 2 0 98 0 0 0 0 7231260 52196 506404 0 0 0 0 6976 15183 5 0 95 0 0 0 0 7231260 52196 506404 0 0 0 0 7303 15600 4 0 95 0 0 0 0 7231320 52196 506404 0 0 0 0 7967 16241 1 0 98 0 0 0 0 7231444 52196 506404 0 0 0 0 6948 15113 6 0 94 0 0 0 0 7231444 52196 506404 0 0 0 0 7931 16181 6 0 94 0 1 0 0 7231516 52196 506404 0 0 0 0 7715 15829 6 0 94 0 0 0 0 7231516 52196 506404 0 0 0 0 7771 16036 2 0 97 0 0 0 0 7231268 52196 506404 0 0 0 0 7782 16202 6 0 94 0 1 0 0 7231212 52196 506404 0 0 0 0 7457 15622 4 0 96 0 0 0 0 7231212 52196 506404 0 0 0 0 7573 16045 2 0 98 0 2 0 0 7231216 52196 506404 0 0 0 0 7689 16076 6 0 94 0 0 0 0 7231424 52196 506404 0 0 0 0 7429 15650 4 0 95 0 3 0 0 7231424 52196 506404 0 0 0 0 7534 16168 3 0 97 0 1 0 0 7230548 52196 506404 0 0 0 0 8559 15926 7 1 92 0 0 0 0 7230672 52196 506404 0 0 0 0 7720 15905 2 0 98 0 0 0 0 7230548 52196 506404 0 0 0 0 7677 16313 5 0 95 0 1 0 0 7230676 52196 506404 0 0 0 0 7209 15432 5 0 95 0 0 0 0 7230800 52196 506404 0 0 0 0 7522 15861 2 0 98 0 0 0 0 7230552 52196 506404 0 0 0 0 7760 16661 5 0 95 0 In the munin(monitoring system) graphs I see(before server hung): Disk(nbd0) IOs per device: read: 289m but avg by week 2.09m Disk(nbd0) throughput per device: read: 4.73k but avg by week 108.76 Disk(nbd0) utilization per device: 100% but avg by week 1.2% Eth0 traffic was low: in/out only 2Mbps Number of threads increased to 566 usually 392 Fork rate 1.08 but usually 2.82 VMStat(processes state) Increased to 17.77(from 0 as far as I could see in the graph) CPU usage iowait 880.46% when usually 6.67% It'll be great if somebody help me to understand what's up.
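    A hedged observation: the backtrace is a squashfs read (through aufs) waiting forever on its backing block device, which on an LTSP-style diskless client is usually an NBD or NFS export, so the 100% utilisation on nbd0 right before the hang points at the network block device stalling rather than at the kernel itself. The commands below are a sketch for capturing that state the next time a task blocks; the nbd0 device name is taken from the graphs, adjust as needed.

    ```
    # Hedged sketch: evidence gathering on the frozen client (over SSH or a console).
    echo 1 > /proc/sys/kernel/sysrq
    echo w > /proc/sysrq-trigger                    # dump every blocked task to dmesg
    dmesg | grep -iE 'nbd|squashfs|aufs|blocked' | tail -n 50
    grep -E 'squashfs|aufs|nbd|nfs' /proc/mounts    # what actually backs the root filesystem
    cat /sys/block/nbd0/stat                        # outstanding I/O on the NBD device
    ```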


  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices but it doesn't seem that DNS is resolving their hostnames. ipconfig/ all is showing that I am pointing to the correct dns server. I can "ping "dnsname"" and it will resolve but it wont resolve any other names. Split tunnel is set up so outside DNS is resolving fine So one issue might be DNS but I have the IP address of the server share so I figure I could just get to it that way. example: \10.0.0.1\ well I can't get to it that way either and I get "the specified network name is no longer available" I can ping it but I can't open the share. Below is the ASA config : ASA Version 8.2(1) ! hostname KG-ASA domain-name example.com names ! interface Vlan1 nameif inside security-level 100 ip address 10.0.0.253 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns domain-lookup outside dns server-group DefaultDNS name-server 10.0.0.101 domain-name blah.com access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902 access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0 access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list NONAT nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255 static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255 static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authentication enable console LOCAL aaa authentication http console LOCAL aaa authentication serial console LOCAL aaa authentication ssh console LOCAL aaa authentication telnet console LOCAL http server enable http 10.0.0.0 255.255.0.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set myset esp-aes esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map dynmap 1 set transform-set myset crypto dynamic-map dynmap 1 set reverse-route crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap crypto map IPSEC-MAP interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 crypto isakmp policy 65535 
authentication pre-share encryption aes hash sha group 2 lifetime 86400 telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 0.0.0.0 0.0.0.0 inside ssh 70.60.228.0 255.255.255.0 outside ssh 74.102.150.0 255.255.254.0 outside ssh 74.122.164.0 255.255.252.0 outside ssh timeout 5 console timeout 0 dhcpd dns 10.0.0.101 dhcpd lease 7200 dhcpd domain blah.com ! dhcpd address 10.0.0.110-10.0.0.170 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 63.111.165.21 webvpn enable outside svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1 svc enable group-policy EASYVPN internal group-policy EASYVPN attributes dns-server value 10.0.0.101 vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn split-tunnel-policy tunnelspecified split-tunnel-network-list value SPLIT-TUNNEL-VPN ! tunnel-group client type remote-access tunnel-group client general-attributes address-pool (inside) IPSECVPN-POOL address-pool IPSECVPN-POOL default-group-policy EASYVPN dhcp-server 10.0.0.253 tunnel-group client ipsec-attributes pre-shared-key * tunnel-group CLIENTVPN type ipsec-l2l tunnel-group CLIENTVPN ipsec-attributes pre-shared-key * ! class-map inspection_default match default-inspection-traffic ! ! policy-map global_policy class inspection_default inspect icmp ! service-policy global_policy global prompt hostname context I'm not sure where I should go next with troubleshooting nslookup result: Default Server: blahname.blah.lan Address: 10.0.0.101
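    A hedged line of attack, since ping works but SMB by IP does not: "the specified network name is no longer available" across an IPsec tunnel is very often either TCP 445 not getting through or large SMB packets being dropped for MTU/fragmentation reasons. The client-side checks and the ASA MSS clamp below are a sketch; the hostname in the nslookup line and the 1350 value are examples.

    ```
    :: Hedged sketch -- from the Windows VPN client:
    ping -f -l 1372 10.0.0.1          :: probe the largest unfragmented payload through the tunnel
    telnet 10.0.0.1 445               :: is SMB over TCP reachable at all?
    nslookup someserver 10.0.0.101    :: does the internal DNS answer over the tunnel?

    ! Hedged sketch -- on the ASA, clamp TCP MSS so SMB segments fit inside ESP:
    ! sysopt connection tcpmss 1350
    ```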


  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system I'm running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Eventhough I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete
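    The dmesg excerpt does look like the drive, cable or SATA port on ata5 rather than an md tuning problem: COMRESET failures and FLUSH CACHE timeouts force link resets, and every reset stalls the resync. A hedged set of checks, assuming ata5 corresponds to one of the two WD drives (confirm the mapping in dmesg before acting on it):

    ```
    # Hedged sketch: rule the drive/cable in or out before touching md settings.
    smartctl -a /dev/sdc | egrep -i 'reallocat|pending|crc|error'   # rising CRC counts often mean the SATA cable
    smartctl -t short /dev/sdc            # later: smartctl -l selftest /dev/sdc

    # Per-array resync limits (the global /proc/sys/dev/raid values are already high):
    cat /sys/block/md0/md/sync_speed
    echo 50000 > /sys/block/md0/md/sync_speed_min   # KB/s; only helps if the hardware can keep up
    ```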


  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand stand against attacks better? Are their plugins. Looking for a way to RATE LIMIT and remain up and not slow down. My Setup: user nobody; # no need for more workers in the proxy mode worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; worker_priority -2; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 40480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 9; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 194.145.208.19:80; server_name ipxnow.in www.ipxnow.in; access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log; access_log /usr/local/apache/domlogs/ipxnow.in combined; root /home/ipxnowin/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location @backend { internal; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ /\.ht { deny all; } } and proxy.inc: proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
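    nginx itself ships per-client rate limiting in the limit_req and limit_conn modules, which is the usual first line of defence before reaching for external tools. A hedged sketch follows; zone names, sizes and rates are examples to tune, and limit_conn_zone needs nginx 1.1.8 or newer (older builds use the limit_zone syntax instead).

    ```
    # Hedged sketch -- http{} context:
    limit_req_zone  $binary_remote_addr  zone=req_per_ip:10m  rate=10r/s;
    limit_conn_zone $binary_remote_addr  zone=conn_per_ip:10m;

    # In the vhost's server{} (or the proxied location{}):
    #   limit_req  zone=req_per_ip burst=20 nodelay;
    #   limit_conn conn_per_ip 20;
    ```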


  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length). ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg The command above creates this video: http://sdm-net.org/data/timelapse2.mpg What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - This is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OSX Lion (10.7.3) with FFmpeg version (0.10) installed via Homebrew. I also tried to find a proper version of mencoder for OSX, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely, it creates really bad output and it seems there's not much I can do about it... Edit: With libx264 and an mp4 container: ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4 Output: ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12) configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay libavutil 51. 34.101 / 51. 34.101 libavcodec 53. 60.100 / 53. 60.100 libavformat 53. 31.100 / 53. 31.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 60.100 / 2. 60.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, image2, from 'img_%03d.jpg': Duration: 00:00:06.88, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param: [libx264 @ 0x7f8ec981d800] using SAR=1/1 [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864) [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040) [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX [libx264 @ 0x7f8ec981d800] profile High, level 5.1 [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'timelapse4.mp4': Metadata: encoder : Lavf53.31.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (mjpeg -> libx264) Press [q] to stop, [?] 
for help frame= 172 fps= 18 q=-1.0 Lsize= 259kB time=00:00:06.80 bitrate= 312.3kbits/s video:256kB audio:0kB global headers:0kB muxing overhead 1.089647% [libx264 @ 0x7f8ec981d800] frame I:1 Avg QP: 9.60 size:212820 [libx264 @ 0x7f8ec981d800] frame P:43 Avg QP:30.50 size: 291 [libx264 @ 0x7f8ec981d800] frame B:128 Avg QP:31.00 size: 285 [libx264 @ 0x7f8ec981d800] consecutive B-frames: 0.6% 0.0% 1.7% 97.7% [libx264 @ 0x7f8ec981d800] mb I I16..4: 22.5% 77.2% 0.3% [libx264 @ 0x7f8ec981d800] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0% [libx264 @ 0x7f8ec981d800] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.0% 0.0% 0.0% direct: 0.0% skip:100.0% L0: 1.2% L1:98.8% BI: 0.0% [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0% [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0% [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35% 1% [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30% 1% 0% 0% 0% 0% 0% [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40% 6% 1% 1% 0% 1% 0% 1% [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19% 0% [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x7f8ec981d800] ref P L0: 92.3% 0.0% 0.0% 7.7% [libx264 @ 0x7f8ec981d800] ref B L0: 50.0% 0.0% 50.0% [libx264 @ 0x7f8ec981d800] ref B L1: 99.4% 0.6% [libx264 @ 0x7f8ec981d800] kb/s:304.49 Output timelapse4.mp4 (beacause of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
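    A hedged suggestion based on the warnings in that output: x264 itself reports that 3888x2592 exceeds the level limits ("frame MB size > level limit", "MB rate > level limit"), so scaling the stills down to the intended playback size and forcing a plain yuv420p pixel format is usually enough to get visible frames. Frame rate, target size and CRF below are example values.

    ```
    # Hedged sketch: downscale the JPEG sequence and encode with libx264.
    ffmpeg -f image2 -r 24 -i img_%03d.jpg \
           -vf "scale=1920:1280" -pix_fmt yuv420p \
           -vcodec libx264 -crf 20 -y timelapse.mp4
    # 1920x1280 keeps the 3:2 aspect of the 3888x2592 originals; putting -r before -i
    # sets the rate at which the image sequence is read.
    ```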


  • ASA 5505 VLAN question

    - by Wayne
    I am setting up a cisco asa 5505 with the base license. I can communicate from inside-outside, outside-inside, inside-home, which is my desired traffic security. I can get http, ssh, and other access from inside-home, but I can't ping from inside-home (192.168.110.0 host to 192.168.7.1 or 192.168.7.0 host). Can someone explain. My config is listed below interface Vlan1<br> nameif inside<br> security-level 100<br> ip address 192.168.110.254 255.255.255.0 <br> !<br> interface Vlan2<br> nameif outside<br> security-level 0<br> pppoe client vpdn group birdie<br> ip address removedIP 255.255.255.255 pppoe <br> !<br> interface Vlan3<br> no forward interface Vlan1<br> nameif home<br> security-level 50<br> ip address 192.168.7.1 255.255.255.0 <br> ! <br> interface Ethernet0/0<br> switchport access vlan 2<br> ! <br> interface Ethernet0/1<br> ! <br> interface Ethernet0/2<br> ! <br> interface Ethernet0/3<br> ! <br> interface Ethernet0/4<br> switchport access vlan 3<br> ! <br> interface Ethernet0/5<br> shutdown <br> ! <br> interface Ethernet0/6<br> shutdown <br> ! <br> interface Ethernet0/7<br> shutdown <br> ! <br> ftp mode passive<br> clock timezone EST -5<br> clock summer-time EDT recurring<br> access-list Outside-In extended permit icmp any any <br> access-list Outside-In extended permit tcp any any eq www <br> access-list Outside-In extended permit tcp any any eq https <br> access-list Outside-In extended permit tcp any any eq 5969 <br> access-list inside_nat0_outbound extended permit ip any 192.168.111.0 255.255.255.224 <br> access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.111.0 255.255.255.0 any <br> access-list standardUser_splitTunnelAcl1 extended permit ip 192.168.110.0 255.255.255.0 <br>any access-list inside_in extended permit icmp any any <br> access-list inside_in extended permit ip any any <br> access-list home_in extended permit icmp any any <br> access-list home_in extended permit ip any any <br> pager lines 24<br> logging enable<br> logging asdm informational<br> mtu inside 1492<br> mtu outside 1492<br> mtu home 1500 <br> ip local pool vpnuser 192.168.111.5-192.168.111.20<br> icmp unreachable rate-limit 1 burst-size 1<br> asdm image disk0:/asdm-524.bin<br> no asdm history enable<br> arp timeout 14400<br> nat-control <br> global (outside) 1 interface<br> nat (inside) 0 access-list inside_nat0_outbound<br> nat (inside) 1 0.0.0.0 0.0.0.0<br> nat (home) 1 192.168.7.0 255.255.255.0<br> static (inside,outside) tcp interface https 192.168.110.6 https netmask 255.255.255.255 <br> static (inside,outside) tcp interface www 192.168.110.6 www netmask 255.255.255.255 <br> static (inside,outside) tcp interface 5969 192.168.110.12 5969 netmask 255.255.255.255 <br> static (inside,home) 192.168.110.0 192.168.110.0 netmask 255.255.255.0 <br> access-group inside_in in interface inside<br> access-group Outside-In in interface outside<br> access-group home_in in interface home<br> route outside 0.0.0.0 0.0.0.0 RemovedIP 1<br>
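    Two hedged points worth checking here. First, an ASA does not answer pings sent to the address of a far-side interface, so a host on inside pinging 192.168.7.1 (the home interface itself) is expected to fail; test against a host behind the home interface instead. Second, ICMP is not inspected by default, so echo replies coming back can be dropped unless ICMP inspection is enabled; the sketch below assumes the default global_policy/inspection_default objects exist.

    ```
    ! Hedged sketch (ASA CLI) -- track outbound pings statefully:
    policy-map global_policy
     class inspection_default
      inspect icmp
    !
    service-policy global_policy global
    ```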

    Read the article

  • ldirectord/ipvsadm does not show real IPs and does not work with pacemaker and corosync

    - by miguer27
    first thanks for your time. I'm having a problem with ldirectord that I can not solve, I comment my situation: I have two nodes with pace maker and corosync and configure somes resources: root@ldap1:/home/mamartin# crm status Last updated: Tue Jun 3 12:58:30 2014 Last change: Tue Jun 3 12:23:47 2014 via cibadmin on ldap1 Stack: openais Current DC: ldap2 - partition with quorum Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff 2 Nodes configured, 2 expected votes 7 Resources configured. Online: [ ldap1 ldap2 ] Resource Group: IPV_LVS IPV_4 (ocf::heartbeat:IPaddr2): Started ldap1 IPV_6 (ocf::heartbeat:IPv6addr): Started ldap1 lvs (ocf::heartbeat:ldirectord): Started ldap1 Clone Set: clon_IPV_lo [IPV_lo] Started: [ ldap2 ] Stopped: [ IPV_lo:1 ] root@ldap1:/home/mamartin# crm configure show node ldap2 \ attributes standby="off" node ldap1 \ attributes standby="off" primitive IPV-lo_4 ocf:heartbeat:IPaddr \ params ip="192.168.1.10" cidr_netmask="32" nic="lo" \ op monitor interval="5s" primitive IPV-lo_6 ocf:heartbeat:IPv6addrLO \ params ipv6addr="[fc00:1::3]" cidr_netmask="64" \ op monitor interval="5s" primitive IPV_4 ocf:heartbeat:IPaddr2 \ params ip="192.168.1.10" nic="eth0" cidr_netmask="25" lvs_support="true" \ op monitor interval="5s" primitive IPV_6 ocf:heartbeat:IPv6addr \ params ipv6addr="[fc00:1::3]" nic="eth0" cidr_netmask="64" \ op monitor interval="5s" primitive lvs ocf:heartbeat:ldirectord \ params configfile="/etc/ldirectord.cf" \ op monitor interval="20" timeout="10" \ meta target-role="Started" group IPV_LVS IPV_4 IPV_6 lvs group IPV_lo IPV-lo_6 IPV-lo_4 clone clon_IPV_lo IPV_lo \ meta interleave="true" target-role="Started" location cli-prefer-IPV_LVS IPV_LVS \ rule $id="cli-prefer-rule-IPV_LVS" inf: #uname eq ldap1 colocation LVS_no_IPV_lo -inf: clon_IPV_lo IPV_LVS property $id="cib-bootstrap-options" \ dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1401264327" rsc_defaults $id="rsc-options" \ resource-stickiness="1000" The problem is in the ipvsadm only show a one real IP, when i configured two now, show the ldirector.cf: root@ldap1:/home/mamartin# ipvsadm IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags - RemoteAddress:Port Forward Weight ActiveConn InActConn TCP ldap-maqueta.cica.es:ldap wrr - ldap2.cica.es:ldap Route 4 0 0 TCP [[fc00:1::3]]:ldap wrr - [[fc00:1::2]]:ldap Route 4 0 0 root@ldap1:/home/mamartin# cat /etc/ldirectord.cf checktimeout=10 checkinterval=2 autoreload=yes logfile="/var/log/ldirectord.log" quiescent=yes #ipv4 virtual=192.168.1.10:389 real=192.168.1.11:389 gate 4 real=192.168.1.12:389 gate 4 scheduler=wrr protocol=tcp checktype=on #ipv6 virtual6=[[fc00:1::3]]:389 real6=[[fc00:1::1]]:389 gate 4 real6=[[fc00:1::2]]:389 gate 4 scheduler=wrr protocol=tcp checkport=389 checktype=on and in the logs I see nothing clear: root@ldap1:/home/mamartin# ldirectord -d /etc/ldirectord.cf start DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.11:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 
192.168.1.12:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.12:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Checking on: Real servers are added without any checks DEBUG2: Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Destination already exists root@ldap1:/home/mamartin# cat /var/log/ldirectord.log [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.11:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::2]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t [[fc00:1::3]]:389 -r [[fc00:1::2]]:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: [[fc00:1::2]]:389 ([[fc00:1::3]]:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::1]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: [[fc00:1::1]]:389 ([[fc00:1::3]]:389) (Weight set to 4) do not know if this is a bug or a configuration error, can anyone help? Regards.
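
    Two low-risk checks might narrow this down (both are assumptions on my part, not conclusions that follow from the logs): look at the kernel's own view of the table with numeric output, and try adding the "missing" real server by hand to see whether IPVS accepts it. A sketch:

        # show the IPVS table numerically, with packet counters
        ipvsadm -L -n --stats

        # add the absent IPv4 real server manually; if the kernel takes it,
        # the problem is in how ldirectord builds its ipvsadm calls,
        # not in IPVS itself
        ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 4

    It may also be worth double-checking the IPv6 notation against the ldirectord man page: the config and the output above both show doubled brackets ([[fc00:1::3]]), whereas ipvsadm itself normally uses the single-bracket [addr]:port form.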

    Read the article

  • ASA 5540 v8.4(3) VPN to ASA 5505 v8.2(5): tunnel up, but I can't ping from the 5505 to an IP on the other side

    - by user223833
    I am having problems pinging from a 5505(remote) to IP 10.160.70.10 in the network behind the 5540(HQ side). 5505 inside IP: 10.56.0.1 Out: 71.43.109.226 5540 Inside: 10.1.0.8 out: 64.129.214.27 I Can ping from 5540 to 5505 inside 10.56.0.1. I also ran ASDM packet tracer in both directions, it is ok from 5540 to 5505, but drops the packet from 5505 to 5540. It gets through the ACL and dies at the NAT. Here is the 5505 config, I am sure it is something simple I am missing. ASA Version 8.2(5) ! hostname ASA-CITYSOUTHDEPOT domain-name rngint.net names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 10.56.0.1 255.255.0.0 ! interface Vlan2 nameif outside security-level 0 ip address 71.43.109.226 255.255.255.252 ! banner motd ***ASA-CITYSOUTHDEPOT*** banner asdm CITY SOUTH DEPOT ASA5505 ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name rngint.net access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.1.0.125 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.160.70.10 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 host 10.1.0.125 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 10.106.70.0 255.255.255.0 pager lines 24 logging enable logging buffer-size 25000 logging buffered informational logging asdm warnings mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 icmp permit any inside no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 route outside 0.0.0.0 0.0.0.0 71.43.109.225 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ (inside) host 10.106.70.36 key ***** aaa authentication http console LOCAL aaa authentication ssh console LOCAL aaa authorization exec authentication-server http server enable http 192.168.1.0 255.255.255.0 inside http 10.0.0.0 255.0.0.0 inside http 0.0.0.0 0.0.0.0 outside snmp-server host inside 10.106.70.7 community ***** no snmp-server location no snmp-server contact snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA 
esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map outside_map 1 match address outside_1_cryptomap crypto map outside_map 1 set pfs group1 crypto map outside_map 1 set peer 64.129.214.27 crypto map outside_map 1 set transform-set ESP-3DES-SHA crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh 10.0.0.0 255.0.0.0 inside ssh 0.0.0.0 0.0.0.0 outside ssh timeout 5 console timeout 0 management-access inside dhcpd auto_config outside ! dhcpd address 10.56.0.100-10.56.0.121 inside dhcpd dns 10.1.0.125 interface inside dhcpd auto_config outside interface inside ! dhcprelay server 10.1.0.125 outside dhcprelay enable inside dhcprelay setroute inside dhcprelay timeout 60 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept tftp-server inside 10.1.1.25 CITYSOUTHDEPOT-ASA-Confg webvpn tunnel-group 64.129.214.27 type ipsec-l2l tunnel-group 64.129.214.27 ipsec-attributes pre-shared-key ***** ! ! prompt hostname context
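
    Since ASDM's packet tracer reports the drop in the NAT phase, it may help to repeat the trace from the CLI with a source address taken from the inside network rather than from the firewall itself; the crypto and nat0 ACLs on this 5505 match 10.56.0.0/16, so a packet sourced from the outside address 71.43.109.226 would miss them. A sketch, using 10.56.0.100 purely as a placeholder host from the DHCP pool:

        packet-tracer input inside icmp 10.56.0.100 8 0 10.160.70.10 detailed
        packet-tracer input inside tcp 10.56.0.100 1025 10.160.70.10 445 detailed

    If the CLI trace passes, the original trace was probably sourced from the wrong address; if it still dies at the NAT phase, the detailed output will at least name the exact NAT rule being matched.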

    Read the article

  • Why would a Linux VM in vSphere ESXi 5.5 show dramatically increased disk I/O latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GG) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's besides the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south. ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:
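
    Separately from the iostat captures, a benchmark that reports per-I/O latency percentiles can make this pattern easier to read than dd's average throughput, since a cache filling up somewhere (guest, ESXi, controller or the drive itself) shows up as a jump in the high percentiles rather than just a lower MB/s figure. A sketch using fio (available from EPEL on CentOS 6; the file name and sizes are arbitrary):

        fio --name=seqwrite --filename=/test-fio.img --rw=write --bs=8k \
            --size=2g --ioengine=libaio --direct=1 --end_fsync=1 --loops=10

    Comparing the clat percentiles between an early "fast" run and a later "slow" one should show whether individual writes suddenly take 1-1.5 s (matching the vSphere latency graph) or whether everything degrades uniformly.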

    Read the article

  • How to Edit or Add a New Row in jqGrid

    - by Paul
    My jqGrid that does a great job of pulling data from my database, but I'm having trouble understanding how the Add New Row functionality works. Right now, I'm able to edit inline data, but I'm not able to create a new row using the Modal Box. I'm missing that extra logic that says, "If this is a new row, post this to the server side URL" instead of modifying existing data. (Right now, hitting Submit only clears the form and reloads the grid data.) The documentation states that Add New Row is: jQuery("#editgrid").jqGrid('editGridRow',"new",{height:280,reloadAfterSubmit:false}); but I'm not sure how to use that correctly. I've spent alot of time studying the Demos, but they seem to all use an external button to fire the new row command, rather than using the Modal Form, which I want to do. My complete code is here: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>jqGrid</title> <link rel="stylesheet" type="text/css" media="screen" href="../css/ui-lightness/jquery-ui-1.7.2.custom.css" /> <link rel="stylesheet" type="text/css" media="screen" href="../css/ui.jqgrid.css" /> <script src="jquery-1.3.2.min.js" type="text/javascript"></script> <script src="../js/i18n/grid.locale-en.js" type="text/javascript"></script> <script src="jquery.jqGrid.min.js" type="text/javascript"></script> </head> <body> <h2>My Grid Data</h2> <table id="list" class="scroll"></table> <div id="pager" class="scroll c1"></div> <script type="text/javascript"> var lastSelectedId; jQuery('#list').jqGrid({ url:'grid.php', datatype: 'json', mtype: 'POST', colNames:['ID','Name', 'Price', 'Promotion'], colModel:[ {name:'product_id',index:'product_id', width:25,editable:false}, {name:'name',index:'name', width:50,editable:true, edittype:'text',editoptions:{size:30,maxlength:50}}, {name:'price',index:'price', width:50, align:'right',formatter:'currency', editable:true}, {name:'on_promotion',index:'on_promotion', width:50, formatter:'checkbox',editable:true, edittype:'checkbox'}], rowNum:10, rowList:[5,10,20,30], pager: $('#pager'), sortname: 'product_id', viewrecords: true, sortorder: "desc", caption:"Database", width:500, height:150, onSelectRow: function(id){ if(id && id!==lastSelectedId){ $('#list').restoreRow(lastSelectedId); $('#list').editRow(id,true,null,onSaveSuccess); lastSelectedId=id; }}, editurl:'grid.php?action=save'}) .jqGrid('navGrid','#pager', {refreshicon: 'ui-icon-refresh',view:true}, {height:280,reloadAfterSubmit:true}, {height:280,reloadAfterSubmit:true}, {reloadAfterSubmit:true}) .jqGrid('editGridRow',"new",{height:280,reloadAfterSubmit:false}); function onSaveSuccess(xhr) {response = xhr.responseText; if(response == 1) return true; return false;} </script></body></html> If it makes it easier, I'd be willing to scrap the inline editing functionality and do editing and posting via modal boxes. Any help would be greatly appreciated.
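
    For what it's worth, the modal Add form that navGrid pops up already posts back to the editurl; the usual pattern is to configure it through navGrid's add options (the fourth parameter) rather than calling editGridRow("new", ...) once at page load, and then to branch on the oper field that jqGrid submits. A sketch along those lines (option names as in jqGrid 3.x):

        jQuery("#list").jqGrid('navGrid', '#pager',
            { add: true, edit: true, del: false, search: false,
              refreshicon: 'ui-icon-refresh', view: true },
            // edit options (inline editing can stay as it is)
            { height: 280, reloadAfterSubmit: true, closeAfterEdit: true },
            // add options: the form posts to editurl with oper=add and id=_empty
            { height: 280, reloadAfterSubmit: true, closeAfterAdd: true });

    On the server side, grid.php then needs an insert branch: roughly, if $_POST['oper'] is 'add', run an INSERT and return success, otherwise run the existing UPDATE logic. Without that branch the new row is posted, silently ignored, and the reload makes it look as if the form was simply cleared.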

    Read the article

  • jqGrid: setting a custom formatter on a dynamic column collection

    - by user312249
    I am using jqgrid. We are building a dashboard functionality with jquery. Different application just have to register respective application page and dashboard will render that page.To achieve this we are using jqgrid as one of the jquery plugin. Following is my codeenter code here var ph = '#' + placeHolder; var _prevSort; $.ajax({ url: dataUrl, dataType: "json", async: true, success: function(json) { pager = $('#' + pager); if (json.showPager === "false") { pager = eval(json.showPager); } dataUrl += "&jqSession=true"; $(ph).jqGrid({ url: dataUrl, datatype: "json", sortclass: "grid_sort", colNames: JSON.parse(json.colNames), colModel: JSON.parse(json.colModel), forceFit: true, rowNum: json.rowNum, rowList: JSON.parse(json.rowList), pager: pager, sortname: json.sortName, caption: json.caption, viewrecords: true, viewsortcols: true, sortorder: json.sortOrder, footerrow: summaryFooter, userDataOnFooter: summaryFooter, jsonReader: { root: "rows", row: "row", repeatitems: false, id: json.sortName }, gridComplete: function() { if (showFooter) { $(ph).append("" + json.footerRow + ""); } if (json.additionalContent != null) { $("#" + xContID).html(json.additionalContent); } $("ui-icon-asc").append("IMG"); var _rows = $(".jqgrow"); if (json.rows.length 0) { for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } var gMaxHeight = getGridMaxHeight(); var gHeight = ($(ph + " tr").length + 1) * ($($(".jqgrow") [0]).height()); if (gHeight <= gMaxHeight) { $(ph).parent().height(gHeight); } else { $(ph).parent().height(gMaxHeight); } } else { $(ph).prepend("" + gridNoDataMsg + ""); $(ph).parent().height(60); } }, onSortCol: function(index, iCol, sortorder) { dataUrl = dataUrl.replace("&jqSession=true", ""); $(ph).jqGrid().setGridParam({ url: dataUrl }).trigger("reloadGrid"); var colName = "#jqgh" + index; // $(_prevSort).parent().removeClass("ui-jqgrid-sorted"); // $(_prevSort).parent().addClass("ui-state-default"); // $(_colName).parent().addClass("ui-jqgrid-sorted"); // $(_colName).parent().removeClass("ui-state-default"); _prevSort = _colName; var _rows = $(".jqgrow"); for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } } }).navGrid('#' + pager, { search: false, sort: false, edit: false, add: false, del: false, refresh: false }); // end of grid $("#" + loadid).empty(); gGridIds[gGridIds.length] = placeHolder; SetGridSizes(); }, error: function() { $("#" + loadid).html(loadingErr); } }); As you can see from the code i am getting column collection dynamically(Appication page which i am calling will give me JSON in the response and will have colNames collection in it. Evrything is working fine but, only issue is coming when we are trying to apply custom formatter to column. This issue comes only when we are dynamically assign "colModel" to jqgrid. Appreciate help Thanks in advance
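
    Part of the problem may simply be that JSON.parse can only produce strings, numbers, arrays and objects, so a custom formatter arrives in colModel as the string "formatAddAdresboek" rather than as a function, and jqGrid ignores it (built-in names such as 'currency' or 'checkbox' keep working because jqGrid resolves those itself). One workaround, assuming the custom formatter and unformat functions live in global scope, is to resolve the names after parsing and before building the grid:

        var colModel = JSON.parse(json.colModel);
        for (var i = 0; i < colModel.length; i++) {
            var fmt = colModel[i].formatter;
            // swap a custom formatter name for the real function, if one exists
            if (typeof fmt === "string" && typeof window[fmt] === "function") {
                colModel[i].formatter = window[fmt];
                if (typeof window[colModel[i].unformat] === "function") {
                    colModel[i].unformat = window[colModel[i].unformat];
                }
            }
        }
        // ...then pass this colModel variable into $(ph).jqGrid({ ... })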

    Read the article

  • Google App Engine - Spring Security Issue (java.security.AccessControlException)

    - by Taylor L
    I'm currently getting the AccessControlException below when I deploy to app engine (I don't see it when I run in my local environment). I'm using GAE 1.3.1, Spring 3.0.1, and Spring Security 3.0.2. Any ideas how to get around this issue? It appears to be an issue with Spring Security trying to get the system class loader, but I'm not sure how to work around this. Nested in org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.filterChainProxy': Initialization of bean failed; nested exception is java.security.AccessControlException: access denied (java.lang.RuntimePermission getClassLoader): java.security.AccessControlException: access denied (java.lang.RuntimePermission getClassLoader) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:355) at java.security.AccessController.checkPermission(AccessController.java:567) at java.lang.SecurityManager.checkPermission(Unknown Source) at com.google.apphosting.runtime.security.CustomSecurityManager.checkPermission(CustomSecurityManager.java:45) at java.lang.ClassLoader.getSystemClassLoader(Unknown Source) at org.springframework.beans.BeanUtils.findEditorByConvention(BeanUtils.java:392) at org.springframework.beans.TypeConverterDelegate.findDefaultEditor(TypeConverterDelegate.java:360) at org.springframework.beans.TypeConverterDelegate.convertIfNecessary(TypeConverterDelegate.java:213) at org.springframework.beans.TypeConverterDelegate.convertIfNecessary(TypeConverterDelegate.java:104) at org.springframework.beans.BeanWrapperImpl.convertIfNecessary(BeanWrapperImpl.java:419) at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:657) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:191) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:984) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:888) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:479) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:270) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:125) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedMap(BeanDefinitionValueResolver.java:382) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:161) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1067) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:511) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:562) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:530) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:191) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:168) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:123) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:235) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5485) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5483) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:363) at com.google.net.rpc.impl.Server$2.run(Server.java:837) at com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56) at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:536) at com.google.net.rpc.impl.Server.startRpc(Server.java:792) at com.google.net.rpc.impl.Server.processRequest(Server.java:367) at com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:448) at com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319) at com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290) at com.google.net.async.Connection.handleReadEvent(Connection.java:474) at com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:774) at com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:205) at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:101) at com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251) at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:394) at java.lang.Thread.run(Unknown Source)

    Read the article

  • ASP.NET MVC + jQuery tabs + jqGrid: grid loads only for the first tab

    - by niao
    Greetings, I have a problem using jqgrid and jquery tab (I am coding in asp.net mvc) I have two tabs. Each tabs should contains jqgrid with different data. I specify tabs as follows: <script type="text/javascript"> $(document).ready(function() { $("#tabs").tabs(); getContentTab (1); }); function getContentTab(index) { var url='<%= Url.Content("~/Admin/GetWorkspaces") %>/' + index; var targetDiv = "#tabs-" + index; var ajaxLoading = "<img id='ajax-loader' src='<%= Url.Content("~/Content") %>/ajax-loader.gif' align='left' height='28' width='28'>"; $(targetDiv).html("<p>" + ajaxLoading + " Loading...</p>"); $.get(url,null, function(result) { $(targetDiv).html(result); }); } </script> <div id="tabs"> <ul> <li><a href="#tabs-1" onclick="getContentTab(1);">tab1</a></li> <li><a href="#tabs-2" onclick="getContentTab(2);">tab2</a></li> </ul> <div id="tabs-1"> </div> <div id="tabs-2"> </div> </div> As seen above GetWorkspaces action gets my tabs: public ActionResult GetWorkspaces(int id) { string viewName = string.Empty; switch (id) { case 1: viewName = "MarketplaceOfferView"; break; case 2: viewName = "MyMessagesView"; break; } return PartialView(viewName); } each of view is a partial view. In these partial views I have jqgrids specified as follows: <script type="text/javascript"> jQuery("#list").ready(function() { jQuery("#list").jqGrid({ url: '/Admin/GetGridData/', datatype: 'json', mtype: 'GET', colNames: ['Klient', 'Zapytanie', 'Atrakcyjnosc', 'Cena', 'Data poczatkowa', 'Data koncowa', 'Branza', 'Lokalizacja' ], colModel: [ { name: 'CompanyName', index: 'CompanyName', width: 150, align: 'left' }, { name: 'Content', index: 'ContactName', width: 300, align: 'left' }, { name: 'Rating', index: 'Address', width: 150, align: 'left' }, { name: 'Price', index: 'City', width: 150, align: 'left' }, { name: 'Price', index: 'City', width: 150, align: 'left' }, { name: 'Price', index: 'City', width: 150, align: 'left' }, { name: 'Price', index: 'City', width: 150, align: 'left' }, { name: 'Price', index: 'PostalCode', width: 100, align: 'left' } ], pager: jQuery('#pager'), rowNum: 100, rowList: [5, 10, 20, 50], sortname: 'Operator.FullName', sortorder: "asc", viewrecords: true, imgpath: '/scripts/themes/steel/images', caption: 'Historia moich wiadomosci', height:400 }); // .navGrid(pager, { edit: true, add: false, del: false, refresh: true, search: false }); }); </script> Historia moich wiadomosci <table id="list" class="scroll" cellpadding="0" cellspacing="0" width="100%"> </table> <div id="pager" class="scroll" style="text-align: center;"> </div> For second view I have an action: /Admin/GetGridDataForTab2/ THe problem is that I see a jqgrid only when I click on first tab. When I click on second tab the grid is not displayed and /Admin/GetGridData/ is not executed. Does anybody have an idea what is wrong?
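
    One thing worth ruling out first (an assumption, since only one partial view is quoted): if both partial views use id="list" and id="pager", the second tab's script will keep finding the elements that already belong to the first grid, because duplicate IDs resolve to the first match in the document. A sketch of the second partial with per-tab IDs; the column definitions here are placeholders, not the real model:

        <table id="list2" class="scroll" cellpadding="0" cellspacing="0" width="100%"></table>
        <div id="pager2" class="scroll" style="text-align: center;"></div>
        <script type="text/javascript">
            // this runs right after $(targetDiv).html(result) injects the markup above
            jQuery("#list2").jqGrid({
                url: '/Admin/GetGridDataForTab2/',
                datatype: 'json',
                mtype: 'GET',
                colNames: ['ID', 'Wiadomosc'],
                colModel: [
                    { name: 'MessageId', index: 'MessageId', width: 60 },
                    { name: 'Content', index: 'Content', width: 300 }
                ],
                pager: jQuery('#pager2'),
                rowNum: 100,
                sortname: 'MessageId',
                viewrecords: true,
                caption: 'Historia moich wiadomosci',
                height: 400
            });
        </script>

    The first partial keeps its own unique pair (for example list1/pager1), and each grid is created by the script inside its own partial, so nothing has to run before the tab content exists.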

    Read the article

  • Spring Security LDAP: no declaration can be found for element 'ldap-authentication-provider'

    - by wuntee
    Following the spring-security documentation: http://static.springsource.org/spring-security/site/docs/3.0.x/reference/ldap.html I am trying to set up ldap authentication (very simple - just need to know if a user is authenticated or not, no authorities mapping needed) and have put this in my applicationContext-security.xml file <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> ... <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" /> <ldap-authentication-provider user-search-filter="(samaccountname={0})" user-search-base="dc=company,dc=com"/> The problem I run into is that it doesnt seem like ldap-authentication-provider; I fell like i may be missing some configuration isn the beans definition. The error I get when trying to run the application is: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 27 in XML document from ServletContext resource [/WEB-INF/rvaContext-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:465) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:395) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467) at 
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardHost.start(StandardHost.java:722) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:593) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:236) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:172) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:382) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:316) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:429) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3185) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1955) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:725) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:322) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1693) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:368) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:834) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:148) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:250) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388) ... 28 more Can anyone see what im missing? Also, is that all I need to add to the security bean in order to authenticate against ldap?
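
    The schema error is consistent with the element sitting at the wrong level: in the Spring Security 3.0 namespace, <ldap-authentication-provider> is only valid as a child of <authentication-manager>, while <ldap-server> stays at the top level. A sketch of the relevant fragment, reusing the values from the question:

        <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" />

        <authentication-manager>
            <ldap-authentication-provider
                user-search-filter="(samaccountname={0})"
                user-search-base="dc=company,dc=com" />
        </authentication-manager>

    For a plain authenticated/not-authenticated check with no interest in LDAP group authorities, that provider plus the existing <http> configuration should be all the security bean needs.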

    Read the article

  • selectLimit in jqGrid

    - by Leander
    Hallo, I have a problem. I'am trying to get some data from my database. But with a limit. So i use the selectLimit command. But when i load the grid my firebug gives the following error: "parsererror"?e:null},parse:function(d...d(a.fn.jqGrid,d);this.no_legacy_api|| When I go to the selectLimit in the jqgrid.php and print out what the function returns it normaly return an array with all 500 objects with the correct data. This is the code of the grid: <?php require_once ($_SERVER['DOCUMENT_ROOT'].'/includes_config/config.inc.php'); require_once ($_SERVER['DOCUMENT_ROOT'].'/portal/classes/core/database/class_Database_control.php'); $dtb = new clsDatabaseControl(); $dtb = $dtb->getDatabase(ConnectionString); $dtb->doConnect(); require_once (ClassRoot.'/3rd/jqgrid/jq-config.php'); require_once (ClassRoot.'/3rd/jqgrid/php/jqGrid.php'); require_once (ClassRoot.'/3rd/jqgrid/php/jqGridPdo.php'); //require_once (ClassRoot.'/modules/logging/class_logging_control.php'); // //$oLogginControl = new clsLoggingControl($dtb); //$sSQL = $oLogginControl->getAanmeldingenGrid(); $sSQL = "SELECT LogAanmeldID, LogAanmeldStamp, UserFirstName, UserLastName, LogAanmeldIP, LogAanmeldMethod, LogAanmeldHost, LogAanmeldAgent FROM log_aanmelden, user WHERE log_aanmelden.LogAanmeldUserID = user.UserID"; $conn = new PDO(DB_DSN,DB_USER,DB_PASSWORD); $conn->query("SET NAMES utf8"); $grid = new jqGridRender($conn); //$grid->SelectCommand = $sSQL; $grid->selectLimit($sSQL ,500,1); $grid->dataType = 'json'; $grid->setColModel(); $grid->setUrl('modules/module_logging/index.php?grid=2'); $grid->setGridOptions(array( "hoverrows"=>true, //"sortname"=>"naam", "height"=>450 )); // Enable toolbar searching $grid->toolbarfilter = true; $grid->setFilterOptions(array("stringResult"=>true)); $grid->navigator = true; $grid->setNavOptions('navigator', array("excel"=>true,"add"=>false,"edit"=>false,"del"=>false,"view"=>true, "refresh"=>false)); $custom = <<<CUSTOM $(document).ready(function() { gridParentWidth = $(window).width(); $('#grid').jqGrid('setGridWidth',gridParentWidth-10); }) $(window).resize(function() { gridParentWidth = $(window).width(); $('#grid').jqGrid('setGridWidth',gridParentWidth-10); }); function formatAddAdresboek(cellValue, options, rowObject) { var imageHtml = "<a href='?FFID=51000&TID=1&INS=3&contactID=" + cellValue + "' originalValue='" + cellValue + "'><img border='0' alt='Toevoegen aan Persoonlijk Adresboek' src='images/16X16/new_user.gif'/></a>"; imageHtml = imageHtml + " <a href='?FFID=51000&TID=2&INS=3&contactID=" + cellValue + "' originalValue='" + cellValue + "'><img border='0' alt='Toevoegen aan Praktijk Adresboek' src='images/16X16/new_usergroup.gif'/></a>"; return imageHtml; } function unformatAddAdresboek(cellValue, options, cellObject) { return $(cellObject.html()).attr("originalValue"); } CUSTOM; $grid->setJSCode($custom); // Enjoy $grid->renderGrid('#grid','#pager',true, null, null, true,true); $conn = null; ?> When I inspect the $_GET the answer is empty and in stead of a JSON tab there is an HTML tab. When i use the selectCommand in stead of the selectLimit it return all the data correct as an json object an parses it correcly in the grid but it doens't use the LIMIT, Because when i add the limit. PDO doens't work anymore. So how do I get the object from the selectLimit to the grid? Can someone please help me get the grid working?
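
    Without knowing exactly what selectLimit() returns in this version of the PHP wrapper, one low-risk workaround (an assumption, not a documented fix) is to go back to SelectCommand, which demonstrably produces valid JSON for this grid, and push the 500-row cap into the SQL itself as a derived table, so the wrapper can still wrap its own paging around it:

        // cap the result set at 500 rows while keeping SelectCommand,
        // which the renderer already handles correctly
        $grid->SelectCommand = "SELECT * FROM ("
            . $sSQL
            . " ORDER BY LogAanmeldID DESC LIMIT 500) AS capped";

    If selectLimit() is still preferred, checking the raw response in Firebug's Net tab (rather than the truncated console message) should show whether the script is emitting a PHP notice or warning in front of the JSON, which is a classic cause of a jqGrid "parsererror".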

    Read the article

  • Perl CGI that sends a temporary loading page to the client, then later sends the actual results page

    - by Kurt W. Leucht
    I've wasted at least a half day of my company's time searching the Internet for an answer and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, ajax streaming, comet, XMPP, etc.) and I can't get a simple hello world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution. All I want to do is write a simple Perl CGI script that when accessed, it immediately returns some HTML that tells the user to wait or maybe sends an animated GIF. Then without any user intervention (no mouse clicks or anything) I want the CGI script to at some time later replace the wait message or the animated GIF with the actual HTML results from their query. I know this is simple stuff and websites do it all the time, but I can't find a single working example that I can cut and paste onto my machine that will work. Here is my simple Hello World example that I've compiled from various Internet sources, but it doesn't seem to work. When I refresh this CGI URL in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but not the results web page. What am I doing wrong? #!C:\Perl\bin\perl.exe use CGI; use CGI::Carp qw/fatalsToBrowser warningsToBrowser/; sub Create_HTML { my $html = <<EOHTML; <html> <head> <meta http-equiv="pragma" content="no-cache" /> <meta http-equiv="expires" content="-1" /> <script type="text/javascript" > var xmlhttp=false; /*@cc_on @*/ /*@if (@_jscript_version >= 5) // JScript gives us Conditional compilation, we can cope with old IE versions. // and security blocked creation of the objects. try { xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } catch (E) { xmlhttp = false; } } @end @*/ if (!xmlhttp && typeof XMLHttpRequest!='undefined') { try { xmlhttp = new XMLHttpRequest(); } catch (e) { xmlhttp=false; } } if (!xmlhttp && window.createRequest) { try { xmlhttp = window.createRequest(); } catch (e) { xmlhttp=false; } } </script> <title>Ajax Streaming Connection Demo</title> </head> <body> Some header text. <p> <div id="response">PLEASE BE PATIENT</div> <p> Some footer text. </body> </html> EOHTML return $html; } my $cgi = new CGI; print $cgi->header; print Create_HTML(); sleep(5); print "<script type=\"text/javascript\">\n"; print "\$('response').innerHTML = 'Here are your results!';\n"; print "</script>\n";
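
    Two small changes would probably get the hello-world version behaving (hedged, because Apache-side buffering such as mod_deflate can still hold back small responses): Perl buffers STDOUT by default, so nothing may leave the server until the script exits, and $('response') is Prototype-style shorthand that this page never defines, so the final script block fails silently. A sketch of the script with those two fixes, keeping the Create_HTML() sub from the question unchanged:

        #!C:\Perl\bin\perl.exe
        use CGI;

        # Create_HTML() is the unchanged sub from the question.

        my $cgi = CGI->new;
        $| = 1;              # autoflush STDOUT so the "please wait" page goes out immediately

        print $cgi->header;
        print Create_HTML();

        sleep 5;             # stand-in for the real long-running work

        # plain DOM call instead of the undefined $('response') helper
        print qq{<script type="text/javascript">\n};
        print qq{document.getElementById("response").innerHTML = "Here are your results!";\n};
        print qq{</script>\n};

    Printing the closing </body></html> tags only after this final script block (i.e. leaving them out of Create_HTML) would make the streamed page strictly valid, although most browsers render the partial page either way.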

    Read the article

  • How to prevent jQuery FancyBox from closing immediately after submit?

    - by Dimitri
    Hi! I'm loading an inline registration form in a FancyBox from jQuery. However after submitting the form, the box immediately closes while there is some feedback that I want to show the user in the FancyBox itself. This feedback is generated on the server side and is printed in the FancyBox. How can I make the box only closing when their is no feedback anymore? I was thinking about using ajax to just refresh the FancyBox itself and not the whole page after refreshing. But I just can't figure out how this ajax $.ajax({type, cache, url, data, success}); works... Also it seems like there's no reaction from the 'submit bind' in the javascript. I hope someone can help me with this problem. I paste my code below. If any questions, plz ask.. Thx in advance! This is the javascript: <script type="text/javascript"> $(document).ready(function() { $("#various1").fancybox({ 'transitionIn' : 'none', 'transitionOut' : 'none', 'scrolling' : 'no', 'titleShow' : false, 'onClosed' : function() { $("#registration_error").hide(); } }); }); $("#registration_form").bind("submit", function() { if ($("#registration_error").val() != "Registration succeeded!") { $("#registration_error").show(); $.fancybox.resize(); return false; } $.fancybox.showActivity(); $.ajax({ type : "POST", cache : false, url : "/data/login.php", data : $(this).serializeArray(), success : function(data) { $.fancybox(data); } }); return false; }); This is the inline form that I show in the FancyBox: <div style="display: none;"> <div id="registration" style="width:227px;height:250px;overflow:auto;padding:7px;"> <?php echo "<p id=\"registration_error\">".$feed."</p>"; ?> <form id="registration_form" action="<?php echo $_SERVER['PHP_SELF'] ?>" method="post"> <p> <label for="username">Username: </label> <input type="text" id="login_name" name="username" size="30" /> </p> <p> <label for="password">Password: </label> <input type="password" id="pass" name="pw" size="30" /> </p> <p> <label for="repeat_password">Repeat password: </label> <input type="password" id="rep_pass" name="rep_pw" size="30" /> </p> <p> <input type="submit" value="Register" name="register" id="reg" /> </p> <p> <em></em> </p> </form> </div> </div>
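
    The bind() call in the question runs as soon as the script is parsed, which may be before #registration_form exists in the DOM, and a FancyBox version that copies (rather than moves) the inline markup would leave the copy without any handler at all; either way the form falls back to its normal submit, the page reloads, and the box disappears. A sketch that binds inside FancyBox's onComplete callback and only closes on success (it assumes FancyBox 1.3.x and that the PHP echoes the literal text "Registration succeeded!" when registration works, as in the question's own check):

        $(document).ready(function () {
            $("#various1").fancybox({
                transitionIn  : 'none',
                transitionOut : 'none',
                scrolling     : 'no',
                titleShow     : false,
                // (re)bind once the box content actually exists
                onComplete    : function () {
                    $("#fancybox-content #registration_form, #registration_form")
                        .unbind("submit").bind("submit", submitRegistration);
                },
                onClosed      : function () { $("#registration_error").hide(); }
            });
        });

        function submitRegistration() {
            $.fancybox.showActivity();
            $.ajax({
                type    : "POST",
                cache   : false,
                url     : "/data/login.php",
                data    : $(this).serialize(),
                success : function (data) {
                    if (data.indexOf("Registration succeeded!") !== -1) {
                        $.fancybox.close();          // close only when the server reports success
                    } else {
                        $("#registration_error").html(data).show();
                        $.fancybox.resize();         // keep the box open and show the feedback
                    }
                }
            });
            return false;                            // block the normal, page-reloading submit
        }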

    Read the article

  • How to fix notifyDataSetChanged/ListView problems in a dynamic Adapter wrapper on Android

    - by ipaterson
    Summary: Trying to dynamically add heading rows to a ListView via a custom adapter wrapper. ListView is having trouble keeping the scroll position in sync. Runnable demo project provided. I would like to dynamically add items to a list based on the values in a CursorAdapter, several positions ahead of what the user is currently viewing. To do this, I have an adapter that wraps the CursorAdapter and keeps the new content indexed in a SparseArray. The ListView needs to be updated when items are added to the custom adapter, but I have met a lot of pitfalls trying to get that to work and would love some advice. The demo project can be downloaded here: http://dl.dropbox.com/u/15334423/DynamicSectionedList.zip In the demo, the headings are added dynamically by looking ahead 10 places to find the correct position where the list items switch to the next letter. Each implementation of notifyDataSetChanged has problems as described: Demo 1 This demo shows the importance of notifyDataSetChanged(). On clicking anything, the app will crash. This is due to some sanity checking in ListView... mItemCount != adapter.getItemCount(). Moral is, we need to notify the list that data has changed. Demo 2 The natural next step is to notify the ListView of changes when changes occur. Unfortunately, doing so while the ListView is scrolling firmly breaks all touch interaction until the app switches out of touch mode. You will need to "fling scroll" far enough to generate new headings in order to notice this. Tapping the screen will not cause the scroll to stop, and once stopped none of the list items will be clickable. This is due to some if (!mDataChanged) { /* do very important stuff */ } code in AbsListView.onTouchEvent(). Demo 3 To fix this, Demo 3 introduces a pendingChanges flag and the custom Adapter gains a notifyDataSetChangedIfNeeded() which can be called by the ListView once it has entered a "safe" state for changes. The first point where changes must be notified is in ListView.layoutChildren(), so I overrode that method to first notify of changes if needed, then call through. Fling past at least one heading then click a list item. This doesn't quite work right, though I'm not totally sure why. Tapping or selecting an item with the keyboard/trackball causes the list to refresh without properly syncing the old position. It scrolls to the top of the list which is not acceptable. Demo 4 The scroll problem in Demo 3 can be conquered, at least in touch mode. By adding a call to notifyDataSetChangedIfNeeded() on touch down, the data change happens to take place at such a time that all touch interaction works as expected and the list position is properly synced. However, I can't find an analog for that when the device is not in touch mode, not to mention the fact that it definitely seems like a hack. The list almost always scrolls back to the top, I can't find out what causes it to occasionally maintain the correct position. Since Android is fighting me at each step of the way, I feel like there should be a better approach. Please try the demo, if any fixes can be applied to get it working that would be great! Many thanks to anyone who can look into this, hopefully if we can get the code working it will be useful for others trying to accomplish the same optimization for lists with headings.
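
    One pattern that may be worth trying (offered as a sketch, not a verified fix for the scroll-position sync): never mutate the heading SparseArray during a layout or touch pass, and never let the mutation and notifyDataSetChanged() land in separate passes. Posting both together onto the main thread's message queue achieves roughly what Demo 4's touch-down hack approximates, without hooking touch events:

        // inside the wrapping adapter (android.os.Handler / android.os.Looper)
        private final Handler uiHandler = new Handler(Looper.getMainLooper());

        private void addHeading(final int position, final String heading) {
            uiHandler.post(new Runnable() {
                @Override
                public void run() {
                    headings.put(position, heading);   // the SparseArray from the question
                    notifyDataSetChanged();            // counts change in the same pass they are announced
                }
            });
        }

    The look-ahead that decides where a heading belongs can still run wherever it already runs; only the actual insertion into the adapter's data is deferred to the posted runnable.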

    Read the article

  • Enable button based on TextBox value (WPF)

    - by zendar
    This is MVVM application. There is a window and related view model class. There is TextBox, Button and ListBox on form. Button is bound to DelegateCommand that has CanExecute function. Idea is that user enters some data in text box, presses button and data is appended to list box. I would like to enable command (and button) when user enters correct data in TextBox. Things work like this now: CanExecute() method contains code that checks if data in property bound to text box is correct. Text box is bound to property in view model UpdateSourceTrigger is set to PropertyChanged and property in view model is updated after each key user presses. Problem is that CanExecute() does not fire when user enters data in text box. It doesn't fire even when text box lose focus. How could I make this work? Edit: Re Yanko's comment: Delegate command is implemented in MVVM toolkit template and when you create new MVVM project, there is Delegate command in solution. As much as I saw in Prism videos this should be the same class (or at least very similar). Here is XAML snippet: ... <UserControl.Resources> <views:CommandReference x:Key="AddObjectCommandReference" Command="{Binding AddObjectCommand}" /> </UserControl.Resources> ... <TextBox Text="{Binding ObjectName, UpdateSourceTrigger=PropertyChanged}"> </TextBox> <Button Command="{StaticResource AddObjectCommandReference}">Add</Button> ... View model: // Property bound to textbox public string ObjectName { get { return objectName; } set { objectName = value; OnPropertyChanged("ObjectName"); } } // Command bound to button public ICommand AddObjectCommand { get { if (addObjectCommand == null) { addObjectCommand = new DelegateCommand(AddObject, CanAddObject); } return addObjectCommand; } } private void AddObject() { if (ObjectName == null || ObjectName.Length == 0) return; objectNames.AddSourceFile(ObjectName); OnPropertyChanged("ObjectNames"); // refresh listbox } private bool CanAddObject() { return ObjectName != null && ObjectName.Length > 0; } As I wrote in the first part of question, following things work: property setter for ObjectName is triggered on every keypress in textbox if I put return true; in CanAddObject(), command is active (button to) It looks to me that binding is correct. Thing that I don't know is how to make CanExecute() fire in setter of ObjectName property from above code. Re Ben's and Abe's answers: CanExecuteChanged() is event handler and compiler complains: The event 'System.Windows.Input.ICommand.CanExecuteChanged' can only appear on the left hand side of += or -= there are only two more members of ICommand: Execute() and CanExecute() Do you have some example that shows how can I make command call CanExecute(). I found command manager helper class in DelegateCommand.cs and I'll look into it, maybe there is some mechanism that could help. Anyway, idea that in order to activate command based on user input, one needs to "nudge" command object in property setter code looks clumsy. It will introduce dependencies and one of big points of MVVM is reducing them. Is it possible to solve this problem by using dependency properties?
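
    On the final question: the command only re-runs CanExecute() when something raises its CanExecuteChanged event, so the usual MVVM answer is to raise it from the ObjectName setter. The Prism-style DelegateCommand exposes RaiseCanExecuteChanged() for exactly this; if the toolkit's copy lacks that method, CommandManager.InvalidateRequerySuggested() is the blunter fallback. A sketch (assuming the DelegateCommand in the project has the method):

        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");

                // ask WPF to re-evaluate CanAddObject() now that the input changed
                if (addObjectCommand != null)
                    addObjectCommand.RaiseCanExecuteChanged();
                // or, if the command class has no such method:
                // CommandManager.InvalidateRequerySuggested();
            }
        }

    This keeps the view model free of view dependencies: the setter only touches its own command object, and the Button's IsEnabled follows automatically from the command binding, so no dependency property work is required.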

    Read the article

< Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >