Search Results

Search found 8083 results on 324 pages for 'total newbie'.


  • genisoimage and exec bit preservation

    - by user92187
    Maybe I'm just not doing it right, but I can't seem to get genisoimage to produce a UDF image that preserves the exec bit.

      $ genisoimage --version
      genisoimage 1.1.11 (Linux)
      $ echo "echo 'Hello world'" > script.sh
      $ chmod +x script.sh
      $ ./script.sh
      Hello world
      $ genisoimage -input-charset utf-8 -r -udf -volid minimal -o minimal.iso script.sh
      Total translation table size: 0
      Total rockridge attributes bytes: 250
      Total directory bytes: 0
      Path table size(bytes): 10
      Max brk space used 0
      420 extents written (0 MB)
      $ mkdir mount
      $ sudo mount minimal.iso $PWD/mount -o ro,loop -t udf
      $ ls -l script.sh mount/script.sh
      -r--r--r-- 1 root root 19 Sep 21 18:40 mount/script.sh
      -rwxrwxr-x 1 kip  kip  19 Sep 21 18:40 script.sh

    You'll note from the last command that script.sh was executable when it was added to the image, but is not executable inside the mounted image. Is this a bug in genisoimage, a problem with the way I am mounting the image, or a problem in my usage of genisoimage?
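
    For what it's worth, one way I can think of to narrow it down (just a diagnostic idea on my part): since the image was mastered with both -r and -udf, the same file also exists behind the Rock Ridge metadata, so mounting the image as iso9660 and comparing should show whether the bit survived mastering at all. If it shows up there, the loss is on the UDF side:

      $ sudo umount $PWD/mount
      $ sudo mount minimal.iso $PWD/mount -o ro,loop -t iso9660
      $ ls -l mount/script.sh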


  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

      - password length
      - repeated characters
      - patterns (logical)
      - different character spaces (LC, UC, numeric, special, extended)
      - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

      - ordering (passwords can be strictly ordered by output of this algorithm)
      - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm:

      // the password to test
      password = ?
      length = length(password)

      // unique character counts from password (duplicates discarded)
      uqlca = number of unique lowercase alphabetic characters in password
      uquca = number of unique uppercase alphabetic characters
      uqd   = number of unique digits
      uqsp  = number of unique special characters (anything with a key on the keyboard)
      uqxc  = number of unique extended characters (alt codes, extended-ASCII stuff)

      // algorithm parameters: total sizes of the alphabet spaces
      Nlca = total possible number of lowercase letters (26)
      Nuca = total uppercase letters (26)
      Nd   = total digits (10)
      Nsp  = total special characters (32 or something)
      Nxc  = total extended-ASCII characters that don't fit into the other categories (50, say)

      // algorithm parameters: pw strength growth rates as percentages (per character)
      flca = entropy growth factor for lowercase letters (.25 is probably a good value)
      fuca = EGF for uppercase letters (.4 is probably good)
      fd   = EGF for digits (.4 is probably good)
      fsp  = EGF for special chars (.5 is probably good)
      fxc  = EGF for extended-ASCII chars (.75 is probably good)

      // repetition factors: few unique letters == low factor, many unique == high
      rflca = (1 - (1 - flca) ^ uqlca)
      rfuca = (1 - (1 - fuca) ^ uquca)
      rfd   = (1 - (1 - fd  ) ^ uqd  )
      rfsp  = (1 - (1 - fsp ) ^ uqsp )
      rfxc  = (1 - (1 - fxc ) ^ uqxc )

      // strength and entropy
      strength = (rflca*Nlca + rfuca*Nuca + rfd*Nd + rfsp*Nsp + rfxc*Nxc) ^ length
      entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

      INPUT            DESIRED          ACTUAL
      aaa              very pathetic      8.1
      aaaaaaaaa        pathetic          24.7
      abcdefghi        weak              31.2
      H0ley$Mol3y_     strong            72.2
      s^fU¬5ü;y34G<    wtf               88.9
      [a^36]*          pathetic          97.2
      [a^20]A[a^15]*   strong           146.8
      xkcd1**          medium            79.3
      xkcd2**          wtf              160.5

      *  These two passwords use shortened notation, where [a^N] expands to N a's.
      ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by a single character) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, except that the second's 21st a is capitalized. However, it does not account for the fact that a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having a greater complexity density (is that even a thing?). How can I improve this algorithm?
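
    For reference, here is the pseudocode as actual Python (my own rough translation; the class table and the catch-all treatment of everything outside the four keyboard classes as "extended" are simplifications, so it won't necessarily reproduce the ACTUAL column above bit for bit):

      import math
      import string

      # (alphabet, space size N, entropy growth factor f) for each character class;
      # the parameter values are the ones suggested in the pseudocode above
      CLASSES = [
          (set(string.ascii_lowercase), 26, 0.25),
          (set(string.ascii_uppercase), 26, 0.40),
          (set(string.digits),          10, 0.40),
          (set(string.punctuation),     32, 0.50),
      ]

      def entropy_bits(password):
          if not password:
              return 0.0
          unique = set(password)                 # duplicates discarded
          base = 0.0
          covered = set()
          for alphabet, n, f in CLASSES:
              uq = len(unique & alphabet)        # unique chars of this class
              base += (1.0 - (1.0 - f) ** uq) * n    # rf * N
              covered |= alphabet
          uqxc = len(unique - covered)           # anything else counts as extended
          base += (1.0 - (1.0 - 0.75) ** uqxc) * 50  # Nxc = 50, fxc = .75
          return len(password) * math.log(base, 2)   # log2(base ^ length)

      # entropy_bits("aaa") -> 8.10..., matching the first row of the table
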
    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights.

    This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (taking the difference between adjacent characters), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation that includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified example:

      Password:          1234kitty$$$$$herpderp
      Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
      Words filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
      Patterns filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

      Breakdown: 3 small, unique words and 2 patterns
      Entropy:   about 45 bits, as per the modified algorithm

      Password:       correcthorsebatterystaple
      Tokenized:      c o r r e c t h o r s e b a t t e r y s t a p l e
      Words filtered: @W6783 @W7923 @W1535 @W2285

      Breakdown: 4 small, unique words and no patterns
      Entropy:   43 bits, as per the modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

      entropy(b) * l * (o + 1)   // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.


  • How can you monitor internet download usage?

    - by dv3500ea
    Some broadband providers impose a monthly download limit and charge extra if you go over. It is also quite easy to exceed some of the lower limits just by installing/updating packages and 'normal' browsing (which to me includes streaming TV programs and movies). This means that you need to limit how much you use the internet, yet it is hard to know when you are getting close. The System Monitor helps a bit by giving totals received/sent in the networking section of the Resources tab; however, these are reset at every reboot. It would be good if there were a way to keep a monthly running total of data received, so you know how close you are to exceeding your limit, and maybe even get warnings when it looks like you are going to exceed it. Does anyone know of a way to achieve this?
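
    The closest thing I've found so far (just a pointer, assuming eth0 is the interface to watch) is vnStat: it keeps per-interface totals in its own database, so they survive reboots, and it can report by month, though it doesn't raise warnings by itself:

      sudo apt-get install vnstat
      sudo vnstat -u -i eth0    # create/update the database for eth0
      vnstat -m                 # monthly totals
      vnstat -w                 # weekly totals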


  • Excel duration converts to date

    - by Malcolm Anderson
    I'm working on an Excel 2010 spreadsheet and I'm trying to put in durations for some tasks I want to schedule. The interesting thing is that up until a few minutes ago, I couldn't do it. I was entering "47:00" and Excel was (and still is) converting it to "1/1/1900 23:00:00". In my mind, I want the value to be 47 minutes, but for the life of me I cannot find a fix for this behavior. Here's the weirdest thing: I haven't had this problem in the past. Usually I put in times, add them up, and they work like magic. Put in 18 entries of 20 minutes each, total them, and Excel will usually tell me that it's a total of 6 hours. No problem. Today, problem. Here's the weird bit: as I was writing this post, I got it to work. By formatting the column as custom "[hh]:mm" and summing the columns, I can get total times. But the times are still being formatted into dates if I look at the underlying data. Bottom line: if you need to calculate durations, you can, but don't look too closely at what is happening under the covers.
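
    The date actually makes sense once you remember how Excel stores these values: a time is a fraction of a day, and serial day 1 is 1/1/1900. Checking the arithmetic:

      47:00  =  47 hours  =  47 / 24  =  1.9583... days
      1.9583... days  =  day 1 (1/1/1900)  +  0.9583 day  =  1/1/1900 23:00:00

    So "47:00" is being read as 47 hours, not 47 minutes; entering "0:47:00" (or "0:47") gives 47 minutes. The [hh]:mm custom format just tells Excel to display the underlying day-fraction as elapsed hours rather than as a date.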


  • c++ function overloading, making fwrite/fread act like PHP versions

    - by Newbie
    I'm used to the PHP fwrite/fread parameter order, and I want to make them the same in C++ too. I want it to work with char and string types, and also with any data type I put into it (but only if a length is given). I am a total noob at C++; this is what I've made so far:

      #include <cstdio>
      #include <cstring>
      #include <stdint.h>   // SIZE_MAX
      #include <string>

      // string overload: writes the whole string unless a length is given
      size_t fwrite(FILE *fp, const std::string &buf, const size_t len = SIZE_MAX) {
          if (len == SIZE_MAX) {
              return fwrite(buf.c_str(), 1, buf.length(), fp);
          } else {
              return fwrite(buf.c_str(), 1, len, fp);
          }
      }

      // raw-pointer overload: treats buf as a C string unless a length is given
      size_t fwrite(FILE *fp, const void *buf, const size_t len = SIZE_MAX) {
          if (len == SIZE_MAX) {
              return fwrite((const char *)buf, 1, strlen((const char *)buf), fp);
          } else {
              return fwrite(buf, 1, len, fp);
          }
      }

    Should this work just fine? And how should this be done if I wanted to do it the absolutely best possible way?


  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS (512 MB RAM, 1 GHz CPU) with Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3~4k daily hits and uses about 180 MB of RAM and 8~20% CPU. Everything seems to be working insanely fast... page load is really good, about 16x faster than any of our competitors... but there is one problem. When we upload an image, WordPress doesn't resize it, so all we can do is insert the full-size image in the post. If the image is, say, 30 KB, it resizes fine... but if the image is 100 KB or more, it won't. In the nginx error log I see this:

      upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&"

    It seems to be related to the issue, but I don't know. Once that timeout happens, I start getting it when trying to view a post too:

      upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/"

    Only a restart of php5-fpm fixes it. I tried increasing some timeouts but it did not work, so I guess it's some kind of limit I haven't figured out yet. Could someone help me with it, please?

    /etc/nginx/nginx.conf:

      user www-data;
      worker_processes 1;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
          use epoll;
          multi_accept on;
      }

      http {
          ## Basic Settings
          sendfile on;
          tcp_nopush on;
          tcp_nodelay off;
          keepalive_timeout 15;
          keepalive_requests 2000;
          types_hash_max_size 2048;
          server_tokens off;
          server_name_in_redirect off;
          open_file_cache max=1000 inactive=300s;
          open_file_cache_valid 360s;
          open_file_cache_min_uses 2;
          open_file_cache_errors off;
          server_names_hash_bucket_size 64;
          # server_name_in_redirect off;
          client_body_buffer_size 128K;
          client_header_buffer_size 1k;
          client_max_body_size 2m;
          large_client_header_buffers 4 8k;
          client_body_timeout 10m;
          client_header_timeout 10m;
          send_timeout 10m;
          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          ## Logging Settings
          error_log /var/log/nginx/error.log;
          access_log off;

          ## CloudFlare's IPs (uncomment when site goes live)
          set_real_ip_from 204.93.240.0/24;
          set_real_ip_from 204.93.177.0/24;
          set_real_ip_from 199.27.128.0/21;
          set_real_ip_from 173.245.48.0/20;
          set_real_ip_from 103.22.200.0/22;
          set_real_ip_from 141.101.64.0/18;
          set_real_ip_from 108.162.192.0/18;
          set_real_ip_from 190.93.240.0/20;
          real_ip_header CF-Connecting-IP;
          set_real_ip_from 127.0.0.1/32;

          ## Gzip Settings
          gzip on;
          gzip_disable "msie6";
          gzip_vary on;
          gzip_proxied any;
          gzip_comp_level 9;
          gzip_min_length 1000;
          gzip_proxied expired no-cache no-store private auth;
          gzip_buffers 32 8k;
          # gzip_http_version 1.1;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          ## nginx-naxsi config (uncomment if you installed nginx-naxsi)
          #include /etc/nginx/naxsi_core.rules;

          ## nginx-passenger config (uncomment if you installed nginx-passenger)
          #passenger_root /usr;
          #passenger_ruby /usr/bin/ruby;

          ## Virtual Host Configs
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

    /etc/nginx/fastcgi_params:

      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;
      fastcgi_param SCRIPT_FILENAME $request_filename;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;
      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;
      fastcgi_param HTTPS $https;
      fastcgi_send_timeout 180;
      fastcgi_read_timeout 180;
      fastcgi_buffer_size 128k;
      fastcgi_buffers 256 4k;
      # PHP only, required if PHP was built with --enable-force-cgi-redirect
      fastcgi_param REDIRECT_STATUS 200;

    /etc/nginx/sites-available/default:

      ## DEFAULT HANDLER - ubuntubrsc.com
      server {
          listen 8080;
          # Make site available from main domain
          server_name www.ubuntubrsc.com;
          # Root directory
          root /var/www;
          index index.php index.html index.htm;
          include /var/www/nginx.conf;
          access_log off;

          location / {
              try_files $uri $uri/ /index.php?q=$uri&$args;
          }
          location = /favicon.ico { log_not_found off; access_log off; }
          location = /robots.txt { allow all; log_not_found off; access_log off; }
          location ~ /\. { deny all; access_log off; log_not_found off; }
          location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; }

          rewrite /wp-admin$ $scheme://$host$uri/ permanent;
          error_page 404 = @wordpress;
          log_not_found off;

          location @wordpress {
              include /etc/nginx/fastcgi_params;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_NAME /index.php;
              fastcgi_param SCRIPT_FILENAME $document_root/index.php;
          }

          location ~ \.php$ {
              try_files $uri =404;
              include /etc/nginx/fastcgi_params;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              if (-f $request_filename) {
                  fastcgi_pass unix:/var/run/php5-fpm.sock;
              }
          }
      }

      server {
          listen 8080;
          server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in;
          return 301 $scheme://www.ubuntubrsc.com$request_uri;
      }

    /var/www/nginx.conf:

      # BEGIN W3TC Minify cache
      location ~ /wp-content/w3tc/min.*\.js$ {
          types {}
          default_type application/x-javascript;
          expires modified 31536000s;
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
          add_header Vary "Accept-Encoding";
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
      }
      location ~ /wp-content/w3tc/min.*\.css$ {
          types {}
          default_type text/css;
          expires modified 31536000s;
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
          add_header Vary "Accept-Encoding";
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
      }
      location ~ /wp-content/w3tc/min.*js\.gzip$ {
          gzip off;
          types {}
          default_type application/x-javascript;
          expires modified 31536000s;
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
          add_header Vary "Accept-Encoding";
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
          add_header Content-Encoding gzip;
      }
      location ~ /wp-content/w3tc/min.*css\.gzip$ {
          gzip off;
          types {}
          default_type text/css;
          expires modified 31536000s;
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
          add_header Vary "Accept-Encoding";
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
          add_header Content-Encoding gzip;
      }
      # END W3TC Minify cache

      # BEGIN W3TC Browser Cache
      gzip on;
      gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
      location ~ \.(css|js|htc)$ {
          expires 31536000s;
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
      }
      location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
          expires 3600s;
          add_header Pragma "public";
          add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
          try_files $uri $uri/ $uri.html /index.php?$args;
      }
      location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
          expires 31536000s;
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
          add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
      }
      # END W3TC Browser Cache

      # BEGIN W3TC Minify core
      rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
      set $w3tc_enc "";
      if ($http_accept_encoding ~ gzip) {
          set $w3tc_enc .gzip;
      }
      if (-f $request_filename$w3tc_enc) {
          rewrite (.*) $1$w3tc_enc break;
      }
      rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
      # END W3TC Minify core

      # BEGIN W3TC Skip 404 error handling by WordPress for static files
      if (-f $request_filename) { break; }
      if (-d $request_filename) { break; }
      if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; }
      if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; }
      # END W3TC Skip 404 error handling by WordPress for static files

      # BEGIN Better WP Security
      location ~ /\.ht { deny all; }
      location ~ wp-config.php { deny all; }
      location ~ readme.html { deny all; }
      location ~ readme.txt { deny all; }
      location ~ /install.php { deny all; }
      set $susquery 0;
      set $rule_2 0;
      set $rule_3 0;
      rewrite ^wp-includes/(.*).php /not_found last;
      rewrite ^/wp-admin/includes(.*)$ /not_found last;
      if ($request_method ~* "^(TRACE|DELETE|TRACK)") { return 403; }
      set $rule_0 0;
      if ($request_method ~ "POST") { set $rule_0 1; }
      if ($uri ~ "^(.*)wp-comments-post.php*") { set $rule_0 2$rule_0; }
      if ($http_user_agent ~ "^$") { set $rule_0 4$rule_0; }
      if ($rule_0 = "421") { return 403; }
      if ($args ~* "\.\./") { set $susquery 1; }
      if ($args ~* "boot.ini") { set $susquery 1; }
      if ($args ~* "tag=") { set $susquery 1; }
      if ($args ~* "ftp:") { set $susquery 1; }
      if ($args ~* "http:") { set $susquery 1; }
      if ($args ~* "https:") { set $susquery 1; }
      if ($args ~* "(<|%3C).*script.*(>|%3E)") { set $susquery 1; }
      if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; }
      if ($args ~* "base64_encode") { set $susquery 1; }
      if ($args ~* "(%24&x)") { set $susquery 1; }
      if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)") { set $susquery 1; }
      if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)") { set $susquery 1; }
      if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; }
      if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; }
      if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; }
      if ($http_cookie !~* "wordpress_logged_in_") { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; }
      if ($susquery = 12) { return 403; }
      # END Better WP Security

    /etc/php5/fpm/php-fpm.conf:

      pid = /var/run/php5-fpm.pid
      error_log = /var/log/php5-fpm.log
      emergency_restart_threshold = 3
      emergency_restart_interval = 1m
      process_control_timeout = 10s
      events.mechanism = epoll

    /etc/php5/fpm/php.ini (only the options I changed):

      open_basedir = "/var/www/"
      disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpAds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget
      expose_php = off
      max_execution_time = 30
      max_input_time = 60
      memory_limit = 128M
      display_errors = Off
      post_max_size = 2M
      allow_url_fopen = off
      default_socket_timeout = 60

    APC settings:

      [APC]
      apc.enabled = 1
      apc.shm_segments = 1
      apc.shm_size = 64M
      apc.optimization = 0
      apc.num_files_hint = 4096
      apc.ttl = 60
      apc.user_ttl = 7200
      apc.gc_ttl = 0
      apc.cache_by_default = 1
      apc.filters = ""
      apc.mmap_file_mask = "/tmp/apc.XXXXXX"
      apc.slam_defense = 0
      apc.file_update_protection = 2
      apc.enable_cli = 0
      apc.max_file_size = 10M
      apc.stat = 1
      apc.write_lock = 1
      apc.report_autofilter = 0
      apc.include_once_override = 0
      apc.localcache = 0
      apc.localcache.size = 512
      apc.coredump_unmap = 0
      apc.stat_ctime = 0

    /etc/php5/fpm/pool.d/www.conf:

      user = www-data
      group = www-data
      listen = /var/run/php5-fpm.sock
      listen.owner = www-data
      listen.group = www-data
      listen.mode = 0666
      pm = ondemand
      pm.max_children = 5
      pm.process_idle_timeout = 3s;
      pm.max_requests = 50

    I also started to get 404 errors on the front page if I use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine until some days ago, and then, out of nowhere, it started to happen. Tonight I will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict between them... And to finish all this, I have been getting this error:

      PHP Warning: apc_store(): Unable to allocate memory for pool. in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41

    I already modified my APC settings, but no success. So... could anyone help me with those issues, please? Oh, and if it helps, I installed PHP like this:

      sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl

    And Nginx from the official PPA. Sorry for my bad English and thanks for your time! (:
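
    Things I plan to try next (guesses on my part, and the values are arbitrary): cap the request time in the php-fpm pool so a wedged worker gets killed and respawned instead of hanging the socket, give the pool a little more headroom, and give APC more shared memory, since the apc_store() warning above means the 64M segment is filling up:

      ; /etc/php5/fpm/pool.d/www.conf
      request_terminate_timeout = 60s
      pm.max_children = 10

      ; wherever the [APC] block lives (e.g. /etc/php5/conf.d/apc.ini)
      apc.shm_size = 128M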


  • FreeBSD 8.1 unstable network connection

    - by frankcheong
    I have three FreeBSD 8.1 servers running on three different hardware platforms, and therefore with different network adapters as well (bce, bge and igb). I found that the network connection is unstable: when I scp a 10MB file, the transfer does not always complete successfully. I checked with my network admin, and he claims the problem is caused by the network driver, which cannot support the load; he pinged with a huge packet size (around 15k) and my server dropped packets consistently at regular intervals. I find that claim doubtful: the three servers use three different network adapters, and thus different network drivers, so it seems quite unlikely that the same problem would be caused by all three. Since then I have tried to tune performance by playing with the /etc/sysctl.conf values, with no luck:

      kern.ipc.somaxconn=1024
      kern.ipc.shmall=3276800
      kern.ipc.shmmax=1638400000

      # Security
      net.inet.ip.redirect=0
      net.inet.ip.sourceroute=0
      net.inet.ip.accept_sourceroute=0
      net.inet.icmp.maskrepl=0
      net.inet.icmp.log_redirect=0
      net.inet.icmp.drop_redirect=1
      net.inet.tcp.drop_synfin=1

      # Security
      net.inet.udp.blackhole=1
      net.inet.tcp.blackhole=2

      # Required by pf
      net.inet.ip.forwarding=1

      # Network performance tuning
      kern.ipc.maxsockbuf=16777216
      net.inet.tcp.rfc1323=1
      net.inet.tcp.sendbuf_max=16777216
      net.inet.tcp.recvbuf_max=16777216

      # Settings specifically for 1 or even 10 Gbps networks
      net.local.stream.sendspace=262144
      net.local.stream.recvspace=262144
      net.inet.tcp.local_slowstart_flightsize=10
      net.inet.tcp.nolocaltimewait=1
      net.inet.tcp.mssdflt=1460
      net.inet.tcp.sendbuf_auto=1
      net.inet.tcp.sendbuf_inc=16384
      net.inet.tcp.recvbuf_auto=1
      net.inet.tcp.recvbuf_inc=524288
      net.inet.tcp.sendspace=262144
      net.inet.tcp.recvspace=262144
      net.inet.udp.recvspace=262144
      kern.ipc.maxsockbuf=16777216
      kern.ipc.nmbclusters=32768
      net.inet.tcp.delayed_ack=1
      net.inet.tcp.delacktime=100
      net.inet.tcp.slowstart_flightsize=179
      net.inet.tcp.inflight.enable=1
      net.inet.tcp.inflight.min=6144

      # Reduce the cache life of slow-start connections
      net.inet.tcp.hostcache.expire=1

    Our network admin also claims to see quite a lot of interface up/down events in the Cisco switch logs, while I cannot find any up/down messages in dmesg. I have also checked netstat -s, but don't have any concrete ideas from it:

      tcp:
        133695291 packets sent 39408539 data packets (3358837321 bytes) 61868 data packets
        (89472844 bytes) retransmitted 24 data packets unnecessarily retransmitted 0 resends
        initiated by MTU discovery 50756141 ack-only packets (2148 delayed) 0 URG only packets
        0 window probe packets 4372385 window update packets 39781869 control packets
        134898031 packets received 72339403 acks (for 3357601899 bytes) 190712 duplicate acks
        0 acks for unsent data 59339201 packets (3647021974 bytes) received in-sequence
        114 completely duplicate packets (135202 bytes) 27 old duplicate packets 0 packets
        with some dup. data (0 bytes duped) 42090 out-of-order packets (60817889 bytes)
        0 packets (0 bytes) of data after window 0 window probes 3953896 window update packets
        64181 packets received after close 0 discarded for bad checksums 0 discarded for bad
        header offset fields 0 discarded because packet too short 45192 discarded due to memory
        problems 19945391 connection requests 1323420 connection accepts 0 bad connection
        attempts 0 listen queue overflows 0 ignored RSTs in the windows 21133581 connections
        established (including accepts) 21268724 connections closed (including 32737 drops)
        207874 connections updated cached RTT on close 207874 connections updated cached RTT
        variance on close 132439 connections updated cached ssthresh on close 42392 embryonic
        connections dropped 72339338 segments updated rtt (of 69477829 attempts) 390871
        retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts
        0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of
        timeout 13990 keepalive timeouts 2 keepalive probes sent 13988 connections dropped by
        keepalive 173044 correct ACK header predictions 36947371 correct data packet header
        predictions 1323420 syncache entries added 0 retransmitted 0 dupsyn 0 dropped
        1323420 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack
        0 unreach 0 zone failures 1323420 cookies sent 0 cookies received 1864 SACK recovery
        episodes 18005 segment rexmits in SACK recovery episodes 26066896 byte rexmits in SACK
        recovery episodes 147327 SACK options (SACK blocks) received 87473 SACK options (SACK
        blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with
        ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes
        0 times ECN reduced the congestion window

      udp:
        5141258 datagrams received 0 with incomplete header 0 with bad data length field
        0 with bad checksum 1 with no checksum 0 dropped due to no socket 129616
        broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers
        0 not for hashed pcb 5011642 delivered 5016050 datagrams output 0 times multicast
        source filter matched

      sctp:
        0 input packets 0 datagrams 0 packets that had data 0 input SACK chunks 0 input DATA
        chunks 0 duplicate DATA chunks 0 input HB chunks 0 HB-ACK chunks 0 input ECNE chunks
        0 input AUTH chunks 0 chunks missing AUTH 0 invalid HMAC ids received 0 invalid secret
        ids received 0 auth failed 0 fast path receives all one chunk 0 fast path multi-part
        data 0 output packets 0 output SACKs 0 output DATA chunks 0 retransmitted DATA chunks
        0 fast retransmitted DATA chunks 0 FR's that happened more than once to same chunk
        0 intput HB chunks 0 output ECNE chunks 0 output AUTH chunks 0 ip_output error counter
        Packet drop statistics:
        0 from middle box 0 from end host 0 with data 0 non-data, non-endhost 0 non-endhost,
        bandwidth rep only 0 not enough for chunk header 0 not enough data to confirm 0 where
        process_chunk_drop said break 0 failed to find TSN 0 attempt reverse TSN lookup
        0 e-host confirms zero-rwnd 0 midbox confirms no space 0 data did not match TSN
        0 TSN's marked for Fast Retran
        Timeouts:
        0 iterator timers fired 0 T3 data time outs 0 window probe (T3) timers fired 0 INIT
        timers fired 0 sack timers fired 0 shutdown timers fired 0 heartbeat timers fired
        0 a cookie timeout fired 0 an endpoint changed its cookiesecret 0 PMTU timers fired
        0 shutdown ack timers fired 0 shutdown guard timers fired 0 stream reset timers fired
        0 early FR timers fired 0 an asconf timer fired 0 auto close timer fired 0 asoc free
        timers expired 0 inp free timers expired 0 packet shorter than header 0 checksum error
        0 no endpoint for port 0 bad v-tag 0 bad SID 0 no memory 0 number of multiple FR in
        a RTT window 0 RFC813 allowed sending 0 RFC813 does not allow sending 0 times max
        burst prohibited sending 0 look ahead tells us no memory in interface 0 numbers of
        window probes sent 0 times an output error to clamp down on next user send 0 times
        sctp_senderrors were caused from a user 0 number of in data drops due to chunk limit
        reached 0 number of in data drops due to rwnd limit reached 0 times a ECN reduced the
        cwnd 0 used express lookup via vtag 0 collision in express lookup 0 times the sender
        ran dry of user data on primary 0 same for above 0 sacks the slow way 0 window update
        only sacks sent 0 sends with sinfo_flags !=0 0 unordered sends 0 sends with EOF flag
        set 0 sends with ABORT flag set 0 times protocol drain called 0 times we did a
        protocol drain 0 times recv was called with peek 0 cached chunks used 0 cached stream
        oq's used 0 unread messages abandonded by close 0 send burst avoidance, already max
        burst inflight to net 0 send cwnd full avoidance, already max burst inflight to net
        0 number of map array over-runs via fwd-tsn's

      ip:
        137814085 total packets received 0 bad header checksums 0 with size smaller than
        minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with
        header length < data size 0 with data length < header length 0 with bad options
        0 with incorrect version number 1200 fragments received 0 fragments dropped (dup or
        out of space) 0 fragments dropped after timeout 300 packets reassembled ok 137813009
        packets for this host 530 packets for unknown/unsupported protocol 0 packets forwarded
        (0 packets fast forwarded) 61 packets not forwardable 0 packets received for unknown
        multicast group 0 redirects sent 137234598 packets sent from this host 0 packets sent
        with fabricated ip header 685307 output packets dropped due to no bufs, etc. 52 output
        packets discarded due to no route 300 output datagrams fragmented 1200 fragments
        created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif
        0 datagrams with bad address in header

      icmp:
        0 calls to icmp_error 0 errors not generated in response to an icmp message
        Output histogram: echo reply: 305
        0 messages with bad code fields 0 messages less than the minimum length 0 messages
        with bad checksum 0 messages with bad length 0 multicast echo requests ignored
        0 multicast timestamp requests ignored
        Input histogram: destination unreachable: 530 echo: 305
        305 message responses generated 0 invalid return addresses 0 no return routes
        ICMP address mask responses are disabled

      igmp:
        0 messages received 0 messages received with too few bytes 0 messages received with
        wrong TTL 0 messages received with bad checksum 0 V1/V2 membership queries received
        0 V3 membership queries received 0 membership queries received with invalid field(s)
        0 general queries received 0 group queries received 0 group-source queries received
        0 group-source queries dropped 0 membership reports received 0 membership reports
        received with invalid field(s) 0 membership reports received for groups to which we
        belong 0 V3 reports received without Router Alert 0 membership reports sent

      arp:
        376748 ARP requests sent 3207 ARP replies sent 245245 ARP requests received 80845 ARP
        replies received 326090 ARP packets received 267712 total packets dropped due to no
        ARP entry 108876 ARP entrys timed out 0 Duplicate IPs seen

      ip6:
        2226633 total packets received 0 with size smaller than minimum 0 with data size <
        data length 0 with bad options 0 with incorrect version number 0 fragments received
        0 fragments dropped (dup or out of space) 0 fragments dropped after timeout
        0 fragments that exceeded limit 0 packets reassembled ok 2226633 packets for this host
        0 packets forwarded 0 packets not forwardable 0 redirects sent 2226633 packets sent
        from this host 0 packets sent with fabricated ip header 0 output packets dropped due
        to no bufs, etc. 8 output packets discarded due to no route 0 output datagrams
        fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that
        violated scope rules 0 multicast packets which we don't join
        Input histogram: UDP: 2226633
        Mbuf statistics: 962679 one mbuf 1263954 one ext mbuf 0 two or more ext mbuf
        0 packets whose headers are not continuous 0 tunneling packets that can't find gif
        0 packets discarded because of too many headers 0 failures of source address selection
        Source addresses selection rule applied:

      icmp6:
        0 calls to icmp6_error 0 errors not generated in response to an icmp6 message
        0 errors not generated because of rate limitation 0 messages with bad code fields
        0 messages < minimum length 0 bad checksums 0 messages with bad length
        Histogram of error messages to be generated: 0 no route 0 administratively prohibited
        0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed
        transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header
        0 unrecognized option 0 redirect 0 unknown
        0 message responses generated 0 messages with too many ND options 0 messages with bad
        ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages
        0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect
        messages 0 path MTU changes

      rip6:
        0 messages received 0 checksum calculations on inbound 0 messages with bad checksum
        0 messages dropped due to no socket 0 multicast messages dropped due to no socket
        0 messages dropped due to full socket buffers 0 delivered 0 datagrams output

    And netstat -m:

      516/5124/5640 mbufs in use (current/cache/total)
      512/1634/2146/32768 mbuf clusters in use (current/cache/total/max)
      512/1536 mbuf+clusters out of packet secondary zone in use (current/cache)
      0/1303/1303/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
      0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
      0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
      1153K/9761K/10914K bytes allocated to network (current/cache/total)
      0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for jumbo clusters denied (4k/9k/16k)
      0/8/6656 sfbufs in use (current/peak/max)
      0 requests for sfbufs denied
      0 requests for sfbufs delayed
      0 requests for I/O initiated by sendfile
      0 calls to protocol drain routines

    Anyone got an idea what might be the possible cause?
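
    One thing I do notice in my own ip: output above is the "685307 output packets dropped due to no bufs, etc." counter. While waiting for ideas, my plan is to watch the per-interface counters during a failing scp (bce0 below stands for whichever adapter each box actually has):

      # per-interface error/drop counters; rising Ierrs/Oerrs point at the
      # driver, cabling or switch port rather than the TCP stack
      netstat -i
      # negotiated media, status and flags for the adapter
      ifconfig bce0
      # print the adapter's counters once per second while reproducing the problem
      netstat -I bce0 -w 1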


  • lvm disappeared after disk replacement on raid10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

      Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0003f3b6

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           63      481949      240943+  83  Linux
      /dev/sda2           481950  2910640634  1455079342+  fd  Linux raid autodetect
      /dev/sda3       2910640635  2930272064     9815715   82  Linux swap / Solaris

      Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00069785

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdb2       2910158685  2930272064    10056690   82  Linux swap / Solaris

      Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1               63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdc2       2910158685  2930272064    10056690   82  Linux swap / Solaris

      Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000f14de

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1               63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdd2       2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1, and I use grub2 to boot the system off this partition. On top of this RAID10 I installed the two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years, and it always worked. But not this time. For the first time, /dev/sda broke. I do not know if this matters, though I know I would have struggled anyway with /boot installed on that disk and grub2 installed on the MBR of /dev/sda. Anyway, I did what I always do:

      1. Start Knoppix.
      2. Fire up the raid: sudo mdadm --examine --scan
         which returns: ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
      3. Start it up: sudo mdadm --assemble /dev/md127
      4. Fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
      5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
      6. Stop the raid: sudo mdadm -S /dev/md127
      7. Take out the disk and replace it with a new one.
      8. Create the same partitions as on the failing one.
      9. Add it back to the raid: sudo mdadm --assemble /dev/md127, then sudo mdadm /dev/md127 --add /dev/sda2
      10. Wait 4 hours.

    All looks fine: cat /proc/mdstat returns:

      Personalities : [raid10]
      md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
            2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

      unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

      /dev/md127:
              Version : 0.90
        Creation Time : Wed Jun 10 13:08:46 2009
           Raid Level : raid10
           Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
        Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
         Raid Devices : 4
        Total Devices : 4
      Preferred Minor : 127
          Persistence : Superblock is persistent

          Update Time : Thu Mar 21 16:27:40 2013
                State : clean
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0

               Layout : near=2
           Chunk Size : 64K

                 UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
               Events : 0.4824680

          Number   Major   Minor   RaidDevice State
             0       8        2        0      active sync   /dev/sda2
             1       8       17        1      active sync   /dev/sdb1
             2       8       33        2      active sync   /dev/sdc1
             3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system does not help either (I actually replugged and re-added the failing disk for that; the system begins to start, but then fails to see the / partition, which is no wonder if the volume group is gone). sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay and sudo vgscan --mknodes all return "No volume groups found". I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
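
    My next idea (a sketch I pieced together from the LVM docs, assuming the physical volume sat directly on /dev/md127; the UUID, VG name and backup path are placeholders I would still have to dig up, e.g. from /etc/lvm/backup on the old root filesystem):

      sudo pvscan                            # does the PV on /dev/md127 show up at all?
      sudo pvs -o pv_name,vg_name,pv_uuid
      # if the PV label is gone, recreate it and restore the VG metadata
      sudo pvcreate --uuid <pv-uuid> --restorefile /path/to/lvm/backup/<vgname> /dev/md127
      sudo vgcfgrestore -f /path/to/lvm/backup/<vgname> <vgname>
      sudo vgchange -ay <vgname>

    Would that be a sane direction, or is there something safer to try first?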


  • Some clarification needed about synchronous versus asynchronous asio operations

    - by Old newbie
    As far as I know, the main difference between synchronous and asynchronous operations, i.e. write() or read() vs async_write() or async_read(), is that the former don't return until the operation finishes (or fails), while the latter return immediately, the asynchronous operations being driven by an io_service.run() that does not return until the operations under its control have finished. It seems to me that in sequential operations, such as those involved in TCP/IP connections with protocols like POP3, where the session is an exchange of the form:

      C: <connect>
      S: Ok.
      C: User...
      S: Ok.
      C: Password
      S: Ok.
      C: Command
      S: answer
      C: Command
      S: answer
      ...
      C: bye
      S: <close>

    the difference between synchronous and asynchronous operations does not make much sense. Of course, in both cases there is always the risk that the program flow stops indefinitely under some circumstance (hence the use of timers), but I would like to hear some more authoritative opinions on this matter. I must admit the question is rather ill-defined, but I would like some advice on when to use one or the other, because I'm having problems debugging (with MS Visual Studio) asynchronous SSL operations in a POP3 client I'm working on (about some of which I will surely write here soon), and I sometimes think that using asynchronous operations for this may be a bad idea. Not to mention that I'm an absolute newbie with these libraries, and in addition to the language barrier and some obscure concepts in the STL, I have to suffer the brevity of the asio documentation.


  • VB 2008 or VB 2010 Dataset help

    - by Diabolo
    I have three forms similar to the one in the link below. I want to add a Total textbox to every form. The total should add up the values entered in the Montant textboxes; that is, it should take the value of that textbox for every entry the user makes and sum them. How can I do so? Thanks

    http://i1006.photobucket.com/albums/af189/diaboloent/OneForm.jpg
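
    Something along these lines is what I had in mind (a sketch; pnlEntries, txtMontant* and txtTotal are just the names I would use), wired to each Montant box's TextChanged event. Would this be the right approach?

      ' Sum every TextBox whose name starts with "txtMontant" inside pnlEntries
      Private Sub UpdateTotal()
          Dim total As Decimal = 0D
          For Each ctrl As Control In pnlEntries.Controls
              Dim box As TextBox = TryCast(ctrl, TextBox)
              If box IsNot Nothing AndAlso box.Name.StartsWith("txtMontant") Then
                  Dim value As Decimal
                  ' ignore empty or non-numeric entries instead of crashing
                  If Decimal.TryParse(box.Text, value) Then
                      total += value
                  End If
              End If
          Next
          txtTotal.Text = total.ToString("0.00")
      End Sub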


  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    Having a strange problem with an NFS share and file permissions on one of the two NFS clients: web1 has file permission issues but web2 is fine. web1 and web2 are load-balanced web servers. So my questions are:

      1. How do I ensure the NFS share contents retain the same user/group permissions as the original files on web1, like they do on web2?
      2. How do I reverse what I did on web1? I tried the unmount command and it said command not found.

    Information: I'm using a 3 dedicated server setup, all CentOS 5.4 64bit based. The servers are as follows:

      web1 - NFS client with file permission issues
      web2 - NFS client, file permissions are OKAY
      db1  - NFS share at /nfsroot

    The web2 NFS client was set up by my web host, while web1 was set up by me. I ran the following commands on web1, and it worked in the sense that the db1 /nfsroot/site_css share got updated with the latest files from web1, but the file permissions don't stick even if I use tar with the -p flag to preserve them:

      cd /home/username/public_html/forums/script/
      tar -zcp site_css/ > site_css.tar.gz
      mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft
      cd /home/username/public_html/forums/script/
      tar -zxf site_css.tar.gz

    But checking on web1, the file permissions are no longer the username user/group but owned by nobody, while on web2 the permissions are correct. This is only a problem on web1. It looks like the numeric IDs aren't the same? Not sure how to correct this.

    web1, with the incorrect user/group of nobody:

      ls -alh /home/username/public_html/forums/scripts/site_css
      total 48K
      drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
      drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
      -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web1 numeric IDs:

      ls -n /home/username/public_html/forums/scripts/site_css
      total 48
      drwxrwxrwx 2  99  99 4096 Feb 22 02:37 ./
      drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../
      -rw-r--r-- 1  99  99    1 Nov 30  2006 index.html
      -rw-r--r-- 1  99  99 5876 Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r-- 1  99  99 5877 Feb 22 02:37 style-95001864-00002.css
      -rw-r--r-- 1  99  99 5877 Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r-- 1  99  99 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    web2, with the correct username user/group permissions:

      ls -alh /home/username/public_html/forums/scripts/site_css
      total 48K
      drwxrwxrwx 2 root     root     4.0K Feb 22 02:37 ./
      drwxr-xr-x 3 username username 4.0K Dec  2 14:51 ../
      -rw-r--r-- 1 username username    1 Nov 30  2006 index.html
      -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
      -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web2 numeric IDs:

      ls -n /home/username/public_html/forums/scripts/site_css
      total 48
      drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./
      drwxr-xr-x 3 503 500 4096 Dec  2 14:51 ../
      -rw-r--r-- 1 503 500    1 Nov 30  2006 index.html
      -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css
      -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    I checked db1 /nfsroot/site_css, and user/group ownership was incorrect for the newer files dated Feb 22, owned by root and not username.

    db1, originally with the incorrect root-assigned user/group for the new Feb 22 files:

      ls -alh /nfsroot/site_css
      total 44K
      drwxrwxrwx  2 root     root 4.0K Feb 22 02:37 .
      drwxr-xr-x 17 root     root 4.0K Feb 17 12:06 ..
      -rw-r--r--  1 root     root    1 Nov 30  2006 index.html
      -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-95001864-00002.css
      -rw-------  1 username nfs  5.8K Feb 18 05:37 style-b1879ba7-00002.css
      -rw-------  1 username nfs  5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    Then I chmod'ed and chowned them all on db1 to set the right ownership, so that the newer Feb 22 files now look like this once corrected:

      ls -alh /nfsroot/site_css
      total 44K
      drwxrwxrwx  2 root     root     4.0K Feb 22 02:37 .
      drwxr-xr-x 17 root     root     4.0K Feb 17 12:06 ..
      -rw-r--r--  1 username username    1 Nov 30  2006 index.html
      -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
      -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    But web1 still shows them owned by nobody, not matching what web2 and db1 are set to:

      ls -alh /home/username/public_html/forums/scripts/site_css
      total 48K
      drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
      drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
      -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
      -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    Just so confusing, so any help is very much appreciated! Thanks
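
    From what I've read since posting, owner nobody over NFSv4 usually means an ID-mapping problem: NFSv4 sends user/group names rather than numeric IDs, and rpc.idmapd maps everything to nobody when the Domain setting doesn't match between client and server, which would explain why web1 (set up by me) behaves differently from web2 (set up by the host). What I plan to check (assuming the stock CentOS 5 service names):

      # the Domain line must be identical on web1, web2 and db1
      grep -i domain /etc/idmapd.conf
      service rpcidmapd restart      # on web1 after fixing it
      # and for my question 2: the command is umount, not unmount
      umount /home/username/public_html/forums/scripts/site_css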


  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
    For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas?

      2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync.
      2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati)
      2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available
      2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
      2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global
      2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2
      2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections
      2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation
      2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0
      2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation
      2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3
      2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation
      2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records
      2010-05-02 21:26:50 Collection DEBUG POST Length: 388
      2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms
      2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1
      2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation
      2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records
      2010-05-02 21:27:09 Collection DEBUG POST Length: 35337
      2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history
      2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540
      2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation
      2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0
      2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation
      2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0
      2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation
      2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch
      2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records
      2010-05-02 21:27:32 Collection DEBUG POST Length: 393
      2010-05-02 21:27:54 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs
      2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8
      2010-05-02 21:27:54 Service.Main INFO Sync completed successfully
      2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
      2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global
      2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2
      2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections
      2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available
      2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
      2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available
      2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629
      2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage:
      2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync.
      2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available
      2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3
      2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.History DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized
      2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time
      2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
      2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/
      2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds
      2010-05-03 17:26:36 Service.Main INFO Logging in user mehper
      2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available
      2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • R ggplot2: Arrange facet_grid by non-facet column (and labels using non-facet column)

    - by tommy-o-dell
    I have a couple of questions regarding facetting in ggplot2... Let's say I have a query that returns data that looks like this (note that it's ordered by Rank asc, Alarm asc; two Alarms have a Rank of 3 because their Totals = 1798 for Week 4, and Rank is set according to Total for Week 4):

    Rank  Week  Alarm                       Total
    1     1     BELTWEIGHER HIGH HIGH       1000
    1     2     BELTWEIGHER HIGH HIGH       1050
    1     3     BELTWEIGHER HIGH HIGH       900
    1     4     BELTWEIGHER HIGH HIGH       1800
    2     1     MICROWAVE LHS               200
    2     2     MICROWAVE LHS               1200
    2     3     MICROWAVE LHS               400
    2     4     MICROWAVE LHS               1799
    3     1     HI PRESS FILTER 2 CLOG SW   1250
    3     2     HI PRESS FILTER 2 CLOG SW   1640
    3     3     HI PRESS FILTER 2 CLOG SW   1000
    3     4     HI PRESS FILTER 2 CLOG SW   1798
    3     1     LOW PRESS FILTER 2 CLOG SW  800
    3     2     LOW PRESS FILTER 2 CLOG SW  1200
    3     3     LOW PRESS FILTER 2 CLOG SW  800
    3     4     LOW PRESS FILTER 2 CLOG SW  1798

    (duplication code below)

    Rank = c(rep(1,4), rep(2,4), rep(3,8))
    Week = c(rep(1:4,4))
    Total = c(1000,1050,900,1800,
              200,1200,400,1799,
              1250,1640,1000,1798,
              800,1200,800,1798)
    Alarm = c(rep("BELTWEIGHER HIGH HIGH",4),
              rep("MICROWAVE LHS",4),
              rep("HI PRESS FILTER 2 CLOG SW",4),
              rep("LOW PRESS FILTER 2 CLOG SW",4))
    spark <- data.frame(Rank, Week, Alarm, Total)

    Now when I do this...

    s <- ggplot(spark, aes(Week, Total)) + opts(
        panel.background = theme_rect(size = 1, colour = "lightgray"),
        panel.grid.major = theme_blank(),
        panel.grid.minor = theme_blank(),
        axis.line = theme_blank(),
        axis.text.x = theme_blank(),
        axis.text.y = theme_blank(),
        axis.title.x = theme_blank(),
        axis.title.y = theme_blank(),
        axis.ticks = theme_blank(),
        strip.background = theme_blank(),
        strip.text.y = theme_text(size = 7, colour = "red", angle = 0)
    )
    s + facet_grid(Alarm ~ .) + geom_line()

    I get this.... Notice that it's facetted according to Alarm and that the facets are arranged alphabetically.

    Two questions:

    1. How can I keep it facetted by alarm but displayed in the correct order (Rank asc, Alarm asc)?
    2. How can I keep it facetted by alarm but show labels from Rank instead of Alarm? Note that I can't just facet on Rank, because ggplot2 would see only 3 facets to plot where there are really 4 different alarms.

    Thanks kindly for the help!
    Tommy

    Read the article

  • how do I use fitnesse ActionFixture with C#?

    - by JDPeckham
    I tried to make an ActionFixture and it's not working (C# with the Slim runner). Basically it seems like it's being interpreted as a column fixture:

    |!-Fitnesse.BuyActions-! |
    |Start|!-Fitnesse.BuyActions-!|
    |check|total |0.0 |
    |enter|price |12.00 |
    |press|buy |
    |check|total |12.00 |
    |enter|price |100.00 |
    |press|buy |
    |check|total |112.00 |

    The rendered result shows:

    Fitnesse.BuyActions
    Start Fitnesse.BuyActions
    check Method setStart not found in Fitnesse.BuyActions

    using fit;

    namespace Fitnesse
    {
        public class BuyActions : ActionFixture
        {
            public BuyActions() : base()
            {
                this.targetObject = this;
            }
        }
    }

    Read the article

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

    $ ls -l links.csv; file links.csv; tail links.csv
    -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
    links.csv: ASCII text, with CRLF line terminators
    4757187,59883
    4757187,99822
    4757187,66546
    4757187,638452
    4757187,4627959
    4757187,312826
    4757187,6143
    4757187,6141
    4757187,3081726
    4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    import sys

    infile = open("links.csv", "r")
    edges = []
    count = 0
    # count the total number of lines in the file
    for line in infile:
        count = count + 1
    total = count
    print "Total number of lines: ", total

    infile.seek(0)
    count = 0
    for line in infile:
        edge = tuple(map(int, line.strip().split(",")))
        edges.append(edge)
        count = count + 1
        # for every million lines print memory consumption
        if count % 1000000 == 0:
            print "Position: ", edge
            print "Read ", float(count) / float(total) * 100, "%."
            mem = sys.getsizeof(edges)
            for edge in edges:
                mem = mem + sys.getsizeof(edge)
                for node in edge:
                    mem = mem + sys.getsizeof(node)
            print "Memory (Bytes): ", mem

    The output I got was:

    Total number of lines:  30609720
    Position:  (9745, 2994)
    Read  3.26693612356 %.
    Memory (Bytes):  64348736
    Position:  (38857, 103574)
    Read  6.53387224712 %.
    Memory (Bytes):  128816320
    Position:  (83609, 63498)
    Read  9.80080837067 %.
    Memory (Bytes):  192553000
    Position:  (139692, 1078610)
    Read  13.0677444942 %.
    Memory (Bytes):  257873392
    Position:  (205067, 153705)
    Read  16.3346806178 %.
    Memory (Bytes):  320107588
    Position:  (283371, 253064)
    Read  19.6016167413 %.
    Memory (Bytes):  385448716
    Position:  (354601, 377328)
    Read  22.8685528649 %.
    Memory (Bytes):  448629828
    Position:  (441109, 3024112)
    Read  26.1354889885 %.
    Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
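    A packed numeric array is one way out of this; the following is a sketch under assumptions, not part of the original question. CPython boxes every int (roughly 24 bytes each on a 64-bit build) and every 2-tuple on top of that, which is why 500 MB of text overflows 1 GB as a list of tuples. Assuming NumPy is available, both columns fit in 32-bit ints, so the whole table costs about 30.6e6 * 2 * 4 bytes, roughly 245 MB:

        import numpy as np

        # Parse the two comma-separated columns straight into a packed
        # int32 array instead of a list of tuples of boxed Python ints.
        edges = np.loadtxt("links.csv", delimiter=",", dtype=np.int32)

        # Reorder all rows by the second column.
        edges = edges[edges[:, 1].argsort()]

        print edges[:5]

    Without NumPy, the stdlib array module gives the same packing, though sorting by the second column then has to be arranged by hand.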

    Read the article

  • problem in AudioStreaming in iPhone SDK?

    - by senthilmuthu
    I am using the sample code from AudioStream.zip, but when I use it to play an MP3 file it reports the wrong total playing time (after the file has played completely through streaming). I checked by downloading that MP3 file into the Documents directory and playing it in iTunes: it plays for exactly 2.10 seconds, but streaming through that code (the - (double)progress method) gives a total playing time of only 2.3 sec. Is there any other audio streaming sample code that reports the right total playing time?

    Read the article

  • Problem passing json into jquery graph(flot)

    - by Adam McMahon
    I'm trying to retrieve some JSON to pass into a flot graph. I know the JSON is right because I hard-coded it to check, but I'm pretty sure I'm not passing it correctly, because it's not showing up. Here's the JavaScript:

    var total = $.ajax({
        type: "POST",
        async: false,
        url: "../api/?key=xxx&api=report&crud=return_months&format=json"
    }).responseText;

    //var total = $.evalJSON(total);
    var plot = $.plot($("#placeholder"), total);

    Here's the JSON:

    [
      { data: [[1,12], [2,43], [3,10], [4,17], ], label: "E-File"},
      { data: [[1,25], [2,35], [3,3], [4,5], ], label: "Bank Products" },
      { data: [[1,41], [2,87], [3,30], [4,29], ], label: "All Returns" }
    ],
    {
      series: { lines: { show: true }, points: { show: true } },
      grid: { hoverable: true, clickable: true },
      yaxis: { min: 0, max: 100 },
      xaxis: { ticks: [[1,"January"],[2,"February"],[3,"March"],[4,"April"],[5,"May"],[6,"June"],[7,"July"],[8,"August"],[9,"September"],[10,"October"],[11,"November"],[12,"December"]] }
    }
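    Two things stand out in the snippet above, offered as observations rather than a confirmed fix: responseText hands $.plot a raw string (the commented-out parsing line is probably needed after all), and the payload is not strict JSON to begin with, since the keys are unquoted, there are trailing commas, and it bundles a second top-level options object. To illustrate the second point, any strict parser rejects a fragment shaped like that payload; a small sketch in Python, purely for demonstration:

        import json

        # A fragment shaped like the question's payload: unquoted keys,
        # trailing comma inside the array.
        payload = '[{ data: [[1,12], [2,43], ], label: "E-File" }]'

        try:
            json.loads(payload)
        except ValueError as e:
            # Rejected: strict JSON requires double-quoted keys and
            # forbids trailing commas.
            print "invalid JSON:", e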

    Read the article

  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    Having a strange problem with an NFS share and file permissions on 1 of the 2 NFS clients: web1 has file permission issues but web2 is fine. web1 and web2 are load-balanced web servers. So my questions are: how do I ensure the NFS share's file contents retain the same user/group permissions as the original files on the web1 server, like they do on the web2 server? And how do I reverse what I did on web1? I tried an "unmount" command and it said command not found.

    Information: I'm using a 3 dedicated server setup, all servers CentOS 5.4 64bit. The servers are as follows:

    web1 - NFS client with file permission issues
    web2 - NFS client, file permissions are OKAY
    db1 - NFS share at /nfsroot

    The web2 NFS client was set up by my web host, while web1 was set up by me. I ran the following commands on web1, and it worked as far as updating the db1 nfsroot share at /nfsroot/site_css with the latest files from web1, but the file permissions don't stick even if I use tar with the -p flag to preserve file permissions:

    cd /home/username/public_html/forums/script/
    tar -zcp site_css/ > site_css.tar.gz
    mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft
    cd /home/username/public_html/forums/script/
    tar -zxf site_css.tar.gz

    But checking on web1, the file permissions are no longer the username user/group; everything is owned by nobody, while web2's file permissions are correct. This is only a problem for web1. Looks like the numeric ids aren't the same? Not sure how to correct this.

    web1, with incorrect user/group of nobody:

    ls -alh /home/username/public_html/forums/scripts/site_css
    total 48K
    drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
    drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
    -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web1 numeric ids:

    ls -n /home/username/public_html/forums/scripts/site_css
    total 48
    drwxrwxrwx 2  99  99 4096 Feb 22 02:37 ./
    drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../
    -rw-r--r-- 1  99  99    1 Nov 30  2006 index.html
    -rw-r--r-- 1  99  99 5876 Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r-- 1  99  99 5877 Feb 22 02:37 style-95001864-00002.css
    -rw-r--r-- 1  99  99 5877 Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r-- 1  99  99 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    web2, correct username user/group permissions:

    ls -alh /home/username/public_html/forums/scripts/site_css
    total 48K
    drwxrwxrwx 2 root     root     4.0K Feb 22 02:37 ./
    drwxr-xr-x 3 username username 4.0K Dec  2 14:51 ../
    -rw-r--r-- 1 username username    1 Nov 30  2006 index.html
    -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
    -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web2 numeric ids:

    ls -n /home/username/public_html/forums/scripts/site_css
    total 48
    drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./
    drwxr-xr-x 3 503 500 4096 Dec  2 14:51 ../
    -rw-r--r-- 1 503 500    1 Nov 30  2006 index.html
    -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css
    -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    I checked db1's /nfsroot/site_css and the user/group ownership was incorrect for the newer files dated Feb 22, owned by root and not username.

    On db1, originally, with incorrect root-assigned user/group for the new Feb 22 dated files:

    ls -alh /nfsroot/site_css
    total 44K
    drwxrwxrwx  2 root     root 4.0K Feb 22 02:37 .
    drwxr-xr-x 17 root     root 4.0K Feb 17 12:06 ..
    -rw-r--r--  1 root     root    1 Nov 30  2006 index.html
    -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-95001864-00002.css
    -rw-------  1 username nfs  5.8K Feb 18 05:37 style-b1879ba7-00002.css
    -rw-------  1 username nfs  5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    Then I chmod'ed them all on db1 and chown'ed them to the right ownership, so once the newer Feb 22 dated files were corrected it looks like this on db1:

    ls -alh /nfsroot/site_css
    total 44K
    drwxrwxrwx  2 root     root     4.0K Feb 22 02:37 .
    drwxr-xr-x 17 root     root     4.0K Feb 17 12:06 ..
    -rw-r--r--  1 username username    1 Nov 30  2006 index.html
    -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
    -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    But web1 still shows the files owned by nobody, while web2 shows the correct permissions. web1 is still not matching what web2 and db1 are set to:

    ls -alh /home/username/public_html/forums/scripts/site_css
    total 48K
    drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
    drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
    -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
    -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    Just so confusing, so any help is very very much appreciated! Thanks
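    Two hedged pointers, not a confirmed diagnosis. First, the command to undo a mount is umount, not unmount, e.g. umount /home/username/public_html/forums/scripts/site_css. Second, files showing up as nobody:nobody on an NFSv4 client is the classic symptom of an ID-mapping mismatch rather than a copying problem: NFSv4 sends owners over the wire as user@domain, and when the idmapd domain on the client doesn't match the server's, every name falls back to the configured Nobody-User. A sketch of what to compare on CentOS 5 (the domain value is illustrative):

        # /etc/idmapd.conf -- the Domain line must be identical on
        # web1, web2 and db1, otherwise names map to nobody/nobody.
        [General]
        Domain = yourdomain.com

        [Mapping]
        Nobody-User = nobody
        Nobody-Group = nobody

    After aligning the domain, restart the mapper (service rpcidmapd restart on CentOS 5) and remount. A mismatch here would also explain why web2, set up by the host, behaves while the self-configured web1 doesn't.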

    Read the article

  • Visual Studio Conversion Suite

    - by KingPop
    I have this as my conversion program for "Length". How can I do it the simplest way, instead of keeping such a long If/ElseIf/Else chain? I do not have much experience and am trying to improve my programming skills in Visual Studio 2008. Basically, I get annoyed with the formulas because I don't know if they are right; I use Google, but it doesn't help because I don't know how to get the conversion right from type to type.

    Public Class Form2
        Dim Metres As Integer
        Dim Centimetres As Integer
        Dim Inches As Integer
        Dim Feet As Integer
        Dim Total As Integer

        Private Sub Form2_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            ErrorMsg.Hide()
        End Sub

        Private Sub btnConvert_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnConvert.Click
            Metres = 1
            Centimetres = 0.01
            Inches = 0.0254
            Feet = 0.3048
            txtTo.Text = 0
            If txtFrom.Text <> "" Then
                If IsNumeric(txtFrom.Text) And IsNumeric(txtTo.Text) Then
                    If cbFrom.Text = "Metres" And cbTo.Text = "Centimetres" Then
                        Total = txtFrom.Text * Metres
                        txtTo.Text = Total
                    ElseIf cbFrom.Text = "Metres" And cbTo.Text = "Inches" Then
                        Total = txtFrom.Text * 100
                        txtTo.Text = Total
                    ElseIf cbFrom.Text = "Metres" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Inches" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Centimetres" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Centimetres" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Inches" Then
                    End If
                End If
            End If
        End Sub
    End Class

    This is the source for what I have done at the moment.
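    The usual way to collapse a chain like this is a table of conversion factors with one base unit (metres here): convert the input to metres, then from metres to the target unit, which covers all sixteen From/To combinations with one line of arithmetic. Below is a sketch of the pattern, written in Python for brevity; the unit names mirror the question, and the same idea ports directly to a VB.NET Dictionary(Of String, Double). Note the factors must be floating-point: the original code declares them As Integer, so assigning 0.01 loses the value.

        # How many metres one unit of each kind is worth.
        TO_METRES = {
            "Metres": 1.0,
            "Centimetres": 0.01,
            "Inches": 0.0254,
            "Feet": 0.3048,
        }

        def convert(value, from_unit, to_unit):
            # source unit -> metres -> target unit
            return value * TO_METRES[from_unit] / TO_METRES[to_unit]

        print convert(12.0, "Feet", "Centimetres")   # 365.76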

    Read the article

  • Query Months help

    - by StealthRT
    Hey all, I am in need of some helpful tips/advice on how to go about my problem. I have a database that houses a "signup" table. The date for this table is formatted as such: 2010-04-03 00:00:00

    Now suppose I have 10 records in this database:

    2010-04-03 00:00:00
    2010-01-01 00:00:00
    2010-06-22 00:00:00
    2010-02-08 00:00:00
    2010-02-05 00:00:00
    2010-03-08 00:00:00
    2010-09-29 00:00:00
    2010-11-16 00:00:00
    2010-04-09 00:00:00
    2010-05-21 00:00:00

    And I wanted to get each month's total registers... so following the example above:

    Jan = 1
    Feb = 2
    Mar = 1
    Apr = 2
    May = 1
    Jun = 1
    Jul = 0
    Aug = 0
    Sep = 1
    Oct = 0
    Nov = 1
    Dec = 0

    Now how can I use a query to do that, but not have to use a query like:

    WHERE left(date, 7) = '2010-01'

    and keep doing that 12 times? I would like it to be a single query call and just have it place each month's visits into an array like so:

    do until EOF
        theMonthArray[0] = "total for jan"
        theMonthArray[1] = "total for feb"
        theMonthArray[2] = "total for mar"
        theMonthArray[3] = "total for apr"
        ...etc
    loop

    I just can not think of a way to do that other than the example I posted, with 12 queries called (one for each month). This is my query as of right now. Again, this only populates for one month, where I am trying to populate all 12 months at once:

    SELECT count(idNumber) as numVisits, theAccount, signUpDate, theActive
    FROM userinfo
    WHERE theActive = 'YES'
      AND idNumber = '0203'
      AND theAccount = 'SUB'
      AND left(signUpDate, 7) = '2010-04'
    GROUP BY idNumber
    ORDER BY numVisits;

    The example query above outputs this:

    numVisits | theAccount | signUpDate          | theActive
    2         | SUB        | 2010-04-16 00:00:00 | YES

    Which is correct, because I have 2 records within the month of April. But again, I am trying to do all 12 months at one time (in a single query) so I do not tax the database server as much compared to doing 12 different queries...

    UPDATE

    I'm looking to do something along these lines:

    if NOT rst.EOF
        if left(rst("signUpDate"), 7) = "2010-01" then
            theMonthArray[0] = rst("numVisits")
        end if
        if left(rst("signUpDate"), 7) = "2010-02" then
            theMonthArray[1] = rst("numVisits")
        end if
        etc etc....
    end if

    Any help would be great! :)
    David
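    A single-statement shape does exist on the database side: in MySQL it is GROUP BY MONTH(signUpDate), which returns one row per month that has data; the months with zero signups then get filled in by the application. As a sketch of that fill-in step (Python used here for brevity, since the page in the question looks like classic ASP; the dates are the ten from the example):

        from collections import defaultdict

        # Rows as they would come back from one
        # "SELECT signUpDate FROM userinfo WHERE ..." query,
        # with no per-month WHERE left(signUpDate, 7) round trips.
        rows = [
            "2010-04-03 00:00:00", "2010-01-01 00:00:00", "2010-06-22 00:00:00",
            "2010-02-08 00:00:00", "2010-02-05 00:00:00", "2010-03-08 00:00:00",
            "2010-09-29 00:00:00", "2010-11-16 00:00:00", "2010-04-09 00:00:00",
            "2010-05-21 00:00:00",
        ]

        counts = defaultdict(int)
        for stamp in rows:
            counts[int(stamp[5:7])] += 1   # "2010-04-..." -> month 4

        # theMonthArray[0] = January ... theMonthArray[11] = December,
        # zero months included.
        the_month_array = [counts[m] for m in range(1, 13)]
        print the_month_array   # [1, 2, 1, 2, 1, 1, 0, 0, 1, 0, 1, 0]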

    Read the article

  • Winforms: Embedded NumericUpDown control inside ListView

    - by tanthiamhuat
    Say I have a ListView with 4 columns (Description, Price Per Unit, Quantity, Total Price). I would like to make the third column editable, with an embedded NumericUpDown control for the Quantity column. Is that possible? And when the Quantity is updated via the NumericUpDown control, the Total Price should also be updated, based on Total Price = Quantity * Price Per Unit. Is the above achievable? Any code samples would be greatly appreciated.

    Read the article

  • iText can't keep rows together: second row spans multiple pages but won't stick with the first row

    - by J2SE31
    I am having trouble keeping the first and second rows of my main PdfPTable together using iText. My first row consists of a PdfPTable with some basic search criteria. My second row consists of a PdfPTable that contains all of the tabulated results. Every time the tabulated results become too big and span multiple pages, they are kicked to the second page automatically, rather than showing up directly below the search criteria and then paging to the next page. How can I avoid this problem? I have tried using setSplitRows(false), but I simply get a blank document (see commented lines 117 and 170). How can I keep my tabulated data (second row) up on the first page? An example of my code is shown below (you should be able to just copy/paste):

    public class TestHelper {

        private TestEventHelper helper;

        public TestHelper() {
            super();
            helper = new TestEventHelper();
        }

        public TestEventHelper getHelper() { return helper; }

        public void setHelper(TestEventHelper helper) { this.helper = helper; }

        public static void main(String[] args) {
            TestHelper test = new TestHelper();
            TestEventHelper helper = test.getHelper();
            FileOutputStream file = null;
            Document document = null;
            PdfWriter writer = null;
            try {
                file = new FileOutputStream(new File("C://Documents and Settings//All Users//Desktop//pdffile2.pdf"));
                document = new Document(PageSize.A4.rotate(), 36, 36, 36, 36);
                writer = PdfWriter.getInstance(document, file);
                // writer.setPageEvent(templateHelper);
                writer.setPdfVersion(PdfWriter.PDF_VERSION_1_7);
                writer.setUserunit(1f);
                document.open();
                List<Element> pages = null;
                try {
                    pages = helper.createTemplate();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Iterator<Element> iterator = pages.iterator();
                while (iterator.hasNext()) {
                    Element element = iterator.next();
                    if (element instanceof Phrase) {
                        document.newPage();
                    } else {
                        document.add(element);
                    }
                }
            } catch (Exception de) {
                de.printStackTrace();
                // log.debug("Exception " + de + " " + de.getMessage());
            } finally {
                if (document != null) { document.close(); }
                if (writer != null) { writer.close(); }
            }
            System.out.println("Done!");
        }

        private class TestEventHelper extends PdfPageEventHelper {

            // The PdfTemplate that contains the total number of pages.
            protected PdfTemplate total;
            protected BaseFont helv;
            private static final float SMALL_MARGIN = 20f;
            private static final float MARGIN = 36f;
            private final Font font = new Font(Font.HELVETICA, 12, Font.BOLD);
            private final Font font2 = new Font(Font.HELVETICA, 10, Font.BOLD);
            private final Font smallFont = new Font(Font.HELVETICA, 10, Font.NORMAL);
            private String[] datatableHeaderFields = new String[]{"Header1", "Header2", "Header3", "Header4", "Header5", "Header6", "Header7", "Header8", "Header9"};

            public TestEventHelper() {
                super();
            }

            public List<Element> createTemplate() throws Exception {
                List<Element> elementList = new ArrayList<Element>();
                float[] tableWidths = new float[]{1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.25f, 1.25f, 1.25f, 1.25f};
                // logger.debug("entering create reports template...");
                PdfPTable splitTable = new PdfPTable(1);
                splitTable.setSplitRows(false);
                splitTable.setWidthPercentage(100f);
                PdfPTable pageTable = new PdfPTable(1);
                pageTable.setKeepTogether(true);
                pageTable.setWidthPercentage(100f);
                PdfPTable searchTable = generateSearchFields();
                if (searchTable != null) {
                    searchTable.setSpacingAfter(25f);
                }
                PdfPTable outlineTable = new PdfPTable(1);
                outlineTable.setKeepTogether(true);
                outlineTable.setWidthPercentage(100f);
                PdfPTable datatable = new PdfPTable(datatableHeaderFields.length);
                datatable.setKeepTogether(false);
                datatable.setWidths(tableWidths);
                generateDatatableHeader(datatable);
                for (int i = 0; i < 100; i++) {
                    addCell(datatable, String.valueOf(i), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+1), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+2), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+3), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+4), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+5), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+6), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+7), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true);
                    addCell(datatable, String.valueOf(i+8), 1, Rectangle.NO_BORDER, Element.ALIGN_RIGHT, smallFont, true);
                }
                PdfPCell dataCell = new PdfPCell(datatable);
                dataCell.setBorder(Rectangle.BOX);
                outlineTable.addCell(dataCell);
                PdfPCell searchCell = new PdfPCell(searchTable);
                searchCell.setVerticalAlignment(Element.ALIGN_TOP);
                PdfPCell outlineCell = new PdfPCell(outlineTable);
                outlineCell.setVerticalAlignment(Element.ALIGN_TOP);
                addCell(pageTable, searchCell, 1, Rectangle.NO_BORDER, Element.ALIGN_LEFT, null, null);
                addCell(pageTable, outlineCell, 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, null, null);
                PdfPCell pageCell = new PdfPCell(pageTable);
                pageCell.setVerticalAlignment(Element.ALIGN_TOP);
                addCell(splitTable, pageCell, 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, null, null);
                elementList.add(pageTable);
                // elementList.add(splitTable);
                return elementList;
            }

            public void onOpenDocument(PdfWriter writer, Document document) {
                total = writer.getDirectContent().createTemplate(100, 100);
                total.setBoundingBox(new Rectangle(-20, -20, 100, 100));
                try {
                    helv = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.WINANSI, BaseFont.NOT_EMBEDDED);
                } catch (Exception e) {
                    throw new ExceptionConverter(e);
                }
            }

            public void onEndPage(PdfWriter writer, Document document) {
                //TODO
            }

            public void onCloseDocument(PdfWriter writer, Document document) {
                total.beginText();
                total.setFontAndSize(helv, 10);
                total.setTextMatrix(0, 0);
                total.showText(String.valueOf(writer.getPageNumber() - 1));
                total.endText();
            }

            private PdfPTable generateSearchFields() {
                PdfPTable searchTable = new PdfPTable(2);
                for (int i = 0; i < 6; i++) {
                    addCell(searchTable, "Search Key" + i, 1, Rectangle.NO_BORDER, Element.ALIGN_RIGHT, font2, MARGIN, true);
                    addCell(searchTable, "Search Value +i", 1, Rectangle.NO_BORDER, Element.ALIGN_LEFT, smallFont, null, true);
                }
                return searchTable;
            }

            private void generateDatatableHeader(PdfPTable datatable) {
                if (datatableHeaderFields != null && datatableHeaderFields.length != 0) {
                    for (int i = 0; i < datatableHeaderFields.length; i++) {
                        addCell(datatable, datatableHeaderFields[i], 1, Rectangle.BOX, Element.ALIGN_CENTER, font2);
                    }
                }
            }

            private PdfPCell addCell(PdfPTable table, String cellContent, int colspan, int cellBorder, int horizontalAlignment, Font font) {
                return addCell(table, cellContent, colspan, cellBorder, horizontalAlignment, font, null, null);
            }

            private PdfPCell addCell(PdfPTable table, String cellContent, int colspan, int cellBorder, int horizontalAlignment, Font font, Boolean noWrap) {
                return addCell(table, cellContent, colspan, cellBorder, horizontalAlignment, font, null, noWrap);
            }

            private PdfPCell addCell(PdfPTable table, String cellContent, Integer colspan, Integer cellBorder, Integer horizontalAlignment, Font font, Float paddingLeft, Boolean noWrap) {
                PdfPCell cell = new PdfPCell(new Phrase(cellContent, font));
                return addCell(table, cell, colspan, cellBorder, horizontalAlignment, paddingLeft, noWrap);
            }

            private PdfPCell addCell(PdfPTable table, PdfPCell cell, int colspan, int cellBorder, int horizontalAlignment, Float paddingLeft, Boolean noWrap) {
                cell.setColspan(colspan);
                cell.setBorder(cellBorder);
                cell.setHorizontalAlignment(horizontalAlignment);
                if (paddingLeft != null) {
                    cell.setPaddingLeft(paddingLeft);
                }
                if (noWrap != null) {
                    cell.setNoWrap(noWrap);
                }
                table.addCell(cell);
                return cell;
            }
        }
    }

    Read the article

  • what is jqgrid footer json format

    - by Dennis
    Hi. I am currently trying to solve my problem with the total in the footer. I tried searching the internet for some examples, but they are using PHP and I am using Java. Can you please show me what exactly the JSON looks like for this PHP script?

    $response->userdata['total'] = 1234;
    $response->userdata['name'] = 'Totals:';

    Is this what it looks like?

    {"total":0,"userdata":[{"total":1234,"name":"Totals"}],"page":0,"aData":.....

    Thanks.
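    For reference, and hedged as far as the exact grid setup goes: jqGrid expects userdata to be a single object whose keys match the colModel names, not an array of objects, and the grid needs footerrow: true and userDataOnFooter: true in its options before the values land in the footer row. A sketch of a complete response in that shape (the row contents are illustrative):

        {
          "page": 1,
          "total": 1,
          "records": 1,
          "rows": [ { "id": "1", "cell": ["first item", 1234] } ],
          "userdata": { "name": "Totals:", "total": 1234 }
        }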

    Read the article

  • using nested arrays with PHP http_build_query() and receiving them in Flash AS3

    - by Mahmoud
    Hi, I am having a hard time figuring out what is needed here. I am using URLVariables to send/receive values between Flash and PHP. The problem is, I am unable to access nested arrays (an array inside an array) from Flash. Here's an example:

    $dgresult = array("total" => $results);
    echo http_build_query($dgresult, "flf_");

    In Flash, all I need to do is use:

    var variables:URLVariables = new URLVariables(e.target.data);

    Then I can access it with: variables.total

    The problem now is when I have nested arrays:

    $dgresult = array("total" => $results);
    array_push($dgresult, $another_array);
    http_build_query($dgresult, "flf_");

    I can still access variables.total, but what about anything that has flf_? How is that possible?

    Read the article

  • Need to add totals of select list, and subtract if taken away

    - by jeremy
    This is the code I am using to calculate the sum of two values (example below): http://www.healthybrighton.co.uk/wse/node/1841

    This works to a degree. It will add the two together, but only if #edit-submitted-test is selected first, and even then it will show NaN until I select #edit-submitted-test-1. I want it to calculate the sum of the fields and show the amount live, i.e. the value of edit-submitted-test-1 will show 500, then if I select another field it will update to 1000. If I take one selection away it will then subtract it and be back to 500. Any ideas would be helpful, thanks!

    Drupal.behaviors.bookingForm = function (context) {
        // some debugging here, remove when you're finished
        console.log('[bookingForm] started.');

        // get number of travelers and multiply it by 100
        function recalculateTotal() {
            var count = $('#edit-submitted-test').val();
            count = parseFloat( count );
            var cost = $('#edit-submitted-test-1').val();
            cost = parseFloat( cost );
            $('#edit-submitted-total').val( count + cost );
        }

        // run recalculateTotal every time user enters a new value
        var fieldCount = $('#edit-submitted-test');
        var fieldCount = $('#edit-submitted-test-1');
        fieldCount.change( recalculateTotal );

        // etc ...
    };

    EDIT

    This is all working beautifully. However, I now want to add all the values together and automatically update a total cost field that is the sum of the field added above and a field whose value is passed from the previous page. I did this, where accomcost is the field that is added together, but the total cost field does not update automatically; it is always one selection behind. I.e., if I select options and the accommodation cost updates to £900, the total cost remains empty. If I then change the selection and the accommodation updates to £300, the total cost updates to the previous selection of £900. Any help on this one, Felix? ;) thanks.

    var accomcost = $('#edit-submitted-accomodation-cost').val();
    accomcost = accomcost ? parseFloat(accomcost) : 0;
    var coursescost = $('#edit-submitted-courses-cost-2').val();
    coursescost = coursescost ? parseFloat(coursescost) : 0;
    $('#edit-submitted-accomodation-cost').val( w1 + w2 + w3 + w4 + w5 + w6 + w7 + w8 + w9 + w10 + w11 + w12 + w13 + w14 + w15 + w16 + w17 + w18 + w19 + w20 + w21 + w22 + w23 + w24 + w25 + w26 + w27 + w28 + w29 + w30 );
    $('#edit-submitted-total-cost').val( accomcost + coursescost );
    var fieldCount = $('#edit-submitted-accomodation-cost, #edit-submitted-courses-cost-2')
        .change( recalculateTotal );

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >