Search Results

Search found 8401 results on 337 pages for 'bad habits'.

Page 52/337 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
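    A note on where to start looking: the kernel's "task blocked for more than 120 seconds" message and the /proc/mdstat output above (both RAID1 arrays show [2/1], i.e. running on a single member) point at the storage layer rather than at PHP itself. Below is a minimal sketch of a background probe that captures what the machine is doing during a freeze, assuming the sysstat package (vmstat, iostat) is installed; it writes to /dev/shm so the probe itself does not depend on the disk that may be stalling:

    ```sh
    # a sketch, not a fix: collect I/O and process-state snapshots once a minute
    PROBE=/dev/shm/freeze-probe.log
    while true; do
        {
            date
            vmstat 1 5                                # watch the "b" (blocked) and "wa" (iowait) columns
            iostat -x 1 5                             # per-device utilisation and await times
            ps -eo state,pid,cmd | awk '$1 ~ /^D/'    # tasks stuck in uninterruptible (I/O) sleep
            cat /proc/mdstat                          # RAID health at the time of the sample
        } >> "$PROBE"
        sleep 60
    done
    ```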

    Read the article

  • JSON parsing using SOAP request and response

    - by hardik
    hello all in my project i want to use JASON parsing my sample soap request and response is here below please guide me how can i do that // soap request // <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns9093="urn:outmarket"><SOAP-ENV:Body><ns9093:doStartup xmlns:ns9093="urn:outmarket"><username xsi:type="xsd:string">guest</username><password xsi:type="xsd:string">guest</password></ns9093:doStartup></SOAP-ENV:Body></SOAP-ENV:Envelope> // soap response // <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:doStartupResponse xmlns:ns1="urn:outmarket"><return xsi:type="xsd:xml"><isvaliduser>1</isvaliduser><userid>19</userid><cansubmitphoto>0</cansubmitphoto><cansubmitcomment>0</cansubmitcomment><cansubmitrating>0</cansubmitrating><cansubmitmarket>0</cansubmitmarket><cansubmitstallholder>0</cansubmitstallholder><databaseid>1</databaseid><markets><markettype>1</markettype><marketid>3</marketid><marketname><![CDATA[Bairnsdale Farmers Market]]></marketname><ratingname><![CDATA[Market in general]]></ratingname><good>0</good><neutral>0</neutral><bad>0</bad></markets><markets><markettype>0</markettype><marketid>3</marketid><marketname><![CDATA[Bairnsdale Farmers Market]]></marketname><ratingname><![CDATA[Market in general]]></ratingname><good>25</good><neutral>0</neutral><bad>18</bad></markets><markets><markettype>0</markettype><marketid>5</marketid><marketname><![CDATA[Bendigo Farmers' Market]]></marketname><ratingname>`
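    A SOAP endpoint always returns XML, so the usual approach is to parse the XML response first and, if JSON is what the rest of the app wants, convert it afterwards. Here is a minimal, platform-neutral sketch in Python; the endpoint URL, the SOAPAction value and the local file name are placeholders, and the requests and xmltodict libraries are assumptions rather than anything mandated by the service:

    ```python
    import json
    import requests
    import xmltodict

    # the doStartup envelope shown above, saved locally (placeholder file name)
    soap_request = open("dostartup_request.xml").read()

    resp = requests.post(
        "http://example.com/soap/endpoint",              # placeholder URL
        data=soap_request,
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "urn:outmarket#doStartup",     # placeholder action
        },
    )

    # convert the SOAP XML response into a dict, then into JSON for easier handling
    as_dict = xmltodict.parse(resp.text)
    print(json.dumps(as_dict, indent=2))
    ```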

    Read the article

  • Replace apostrophe in json string with empty string

    - by user572844
    Hi, I have problem with deserialization of json string, because string is bad format. For example json object consist string property statusMessage with value "Hello "dog" ". The correct format should be "Hello \" dog \" " . I would like remove apostrophes from this property. Something Like this. "Hello "dog" ". - "Hello dog ". Here is it original json string which I work. "{\"jancl\":{\"idUser\":18438201,\"nick\":\"JANCl\",\"photo\":\"1\",\"sex\":1,\"photoAlbums\":1,\"videoAlbums\":0,\"sefNick\":\"jancl\",\"profilPercent\":75,\"emphasis\":false,\"age\":\"-\",\"isBlocked\":false,\"PHOTO\":{\"normal\":\"http://u.aimg.sk/fotky/1843/82/n_18438201.jpg?v=1\",\"medium\":\"http://u.aimg.sk/fotky/1843/82/m_18438201.jpg?v=1\",\"24x24\":\"http://u.aimg.sk/fotky/1843/82/s_18438201.jpg?v=1\"},\"PLUS\":{\"active\":false,\"activeTo\":\"0000-00-00\"},\"LOCATION\":{\"idRegion\":\"6\",\"regionName\":\"Trenciansky kraj\",\"idCity\":\"138\",\"cityName\":\"Trencianske Teplice\"},\"STATUS\":{\"isLoged\":true,\"isChating\":false,\"idChat\":0,\"roomName\":\"\",\"lastLogin\":1294925369},\"PROJECT_STATUS\":{\"photoAlbums\":1,\"photoAlbumsFavs\":0,\"videoAlbums\":0,\"videoAlbumsFavs\":0,\"videoAlbumsExts\":0,\"blogPosts\":0,\"emailNew\":0,\"postaNew\":0,\"clubInvitations\":0,\"dashboardItems\":1},\"STATUS_MESSAGE\":{\"statusMessage\":\"\"Status\"\",\"addTime\":\"1294872330\"},\"isFriend\":false,\"isIamFriend\":false}}" Problem is here, json string consist this object: "STATUS_MESSAGE": {"statusMessage":" "some "bad" value" ", "addTime" :"1294872330"} Condition of string which I want modified: string start with "statusMessage":" string can has any *lenght from 0 -N * string end with ", "addTime So I try write pattern for string which start with "statusMessage":", has any lenght and is ended with ", "addTime. Here is it: const string pattern = " \" statusMessage \" : \" .*? \",\"addTime\" "; var regex = new Regex(pattern, RegexOptions.IgnoreCase); //here i would replace " with empty string string result = regex.Replace(jsonString, match => ???); But I think pattern is wrong, also I don’t know how replace apostrophe with empty string (remove apostrophne). My goal is : "statusMessage":" "some "bad" value" to "statusMessage":" "some bad value" Thank for advice
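    For what it's worth, the characters causing the trouble are unescaped double quotes rather than apostrophes. Below is a minimal C# sketch of the replace step, using capture groups for the fixed boundaries and a MatchEvaluator that strips the stray quotes from the statusMessage value; the pattern boundaries come from the question, everything else is illustrative:

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class FixStatusMessage
    {
        static void Main()
        {
            string jsonString = "...";   // the raw string from the question

            // groups 1 and 3 are the fixed boundaries, group 2 is the broken value
            const string pattern = "(\"statusMessage\":\")(.*?)(\",\"addTime\")";

            string result = Regex.Replace(jsonString, pattern,
                m => m.Groups[1].Value
                   + m.Groups[2].Value.Replace("\"", "")   // drop the unescaped quotes
                   + m.Groups[3].Value,
                RegexOptions.Singleline);

            Console.WriteLine(result);
        }
    }
    ```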

    Read the article

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem, let's say it's fictional to protect the guilty: A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this: a chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker The issue is assigned to a developer The developer resolves the issue and commits their code changes to the trunk At release time, the frozen, and heavily tested trunk or release branch or whatever is built in release mode and released The problem he's having is that a couple newbies made several bad commits that weren't caught due to an unfortunate chain of events. This was followed by a bad release with a rollback or flurry of hot fixes. One idea we're toying with: Revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN): Good: newbies are isolated and can't hurt others Good: committers merge newbie branches with the trunk frequently Good: this enforces rigid code reviews Bad: this is burdensome on the committers (but there's probably no way around it since the code needs reviewed!) Bad: it might make traceability of trunk changes a little tougher since the reviewer would be doing the commit--not too sure on this. Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor quality changes to the trunk. Plugging that hole is most important. Relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole--testing--is plugged, mistakes by everyone--newbie or senior--will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!
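    If you do go the restricted-trunk route, most of it can be enforced server-side instead of by convention. Here is a minimal sketch of a Subversion pre-commit hook that rejects trunk commits from anyone not in a committers list; the hook path, the list file and the repository layout (trunk/ at the root) are assumptions to adapt:

    ```sh
    #!/bin/sh
    # hooks/pre-commit - reject trunk commits from non-committers (sketch)
    REPOS="$1"
    TXN="$2"
    SVNLOOK=/usr/bin/svnlook

    AUTHOR=$($SVNLOOK author -t "$TXN" "$REPOS")

    # does this transaction touch trunk at all?
    if $SVNLOOK changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep -q '^trunk/'; then
        if ! grep -qx "$AUTHOR" "$REPOS/hooks/trunk-committers.txt"; then
            echo "Commits to trunk are restricted; commit to your branch and request a review." >&2
            exit 1
        fi
    fi
    exit 0
    ```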

    Read the article

  • DHCPv6: Provide IPv6 information in your local network

    Even though IPv6 might not be that important within your local network it might be good to get yourself into shape, and be able to provide some details of your infrastructure automatically to your network clients. This is the second article in a series on IPv6 configuration: Configure IPv6 on your Linux system DHCPv6: Provide IPv6 information in your local network Enabling DNS for IPv6 infrastructure Accessing your web server via IPv6 Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system. IPv6 addresses for everyone (in your network) Okay, after setting up the configuration of your local system, it might be interesting to enable all your machines in your network to use IPv6. There are two options to solve this kind of requirement... Either you're busy like a bee and you go around to configure each and every system manually, or you're more the lazy and effective type of network administrator and you prefer to work with Dynamic Host Configuration Protocol (DHCP). Obviously, I'm of the second type. Enabling dynamic IPv6 address assignments can be done with a new or an existing instance of a DHCPd. In case of Ubuntu-based installation this might be isc-dhcp-server. The isc-dhcp-server allows address pooling for IP and IPv6 within the same package, you just have to run to independent daemons for each protocol version. First, check whether isc-dhcp-server is already installed and maybe running your machine like so: $ service isc-dhcp-server6 status In case, that the service is unknown, you have to install it like so: $ sudo apt-get install isc-dhcp-server Please bear in mind that there is no designated installation package for IPv6. Okay, next you have to create a separate configuration file for IPv6 address pooling and network parameters called /etc/dhcp/dhcpd6.conf. This file is not automatically provided by the package, compared to IPv4. Again, use your favourite editor and put the following lines: $ sudo nano /etc/dhcp/dhcpd6.conf authoritative;default-lease-time 14400; max-lease-time 86400;log-facility local7;subnet6 2001:db8:bad:a55::/64 {    option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;    option dhcp6.domain-search "ios.mu";    range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;    range6 2001:db8:bad:a55::/64 temporary;} Next, save the file and start the daemon as a foreground process to see whether it is going to listen to requests or not, like so: $ sudo /usr/sbin/dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf eth0 The parameters are explained quickly as -6 we want to run as a DHCPv6 server, -d we are sending log messages to the standard error descriptor (so you should monitor your /var/log/syslog file, too), and we explicitely want to use our newly created configuration file (-cf). You might also use the command switch -t to test the configuration file prior to running the server. In my case, I ended up with a couple of complaints by the server, especially reporting that the necessary lease file wouldn't exist. So, ensure that the lease file for your IPv6 address assignments is present: $ sudo touch /var/lib/dhcp/dhcpd6.leases$ sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases Now, you should be good to go. Stop your foreground process and try to run the DHCPv6 server as a service on your system: $ sudo service isc-dhcp-server6 startisc-dhcp-server6 start/running, process 15883 Check your log file /var/log/syslog for any kind of problems. 
Refer to the man-pages of isc-dhcp-server and you might check out Chapter 22.6 of Peter Bieringer's IPv6 Howto. The instructions regarding DHCPv6 on the Ubuntu Wiki are not as complete as expected and it might not be as helpful as this article or Peter's HOWTO. But see for yourself. Does the client get an IPv6 address? Running a DHCPv6 server on your local network surely comes in handy but it has to work properly. The following paragraphs describe briefly how to check the IPv6 configuration of your clients, Linux - ifconfig or ip command First, you have enable IPv6 on your Linux by specifying the necessary directives in the /etc/network/interfaces file, like so: $ sudo nano /etc/network/interfaces iface eth1 inet6 dhcp Note: Your network device might be eth0 - please don't just copy my configuration lines. Then, either restart your network subsystem, or enable the device manually using the dhclient command with IPv6 switch, like so: $ sudo dhclient -6 You would either use the ifconfig or (if installed) the ip command to check the configuration of your network device like so: $ sudo ifconfig eth1eth1      Link encap:Ethernet  HWaddr 00:1d:09:5d:8d:98            inet addr:192.168.160.147  Bcast:192.168.160.255  Mask:255.255.255.0          inet6 addr: 2001:db8:bad:a55::193/64 Scope:Global          inet6 addr: fe80::21d:9ff:fe5d:8d98/64 Scope:Link          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 Looks good, the client has an IPv6 assignment. Now, let's see whether DNS information has been provided, too. $ less /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTENnameserver 2001:4860:4860::8888nameserver 2001:4860:4860::8844nameserver 192.168.1.2nameserver 127.0.1.1search ios.mu Nicely done. Windows - netsh Per description on TechNet the netsh is defined as following: "Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. Netsh can also save a configuration script in a text file for archival purposes or to help you configure other servers." And even though TechNet states that it applies to Windows Server (only), it is also available on Windows client operating systems, like Vista, Windows 7 and Windows 8. In order to get or even set information related to IPv6 protocol, we have to switch the netsh interface context prior to our queries. Open a command prompt in Windows and run the following statements: C:\Users\joki>netshnetsh>interface ipv6netsh interface ipv6>show interfaces Select the device index from the Idx column to get more details about the IPv6 address and DNS server information (here: I'm going to use my WiFi device with device index 11), like so: netsh interface ipv6>show address 11 Okay, address information has been provided. Now, let's check the details about DNS and resolving host names: netsh interface ipv6> show dnsservers 11 Okay, that looks good already. Our Windows client has a valid IPv6 address lease with lifetime information and details about the configured DNS servers. Talking about DNS server... Your clients should be able to connect to your network servers via IPv6 using hostnames instead of IPv6 addresses. Please read on about how to enable a local named with IPv6.
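    For readability, the dhcpd6.conf fragment from the article again, laid out one directive per line (same content, no changes):

    ```
    authoritative;
    default-lease-time 14400;
    max-lease-time 86400;

    log-facility local7;

    subnet6 2001:db8:bad:a55::/64 {
        option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;
        option dhcp6.domain-search "ios.mu";
        range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;
        range6 2001:db8:bad:a55::/64 temporary;
    }
    ```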

    Read the article

  • laptop crashed: why?

    - by sds
    my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why. # last sds pts/4 :0 Tue Sep 4 10:01 still logged in sds pts/3 :0 Tue Sep 4 10:00 still logged in reboot system boot 3.2.0-29-generic Tue Sep 4 09:43 - 11:23 (01:40) sds pts/8 :0 Mon Sep 3 14:23 - crash (19:19) this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log: auth.log: Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) no messages file syslog: Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. kern.log: Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu I had a computation running until 9:24, but the system crashed 18 minutes later! kern.log has many pages of these: Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G does this mean that my RAM is bad?! 
it also says Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400 Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984 ... Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343 Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Momentus 7200.4 Device Model: ST9500420AS Serial Number: 5VJE81YK LU WWN Device Id: 5 000c50 0440defe3 Firmware Version: 0003LVM1 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Sep 10 16:40:04 2012 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 0) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 109) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 117 099 034 Pre-fail Always - 162843537 3 Spin_Up_Time 0x0003 100 100 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 571 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 17210154023 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 174362787320258 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 571 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 1 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 061 043 045 Old_age Always In_the_past 39 (0 11 44 26) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 84 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 20 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2434 194 Temperature_Celsius 0x0022 039 057 000 Old_age Always - 39 (0 15 0 0) 195 Hardware_ECC_Recovered 0x001a 041 041 000 Old_age Always - 162843537 196 Reallocated_Event_Count 0x000f 095 095 030 Pre-fail Always - 4540 (61955, 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 4545 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Googling for the messages proved inconclusive, I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?
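    Two reassurances before any commands: the *BAD*gran_size lines are normal MTRR-cleanup probing output printed on every boot, not a memory error, and the ext4 orphan cleanup only means the filesystem wasn't unmounted cleanly (expected after a crash), not that the disk is bad. A short sketch of the usual follow-up checks; package names are Ubuntu's:

    ```sh
    sudo apt-get install memtest86+ smartmontools   # memtest86+ adds a GRUB entry for a RAM test on next boot
    sudo smartctl -t long /dev/sda                  # queue another extended self-test, check later with -l selftest
    sudo smartctl -A /dev/sda                       # attribute dump: watch Reallocated_Sector_Ct and Current_Pending_Sector
    sudo touch /forcefsck                           # force a full fsck of the root filesystem on the next reboot
    ```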

    Read the article

  • Unable to mount XP share using fs-cifs from Linux

    - by MetalSearGolid
    I have a head unit that runs Linux that is connected to my PC via an Ethernet cable. I have a Windows XP share on this PC that the head unit needs to be able to mount, however, when mounting using the following command, it fails. Here is the command that fails, along with the verbose output: # fs-cifs -vvvvvvvvv -l //CUMBRIA-XP:192.168.1.2:/hnet /mnt/net cifs[2158679-1]: starting... cifs[2158679-1]: user is to input both name & passwd. cifs[2158679-1]: server [192.168.1.2] share [hnet] prefix [/mnt/net] user [nu ll] passwd [null] Welcome: 192.168.1.2(:/hnet) -> /mnt/net Username:headunit cifs[2158679-1]: user name: headunit length 8 cifs[2158679-1]: new server Password: cifs[2158679-1]: establishing connection to (192.168.1.2)CUMBRIA-XP cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost cifs[2158679-1]: negotiating smb dialect cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 6137bfa2 f2d7803b cifs[2158679-1]: negotiation: success with dialect=2 cifs[2158679-1]: logging headunit on 192.168.1.2 cifs[2158679-1]: new packet cifs[2158679-1]: returning: mid 0 status= 0 cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1 cifs[2158679-1]: mounting 192.168.1.2:/hnet cifs[2158679-1]: returning: mid 1 status= 13 cifs[2158679-1]: smb_mount: Bad file descriptor cifs[2158679-1]: try upper case share. cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost cifs[2158679-1]: negotiating smb dialect cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 2d3e910f e3e148c4 cifs[2158679-1]: negotiation: success with dialect=2 cifs[2158679-1]: logging headunit on 192.168.1.2 cifs[2158679-1]: returning: mid 2 status= 0 cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1 cifs[2158679-1]: mounting 192.168.1.2:/HNET cifs[2158679-1]: returning: mid 3 status= 13 cifs[2158679-1]: smb_mount: Bad file descriptor cifs[2158679-1]: mount failed. cifs[2158679-1]: io_mount: smb_connection failed: Bad file descriptor io_mount: Bad file descriptor cifs[2158679-1]: user is to input both name & passwd. fs-cifs: missing arguments, or all mount attempts failed. run "use fs-cifs" or "fs-cifs -h" for help. Any ideas? It is worthy to note that /mnt does not exist on the filesystem, but I was told by the company who gave us these units that fs-cifs should automatically create the /mnt/net folders if they don't exist.

    Read the article

  • Including hostname in apache logwatch reports

    - by Robert Munteanu
    When hosting multiple domains with apache it's useful to see the logwatch apache output with the virtual host name included, but I only get: --------------------- httpd Begin ------------------------ Requests with error response codes 400 Bad Request /: 1 Time(s) /robots.txt: 1 Time(s) whereas I would like something like --------------------- httpd Begin ------------------------ Requests with error response codes 400 Bad Request example.com/: 1 Time(s) example.org/robots.txt: 1 Time(s) How can I achieve this with logwatch?
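    Logwatch can only report what is in the access log, so the virtual host name has to be logged first; the standard way is the %v "vhost_combined" log format. How logwatch then renders the extra field depends on its http service configuration and version, so treat this as a direction rather than a drop-in fix:

    ```apache
    # log the canonical server name (%v) in front of each request
    LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
    CustomLog /var/log/httpd/access_log vhost_combined
    ```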

    Read the article

  • Tunnelblick cannot load private key file

    - by Patrick
    I got a certificate from my network administrator, along with the passphrase for it. I put everything in the Tunnelblick configuration folder, but I always get an error: 2010-11-20 13:22:10 Cannot load private key file vpn-pass.key: error:06065064:digital envelope routines:EVP_DecryptFinal:bad decrypt: error:0906A065:PEM routines:PEM_do_header:bad decrypt: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib Everything was copied and pasted exactly, and it works on a Windows machine. How can I get this to work?
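    It can help to take Tunnelblick out of the equation and test the key and passphrase with OpenSSL directly; "bad decrypt" from EVP_DecryptFinal almost always means the passphrase does not match the key as it exists on this machine (a copy/paste that altered line endings can cause that). A small sketch:

    ```sh
    # prompts for the passphrase; failure here means the key/passphrase pair is the problem
    openssl rsa -in vpn-pass.key -check -noout

    # if it succeeds, a passphrase-free copy rules out prompt handling in the client
    openssl rsa -in vpn-pass.key -out vpn-nopass.key
    chmod 600 vpn-nopass.key
    ```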

    Read the article

  • Backup the Windows user folder in the cloud?

    - by Benjamin
    As I understand it, Google Drive and Dropbox, the two cloud storage providers I happen to know, can only sync a predefined folder that is created upon installation. I'd be happy to have an automated synchronisation of my folders in the cloud, but I'm not ready to change my habits, and start saving all my documents in the folder imposed by the provider. Is it possible with one of these, or any other you might know, to sync the full Windows user folder instead?
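    Neither client will sync an arbitrary folder out of the box, but on NTFS the usual workaround is to move the data into the provider's folder and leave a junction point at the old location, so your habits (and every program's default paths) stay the same. A sketch for one folder, run from an elevated command prompt; the paths and the "Dropbox" folder name are examples:

    ```bat
    :: move the real folder into the synced tree, then link the old path to it
    robocopy "C:\Users\Ben\Documents" "C:\Users\Ben\Dropbox\Documents" /E /MOVE
    mklink /J "C:\Users\Ben\Documents" "C:\Users\Ben\Dropbox\Documents"
    ```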

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share: [root@lnxutil1 ~]# parted -s /dev/sda unit s print Model: DELL PERC H700 (scsi) Disk /dev/sda: 134217599s Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 63s 465884s 465822s primary ext2 boot 2 465885s 134207009s 133741125s primary lvm [root@lnxutil1 ~]# parted -s /dev/sdb unit s print Model: DELL PERC H700 (scsi) Disk /dev/sdb: 5720768639s Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 5720768606s 5720768573s lvm Edit 1 Using the cfq IO scheduler (default for CentOS 5.6): # cat /sys/block/sd{a,b}/queue/scheduler noop anticipatory deadline [cfq] noop anticipatory deadline [cfq] Chunk size is the same as strip size, right? If so, then 64kB: # /opt/MegaCli -LDInfo -Lall -aALL -NoLog Adapter #0 Number of Virtual Disks: 2 Virtual Disk: 0 (target id: 0) Name:os RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:65535MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 ... physical disk info removed for brevity ... Virtual Disk: 1 (target id: 1) Name:share RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3 Size:2793344MB State: Optimal Stripe Size: 64kB Number Of Drives:7 Span Depth:1 Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Number of Spans: 1 Span: 0 - Number of PDs: 7 If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
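    On the numbers shown, the partitions are indeed misaligned: with a 64 kB stripe, an aligned partition start must be a multiple of 128 512-byte sectors, and 63, 465885 and 34 are not (the remainders are 63, 93 and 34 sectors). A small sketch that performs the same check for any disk:

    ```sh
    # a start sector is aligned when it divides evenly by the stripe size in
    # 512-byte sectors (64 kB stripe = 128 sectors)
    for dev in /dev/sda /dev/sdb; do
        parted -s "$dev" unit s print | awk -v dev="$dev" -v stripe=128 '
            $1 ~ /^[0-9]+$/ {
                start = $2; sub(/s$/, "", start)
                printf "%s partition %s starts at %s -> %s\n",
                       dev, $1, start, (start % stripe ? "misaligned" : "aligned")
            }'
    done
    ```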

    Read the article

  • NAS device claims drive in a RAID is degraded but S.M.A.R.T. says it is fine

    - by Nathan Villaescusa
    I have a Synology DS213 with two 600GB drives in RAID 1. Last night the device reported that my second drive had become degraded and that I should replace it. When I ran an extensive S.M.A.R.T. test, the results said that the drive is okay. How can I confirm that the drive is actually bad? Is there any chance that the degraded drive is the good one and that it is actually the other drive that is bad?
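    One thing to keep in mind: NAS firmware usually marks a member "degraded" after I/O errors or timeouts at the RAID layer, and those don't always show up as a failed overall SMART verdict. Comparing the raw error counters of both drives is often more telling than the pass/fail result; a sketch, run over SSH on the Synology or with the drives attached to a PC (device names will differ):

    ```sh
    for d in /dev/sda /dev/sdb; do
        echo "== $d =="
        smartctl -A "$d" | egrep -i "reallocated|pending|uncorrect|crc_error"
    done
    ```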

    Read the article

  • Mounting an attached EBS volume in EC2

    - by David
    I've created an EC2 instance, created an EBS volume, attached it to the running instance, and successfully SSHed into my instance. The drive is attached as /dev/sdf. Next, I tried mounting the drive by running: mkdir /testName mount -t ext3 /dev/sdf /testName But then I get the error message: mount: wrong fs type, bad option, bad superblock on /dev/sdf, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so What am I doing wrong? Thanks.
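    That error on a brand-new EBS volume usually just means there is no filesystem on it yet: a fresh volume is raw block storage. A sketch of the missing step (mkfs erases the volume, so only do this on an empty one; on newer kernels the device may also appear as /dev/xvdf rather than /dev/sdf):

    ```sh
    mkfs -t ext3 /dev/sdf      # create a filesystem on the empty volume first
    mkdir -p /testName
    mount -t ext3 /dev/sdf /testName
    df -h /testName            # confirm it is mounted
    ```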

    Read the article

  • How to serve a small image at a larger size

    - by DennyHalim.com
    We all know about hotlinking images and how to ban bad referrers, but I feel the need to take it further than that. I want to replace the hotlinked images with one huge image that's several megabytes in size. I have found a good image that's less than 100k and replaced all bad hotlinkers with it. How can I convert this image to become larger?
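    If ImageMagick is available on the server, blowing the decoy up and saving it with little or no compression is enough to push the file size into the megabytes; a sketch (file names are examples):

    ```sh
    convert decoy.jpg -resize 4000x4000 -quality 100 decoy-big.jpg   # upscale and keep maximum JPEG quality
    convert decoy.jpg -resize 4000x4000 decoy-big.png                # or use PNG, which compresses photos poorly
    ```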

    Read the article

  • Proper multimeter specs for fixing capacitors on your motherboard?

    - by Xeoncross
    When our motherboards or LCDs go out, it is possible that the capacitors are at fault and can easily be dealt with by removing and replacing the bad caps rather than buying new screens or computers. After all, replacing a cap is much faster than building a new PC. Along those lines, some of the capacitors I see on my boards are over 900µF. What is the bare minimum you need in a multimeter to test for bad capacitors (or good capacitance)?

    Read the article

  • Error mounting a partition on Ubuntu

    - by Master
    I have an ext3 partition and I get this error: mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so The line in fstab is /dev/sdb1 /media/Server ext3 defaults 0 0
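    That message is generic, so the first step is to let the kernel and the filesystem tools say which of the listed causes it actually is; a short sketch (the backup-superblock location assumes a default 4 kB-block ext3 filesystem):

    ```sh
    dmesg | tail                   # the kernel usually names the real problem right after the failed mount
    fsck.ext3 -n /dev/sdb1         # read-only check of the filesystem
    e2fsck -b 32768 /dev/sdb1      # if the primary superblock is damaged, try a backup copy
    ```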

    Read the article

  • MegaCli newly created disk doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After having successfully added 2 new disks in a new RAID virtual drive (background initialization done), I would have exepected it to appear under /dev/sdh but it's not there (so, unusable). The system is running a CentOS 5.2 64 bits, HAL and udev daemons are running, not records of any sdh apparition under the messsage log file or in dmesg, only MegaCli do see that virtual drive. Any idea ? Some data: [root@server ~]# ./MegaCli -LDInfo -LALL -a0 Adapter 0 -- Virtual Drive Information: Virtual Disk: 0 (target id: 0) Name: RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0 Size:139392MB State: Optimal Stripe Size: 64kB Number Of Drives:2 Span Depth:1 Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Virtual Disk: 1 (target id: 1) Name: RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0 Size:285568MB State: Optimal Stripe Size: 64kB Number Of Drives:2 Span Depth:1 Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default [root@server ~]# ls -l /dev/disk/by-id/scsi-360* lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1 lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1 lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
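    A newly created virtual drive is often simply not seen by the kernel until the controller's SCSI host is rescanned (or the box is rebooted). A sketch that triggers the rescan on all hosts; it is harmless for disks that are already known:

    ```sh
    for h in /sys/class/scsi_host/host*; do
        echo "- - -" > "$h/scan"       # "channel target lun" wildcards: scan everything
    done
    ls -l /dev/sd*                     # the new virtual disk should now have a device node
    ```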

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block
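    With the array itself reporting clean, what is left is damaged ext4 metadata, so the usual next step is to try a backup superblock, read-only first. A cautious sketch; mke2fs -n only prints where the superblock copies would live and writes nothing, and the reported locations are only right if the filesystem was created with default parameters:

    ```sh
    mke2fs -n /dev/md0                    # dry run: list the backup superblock locations
    e2fsck -n -B 4096 -b 32768 /dev/md0   # read-only check against the first backup (4 kB blocks)
    # only after the read-only pass looks sane, consider letting e2fsck repair:
    # e2fsck -B 4096 -b 32768 /dev/md0
    ```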

    Read the article

  • Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    - by John
    A PCI compliance scan on a CentOS LAMP server fails with this message. The server header and ServerSignature don't expose the Apache version. Apache httpOnly Cookie Information Disclosure CVE-2012-0053 Can this be resolved by simply specifying a custom ErrorDocument for the 400 Bad Request response? How is the scanner determining this vulnerability? Is it invoking a bad request and then looking to see if it's the default Apache 400 response?
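    Scanners usually test this either by banner version or by sending a malformed/oversized header and looking for the header contents echoed back in Apache's stock 400 page (that echo is the actual CVE-2012-0053 leak). Overriding the default 400 page is the common workaround when upgrading httpd isn't an option, though whether a particular scanner accepts it is worth re-testing. A sketch; the error page path is a placeholder:

    ```apache
    ErrorDocument 400 /errors/400.html    # serve a static page instead of the stock error output
    ServerTokens Prod
    ServerSignature Off
    ```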

    Read the article

  • Deleted Exchange account still being auto-suggested

    - by mike G
    I set up a new hire in our domain in Exchange. When he arrived yesterday I discovered his name had been misspelled. I deleted his account and created a new account with the proper spelling. The problem now is that his old email address is being suggested whenever anyone types in his first name. Users who email the bad address get a bounce and create more help desk tickets. Is there a way to update Exchange or purge the bad account?

    Read the article

  • Invalid UTF-8 for Postgres, Perl thinks they're ok

    - by gorilla
    I'm running perl 5.10.0 and Postgres 8.4.3, and I'm inserting strings into a database, which sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I'm getting an exception: DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5 I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so using this code it should flag and ignore the bad titles: if(not utf8::valid($title)){ $title="Invalid UTF-8"; } $data->title($title); $data->update(); However Perl seems to think that the strings are valid, but it still throws the exceptions. How can I get Perl to detect the bad UTF-8?
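    A sketch of the usual fix: utf8::valid() only checks Perl's internal representation, not whether the bytes form legal UTF-8, which is why the bad titles slip through. Decoding the bytes explicitly lets you substitute or reject malformed sequences before they reach the database:

    ```perl
    use Encode qw(decode);

    my $clean = decode('UTF-8', $title, Encode::FB_DEFAULT);  # malformed bytes become U+FFFD
    # or, to detect instead of repair:
    # my $ok = eval { decode('UTF-8', $title, Encode::FB_CROAK); 1 };

    $data->title($clean);
    $data->update();
    ```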

    Read the article

  • How to upload a video to YouTube with Ruby

    - by viatropos
    I am trying to upload a youtube video using the GData gem (I have seen the youtube_g gem but would like to make it work with pure GData if possible), but I keep getting this error: GData::Client::BadRequestError in 'MyProject::Google::YouTube should upload the actual video to youtube (once it does, mock this test out)' request error 400: No file found in upload request. I am using this code: def metadata data = <<-EOF <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> EOF end @yt = GData::Client::YouTube.new @yt.clientlogin("name", "pass") @yt.developer_key = "myKey" url = "http://uploads.gdata.youtube.com/feeds/api/users/name/uploads" mime_type = "multipart/related" file_path = "sample_upload.mp4" @yt.post_file(url, file_path, mime_type, metadata) What is the recommended/standard way for uploading videos to youtube with ruby, what is your method? Update After applying the changes to wrapped_entry, the string it produces looks like this: --END_OF_PART_59003 Content-Type: application/atom+xml; charset=UTF-8 <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> --END_OF_PART_59003 Content-Type: multipart/related Content-Transfer-Encoding: binary ... 
and inspecting the request and response looks like this: Request: <GData::HTTP::Request:0x1b8bb44 @method=:post @url="http://uploads.gdata.youtube.com/feeds/api/users/lancejpollard/uploads" @body=#<GData::HTTP::MimeBody:0x1b8c738 @parts=[#<GData::HTTP::MimeBodyString:0x1b8c058 @bytes_read=0 @string="--END_OF_PART_30909\r\nContent-Type: application/atom+xml; charset=UTF-8\r\n\r\n <?xml version=\"1.0\"?>\n<entry xmlns=\"http://www.w3.org/2005/Atom\"\n xmlns:media=\"http://search.yahoo.com/mrss/\"\n xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">\n <media:group>\n <media:title type=\"plain\">Bad Wedding Toast</media:title>\n <media:description type=\"plain\">\n I gave a bad toast at my friend's wedding.\n </media:description>\n <media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">People</media:category>\n <media:keywords>toast wedding</media:keywords>\n </media:group>\n</entry> \n\r\n--END_OF_PART_30909\r\nContent-Type: multipart/related\r\nContent-Transfer-Encoding: binary\r\n\r\n"> #<File:/Users/Lance/Documents/Development/git/thing/spec/fixtures/sample_upload.mp4> #<GData::HTTP::MimeBodyString:0x1b8c044 @bytes_read=0 @string="\r\n--END_OF_PART_30909--"] @current_part=0 @boundary="END_OF_PART_30909" @headers={"Slug"="sample_upload.mp4" "User-Agent"="GoogleDataRubyUtil-AnonymousApp" "GData-Version"="2" "X-GData-Key"="key=AI39si7jkhs_ECjF4unOQz8gpWGSKXgq0KJpm8wywkvBSw4s8oJd5p5vkpvURHBNh-hiYJtoKwQqSfot7KoCkeCE32rNcZqMxA" "Content-Type"="multipart/related; boundary=\"END_OF_PART_30909\"" "MIME-Version"="1.0"} Response: #<GData::HTTP::Response:0x1b897e0 @body="No file found in upload request." @headers={"cache-control"=>"no-cache no-store must-revalidate" "connection"=>"close" "expires"=>"Fri 01 Jan 1990 00:00:00 GMT" "content-type"=>"text/plain; charset=utf-8" "date"=>"Fri 11 Dec 2009 02:10:25 GMT" "server"=>"Upload Server Built on Nov 30 2009 13:21:18 (1259616078)" "x-xss-protection"=>"0" "content-length"=>"32" "pragma"=>"no-cache"} @status_code=400> Still not working, I'll have to check it out more with those changes.
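    If staying on raw GData isn't a hard requirement, the youtube_it gem (the maintained successor of youtube_g) wraps exactly this multipart upload and tends to sidestep the "No file found in upload request" problem. A sketch; the credentials and developer key are placeholders:

    ```ruby
    require 'youtube_it'

    client = YouTubeIt::Client.new(:username => "name",
                                   :password => "pass",
                                   :dev_key  => "myKey")

    client.video_upload(File.open("sample_upload.mp4"),
                        :title       => "Bad Wedding Toast",
                        :description => "I gave a bad toast at my friend's wedding.",
                        :category    => "People",
                        :keywords    => %w(toast wedding))
    ```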

    Read the article

  • Extending Throwable in Java

    - by polygenelubricants
    Java lets you create an entirely new subtype of Throwable, e.g.: public class FlyingPig extends Throwable { ... } Now, very rarely, I may do something like this: throw new FlyingPig("Oink!"); and of course elsewhere: try { ... } catch (FlyingPig porky) { ... } My questions are: Is this a bad idea? And if so, why? What could've been done to prevent this subtyping if it is a bad idea? Since it's not preventable (as far as I know), what catastrophes could result? If this isn't such a bad idea, why not? How can you make something useful out of the fact that you can extend Throwable?
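    One concrete reason it tends to be a bad idea: a direct Throwable subtype slips past every generic catch (Exception e) safety net in the code base, so only handlers written against Throwable (or against FlyingPig itself) will ever see it. A small self-contained sketch of that behaviour:

    ```java
    public class FlyingPigDemo {

        static class FlyingPig extends Throwable {
            FlyingPig(String message) { super(message); }
        }

        public static void main(String[] args) {
            try {
                try {
                    throw new FlyingPig("Oink!");
                } catch (Exception e) {
                    System.out.println("never reached: FlyingPig is not an Exception");
                }
            } catch (Throwable t) {
                // only a handler for Throwable (or FlyingPig) catches it
                System.out.println("caught by the outermost handler only: " + t.getMessage());
            }
        }
    }
    ```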

    Read the article
