Search Results

Search found 13969 results on 559 pages for 'word count'.

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:

      ns1:~# hdparm -tT /dev/sda
      /dev/sda:
      Timing cached reads: 3364 MB in 2.00 seconds = 1685.69 MB/sec
      Timing buffered disk reads: 18 MB in 3.79 seconds = 4.75 MB/sec
      ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
      125000+0 records in
      125000+0 records out
      1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
      real 4m52.003s
      user 0m2.160s
      sys 3m10.796s

    Compared to another server, those numbers are terrible. For reference, a Dell R200 with 2x 500GB SATA disks and a PERC RAID controller (disks mirrored):

      web4:~# hdparm -tT /dev/sda
      /dev/sda:
      Timing cached reads: 6584 MB in 2.00 seconds = 3297.79 MB/sec
      Timing buffered disk reads: 316 MB in 3.02 seconds = 104.79 MB/sec
      web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
      125000+0 records in
      125000+0 records out
      1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s
      real 0m36.570s
      user 0m0.476s
      sys 0m32.298s

    The server isn't very loaded: the VMware Infrastructure Client performance monitor shows 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?
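
    One thing worth ruling out first on a P400i: without a charged battery-backed cache the controller silently falls back to write-through, and RAID 5 write performance collapses in exactly this way. A minimal check, sketched on the assumption that HP's hpacucli utility is available on a host that can see the controller (the slot number below is a placeholder):

      hpacucli ctrl all show status
      hpacucli ctrl slot=0 show config detail
      hpacucli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=enable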

  • IPMI not functioning with network bonding

    - by muhammed sameer
    Hey, I am having problems running IPMI on my servers that have network bonding enabled.

      Platform: CentOS release 5.3 (Final)
      Kernel: 2.6.18-92.el5 64bit
      Dell PowerEdge 1950
      Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface. Below is the configuration from /proc:

      Bonding Mode: fault-tolerance (active-backup)
      Primary Slave: eth0
      Currently Active Slave: eth0
      MII Status: up
      MII Polling Interval (ms): 30
      Up Delay (ms): 0
      Down Delay (ms): 0

      Slave Interface: eth0
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cd

      Slave Interface: eth1
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

      IPMI Device Information
      Interface Type: KCS (Keyboard Control Style)
      Specification Version: 2.0
      I2C Slave Address: 0x10
      NV Storage Device: Not Present
      Base Address: 0x0000000000000CA8 (I/O)
      Register Spacing: 32-bit Boundaries

    I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out. Below is the full run of the command with debug info:

      ipmi_lan_send_cmd:opened=[0], open=[4482848]
      IPMI LAN host 70.87.28.115 port 623
      Sending IPMI/RMCP presence ping packet
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      ipmi_lan_send_cmd:opened=[1], open=[4482848]
      No response from remote controller
      Get Auth Capabilities command failed
      Error: Unable to establish LAN session
      Failed to open LAN interface
      Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone could help circumvent this issue. Muhammed Sameer
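
    On the PowerEdge 1950 the BMC shares a physical port with the host NICs, and a bond answering ARP on that shared port can interfere with the BMC's traffic. A hedged local check with ipmitool, assuming it is installed on the affected server (channel 1 is a guess and may differ):

      ipmitool lan print 1
      ipmitool lan set 1 arp respond on
      ipmitool lan set 1 arp generate on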

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain access to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am seeing the following:

      Event Type: Failure Audit
      Event Source: Security
      Event Category: Object Access
      Event ID: 560
      Date: 3/18/2010
      Time: 11:11:49 AM
      User: COMPANY\ThisUser
      Computer: SHAREPOINT
      Description: Object Open:
        Object Server: Security Account Manager
        Object Type: SAM_ALIAS
        Object Name: DOMAINS\Account\Aliases\00000404
        Handle ID: -
        Operation ID: {0,1719489}
        Process ID: 416
        Image File Name: C:\WINDOWS\system32\lsass.exe
        Primary User Name: SHAREPOINT$
        Primary Domain: COMPANY
        Primary Logon ID: (0x0,0x3E7)
        Client User Name: ThisUser
        Client Domain: PRINTRON
        Client Logon ID: (0x0,0x1A3BC2)
        Accesses: AddMember RemoveMember ListMembers ReadInformation
        Privileges: -
        Restricted Sid Count: 0
        Access Mask: 0xF

    Then, four of these in a row:

      Event Type: Failure Audit
      Event Source: Security
      Event Category: Object Access
      Event ID: 560
      Date: 3/18/2010
      Time: 11:12:08 AM
      User: NT AUTHORITY\NETWORK SERVICE
      Computer: SHAREPOINT
      Description: Object Open:
        Object Server: SC Manager
        Object Type: SERVICE OBJECT
        Object Name: WinHttpAutoProxySvc
        Handle ID: -
        Operation ID: {0,1727132}
        Process ID: 404
        Image File Name: C:\WINDOWS\system32\services.exe
        Primary User Name: SHAREPOINT$
        Primary Domain: COMPANY
        Primary Logon ID: (0x0,0x3E7)
        Client User Name: NETWORK SERVICE
        Client Domain: NT AUTHORITY
        Client Logon ID: (0x0,0x3E4)
        Accesses: Query status of service, Start the service, Query information from service
        Privileges: -
        Restricted Sid Count: 0
        Access Mask: 0x94

    Any ideas what permissions I need to grant to the user to get them access to SharePoint?

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, the performance is very good:

      [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
      1000+0 records in
      1000+0 records out
      1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

    So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600k, 16GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

      [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
      1000+0 records in
      1000+0 records out
      1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s

    Both servers have brand new Intel Server NICs and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:

      ~ # vmkping 10.0.0.1 -s 9000
      PING 10.0.0.1 (10.0.0.1): 9000 data bytes
      9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
      9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
      9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms

    The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100MB/s. Any ideas?
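
    Worth noting: the 226 MB/s local figure is mostly the Linux page cache, since that dd neither syncs nor bypasses the cache, while the in-VM run crosses iSCSI and lands nearer true disk throughput. A fairer local baseline (a sketch using the same file name):

      dd if=/dev/zero of=tmpfile bs=1M count=1000 conv=fdatasync   # flush to disk before timing ends
      dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct     # bypass the page cache entirely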

  • HP StorageWorks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error". The machine seems to recognise the drive (HP StorageWorks Ultrium 448) and that there's a tape in it, and "mt status" seems to work. "mt -f /dev/st0 rewind" and "erase" throw no errors.

      root@stor001:/# mt status
      SCSI 2 tape drive:
      File number=0, block number=0, partition=0.
      Tape block size 0 bytes. Density code 0x42 (LTO-2).
      Soft error count since last status=0
      General status bits on (41010000):
       BOT ONLINE IM_REP_EN

      root@stor001:/# cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
        Type: CD-ROM ANSI SCSI revision: 05
      Host: scsi2 Channel: 00 Id: 03 Lun: 00
        Vendor: HP Model: Ultrium 2-SCSI Rev: S65D
        Type: Sequential-Access ANSI SCSI revision: 03

    "tell" does, however:

      root@stor001:/# mt -f /dev/st0 tell
      /dev/st0: Input/output error

    Based on a forum post I found, I tried:

      root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
      10+0 records in
      10+0 records out
      10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
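
    Since the status output shows "Tape block size 0 bytes" (variable-block mode) and raw dd to /dev/nst0 works, one hedged next step is to pin a fixed block size and match it on the tar side; the numbers here are values to experiment with, not known-good settings:

      mt -f /dev/st0 setblk 32768          # fixed 32k blocks
      tar -b 64 -cvf /dev/st0 /some/path   # 64 x 512 = 32768-byte records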

  • Apache 2.2, worker mpm, mod_fcgid and PHP: Can't apply process slot

    - by mopoke
    We're having an issue on an Apache server where every 15 to 20 minutes it stops serving PHP requests entirely. On occasion it will return a 503 error; other times it will recover enough to serve the page, but only after a delay of a minute or more. Static content is still served during that time. In the log file, there are errors reported along the lines of:

      [Wed Sep 28 10:45:39 2011] [warn] mod_fcgid: can't apply process slot for /xxx/ajaxfolder/ajax_features.php
      [Wed Sep 28 10:45:41 2011] [warn] mod_fcgid: can't apply process slot for /xxx/statics/poll/index.php
      [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
      [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php

    There is RAM free and, indeed, it seems that more PHP processes get spawned. /server-status shows lots of threads in the "W" state as well as some FastCGI processes in "Exiting(communication error)" state. I rebuilt mod_fcgid from source, as the packaged version was quite old; it's now running the current stable version (2.3.6).

    FCGI config:

      FcgidBusyScanInterval 30
      FcgidBusyTimeout 60
      FcgidIdleScanInterval 30
      FcgidIdleTimeout 45
      FcgidIOTimeout 60
      FcgidConnectTimeout 20
      FcgidMaxProcesses 100
      FcgidMaxRequestsPerProcess 500
      FcgidOutputBufferSize 1048576

    System info:

      Linux xxx.com 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=9.04
      DISTRIB_CODENAME=jaunty
      DISTRIB_DESCRIPTION="Ubuntu 9.04"

    Apache info:

      Server version: Apache/2.2.11 (Ubuntu)
      Server built: Aug 16 2010 17:45:55
      Server's Module Magic Number: 20051115:21
      Server loaded: APR 1.2.12, APR-Util 1.2.12
      Compiled using: APR 1.2.12, APR-Util 1.2.12
      Architecture: 64-bit
      Server MPM: Worker
        threaded: yes (fixed thread count)
        forked: yes (variable process count)
      Server compiled with....
        -D APACHE_MPM_DIR="server/mpm/worker"
        -D APR_HAS_SENDFILE
        -D APR_HAS_MMAP
        -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
        -D APR_USE_SYSVSEM_SERIALIZE
        -D APR_USE_PTHREAD_SERIALIZE
        -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
        -D APR_HAS_OTHER_CHILD
        -D AP_HAVE_RELIABLE_PIPED_LOGS
        -D DYNAMIC_MODULE_LIMIT=128
        -D HTTPD_ROOT=""
        -D SUEXEC_BIN="/usr/lib/apache2/suexec"
        -D DEFAULT_PIDLOG="/var/run/apache2.pid"
        -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
        -D DEFAULT_ERRORLOG="logs/error_log"
        -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
        -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    Apache modules loaded: alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.load cgi.load deflate.load dir.load env.load expires.load fcgid.load headers.load include.load mime.load negotiation.load rewrite.load setenvif.load ssl.load status.load suexec.load

    PHP info:

      PHP 5.2.6-3ubuntu4.6 with Suhosin-Patch 0.9.6.2 (cli) (built: Sep 16 2010 19:51:25)
      Copyright (c) 1997-2008 The PHP Group
      Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
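
    The "Exiting(communication error)" states are consistent with php-cgi recycling itself after PHP's own PHP_FCGI_MAX_REQUESTS while mod_fcgid still expects the process, and "can't apply process slot" means mod_fcgid hit its spawn ceiling while existing children hung. A hedged tuning sketch (these values are starting points to test under load, not known-good numbers):

      # keep mod_fcgid's cap at or below PHP's PHP_FCGI_MAX_REQUESTS (default 500)
      FcgidMaxRequestsPerProcess 500
      # allow more concurrent PHP workers per class before slots run out
      FcgidMaxProcessesPerClass 50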

  • Need help tuning MySQL and Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one big database (currently 25G). Since we released our product we have accumulated five years of data history:

    - users/customers (200+)
    - contacts (40 million records)
    - campaigns
    - campaign_deliveries (73,843,764 records)
    - campaign_queue (8 million currently)

    As we get more users and the tables grow, our system/web app is getting slower and slower, and some queries take too long to execute.

    Schema, table contacts:

      +---------------------+------------------+------+-----+---------+----------------+
      | Field               | Type             | Null | Key | Default | Extra          |
      +---------------------+------------------+------+-----+---------+----------------+
      | contact_id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
      | client_id           | int(10) unsigned | YES  |     | NULL    |                |
      | name                | varchar(60)      | YES  |     | NULL    |                |
      | mail                | varchar(60)      | YES  | MUL | NULL    |                |
      | verified            | int(1)           | YES  |     | 0       |                |
      | owner               | int(10) unsigned | NO   | MUL | 0       |                |
      | date_created        | date             | YES  | MUL | NULL    |                |
      | geolocation         | varchar(100)     | YES  |     | NULL    |                |
      | ip                  | varchar(20)      | YES  | MUL | NULL    |                |
      +---------------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries:

      +---------------+------------------+------+-----+---------+----------------+
      | Field         | Type             | Null | Key | Default | Extra          |
      +---------------+------------------+------+-----+---------+----------------+
      | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
      | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
      | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
      | sent_date     | date             | YES  | MUL | NULL    |                |
      | sent_time     | time             | YES  | MUL | NULL    |                |
      | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
      | owner         | int(5)           | YES  | MUL | NULL    |                |
      | ip            | varchar(20)      | YES  | MUL | NULL    |                |
      +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue:

      +---------------+------------------+------+-----+---------+----------------+
      | Field         | Type             | Null | Key | Default | Extra          |
      +---------------+------------------+------+-----+---------+----------------+
      | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
      | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
      | owner         | int(10) unsigned | NO   | MUL | 0       |                |
      | date_to_send  | date             | YES  |     | NULL    |                |
      | contact_id    | int(11)          | NO   | MUL | NULL    |                |
      | date_created  | date             | YES  |     | NULL    |                |
      +---------------+------------------+------+-----+---------+----------------+

    Slow query log:

      Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
      SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

      Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
      SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize this? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all data in one big database, or should we change the app's structure and give each user their own database? Thanks
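
    Rows_examined of 971,004 for a one-row COUNT suggests the first query walks every row for that owner rather than using a covering index. A hedged first step, assuming an ALTER on the 40-million-row table is acceptable during a maintenance window (the database name is a placeholder):

      mysql dbname -e "ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);"
      mysql dbname -e "EXPLAIN SELECT COUNT(*) AS total FROM contacts WHERE owner = 70 AND verified = 1\G"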

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950. The server has really slow IO rates, so slow that I'm not able to use it anymore. I have tried a few things to solve this:

    - changed the disks to new disks (configured as RAID 1)
    - changed the PERC card and the PERC cables
    - reinstalled the OS (necessary anyway after changing the disks), CentOS 5.5 x64
    - firmware updates to everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled

    OpenManage doesn't alert about anything. I also ran Dell's diagnostic tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers, and I have never seen this on any of the others. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

      [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s
      real 0m33.968s
      user 0m0.531s
      sys 0m26.000s

    A good PE2950 server (with the exact same hardware):

      [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s
      real 0m7.694s
      user 0m0.053s
      sys 0m4.057s

    Hopefully you will have an idea what can cause the problem.
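
    One classic PERC failure mode matching these symptoms: write-back is configured, but the controller silently drops to write-through because the cache battery is dead or mid learn cycle, which OpenManage does not always raise as an alert. A hedged check via the OpenManage CLI (controller and vdisk IDs are placeholders):

      omreport storage battery controller=0
      omreport storage vdisk controller=0 | grep -i "write policy"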

  • PostgreSQL: No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

      Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
        at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
        ... 8 more

    However, the disk has quite a lot of space:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda1             386G  123G  243G  34% /
      udev                  5.9G  172K  5.9G   1% /dev
      none                  5.9G     0  5.9G   0% /dev/shm
      none                  5.9G  628K  5.9G   1% /var/run
      none                  5.9G     0  5.9G   0% /var/lock
      none                  5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

      INSERT INTO summary_table
      SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
             DATE_TRUNC('month', t.start) AS month, tt.type AS type, FALSE, tt.duration
      FROM detail_table_1 t, detail_table_2 tt
      WHERE t.trid=tt.id
        AND tt.type='a'
        AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
        OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
      GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    Any tips?
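
    One observation, offered as a possibility rather than a confirmed diagnosis: AND binds tighter than OR in SQL, so as written the final "< 13" condition is not constrained by t.trid=tt.id or tt.type='a' at all, making that branch an unjoined cross product of the two detail tables. That alone could explain a temporary sort file outgrowing a disk with 243G free. If the intent was "type 'a' rows whose hour is 23 or 0-12", the hour test needs parentheses:

      WHERE t.trid = tt.id
        AND tt.type = 'a'
        AND (DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') >= 23
          OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') < 13)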

  • Linux NIC Bonding Issue (CentOS 4 / RHEL 3)

    - by jinanwow
    I am having an issue with bonding NICs on CentOS 4. The bonding driver does work, but it is stuck in round-robin mode and I am trying to get it into active-backup. The current config is:

      ifcfg-bond0:
      DEVICE=bond0
      IPADDR=192.168.204.18
      NETMASK=255.255.255.0
      ONBOOT=yes
      BOOTPROTO=none
      USERCTL=no
      TYPE=Bonding
      BONDING_OPTS="mode=1 miimon=100"

      ifcfg-eth1:
      DEVICE=eth1
      BOOTPROTO=none
      ONBOOT=yes
      TYPE=Ethernet
      MASTER=bond0
      SLAVE=yes

      ifcfg-eth3:
      DEVICE=eth3
      ONBOOT=yes
      BOOTPROTO=none
      TYPE=Ethernet
      MASTER=bond0
      SLAVE=yes

      cat /proc/net/bonding/bond0:
      Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)
      Bonding Mode: load balancing (round-robin)
      MII Status: up
      MII Polling Interval (ms): 0
      Up Delay (ms): 0
      Down Delay (ms): 0

      Slave Interface: eth1
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:17:a4:8f:94:b1

      Slave Interface: eth3
      MII Status: up
      Link Failure Count: 0
      Permanent HW addr: 00:1b:21:56:b8:69

      cat /etc/modprobe.conf:
      alias eth0 tg3
      alias eth1 tg3
      alias eth3 e1000
      alias eth2 e1000
      alias bond0 bonding
      options bond0 mode=1 miimon=100

    I have tried moving the bonding options out of ifcfg-bond0 and into the modprobe configuration file. It seems to be stuck in round-robin, and I am trying to get it into the active-backup (mode 1) state. Any ideas what would be causing this issue?
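
    The /proc output (round-robin, "MII Polling Interval (ms): 0") shows the module running with no options applied at all, which typically happens when the bonding module was already loaded before the options were read; the CentOS 4 initscripts also predate BONDING_OPTS support. A hedged way to confirm this is the cause (it takes the bond down, so console access is assumed):

      ifdown bond0
      rmmod bonding
      modprobe bonding mode=1 miimon=100
      ifup bond0
      grep "Bonding Mode" /proc/net/bonding/bond0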

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveller software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330; when I killed it and restarted Domino, the process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts? Cheers, Kevin

  • Neophyte question about using Subtotal and CountIf in Excel

    - by Andrew
    Hi, I'm using Excel and having some problems with COUNTIF, and I don't understand how it works differently from SUBTOTAL. I used the GUI to subtotal stuff and all the subtotals are right. Then I attempted to use COUNTIF to see how many requirements passed. That worked for the first subtotal only. It's easy to see why. When I look at the box for the subtotal, it says:

      =SUBTOTAL(3,C286:C292)

    When I look at my formula for passed requirements, I have:

      =IF(ISTEXT(A285),COUNTIF(C286:C338,"=Passed"),"")

    Notice that the end of the range is wrong (C338 instead of C292). How did the subtotal manage to keep this correct? I typed in the formula for passed requirements and dragged it down the page. Everything behaved as expected (even the bit about ISTEXT dutifully figured out which row was which), but it got the end of the range wrong. Any ideas?

      SRS Maintenance    Count 7     44
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Maintenance    Passed
      SRS Reports        Count 12    43
      SRS Reports        Passed
      SRS Reports        Passed
      SRS Reports        Passed
      SRS Reports        Passed
      SRS Reports        Failed
      SRS Reports        Passed
      SRS Reports        Passed
      SRS Reports        Failed
      SRS Reports        Passed
      SRS Reports        Passed
      SRS Reports        Failed

  • Very slow write performance on Debian 6.0 (AMD64) with DMCRYPT/LVM/RAID1

    - by jdelic
    I'm seeing very strange performance characteristics on one of my servers. This server is running a simple two-disk software RAID 1 setup with LVM spanning /dev/md0. One of the logical volumes, /dev/vg0/secure, is encrypted using dm-crypt with LUKS and mounted with the sync and noatime flags. Writing to that volume is incredibly slow at 1.8 MB/s, and the CPU usage stays near 0%. There are eight crypto/[1-8] kernel threads running (it's an Intel quad-core CPU). I hope that someone on Server Fault has seen this before :-(.

      uname -a
      2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux

    Interestingly, when I read from the device I get good performance numbers. Reading without encryption:

      $ dd if=/dev/vg0/secure of=/dev/null bs=64k count=100000
      100000+0 records in
      100000+0 records out
      6553600000 bytes (6.6 GB) copied, 68.8951 s, 95.1 MB/s

    Reading with encryption:

      $ dd if=/dev/mapper/secure of=/dev/null bs=64k count=100000
      100000+0 records in
      100000+0 records out
      6553600000 bytes (6.6 GB) copied, 69.7116 s, 94.0 MB/s

    However, when I try to write to the device:

      $ dd if=/dev/zero of=./test bs=64k
      8809+0 records in
      8809+0 records out
      577306624 bytes (577 MB) copied, 321.861 s, 1.8 MB/s

    Also, when I read I see CPU usage; when I write, the CPU stays at almost 0% usage. Here is the output of cryptsetup luksDump:

      LUKS header information for /dev/vg0/secure
      Version:        1
      Cipher name:    aes
      Cipher mode:    cbc-essiv:sha256
      Hash spec:      sha1
      Payload offset: 2056
      MK bits:        256
      MK digest:      dd 62 b9 a5 bf 6c ec 23 36 22 92 4c 39 f8 d6 5d c1 3a b7 37
      MK salt:        cc 2e b3 d9 fb e3 86 a1 bb ab eb 9d 65 df b3 dd d9 6b f4 49 de 8f 85 7d 3b 1c 90 83 5d b2 87 e2
      MK iterations:  44500
      UUID:           a7c9af61-d9f0-4d3f-b422-dddf16250c33
      Key Slot 0: ENABLED
        Iterations:          178282
        Salt:                60 24 cb be 5c 51 9f b4 85 64 3d f8 07 22 54 d4 1a 5f 4c bc 4b 82 76 48 d8 a2 d2 6a ee 13 d7 5d
        Key material offset: 8
        AF stripes:          4000
      Key Slot 1: DISABLED
      Key Slot 2: DISABLED
      Key Slot 3: DISABLED
      Key Slot 4: DISABLED
      Key Slot 5: DISABLED
      Key Slot 6: DISABLED
      Key Slot 7: DISABLED
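
    Note that the write test goes through the filesystem (of=./test) on a volume mounted with sync, while both read tests hit the block device directly; mount -o sync by itself can hold writes to single-digit MB/s regardless of dm-crypt. A hedged A/B test (the mount point is a placeholder):

      umount /secure
      mount -o noatime /dev/mapper/secure /secure    # same filesystem, without sync
      dd if=/dev/zero of=/secure/test bs=64k count=10000 conv=fdatasync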

  • APC not caching many files

    - by tetranz
    Hello. I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, even though Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks.

      APC Version: 3.1.6
      PHP Version: 5.2.10-2ubuntu6.5
      APC Host: xxx.example.com
      Server Software: Apache/2.2.12 (Ubuntu)
      Shared Memory: 1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
      Start Time: 2010/12/02 11:32:17
      Uptime: 3 minutes
      File Upload Support: 1

      File Cache Information
      Cached Files: 21 (1.4 MBytes)
      Hits: 169
      Misses: 21
      Request Rate (hits, misses): 1.00 cache requests/second
      Hit Rate: 0.89 cache requests/second
      Miss Rate: 0.11 cache requests/second
      Insert Rate: 0.17 cache requests/second
      Cache full count: 0

      User Cache Information
      Cached Variables: 0 (0.0 Bytes)
      Hits: 0
      Misses: 0
      Request Rate (hits, misses): 0.00 cache requests/second
      Hit Rate: 0.00 cache requests/second
      Miss Rate: 0.00 cache requests/second
      Insert Rate: 0.00 cache requests/second
      Cache full count: 0

      Runtime Settings
      apc.cache_by_default        1
      apc.canonicalize            1
      apc.coredump_unmap          0
      apc.enable_cli              0
      apc.enabled                 1
      apc.file_md5                0
      apc.file_update_protection  2
      apc.filters
      apc.gc_ttl                  3600
      apc.include_once_override   0
      apc.lazy_classes            0
      apc.lazy_functions          0
      apc.max_file_size           1M
      apc.mmap_file_mask
      apc.num_files_hint          1000
      apc.preload_path
      apc.report_autofilter       0
      apc.rfc1867                 0
      apc.rfc1867_freq            0
      apc.rfc1867_name            APC_UPLOAD_PROGRESS
      apc.rfc1867_prefix          upload_
      apc.rfc1867_ttl             3600
      apc.shm_segments            1
      apc.shm_size                32M
      apc.slam_defense            1
      apc.stat                    1
      apc.stat_ctime              0
      apc.ttl                     0
      apc.use_request_time        1
      apc.user_entries_hint       4096
      apc.user_ttl                0
      apc.write_lock              1
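
    A cache that stays this small is often a sign that each PHP process carries its own private APC segment (as with plain CGI, or many separate FastCGI processes), so apc.php only ever sees one small slice. A hedged way to check which SAPI the slow box is really using:

      php -v                                    # reports the CLI SAPI only
      apache2ctl -M | grep -Ei 'php|cgi|fcgi'   # which handler module Apache loads
      ps aux | grep -c '[p]hp'                  # how many PHP processes are running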

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino server (8.5.3 FP2) as follows, in the server document:

      Store file attachments in DAOS: Enabled
      Minimum size of object before Domino will store in DAOS: 64000 bytes
      DAOS base path: E:\DAOS
      Defer object deletion for: 30 days

    Transaction logging is running, and the specific test database has the following advanced properties set:

      Domino Attachment and Object Service (ticked)
      Use LZ1 compression for attachments
      Compress Database Design
      Compress Data

    I have restarted the server. When I run a compact -c, it compacts the database but does not reduce the size. I have checked the DB in Windows Explorer (60GB) and the size is the same before and after. I have checked the directory (E:\DAOS) and it is 35GB in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking this up on the net, I believe ticket count = 0 means that the DB is not really DAOS'ed?

      Admin Process: Searching Administration Requests database
      DAOSMGR: Status tmp\test.nsf started
      DAOS database status:
      Database: E:\Lotus\Domino\Data\tmp\test.nsf
      Database state = Synchronized
      Last resynchronized: 03/09/2012 02:49:13 PM
      Ticket count: 0
      DAOSMGR: Status tmp\test.nsf completed

    I have run fixup on the database. When I have tried to run the DAOS estimator, it has always crashed; this was a problem with larger databases on earlier versions of Domino, but supposedly not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?
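
    A ticket count of 0 does suggest the attachments were never externalized: enabling the database property is not enough on its own, because existing attachments are only pushed into DAOS by a copy-style compact run with DAOS enabled. A hedged sequence for the Domino console (syntax as I recall it for 8.5.x, worth verifying against the documentation for your release):

      load compact tmp\test.nsf -c -daos on
      tell daosmgr status tmp\test.nsf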

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive I was saving my backups to had suddenly logged 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive, but toward the end of the copy something peculiar happened: CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0.

    I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If a subsequent read or write of that block succeeds, it is removed from the list and assumed to be fine, but if a subsequent write fails, the block is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped.

    I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, is the drive indeed okay, or should I be concerned about an imminent failure?
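
    For a second opinion independent of CrystalDiskInfo, the raw SMART attributes and self-test logs can be pulled directly, for example with smartmontools (the device name is a placeholder):

      smartctl -A /dev/sdX | grep -Ei 'realloc|pending|uncorrect'
      smartctl -l selftest /dev/sdX
      smartctl -l error /dev/sdX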

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications, ranging from Joomla and Drupal to some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important.

    Now, my question: are there any concerns I should know about when using Apache with MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and use it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern?

    I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

      Server version: Apache/2.2.14 (Ubuntu)
      Server built: Apr 13 2010 20:22:19
      Server's Module Magic Number: 20051115:23
      Server loaded: APR 1.3.8, APR-Util 1.3.9
      Compiled using: APR 1.3.8, APR-Util 1.3.9
      Architecture: 64-bit
      Server MPM: Worker
        threaded: yes (fixed thread count)
        forked: yes (variable process count)

    Worker:

      <IfModule mpm_worker_module>
          StartServers          2
          MinSpareThreads      25
          MaxSpareThreads      75
          ThreadLimit          64
          ThreadsPerChild      25
          MaxClients          400
          MaxRequestsPerChild 2000
      </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

      --enable-bcmath \
      --enable-calendar \
      --enable-exif \
      --enable-ftp \
      --enable-mbstring \
      --enable-pcntl \
      --enable-soap \
      --enable-sockets \
      --enable-sqlite-utf8 \
      --enable-wddx \
      --enable-zip \
      --enable-fastcgi \
      --with-zlib \
      --with-gettext

    Apache php-fastcgi-setup.conf:

      FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
      FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
      FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
      ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/
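
    On the thread-safety worry specifically: under FastCGI, PHP (and with it APC or XCache) runs in separate php-cgi processes outside Apache's worker threads, so the opcode cache never shares memory with Apache's threading model. If per-version process control is wanted, one common pattern is a small wrapper script per PHP version; this is a sketch under the assumption that FastCgiServer is pointed at the wrapper instead of the binary (names and values are illustrative only):

      #!/bin/sh
      # hypothetical wrapper for the 5.3.2 build
      PHP_FCGI_CHILDREN=4          # worker processes for this version
      PHP_FCGI_MAX_REQUESTS=1000   # recycle workers to contain leaks
      export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
      exec /var/www/cgi-bin/php-cgi-5.3.2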

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

    - 2x CPU, 16-core AMD Opteron 6282 SE
    - 64GB RAM
    - RAID controller H700, 1GB NV cache:
      - 2 HD 74GB SAS 15Krpm, RAID 1, stripe 16k (OS, CentOS 6.2): sda
      - 4 HD 146GB SAS 15Krpm, RAID 10, stripe 16k (ext4, bs 4096, no barriers): sdb -> /vol01
    - RAID controller H800, 1GB NV cache: MD1200 with 12 HD 300GB SAS 15Krpm, RAID 10, stripe 256k (for the Postgres 8.3.18 DB; ext4, bs 4096, stride 64, stripe-width 384, no barriers): sdc -> /vol02

    I'm benchmarking IO speed with dd, and I see that if, on the 12-disk RAID 10, I run:

      dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option, I get about 80 MB/s. In the read benchmark, the results are similar:

      dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I get 150MB/s... I don't understand these huge differences; on other machines I don't have this behavior. Could I have some kernel parameter misconfigured? Thanks!
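
    When buffered I/O lands this far below O_DIRECT, the usual suspects are page-cache writeback throttling and readahead settings rather than the array itself. A few hedged values to compare against one of the healthy machines:

      sysctl vm.dirty_ratio vm.dirty_background_ratio
      blockdev --getra /dev/sdc    # readahead in 512-byte sectors
      iostat -xm 5                 # watch device utilization during the buffered run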

  • Why does this loopback device creation malfunction?

    - by user50118
    The Stack Overflow people thought this was more appropriate here; I put it there as it is part of a program, but I can see their POV, so here it is. At the bottom of the code you can see it failing. In fact, I'll put the error here at the start too, because it is the problem I need to solve:

      [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)

    I don't understand why the device is supposedly too small. I made this partition two days ago with normal fdisk; it was created and formatted with ext4, supplying no options other than the partition (/dev/sdb2) to format. The only explanation I can think of is that ext4 has the size of the partition wrong somehow, but that seems very unlikely. What is wrong with my math? The offset is correct (you can see that with the file command), and the size should be correct too, because End - Start comes to the same number of sectors minus 1, just like it should (a disk starting on sector 1 and ending on sector 2 would be 2 - 1 = 1 and have two sectors).

      # sfdisk -luS /dev/sdb
      Disk /dev/sdb: 9729 cylinders, 255 heads, 63 sectors/track
      Units = sectors of 512 bytes, counting from 0
         Device Boot    Start       End   #sectors  Id  System
      /dev/sdb2      78295040 156296384   78001345  83  Linux

      # losetup -r -f --show -o $((78295040 * 512)) --sizelimit $((78001345 * 512)) /dev/sdb
      /dev/loop0

      # file -s /dev/loop0
      /dev/loop0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files)

      # mount -o ro -t ext4 /dev/loop0 /mnt
      mount: wrong fs type, bad option, bad superblock on /dev/loop0,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so

      # dmesg | tail -n 1
      [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)
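
    A hedged way to narrow this down is to compare the filesystem's own recorded size against the partition's, independent of the loop-device arithmetic:

      dumpe2fs -h /dev/loop0 | grep -i 'block count'   # filesystem blocks (x8 gives 512-byte sectors at a 4k block size)
      blockdev --getsz /dev/sdb2                       # partition size in 512-byte sectors
      losetup -r -f --show /dev/sdb2                   # sidesteps the offset math entirely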

  • Spring MVC project can't select from a particular MySQL table

    - by Dan Ray
    I'm building a Spring MVC project (using JPA and Hibernate for DB access) that runs just great locally on my dev box against a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table.

    Once authenticated, we go to the first "real" page of the app and I get a "data access failure" error. The server's console log gets this line (redacted):

      ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset'

    However, if I go to MySQL from the shell using exactly the same credentials, I have no problem at all selecting from the asset table:

      [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName
      ...
      mysql> \s
      --------------
      mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1
      Connection id:    199
      Current database: dbName
      Current user:     myDbUser@localhost
      ...
      UNIX socket:      /var/lib/mysql/mysql.sock
      --------------

      mysql> select count(*) from asset;
      +----------+
      | count(*) |
      +----------+
      |       19 |
      +----------+
      1 row in set (0.00 sec)

    I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, set up a version of the user from 'localhost' and another from '%', making sure to flush privileges... Nothing is changing the behavior of this thing. What gives?
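
    One detail worth checking: the shell session connects over the UNIX socket, while JDBC connects over TCP, and the two can match different grant entries (or the JDBC URL may name a different schema). A hedged comparison:

      mysql -u root -p -e "SHOW GRANTS FOR 'myDbUser'@'localhost'; SHOW GRANTS FOR 'myDbUser'@'%';"
      mysql -u myDbUser -pmyDbPwd -h 127.0.0.1 dbName -e "SELECT COUNT(*) FROM asset;"   # forces TCP like JDBC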

  • Thecus N5200, disk has dropped out of RAID5

    - by Anders Ekdahl
    We have a Thecus N5200 NAS here at work with five WD Caviar Black 2TB disks in a RAID 5 array. Yesterday, disk 4 dropped out of the array, and in the NAS web interface there's a warning about the RAID array being "degraded". When I go into Storage - Disks, disks 1 and 4 have a warning next to them. When I click on the warnings, this information about the disks is displayed:

      Tray Number: 4
      Model: WD2001FASS-00W2B
      Power On Hours: 2403 hours
      Temperature: 34 C
      Reallocated Sector Count: 66
      Current Pending Sector: 1447
      Raw Read Error Rate: 61
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

      Tray Number: 1
      Model: WD2001FASS-00W2B
      Power On Hours: 2403 hours
      Temperature: 32 C
      Reallocated Sector Count: 0
      Current Pending Sector: 1465
      Raw Read Error Rate: 0
      Seek Error Rate: 0
      Hardware ECC Recovered: N/A

    I'm not really an expert on either disks or RAID arrays. Does this indicate that the fourth disk is damaged and needs to be replaced? And what about disk number 1? It has a warning, but it's still in the array. Is it safe to add the fourth disk back into the array as a spare? I can't find any way to add it back as it was before.

  • Why does dd finish instantly when piping to cat?

    - by agsamek
    I start bash on Cygwin and type:

      dd if=/dev/zero | cat /dev/null

    It finishes instantly. When I type:

      dd if=/dev/zero > /dev/null

    it runs as expected, and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box?

    * Explanation of why I asked such a stupid question, and a possibly not-so-stupid question: I was compressing hdd images and some were compressed incorrectly. I ended up with the following script showing the problem:

      while sleep 1 ; do killall -v -USR1 dd ; done &
      dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c

    wc should print 1000000000 at the end. The problem is that it does not on my machine:

      bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c
      13+0 records in
      12+0 records out
      60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s
      27+0 records in
      26+0 records out
      130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s
      200+0 records in
      200+0 records out
      1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s
      1005856128

    Is it a bug, or am I doing something wrong once again?
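
    For the first part, the behavior follows from how cat treats operands: given a file argument, cat ignores stdin entirely, so cat /dev/null reads an empty file and exits, and dd then dies on the resulting SIGPIPE. A way to see the contrast (standard POSIX semantics, so a Linux box should behave the same):

      dd if=/dev/zero | cat /dev/null     # cat reads /dev/null and exits; dd gets SIGPIPE
      dd if=/dev/zero | cat > /dev/null   # cat reads stdin; runs until interrupted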

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector. Here is the failure on a bad one:

      # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
      dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

      Model Family:     Western Digital Caviar Green (AF)
      Device Model:     WDC WD15EARS-00Z5B1
      Serial Number:    WD-WMAVU3027748
      LU WWN Device Id: 5 0014ee 25998d213
      Firmware Version: 80.00A80
      User Capacity:    1,500,301,910,016 bytes [1.50 TB]
      Sector Size:      512 bytes logical/physical
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ATA8-ACS (minor revision not indicated)
      SATA Version is:  SATA 2.6, 3.0 Gb/s
      Local Time is:    Fri Oct 18 17:47:29 2013 CDT
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    - Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before.
    - Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
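
    If dd keeps failing on a specific LBA, one hedged alternative is hdparm's sector-level commands, which bypass the block layer and are often better at provoking a remap (destructive to that sector, hence the long safety flag):

      hdparm --read-sector 217152 /dev/sdb
      hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb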

  • Receiving a 500 Internal Server Error after login success?

    - by Jeremy Quick
    I created my first member login form, which takes the typical username and password and then sends them to the code below (checklogin.php):

      mysql_connect($db_host, $db_username, $db_password) or die(mysql_error());
      mysql_select_db($db_database) or die(mysql_error());

      $username = $_POST['username'];
      $password = $_POST['password'];
      $username = stripslashes($username);
      $password = stripslashes($password);
      $username = mysql_real_escape_string($username);
      $password = mysql_real_escape_string($password);

      $sql = "SELECT * FROM users WHERE username='$username' and password='$password'";
      $result = mysql_query($sql);
      $count = mysql_num_rows($result);

      if($count == 1) // ERROR APPEARS TO TAKE PLACE HERE
      {
          session_start();
          $_SESSION['username'] = $username;
          $_SESSION['password'] = $password;
          header('login_success.php');
      }
      else
      {
          header("location:login_fail.php");
      }

    If I type in the wrong information, everything works properly, so I know the error appears to be taking place in the marked if statement. I have been searching the internet for solutions, but none seem to match my situation, or I am overlooking them. I've made a few changes to get to this point; before those changes I was receiving deprecation warnings. Also, I have checked the logs and they are empty of errors relating to this.
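
    Two things in the success branch look suspect, offered as a hedged observation rather than a confirmed diagnosis: header() needs a complete header string to issue a redirect, and session_start() must run before any output is sent (ideally at the very top of the script). A corrected sketch of just that branch:

      if($count == 1)
      {
          session_start();
          $_SESSION['username'] = $username;
          header('Location: login_success.php');   // 'login_success.php' alone is not a valid header
          exit;
      }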
