Search Results

Search found 2789 results on 112 pages for 'blocks'.

  • Using smartctl to get vendor-specific attributes from an SSD drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! I recently deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I would like to get wear-level information, error statistics, etc. from the drives. The SA P410 supports pass-through of the SMART command to a single drive in the array, but I was not able to see the interesting values in the output. The value I am especially interested in is the wear-level indicator (attribute ID 233), but this is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID#  ATTRIBUTE_NAME            FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED  RAW_VALUE
          3  Spin_Up_Time              0x0000  100   000   000    Old_age  Offline In_the_past  0
          4  Start_Stop_Count          0x0000  100   000   000    Old_age  Offline In_the_past  0
          5  Reallocated_Sector_Ct     0x0002  100   100   000    Old_age  Always  -            0
          9  Power_On_Hours            0x0002  100   100   000    Old_age  Always  -            8561
         12  Power_Cycle_Count         0x0002  100   100   000    Old_age  Always  -            55
        192  Power-Off_Retract_Count   0x0002  100   100   000    Old_age  Always  -            29
        232  Unknown_Attribute         0x0003  100   100   010    Pre-fail Always  -            0
        233  Unknown_Attribute         0x0002  088   088   000    Old_age  Always  -            0
        225  Load_Cycle_Count          0x0000  198   198   000    Old_age  Offline -            508509
        226  Load-in_Time              0x0002  255   000   000    Old_age  Always  In_the_past  0
        227  Torq-amp_Count            0x0002  000   000   000    Old_age  Always  FAILING_NOW  0
        228  Power-off_Retract_Count   0x0002  000   000   000    Old_age  Always  FAILING_NOW  0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.)

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command pass-through?
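
    For what it's worth, the per-drive pass-through can be scripted; a minimal sketch (the drive indexes 0-3 and the controller node are assumptions to adjust to the array):

        # query each physical drive behind the cciss controller by index
        for i in 0 1 2 3; do
            echo "=== physical drive $i ==="
            ./smartctl -A -d cciss,$i /dev/cciss/c1d0
        done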

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different Views blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out source from SVN to a local development install to try to convert some of those blocks into more optimized code.

    Here's the weird thing. The Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache, and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13, and XCache). These are nearly identical versions of the same site; frankly, the production site should use even less memory, as I've disabled some of the modules that run on the dev site. Any idea why this might be the case?
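
    A first thing worth ruling out is a difference in the two PHP stacks themselves; a sketch of what to compare on both boxes (this assumes the CLI reflects the web SAPI's configuration):

        php -v                                        # exact version and patch level
        php -m | sort                                 # diff the loaded extension lists
        php -i | grep -iE 'memory_limit|xcache|zend'  # settings most likely to differ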

  • Getting "-bash: fork: Resource temporarily unavailable" in OSX

    - by Joseph Tura
    I seem to run into problems with the maximum number of processes every so often. Does anyone know the best practice for fixing this? I'm running OS X 10.6 on a MacBook Pro i7.

    ulimit -a returns these values:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

    When the error occurred I checked, and there were 102 running tasks and 523 threads.
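
    In case it helps, a sketch of raising the caps (the values here are assumptions; the shell limit cannot exceed the kernel's per-user cap, so the sysctls come first):

        sysctl kern.maxproc kern.maxprocperuid   # show the current kernel caps
        sudo sysctl -w kern.maxproc=1024
        sudo sysctl -w kern.maxprocperuid=512
        ulimit -u 512                            # then raise the shell limit, e.g. in ~/.bash_profile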

  • Multiple ServerRoot directives in a single Apache

    - by fip
    I came across an Apache httpd 2.2 configuration recently in which multiple ServerRoot directives were defined, each followed by individual prefork settings. Sort of like this:

        ServerRoot root1
        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        [vhost-configs]

        ServerRoot root2
        <IfModule prefork.c>
            StartServers         10
            MinSpareServers      10
            MaxSpareServers      20
            MaxClients          250
            MaxRequestsPerChild   0
        </IfModule>

        [vhost-configs]

    In my understanding these are global settings, one overriding the other. But is that true, and is it still true with the second ServerRoot directive between the prefork blocks? Thank you in advance.

    EDIT: They are not in different conditional blocks, and both server roots are used, in the sense that files included with paths relative to either root resolve correctly. I just wondered if a ServerRoot would initiate a new scope in which the global statements would not override the configuration of previous ones.
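
    For reference, a sketch of ways to check which values the running server actually ends up with (the binary may be named apache2 rather than httpd, depending on the distribution):

        httpd -V    # compiled-in defaults, including -D HTTPD_ROOT
        httpd -S    # the parsed virtual host settings
        httpd -M    # loaded modules, confirming prefork is the active MPM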

  • Linux not picking up new partition correctly on emc pseudo device

    - by James
    Hi. We have a database server running Oracle RAC. We were recently running out of space on the main LUN it is attached to, so I created a new 100 GB LUN and concatenated it onto the existing LUN, creating a new MetaLUN. After some messing about I managed to get Linux to recognise the new space, and I then created a new partition on the pseudo device to use it. Previously, when I have done this on other systems, the next step is to create an ASM disk on the new partition and add that disk to the Oracle disk group. This, however, fails. I am aware of various issues with ASM and PowerPath, but I don't think that is the issue here, because while investigating I discovered that one of the underlying logical devices is not reflecting the size change. See below.

    powermt displays all of the underlying logical units:

        [root@XXXXX ~]# powermt display dev=emcpowerd
        Pseudo name=emcpowerd
        CLARiiON ID=CKM00091500009 [VFRAC2]
        Logical device ID=6006016030312200787502866C65DE11 [LUN 30]
        state=alive; policy=CLAROpt; priority=0; queued-IOs=0
        Owner: default=SP A, current=SP A
        Array failover mode: 1
        ==============================================================================
        ---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
        ###  HW Path               I/O Paths    Interf.   Mode    State  Q-IOs Errors
        ==============================================================================
          3  qla2xxx               sde          SP A0     active  alive      0      0
          3  qla2xxx               sdj          SP B0     active  alive      0      0
          4  qla2xxx               sdo          SP A1     active  alive      0      0
          4  qla2xxx               sdt          SP B1     active  alive      0      0

    fdisk on the pseudo device shows the correct space:

        [root@XXXXX ~]# fdisk -l /dev/emcpowerd

        Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

                  Device Boot    Start       End      Blocks    Id  System
        /dev/emcpowerd1               1     39162   314568733+  83  Linux
        /dev/emcpowerd2           39163     52216   104856255   83  Linux

    fdisk on one of the logical units is wrong:

        [root@XXXXX ~]# fdisk -l /dev/sde

        Disk /dev/sde: 322.1 GB, 322122547200 bytes
        255 heads, 63 sectors/track, 39162 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot    Start       End      Blocks    Id  System
        /dev/sde1              1     39162   314568733+  83  Linux
        /dev/sde2          39163     52216   104856255   83  Linux

    fdisk on the rest of the units is fine:

        [root@XXXXX ~]# fdisk -l /dev/sdj

        Disk /dev/sdj: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot    Start       End      Blocks    Id  System
        /dev/sdj1              1     39162   314568733+  83  Linux
        /dev/sdj2          39163     52216   104856255   83  Linux

    Also, when I created the partition, Linux did not create any entries in the /dev directory for the second partition, so I created these manually:

        [root@XXXXX dev]# mknod sde2 b 8 66
        [root@XXXXX dev]# ls -al sd[ejot]?
        brw-r----- 1 root disk  8,  65 Dec 29 14:20 sde1
        brw-r--r-- 1 root disk  8,  66 Apr  8 20:31 sde2
        brw-r----- 1 root disk  8, 145 Dec 29 14:19 sdj1
        brw-r--r-- 1 root disk  8, 146 Apr  8 20:33 sdj2
        brw-r----- 1 root disk  8, 225 Apr  6 23:12 sdo1
        brw-r--r-- 1 root disk  8, 226 Apr  8 20:33 sdo2
        brw-r----- 1 root disk 65,  49 Dec 29 14:19 sdt1
        brw-r--r-- 1 root disk 65,  50 Apr  8 20:33 sdt2

    This is a production server that we cannot easily reboot. Any ideas would be much appreciated. J
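
    A sketch of asking the kernel to re-read sde's geometry without a reboot (the sysfs rescan attribute exists on 2.6-era kernels; verify on a non-critical path before relying on it in production):

        echo 1 > /sys/block/sde/device/rescan   # have the SCSI layer re-read the LUN size
        blockdev --getsize64 /dev/sde           # should now report 429.4 GB
        blockdev --rereadpt /dev/sde            # refresh the in-kernel partition table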

  • Single file changed: intrusion or corruption?

    - by Michaël Witrant
    rkhunter reported a single file change on a virtual server (the netstat binary). It didn't report any other warning. The change was not the result of a package upgrade (I reinstalled the package and the checksum is back to what it was before). I'm wondering whether this is file corruption or an intrusion. I would guess an intrusion would have changed many other files watched by rkhunter (or none, if the intruder had access to rkhunter's database).

    I disassembled both binaries with objdump -d and stored the diff here: https://gist.github.com/3972886 The full dump diff, generated with objdump -s, is here: https://gist.github.com/3972937 I would guess file corruption would have changed either large blocks or single bits, not small blocks like this.

    Do these changes look suspicious? How could I investigate further? The system is running Debian Squeeze.
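
    One more cross-check worth doing; a sketch (on Squeeze, netstat ships in the net-tools package):

        apt-get install debsums
        debsums net-tools    # compares the installed files against the package's md5sums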

  • XPath automation software

    - by holms
    Too sad this topic was closed. But I have kind of the same question. I want to construct XPaths for a common HTML block that appears on a page. For example: you give two URLs to the software, which contain the SAME HTML blocks (divs) but with different content inside. Given two stackoverflow.com URLs, the software could detect that the same div#id is used in both and just give the XPaths of those HTML blocks. Of course I can find the XPaths myself; as far as I remember, Firebug makes it easy and shows the XPath of every HTML block. But this is a tedious procedure if you want to get XPaths for LOTS of HTML elements, which is why I want software to help with the routine.
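
    Once an XPath is known, extracting the matching block from many saved pages can at least be batched; a sketch (needs a reasonably recent libxml2 for --xpath; the selector is an assumption):

        for f in page1.html page2.html; do
            xmllint --html --xpath "//div[@id='question-header']" "$f"
        done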

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g on OS X Lion. I previously achieved this on Snow Leopard with the following: http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/

    The issue I'm having is that the ulimit settings in the oracle user's .bash_profile cannot be modified. I have the following in the .bash_profile:

        export DISPLAY=:0.0
        export ORACLE_BASE=$HOME
        umask 022
        # must match `sysctl kern.maxprocperuid`
        ulimit -Hu 512
        ulimit -Su 512
        # must match `sysctl kern.maxfilesperproc`
        ulimit -Hn 10240
        ulimit -Sn 10240

    Upon applying the .bash_profile settings with . ~/.bash_profile, I get the following error:

        -bash: ulimit: max user processes: cannot modify limit: Invalid argument

    This then results in sqlplus / as sysdba not functioning correctly, failing with a "Segmentation fault: 11". The output of ulimit -a is:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited

    If anyone knows how I can apply these ulimit settings to the oracle user I have created, to allow me to run sqlplus and therefore create a DB, that would be great.
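
    A sketch of the usual cause: the hard ulimit cannot be raised above the kernel's per-user cap, so that cap has to be raised first (the values are assumptions matching the profile above):

        sysctl kern.maxproc kern.maxprocperuid   # check the current kernel caps
        sudo sysctl -w kern.maxproc=1024
        sudo sysctl -w kern.maxprocperuid=512
        . ~/.bash_profile                        # then re-apply the ulimit lines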

  • List of MD/RAID/LVM devices: how to mount them without any further information available?

    - by Jens
    Hello experts. I do not have many skills in Linux. I installed a system two years ago that I now had to reboot, but it seems I did not automate everything with start scripts... My problem: I am missing some mount points. I have a list of my RAIDs (excerpt):

        md3 : active (auto-read-only) raid1 sda6[0] sdb6[1]
              97659008 blocks [2/2] [UU]

        md4 : active (auto-read-only) raid1 sda7[0] sdb7[1]
              250099776 blocks [2/2] [UU]

    It seems md3 and md4 are NOT mounted. However, I do NOT have any entries for them in the fstab file. What should I do next? I do NOT know which filesystem they have (most likely ext3). Can I safely try to mount them with mount -t ext3 /dev/md3 /mnt/mymntpoint, or will that lead to corrupted data in case they are not ext3? The goal is to remount these devices, but I do not know anything about them anymore... Thank you very much. Jens
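
    A sketch of identifying the filesystems before mounting anything (both probing tools only read):

        blkid /dev/md3 /dev/md4      # prints the TYPE= of each device's filesystem
        file -s /dev/md3             # a second opinion from the superblock
        mount -o ro -t ext3 /dev/md3 /mnt/mymntpoint   # a read-only mount is safe to try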

  • Org-mode lags in highlighting source

    - by quanticle
    I'm using org-mode to maintain my programming notes, which means I have lots of source code blocks, as follows:

        #+begin_src <language name>
        <code>
        #+end_src

    One thing I've noticed is that when I write the #+end_src, Emacs doesn't color the source code as such. Yet if I quit Emacs and reopen the notes file (or force a refresh with the Org → Refresh/Reload → Refresh setup current buffer menu entry), the source is colored grey if I'm using the GUI, or green if I'm using Emacs in the terminal. Is this an inherent limitation of Emacs, or am I doing something wrong in setting up my code blocks that's preventing Emacs from going back and recoloring the source code I've entered?
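
    In case it is useful, a sketch of the knobs involved (this assumes org-mode 7 or later, where native src-block fontification exists but is off by default):

        ;; in your init file: fontify src block bodies in the block's own language
        (setq org-src-fontify-natively t)
        ;; to re-fontify the current buffer on demand instead of reloading the file:
        ;; M-x font-lock-fontify-buffer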

  • Read/Write/Verify disk diagnostic tool for Mac OS X?

    - by Spiff
    It seems that there are many tools out there for Mac OS X that test a hard drive for bad blocks by doing a Read/Verify pass. That is, they read a block, then read it a second time, and verify that both reads yielded the same results. I need a tool that does a non-destructive Read/Write/Verify pass. It should read each block, write those same contents back out, and then read it again to verify. That way every block gets written, giving the hard drive a chance to spare out bad blocks. But since the same contents that were just read get written back out, it doesn't destroy data that wasn't already lost. I'm aware of several tools that can do Read/Verify, but I'm not aware of any that do Read/Write/Verify. Are there any tools that do what I want? Unix / open source tools that compile and run on Mac OS X are fair game too.
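
    One possibility, if a ported Unix tool is acceptable: badblocks from e2fsprogs (installable via MacPorts or Homebrew) has exactly this non-destructive read-write-verify mode. A sketch; the disk node is an assumption, and the volume must be unmounted first:

        badblocks -n -s /dev/disk2    # -n: non-destructive read-write test, -s: show progress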

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and similar configuration in VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure, and reboot.

    Problem: Over time this duplicates many on-disk data blocks, which probably sums up to several tens of gigabytes. I suppose that if I could use a base system as a template, with the actual machines layered on top of it and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing).

    Question: Is there any way to easily do this on ESXi 4? I already share the Portage tree via NFS, but this would not work for the rootfs.

  • Root partition full? CentOS

    - by Joao Heleno
    Hi! I'm running CentOS 5.4 and my / is full. I wanted to install GParted, but in order to do that I must install the Priorities plugin, and that's when I get an error saying / is full, so I can't go forward. Here's some output:

        # fdisk -l

        Disk /dev/sda: 250.0 GB, 250000000000 bytes
        255 heads, 63 sectors/track, 30394 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2611    20972826   83  Linux
        /dev/sda2            2612        3251     5140800   82  Linux swap / Solaris
        /dev/sda3            3252       30394   218026147+  83  Linux

        # df
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda1             20315812  19365152         0 100% /
        /dev/sda3            211196248  49228164 151066780  25% /home
        tmpfs                  1552844         0   1552844   0% /dev/shm

    I'm not using LVM. Please advise. Thanks.
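
    Before resizing anything, it is worth finding what is actually eating the 20 GB; a sketch:

        du -xm --max-depth=1 / | sort -n   # per-directory usage in MB, without crossing into /home
        yum clean all                      # drops cached packages under /var/cache/yum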

  • Connect to SSH server through port 80 via HTTP proxy?

    - by im_chc
    Hi, please help: I want to connect to my SSH server at home. However, I'm behind a corporate (CORP) firewall which blocks almost all ports (443, 22, 23, etc.). But it seems that 80 is not blocked, because I am able to surf the web after I log in (i.e. IE is set to CORP's proxy server; I start IE, the CORP intranet portal is displayed, I type in google.com, a dialog pops up for userid + password, login is successful, and I can surf without restrictions).

    My SSH server listens on 443. My question is: is there a way to connect from a computer behind the CORP firewall to the SSH server through port 80, with the SSH server still listening on port 443? Changing the SSH server to listen on port 80 is not an option, because my home ISP blocks 80. Can I use a public proxy which listens on 80? After some research on Google I found that there is something called "connect to SSH through an HTTP proxy" using the Corkscrew software. Is it useful? Or is there some other way to solve the problem?
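
    For reference, a sketch of the Corkscrew setup (the host names are placeholders; a proxy that demands the userid/password also takes an auth file as a fifth argument):

        # ~/.ssh/config
        Host home
            HostName home.example.com
            Port 443
            ProxyCommand corkscrew proxy.corp.example.com 80 %h %p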

  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD's UUID in Mountain Lion, but diskutil info /dev/disk1s1 returns:

        Najis-MacBook-Air:~ ****$ diskutil info disk1s1
           Device Identifier:        disk1s1
           Device Node:              /dev/disk1s1
           Part of Whole:            disk1
           Device / Media Name:      Untitled 1

           Volume Name:              My Book
           Escaped with Unicode:     My%FF%FE%20%00Book

           Mounted:                  Yes
           Mount Point:              /Volumes/My Book
           Escaped with Unicode:     /Volumes/My%FF%FE%20%00Book

           File System Personality:  NTFS
           Type (Bundle):            ntfs
           Name (User Visible):      Windows NT File System (NTFS)

           Partition Type:           Windows_NTFS
           OS Can Be Installed:      No
           Media Type:               Generic
           Protocol:                 USB
           SMART Status:             Not Supported

           Total Size:               2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks)
           Volume Free Space:        212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks)

           Device Block Size:        512 Bytes

           Read-Only Media:          No
           Read-Only Volume:         Yes

           Ejectable:                Yes
           Whole:                    No
           Internal:                 No

    And there is no UUID. What is wrong exactly? Thank you.

  • Indentation-based Folding for TextMate

    - by Craig Walker
    SASS and HAML have indentation-based syntax, much like Python: blocks of related code have the same number of spaces at the start of a line. Here's some example code:

        #drawer
          height: 100%
          color: #c2c7c4
          font:
            size: 10px
          .slider
            overflow: hidden
            height: 100%
            .edge
              background: url('/images/foo') repeat-y
          .tab
            margin-top = !drawer_top
            width: 56px
            height: 161px
            display: block

    I'm using phuibonhoa's SASS bundle, and I'd like to enhance it so that the various sections can fold. For instance, I'd like to fold everything under #drawer, everything under .slider, everything under .edge, etc. The bundle currently includes the following folding code:

        foldingStartMarker = '/\*|^#|^\*|^\b|^\.';
        foldingStopMarker = '\*/|^\s*$';

    How can I enhance this to fold similarly-indented blocks?

  • How can I combine code from an old revision when I didn't branch in TortoiseMerge?

    - by gr33d
    I need to combine (merge?) some parts of an old revision with a newer revision of a file. I'm still pretty new to Subversion, so I'm not sure what I'll bomb in the process. I did not branch; these are simply different revisions of a file. How do I send the sections of code from r1 to r3 where they are needed? The keyboard shortcuts and menu options for "theirs", "mine", "left block", "right block", etc. aren't very intuitive. If I need 5 blocks from r1 to be after the first 10 blocks of r3, how do I do it? Shouldn't I be able to go through r1 block by block and decide if and where it belongs in r3? Thanks in advance!
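
    If the merge view proves too fiddly, one fallback is to pull the old revision out as a plain file and splice the blocks by hand; a sketch with the Subversion command line (the file name is a placeholder):

        svn cat -r 1 file.txt > file.r1.txt   # the old revision, to copy blocks from
        svn diff -r 1:3 file.txt              # review exactly what changed between r1 and r3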

  • Is there a constraint-based scheduling/calendar application?

    - by wonsungi
    Is there a constraint-based scheduling/calendar application? This application would be used to coordinate multiple people's schedules. Two basic use cases:

    1. Multiple people need to schedule a time to meet together, and everyone is busy at different days/times. Each person enters blocks of days/times they cannot meet, and the application suggests the best times to meet given a desired time range.

    2. Multiple people need to use some common resources for a specific length of time (over some time span, like a week), but the exact date/time does not matter. These people enter the resources and time needed, and the application suggests the best way to share those resources. This use case still accounts for people's blocks of busy time.

    I imagine this program would be graphical, but other interfaces would be acceptable. It is also preferable if it is web-based and works on both PCs and Macs, but PC-only/Mac-only solutions are acceptable.

  • How to use the second volume device of Amazon EC2

    - by Khoyendra Pande
    I have two volumes on Amazon EC2. The default 1 GiB volume, which I am using, has filled up. Now I want to use my second volume, which is 9 GiB. I ran the command cat /proc/partitions and got:

        major minor  #blocks  name
         202     1   1048576  xvda1
         202    80   9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf, and it shows:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Then I ran the command df and got:

        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/xvda1             1032088   1031280         0 100% /
        tmpfs                   313160         8    313152   1% /lib/init/rw
        udev                    297800        24    297776   1% /dev
        tmpfs                   313160         4    313156   1% /dev/shm
        overflow                  1024        32       992   4% /tmp

    So I am still unable to use my 9 GiB volume. I can confirm I have two volumes, where the attachment information is i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), and only sda1 is being used. Does anyone know how I can use my second volume (sdf)? Thanks.
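
    As the /proc/partitions listing above suggests, the kernel exposes the second volume as xvdf, not sdf; a sketch (the mount point name is an assumption):

        mkfs.ext3 /dev/xvdf     # destroys anything already on the volume
        mkdir -p /mnt/data
        mount /dev/xvdf /mnt/data
        df -h /mnt/data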

  • Converting a DWG/DXF to CSV or Excel

    - by Menno Gouw
    I'm using ZWCAD, and I need to get the coordinates of hundreds of blocks into an Excel sheet or .CSV file so I can import them into the GPS hardware. I know there are plenty of tools for AutoCAD; I could probably even write one myself, but as far as ZWCAD goes I seem to be out of options. However, ZWCAD saves to DWG too, and exports to all the other familiar CAD extensions. So I was wondering: if I just save the blocks I need to export to a certain file, might there be a tool/program to convert that directly into .CSV?
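
    If a DXF export is acceptable, one route is a small script; a sketch using the ezdxf Python library (an assumption: ezdxf is a third-party tool, not part of ZWCAD, and the file name is a placeholder):

        import csv
        import ezdxf

        # read the exported drawing and walk the block references (INSERT entities)
        doc = ezdxf.readfile("drawing.dxf")
        msp = doc.modelspace()

        with open("blocks.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["block_name", "x", "y", "z"])
            for insert in msp.query("INSERT"):
                pos = insert.dxf.insert          # insertion point of the block
                writer.writerow([insert.dxf.name, pos.x, pos.y, pos.z])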

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume, but it appears that one of them went offline for some time, something I noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc[1]
              2096116 blocks super 1.2 [2/1] [_U]

        md127 : active (auto-read-only) raid1 sdb[0]
              2096116 blocks super 1.2 [2/1] [U_]

        unused devices: <none>

    I'm not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
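
    A sketch of the usual recovery, assuming md126 (sdc) holds the up-to-date copy; verify with --examine before wiping anything, because the re-added half gets overwritten:

        mdadm --examine /dev/sdb /dev/sdc   # compare event counts / update times
        mdadm --stop /dev/md127             # take the stale half offline
        mdadm --zero-superblock /dev/sdb    # WIPES the stale RAID metadata on sdb
        mdadm /dev/md126 --add /dev/sdb     # re-add; the array rebuilds from sdc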

  • Is an I/O benchmark made for hardware an accurate assessment of a Windows VM's performance under vSphere 5?

    - by Jeremy
    We support an enterprise application running on Windows Server 2008 R2. One of our customers has chosen to install it on VMware, and what I'm finding is that the VMs are relatively slow compared to hardware. Our product development team has advised that many VMs appear to run particularly slowly on I/O benchmarks, which impacts performance in production. I've tried the ATTO disk benchmark and found that for smaller I/O blocks (1-32 KB) the VM I'm looking at is 25x slower than hardware, and for larger I/O blocks (1-8 MB) it's 10x slower. Is this a fair benchmark? If not, any suggestions for a fair test?
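
    One cross-check is a benchmark that bypasses the guest's cache and controls queue depth explicitly; a sketch with fio, which has Windows builds (the flag values are assumptions to adapt):

        fio --name=randread --rw=randread --bs=4k --size=1g --direct=1 ^
            --ioengine=windowsaio --iodepth=32 --runtime=60 --time_based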
