Search Results

Search found 6028 results on 242 pages for 'total commander'.


  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD's UUID in Mountain Lion, but diskutil info /dev/disk1s1 returns:

        Najis-MacBook-Air:~ ****$ diskutil info disk1s1
        Device Identifier:        disk1s1
        Device Node:              /dev/disk1s1
        Part of Whole:            disk1
        Device / Media Name:      Untitled 1
        Volume Name:              My Book
        Escaped with Unicode:     My%FF%FE%20%00Book
        Mounted:                  Yes
        Mount Point:              /Volumes/My Book
        Escaped with Unicode:     /Volumes/My%FF%FE%20%00Book
        File System Personality:  NTFS
        Type (Bundle):            ntfs
        Name (User Visible):      Windows NT File System (NTFS)
        Partition Type:           Windows_NTFS
        OS Can Be Installed:      No
        Media Type:               Generic
        Protocol:                 USB
        SMART Status:             Not Supported
        Total Size:               2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks)
        Volume Free Space:        212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks)
        Device Block Size:        512 Bytes
        Read-Only Media:          No
        Read-Only Volume:         Yes
        Ejectable:                Yes
        Whole:                    No
        Internal:                 No

    There is no UUID anywhere in the output. What exactly is wrong? Thank you.
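    A plausible explanation, offered as an assumption rather than a confirmed answer: diskutil only reports a Volume UUID for filesystems that carry one natively, and an NTFS volume on an MBR-partitioned disk (as the Windows_NTFS partition type here suggests) may simply have no UUID for diskutil to show. A minimal sketch of what can still be queried:

        # grep for any UUID diskutil does know about (may legitimately be empty here)
        diskutil info disk1s1 | grep -i uuid
        # ask the I/O Registry what identifiers the kernel assigned to mounted media
        ioreg -c IOMedia -r -d 1 | grep -i uuid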

    Read the article

  • VMWare Server :: VM set to 2gb RAM but vmware process shows 100mb physical, 1900mb virtual

    - by brad
    I've set up a VMware instance to run the CastIron Integration Appliance. I allocated 2 GB of memory to the instance, assuming it would take this as physical memory (my server has 8 GB total). When I view top on the server, however, the vmware-vmx process shows about 100 MB resident memory and 1900 MB virtual. CastIron reports that the appliance often hits 50% memory usage. Does this mean I'm using 900 MB of hard drive space as memory? I wanted VMware to use 2 GB of physical memory, no swap. Can anyone tell me how to achieve this? Setup: Debian Lenny 5.0.3, VMware Server 2.0.2.
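    One possible direction, sketched under the assumption that VMware Server 2 honours the same .vmx memory-tuning keys as other hosted VMware products; verify each key against the documentation before relying on it:

        sched.mem.min = "2048"          # reserve the full 2 GB for this guest
        sched.mem.pinned = "TRUE"       # ask the host not to reclaim or swap guest memory
        mainMem.useNamedFile = "FALSE"  # avoid backing guest RAM with an on-disk .vmem file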

    Read the article

  • Shrink a mounted LVM partition

    - by javanix
    I fear I already know the answer to this question, but here goes. I need to carve out a new partition on a running system. /var is mounted from an LVM volume (hdd1_vg-var) and has only 3% used disk space. / is mounted separately (hdd1_vg-root) and has about 80% used disk space.

        Filesystem              Size  Used  Avail  Use%  Mounted on
        /dev/**/hdd1_vg-root    2.0G  1.4G   481M   75%  /
        /dev/**/hdd1_vg-var      33G  699M    31G    3%  /var

    Unfortunately I don't have any free extents to grow this partition organically; vgdisplay shows:

        Total PE          10000
        Alloc PE / Size   10000 / 39.06 GB
        Free  PE / Size   0 / 0

    So, seeing that I have all this free disk space on /var, can I shrink /var without unmounting it, or is this just a pipe dream? I am really hoping to be able to do this work on a running system: unmounting would of course not be difficult, but it would interfere with system functionality.
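    A hedged note for what it's worth: ext3 (the likely filesystem here) can be grown online but not shrunk online, so an unmount is unavoidable for the shrink itself. A sketch of the offline sequence, assuming ext3 on /dev/mapper/hdd1_vg-var; this is untested here and a size mistake destroys the filesystem, so back up first:

        umount /var
        e2fsck -f /dev/mapper/hdd1_vg-var        # mandatory check before resizing
        resize2fs /dev/mapper/hdd1_vg-var 10G    # shrink the filesystem first
        lvreduce -L 10G /dev/mapper/hdd1_vg-var  # then shrink the LV to match
        mount /var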

    Read the article

  • How to sum cells depending on the content of a neighbor cell

    - by dannymcc
    I have an Excel document with the following columns:

        Date      | Reference  | Amount
        23/01/11  | 111111111  | £20.00
        25/09/11  | 222222222  | £30.00
        11/11/11  | 111111111  | £40.00
        01/04/11  | 333333333  | £10.00
        31/03/11  | 333333333  | £33.00
        20/03/11  | 111111111  | £667.00
        21/11/11  | 222222222  | £564.00

    I am trying to find a way of summarising the content in the following way:

        Reference: 111111111    Total: £727

    So far the only way I have been able to achieve this is to filter the list by each reference number (manually) and then add a simple SUM formula to the bottom of the list of amounts. Are there any tricks that anyone knows that may speed this up? What I am trying to achieve is a spreadsheet that highlights each reference number that collectively exceeds £2,000.
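    A hedged suggestion, assuming Reference sits in column B and Amount in column C: SUMIF sums every row whose reference matches, with no manual filtering, e.g.

        =SUMIF(B:B, 111111111, C:C)

    and a conditional-formatting rule on the reference column with a formula such as =SUMIF($B:$B, $B2, $C:$C) > 2000 would highlight exactly the references whose combined total crosses the threshold.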

    Read the article

  • How to get a maximum file size of VZFS parition?

    - by Nulldevice
    I have VPS hosting with a VZFS file system. How can I determine the maximum file size on a VZFS partition? UPD: Free space (or total space) is not what I need. Sometimes a file cannot occupy a whole partition; FAT16 with its 2 GB limit is a good example. I need to use a large database file (say, 64 GB), so I need to know whether the VPS's file system will cope with it. This is easy to calculate for an ext3 filesystem using tune2fs, but the VPS uses VZFS by Virtuozzo, which is poorly documented. Is there any generic way to determine the maximum file size for an arbitrary filesystem on Linux?
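    Two generic probes that avoid filesystem-specific tools, sketched with the caveat that the second assumes sparse-file support:

        # POSIX: ask how many bits the filesystem uses for file offsets
        getconf FILESIZEBITS /path/to/mountpoint
        # empirical: try to create a sparse 64 GB file; "File too large" means the limit is lower
        truncate -s 64G /path/to/mountpoint/probe.bin && ls -lh /path/to/mountpoint/probe.bin
        rm -f /path/to/mountpoint/probe.bin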

    Read the article

  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor has downloaded 100MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). I'd like to avoid using a proxy (like Squid) if possible.
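    A hedged observation: the quota match counts bytes per rule, not per source address, so pure iptables needs one rule pair per IP plus a periodic reset. A sketch, with 203.0.113.7 standing in for a visitor:

        # allow this IP up to 100 MB (104857600 bytes) of outbound data, then drop
        iptables -A OUTPUT -d 203.0.113.7 -m quota --quota 104857600 -j ACCEPT
        iptables -A OUTPUT -d 203.0.113.7 -j DROP
        # a daily cron job must flush and re-add these rules, since the quota
        # counter lives in the rule itself and resets when the rule is recreated

    This clearly does not scale to arbitrary visitors, which is why per-client byte accounting is usually pushed up to a proxy or to the web server itself.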

    Read the article

  • VLC RTP Streaming in FC12

    - by Matt D
    I'm trying to get VLC to work streaming RTP audio/video over my office network. The goal is multicast a/v streaming. In all test cases, we are streaming from VLC to VLC. I am able to stream from Windows to Windows, and from Fedora to Windows, but not from Windows to Fedora. Additionally, I am unable to receive a LOCAL stream from one instance of VLC to another, within Fedora. I don't see any reason why this would be. The buffer indicator (where the elapsed/total time is normally displayed) never shows any connectivity, so it would appear to be a network problem, but since I am able to stream from Fedora to Windows (same IP, same port) I thought it would be something else. Does anyone know of a solution to this issue?
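    Worth noting, as an assumption rather than a diagnosis: the pattern "receiving fails only on the Fedora box" matches Fedora's default firewall dropping inbound UDP. A quick hedged test (5004 is VLC's usual default RTP port; substitute whatever port the stream actually uses):

        iptables -I INPUT -p udp --dport 5004 -j ACCEPT
        # if the stream now arrives, make the rule permanent via system-config-firewall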

    Read the article

  • Hyperic HQ metrics not working

    - by Robin Weston
    I am having a problem with Hyperic HQ. Several metrics, some in IIS 6.x (Request Execution Time, Request Wait Time) and some in .NET 2.0 (Bytes in all Heaps, Exceptions Thrown per Minute), always show 0. If I view perfmon on the server itself I can see that the counters have values greater than zero. Some metrics work fine, such as Total Get Requests per Minute and the other IIS defaults. I have looked in the server logs but nothing obvious shows up. Please advise; I'm happy to gather more information if required.

    Read the article

  • Determine Server specs for a Rails with MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given:

        - max. load will be roughly 2400 (40*60) HTTP requests (mixed GET/POST) per hour
        - 15% of these calls are JSON calls (iOS)
        - the avg request will make between 5-10 database calls
        - 500-800 SQL INSERTs per day
        - webpages are fairly simple (no images, just text)
        - the avg webpage is 15 requests (css/js/etc) and the total size is 35-45 KB

    More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance size (small/medium) and utilization (light/medium/heavy)? Thanks!
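    A rough sanity check on those numbers, under the stated assumptions: 2400 requests per hour is 2400/3600, or roughly 0.7 requests per second; at 5-10 database calls each that is on the order of 7 queries per second, and 800 INSERTs per day is well under one write per minute. Loads in that range are typically comfortable for a single small instance, so the deciding factors are more likely to be memory headroom for the Rails processes and the uptime requirement than raw throughput.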

    Read the article

  • Can I force a workstation to use a specific domain controller?

    - by Chad
    I'm on a domain where I can't control the domain controllers, but I can control my own systems. All the domain controllers are part of one site, and that cannot change. However, one of the domain controllers is not working correctly and the admins in charge of it are taking forever to resolve the issue. There are 6 domain controllers in total... for some reason my workstations/servers are still attempting to use the bad one to authenticate my users. Is there a way to force a workstation to use specific domain controllers? Or, better yet, to NOT use the bad one? Thanks in advance!
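    A hedged pointer, assuming standard Windows tooling: nltest can both show and reset which DC a machine's secure channel points at, which at least steers an individual workstation away from the bad one:

        REM which DC is this machine using now?
        nltest /sc_query:EXAMPLE.COM
        REM re-point the secure channel at a healthy DC
        nltest /sc_reset:EXAMPLE.COM\GOODDC01
        REM which DC handled the current logon?
        echo %LOGONSERVER%

    The reset is not sticky across DC Locator refreshes, so treat it as a mitigation rather than a pin.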

    Read the article

  • TTL and traceroute showing different values for same domain

    - by Cray XT3
    Why am I getting two different outputs for tracert and ping? The ping result suggests a total of 20 hops while tracert shows 8. The default TTL value on my Linux machine is 64, and the ICMP echo reply arrives with a TTL of 44; 64 - 44 = 20, but tracert shows only 8 hops. What can be the reason? If tracert is implemented using TTL, why am I getting different values for the same domain, no matter how many times I try? For Google and Google services the TTL-derived count and tracert agree, but for other domains they differ.
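    One hedged reading of those numbers: the 64 - 44 subtraction only holds if the remote host also starts its replies at a TTL of 64 (many stacks use 128 or 255 instead), and even then the reply TTL is decremented along the return path, whereas traceroute counts the forward path. With asymmetric routing the two paths can legitimately have different hop counts, so the 20-versus-8 gap does not by itself indicate anything broken.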

    Read the article

  • Reinstall Linux over SSH

    - by DoomStone
    Hello. I'm having a big problem with our development server: it has had a program called Webmin on it, plus a total idiot administrating the Linux server. This has now resulted in the server being totally trashed; there are so many different versions of the same programs installed that nothing works. And don't get me started on the users and groups :D At last I have been given the responsibility of administrating our development server, but I would like to start from scratch instead of trying to find every single package and config the previous admin has **ed up. The problem is that it is a remotely hosted server with only SSH access. The server is running Debian, but I am thinking of reinstalling it with Ubuntu Server. Thanks
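    One commonly cited approach, sketched here with heavy caveats since a mistake leaves a remote box unbootable: bootstrap a fresh system into spare disk space (or a temporarily repurposed swap partition), make it bootable, then pivot into it and rebuild the old root from there. Assuming Debian tooling:

        apt-get install debootstrap
        debootstrap stable /mnt/newroot http://deb.debian.org/debian
        # chroot into /mnt/newroot, set a root password, install a kernel,
        # openssh-server and grub, point the bootloader at the new root,
        # then reboot and reclaim the old partition from the new system

    If the hoster offers a rescue/recovery boot image or console access, that is a far safer route than doing this blind over SSH.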

    Read the article

  • Missing disk space in Windows XP

    - by Jørn Schou-Rode
    On my mother's Lenovo laptop, Windows XP claims that the hard drive is almost full. According to the properties window, 52.7 out of 55.2 GB is in use. By deleting temp files from Internet Explorer, System Restore, the Recycle Bin, Windows Update, and System Cleanup, I managed to free up about one GB. That's still 50 GB in use, which is still a lot more than I expected. Hence, I gave good old WinDirStat a spin, and the first line of its output says that the total amount of disk space in use on drive C is 24.3 GB. So Windows claims usage of 52.7 GB and WinDirStat can only account for 24.3 GB. Where is the other half of that disk space being used? I hope someone has an answer, or some tricks or tips for further research. UPDATE: The laptop in question has an SSD. I am aware that these disks (at least the earlier ones) have a limited lifetime. Could the symptoms described be caused by wear and tear on the SSD?
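    A hedged line of investigation: WinDirStat run as a limited user cannot see files it lacks permission to read, and Explorer-style totals skip hidden system files such as pagefile.sys and hiberfil.sys, so the gap is often space held by the page file, hibernation file, System Restore points, or NTFS metadata rather than SSD wear (wear does not silently halve reported capacity). Re-running WinDirStat as Administrator and checking the root with

        dir C:\ /a:hs

    would show what the normal view hides; chkdsk C: /f is also worth a pass in case the volume bitmap itself is wrong.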

    Read the article

  • Burn more than one ISO to one dvd?

    - by Doug
    I just turned 11 CDs with a total of 1 GB of stuff into ISOs. Is it possible for me to just burn all the ISO videos onto one DVD? Is there an alternative way for me to do it? Edit: it needs to be playable on any DVD player; it's videos for my grandparents. Update: I read that DVD Shrink should work (re-authoring software), but I didn't stick with it because when I imported the VIDEO_TS folders into the software, my video wasn't played widescreen, and I don't know how to fix it.

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
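    Two hedged notes on ext3 behaviour: a directory inode grows to index its entries but never shrinks when they are deleted, so the 233472-byte directory keeps its size (and lookup cost) even while empty; and without the dir_index (htree) feature, lookups in large directories degrade roughly linearly. A sketch of what to check and the usual workaround:

        # is dir_index enabled on the filesystem? (substitute the right device)
        tune2fs -l /dev/sda1 | grep -i features
        # recreating the directory is the standard way to reclaim the bloated inode
        # (sketch only; with thousands of entries, move them in batches):
        mkdir barcodes.new && mv barcodes/* barcodes.new/ && rmdir barcodes && mv barcodes.new barcodes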

    Read the article

  • Production Instance : CLOSE_WAIT Connection Issue

    - by rajnikant
    I am using 10 EC2 instances behind 1 ELB, with the ELB forwarding ports 80 and 443 to port 8080. All 10 EC2 instances have Apache Tomcat installed, and the ELB receives around 8,000 to 10,000 requests per minute. I am facing a problem with CLOSE_WAIT connections on the 10 EC2 instances running Apache Tomcat. EC2 instance type: m1.xlarge. When we restart Apache Tomcat, all the CLOSE_WAIT connections are cleared, but that is not a proper way to operate production instances. Please help me out.
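    For what it's worth, a hedged diagnostic: CLOSE_WAIT means the peer closed the connection but the local application never called close(), so a steadily growing count usually points at the webapp (or a connector/keep-alive setting) leaking sockets rather than at the ELB. Counting them per peer can show where they accumulate:

        netstat -tan | grep CLOSE_WAIT | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn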

    Read the article

  • Laptop overheating within minutes of start up

    - by Spik330
    I have a Dell laptop running Windows 7 Home Premium with an i7-720QM. More information on the computer can be found here: http://www.dell.com/support/home/us/en/04/product-support/servicetag/51CVCN1/configuration The problem I am having is that the computer overheats unnaturally fast. From boot to when I can run my diagnostic tools, which takes about two minutes, the CPU temperature reaches 86 °C; after a few more minutes it reaches 100 °C and the computer black-screens and shuts down. In total the laptop can only run for 3-5 minutes before shutting off completely, and during this time nothing intensive is running. After the laptop shuts down you have to wait for it to cool, or it will shut off even faster, sometimes within 7-15 seconds while still on the boot screen. Does anyone know what the problem could be? Maybe a sensor, or is the computer fried?

    Read the article

  • Can a switch consume bandwidth?

    - by aashiq
    I have a network with a router and a switch. My ISP's optical fiber is connected to the media converter's input port. An Ethernet cable from the media converter's output port runs to the switch, and from the switch another Ethernet cable runs to our internal MikroTik router. The router is then connected to another LAN switch, and from that LAN switch every other switch gets its connection. That is our entire network structure, and our bandwidth is 2 Mbps. Since the 11th of March our MRTG graph has shown high traffic at all times, even when every switch except the LAN switch is powered off, and as a result our voice calls are breaking up. How is this possible? Every PC's bandwidth is limited, but when a PC is connected directly to the media converter the graph looks normal, so I can't blame my ISP.

    Read the article

  • apache requests failing

    - by Josh
    I'm trying to figure out why the client sometimes fails to load objects/requests from a dynamic page served by an Apache/MySQL/Debian machine. Say 13 objects are to be loaded for a total of 185.3 KB, with no external objects (no DNS lookups) and no other traffic at the same time; randomly, some of those objects do not load. If I refresh, sometimes all of them load and sometimes some fail again. I only have 1 Mbps upstream, and my DNS is hosted externally (EveryDNS). What could be the reason for this issue? Any comments will be appreciated.
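    A hedged starting point: Apache logs a warning when it runs out of worker slots, which produces exactly this kind of intermittent stall on a small server, so the error log is worth checking before anything else:

        tail -n 200 /var/log/apache2/error.log | grep -i 'maxclients\|server reached'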

    Read the article

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables, 50 of which are small two-field 'id-data' lookup tables. Several of these DBs are hosted on a shared server, and I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to too many tables. My question is: could/should the 50 two-field lookup tables be combined into a single three-field table with 'id-field_name-data' fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the DBs to a dedicated server at a much higher hosting cost. I don't believe my 200-table DBs are actually causing any performance issues on this shared hosting server, at least not from the user application standpoint. There are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these DBs.
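    A sketch of the combined table, with illustrative names only; the compound primary key keeps lookups as cheap as the per-table version:

        CREATE TABLE lookup (
          field_name VARCHAR(64)  NOT NULL,  -- which old lookup table this row came from
          id         INT          NOT NULL,
          data       VARCHAR(255) NOT NULL,
          PRIMARY KEY (field_name, id)
        );

        -- old: SELECT data FROM color_lookup WHERE id = 3;
        -- new: SELECT data FROM lookup WHERE field_name = 'color' AND id = 3;

    The PHP churn is real but mechanical: every join against a lookup table gains one extra equality predicate on field_name.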

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to Server Fault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array of 2 x Samsung 840 Pro SSDs (512GB). Problems: serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s

        real    0m23.680s
        user    0m0.000s
        sys     0m0.932s

    When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:

        Device: rrqm/s  wrqm/s    r/s     w/s  rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
        sda       0.00 1589.50   0.00   54.00    0.00 13148.00   243.48     0.60   11.17   0.46   2.50
        sdb       0.00 1627.50   0.00   16.50    0.00  9524.00   577.21   144.25 1439.33  60.61 100.00
        md1       0.00    0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00   0.00   0.00
        md2       0.00    0.00   0.00 1602.00    0.00 12816.00     8.00     0.00    0.00   0.00   0.00
        md0       0.00    0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00   0.00   0.00

    And it keeps it this way until it actually writes the file to the device (out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is also different between the 2 members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count     0x0013   094   094   000    Pre-fail  Always       -       180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count     0x0013   070   070   000    Pre-fail  Always       -       1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot):

        Device: rrqm/s  wrqm/s     r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
        sda      10.44   51.06  790.39 125.41  8803.98 1633.11    11.40     0.33  0.37  0.06  5.64
        sdb       9.53   58.35  322.37 118.11  4835.59 1633.11    14.69     0.33  0.76  0.29 12.97
        md1       0.00    0.00    1.88   1.33    15.07   10.68     8.00     0.00  0.00  0.00  0.00
        md2       0.00    0.00 1109.02 173.12 10881.59 1620.39     9.75     0.00  0.00  0.00  0.00
        md0       0.00    0.00    0.41   0.01     3.10    0.02     7.42     0.00  0.00  0.00  0.00

    What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing. I have checked the alignment and I believe the partitions are aligned correctly (1 MB boundary, listing below). I have checked /proc/mdstat and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda

        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048     4196351     2097152   fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2   *     4196352     4605951      204800   fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3         4605952   814106623   404750336   fd  Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb

        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048     4196351     2097152   fd  Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2   *     4196352     4605951      204800   fd  Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3         4605952   814106623   404750336   fd  Linux raid autodetect

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204736 blocks super 1.0 [2/2] [UU]

        md2 : active raid1 sdb3[1] sda3[0]
              404750144 blocks super 1.0 [2/2] [UU]

        md1 : active raid1 sdb1[1] sda1[0]
              2096064 blocks super 1.1 [2/2] [UU]

        unused devices: <none>

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda:
         Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
        root [~]# hdparm -t /dev/sdb
        /dev/sdb:
         Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda:
         Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb:
         Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both readings increase, but /dev/sdb doubles while /dev/sda increases by maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s
        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out whether sdb is doing more than sda?

    UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!

    Read the article

  • OCZ Agility 3 SSD - Incorrect capacity displayed

    - by Chris
    Just installed a 60GB OCZ Agility 3 SSD and put Windows and various other applications on there, all working fine. However, when I look at the drive in Windows 7, it says that I have 1.5GB free, but when I select all folders on the drive and view the properties to see the combined file size, it says that the total is 28.9GB. So I'm effectively losing half of my capacity!! Any ideas on what this could be? PC spec: Windows 7, 60GB OCZ Agility 3 SSD. Thanks, Chris
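    A hedged explanation rather than a confirmed one: selecting folders in Explorer skips hidden system files, and on a small system drive the page file and hibernation file alone (pagefile.sys is typically the RAM size, hiberfil.sys about 75% of it) can swallow much of the "missing" space, with System Restore taking more. Checking from a command prompt shows what the selection missed:

        dir C:\ /a:hs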

    Read the article

  • Can't create a file even if rights allow and I've relogged in

    - by stiv
    I'm trying to create a file in a folder with group write access; user tomcat7 is in the group. Why isn't it working?

        skr@konrad~/data/asu$ sudo -u tomcat7 sh
        $ whoami
        tomcat7
        $ echo > /home/skr/data/asu/g.gz.index
        sh: 2: cannot create /home/skr/data/asu/g.gz.index: Permission denied
        $ ls -la /home/skr/data/asu/
        total 18708
        drwxrwxr-x  2 skr skr 4096 Sep 29 08:38 .
        drwxrwxr-x 85 skr skr 4096 Jul 30 00:42 ..
        $ grep ^skr /etc/group
        skr:x:1002:tomcat7:mail

    I tried logging out and back in, but it doesn't help. Any ideas?
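    A hedged thing to check: group write on the target directory is not enough, since every directory along the path also needs execute (search) permission for the user, and /home/skr itself is often mode 700. Something like

        sudo -u tomcat7 namei -l /home/skr/data/asu/g.gz.index

    walks the full path and prints the mode of each component, which usually makes the blocking directory obvious.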

    Read the article

  • Exchange Management Console Listing Extra Servers

    - by Zak G
    I recently decommissioned an old Exchange 2003 server. Prior to the decommissioning, the environment consisted of 2 Exchange 2010 servers and that one Exchange 2003 server. I went through all the steps listed in Microsoft documents and online for removing the server completely (pruning AD and all that jazz). Exchange Management Console still lists 3 total servers in the "Server Summary" column of the Organizational Health tab, but when I click "manage servers" it only lists the 2 Exchange 2010 servers. I am aware this is only a cosmetic issue, but I would appreciate it if anyone could share some advice on how to fix it.
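    A hedged place to look: the Organizational Health counts are read from the Exchange configuration data in Active Directory, so a leftover server object under the old administrative group (visible with ADSI Edit in the Configuration partition, under CN=Microsoft Exchange, CN=Services) would explain a count of 3 while "manage servers" shows 2. Deleting objects there by hand is risky, so confirm the object really is the decommissioned 2003 server before touching anything.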

    Read the article

  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275 MB file from said device through netatalk, I get an estimated download time of around 45 min. With Samba and FTP (using vsftpd) I get 1+ hours! Using scp, the copy completes within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal hdd as an intermediate location/onion-layer for the external hdd (for reads only)? Details from AppleVolumes.default:

        /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv
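    A hedged way to split the problem before blaming any one daemon: measure the raw network path and the raw disk path separately, since the scp result already proves both can be fast in isolation:

        # on the server (assuming iperf is installed)
        iperf -s
        # on the client: raw TCP throughput to the server
        iperf -c server.example.com
        # on the server: raw read speed from the USB disk, bypassing the sharing daemons
        dd if=/mnt/ext/bigfile of=/dev/null bs=1M

    If both are fast, suspicion narrows to how netatalk/samba/vsftpd read the ext2 volume (small read sizes, locking, or CNID database overhead in netatalk's case).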

    Read the article
