Search Results

Search found 6028 results on 242 pages for 'total commander'.

Page 116/242

  • HAProxy access list using path_dir having issues with Firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

      global
          maxconn 4096   # Total Max Connections. This is dependent on ulimit
          nbproc 2

      defaults
          mode http

      frontend all 0.0.0.0:80
          timeout client 86400000
          default_backend web_servers
          acl is_stream path_dir socket.io
          use_backend stream_servers if is_stream

      backend web_servers
          balance roundrobin
          option forwardfor   # This sets X-Forwarded-For
          timeout server 30000
          timeout connect 4000
          server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

      backend stream_servers
          balance roundrobin
          option forwardfor   # This sets X-Forwarded-For
          timeout queue 5000
          timeout server 86400000
          timeout connect 86400000
          server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths with /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for dev (not sure if that has anything to do with it), using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
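
    One possible explanation (an assumption, not something confirmed above): in HAProxy 1.4 the default HTTP mode only inspects the first request of each keep-alive connection, so if Firefox reuses a connection that was first routed to web_servers, later /socket.io/ requests on that connection never reach the ACL. A minimal sketch of the frontend with per-request processing enabled, using path_beg as an alternative matcher:

      frontend all 0.0.0.0:80
          timeout client 86400000
          option http-server-close           # evaluate ACLs on every request, not just the first
          acl is_stream path_beg /socket.io/
          use_backend stream_servers if is_stream
          default_backend web_servers

    If that changes Firefox's behaviour, the difference was connection reuse rather than header parsing.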

    Read the article

  • Deleting MySQL rows causes lock table error

    - by Dave L
    I had a couple million rows to delete but they can't be deleted at once without this error ERROR 1206 (HY000): The total number of locks exceeds the lock table size So I wrote a script to delete 100,000 rows 10,000 at a time. It ran once but when I run it a second time I get the error on the first attempt to delete 10,000. The way I'm trying to delete the 10,000 rows is to use a delete statement that refers to all 2 million rows but I use a limit clause to affect only 10,000. I've tried adding an "unlock tables;" statement to the script before the first delete but that doesn't help. I still get the lock table error on the first delete. Any ideas how I can do this? Is there a way I can tell it NOT to lock records? I can make sure nothing else is accessing the table.
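
    Error 1206 usually means InnoDB ran out of buffer pool space to hold the row locks for one huge transaction, so the two common workarounds are raising innodb_buffer_pool_size or keeping each DELETE small so no single statement needs millions of locks. A rough sketch of the batched approach, with hypothetical database, table and WHERE-clause names:

      #!/bin/bash
      # Delete 10,000 rows per statement until nothing matches; each statement
      # is its own transaction, so the lock table stays small.
      while true; do
          ROWS=$(mysql -N -e "DELETE FROM mytable WHERE created < '2010-01-01' LIMIT 10000; SELECT ROW_COUNT();" mydb)
          [ "$ROWS" -eq 0 ] && break
          sleep 1   # breathing room for anything else using the table
      done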

    Read the article

  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD UUID in Mountain Lion, but diskutil info /dev/disk1s1 returns:

      Najis-MacBook-Air:~ ****$ diskutil info disk1s1
         Device Identifier:        disk1s1
         Device Node:              /dev/disk1s1
         Part of Whole:            disk1
         Device / Media Name:      Untitled 1
         Volume Name:              My Book
         Escaped with Unicode:     My%FF%FE%20%00Book
         Mounted:                  Yes
         Mount Point:              /Volumes/My Book
         Escaped with Unicode:     /Volumes/My%FF%FE%20%00Book
         File System Personality:  NTFS
         Type (Bundle):            ntfs
         Name (User Visible):      Windows NT File System (NTFS)
         Partition Type:           Windows_NTFS
         OS Can Be Installed:      No
         Media Type:               Generic
         Protocol:                 USB
         SMART Status:             Not Supported
         Total Size:               2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks)
         Volume Free Space:        212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks)
         Device Block Size:        512 Bytes
         Read-Only Media:          No
         Read-Only Volume:         Yes
         Ejectable:                Yes
         Whole:                    No
         Internal:                 No

    And there is no UUID. What is wrong exactly? Thank you.

    Read the article

  • Hyperic HQ metrics not working

    - by Robin Weston
    I am having a problem with Hyperic HQ. Several metrics, some in IIS 6.x (Request Execution Time, Request Wait Time) and some in .NET 2.0 (Bytes in all Heaps, Exceptions Thrown per Minute), always show 0. If I view perfmon on the server itself I can see that the counters have values greater than zero. There are some metrics that work fine, such as Total Get Requests per Minute and the other IIS defaults. I have looked in the server logs but nothing obvious shows up. Please advise; I am happy to provide more information if required.

    Read the article

  • Creating a FAT file system and saving it into a file in GNU/Linux?

    - by RubenT
    I tell you my problem: I want to create a FAT file system and save it into a file, so I can mount it in Linux using something like:

      sudo mount -t msdos <file> <dest_folder>

    Maybe I'm wrong and this cannot be done. Anyway, the problem is this: I'm trying to create the file containing a FAT file system, and I'm running this command:

      sudo mkfs.vfat -F 32 -r 112 -S 512 -v -C "test.fat" 100

    That, according to the mkfs man page, should create a FAT32 file system with 112 rootdir entries, a logical sector size of 512 bytes and 100 blocks in total, and save it into "test.fat". But it fails, and bash tells me:

      mkfs.vfat: unable to create test.fat

    What is going on? I think I am misunderstanding how mkfs works and how to use it. Is it possible to write a filesystem into a file?
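
    A sketch of an alternative route that usually works (assuming a loop-mount is acceptable): pre-create the image file, run mkfs.vfat on it, then mount it through the loop device. Note also that 100 blocks (100 KB with the default 1 KB block size) is far too small for FAT32, which needs tens of megabytes at a 512-byte sector size, so even the -C form would need a much larger block count.

      # create a 100 MB image, format it as FAT32, and loop-mount it
      dd if=/dev/zero of=test.fat bs=1M count=100
      mkfs.vfat -F 32 -S 512 test.fat
      sudo mkdir -p /mnt/fatimg
      sudo mount -o loop -t vfat test.fat /mnt/fatimg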

    Read the article

  • Reinstall Linux over SSH

    - by DoomStone
    Hello. I'm having a large problem with our development server: it has had a program called Webmin on it, plus a total idiot has been administrating the Linux server. This has now resulted in the server being totally trashed; there are so many conflicting installs of the same programs that nothing works. And don't get me started on the users and groups :D Well, at last I have been given the responsibility to administrate our development server. But I would like to start from scratch instead of trying to find every single package and config the previous admin has **ed up. The problem is that it is a remotely hosted server with SSH access. The server is running Debian, but I am thinking of reinstalling it with Ubuntu Server. Thanks
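
    A very rough sketch of how a remote reinstall over SSH is sometimes done (the partition and mirror below are placeholders, and rescue-console access from the hosting provider is strongly advisable before attempting anything like this): bootstrap a minimal Debian/Ubuntu into an unused partition with debootstrap, make it bootable, then reboot into it and rebuild the old root from there.

      apt-get install debootstrap
      mkfs.ext3 /dev/sdaN                               # spare partition (placeholder)
      mount /dev/sdaN /mnt/newroot
      debootstrap stable /mnt/newroot http://ftp.debian.org/debian
      chroot /mnt/newroot
      # inside the chroot: set a root password, install a kernel, openssh-server
      # and grub, write /etc/fstab and the network config, then point the
      # bootloader at the new root and reboot -- carefully.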

    Read the article

  • Can I force a workstation to use a specific domain controller?

    - by Chad
    I'm on a domain where I can't control the domain controllers, but I can control my own systems. All the domain controllers are part of one site, and that cannot change. However, one of the domain controllers is not working correctly and the admins in charge of it are taking forever to resolve the issue. There are 6 domain controllers in total, and for some reason my workstations/servers are still attempting to use the bad one to authenticate my users. Is there a way to force a workstation to use specific domain controllers? Or, better yet, force it to NOT use the bad one? Thanks in advance!
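
    A hedged sketch using nltest (domain and DC names are placeholders; depending on the Windows version the tool ships with the Support Tools or RSAT). It shows which DC the machine located and lets you re-point the machine's secure channel at a specific, known-good DC; the DC locator can still wander back later, so blocking the bad DC's IP at the workstation firewall is the blunter "never use it" option.

      rem EXAMPLE = NetBIOS domain name, GOOD-DC-01 = the DC you want to use
      nltest /dsgetdc:example.local
      nltest /sc_query:EXAMPLE
      nltest /sc_reset:EXAMPLE\GOOD-DC-01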

    Read the article

  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS on performance counters using Disk reads/sec and Disk writes/sec on four servers (physical boxes, no virtualization) that have 4x 15k 146GB SAS drives in RAID10 per server, set to check and record data every 1 second, and logged for 24 hours before stopping reports. These are the results we got:

      Server1  Maximum disk reads/sec: 4249.437   Maximum disk writes/sec: 4178.946
      Server2  Maximum disk reads/sec: 2550.140   Maximum disk writes/sec: 5177.821
      Server3  Maximum disk reads/sec: 1903.300   Maximum disk writes/sec: 5299.036
      Server4  Maximum disk reads/sec: 8453.572   Maximum disk writes/sec: 11584.653

    The average disk reads and writes per second were generally low, i.e. for one particular server it was an average of 33 writes/sec, but when monitoring in real time it would often spike up to several hundreds and sometimes into the thousands. Could someone explain to me why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size of 1 GB, write cache is disabled, array accelerator cache ratio is 25% read and 75% write.
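
    For reference, the back-of-the-envelope figures the question is implicitly comparing against (assuming roughly 180 IOPS per 15k spindle, itself an approximation for small random I/O):

      reads : 4 drives x 180         = ~720 IOPS
      writes: (4 drives x 180) / 2   = ~360 IOPS   (RAID10: every write lands on two mirrors)

    The ~180 IOPS figure only applies to small random I/O, though; sequential transfers, controller caching and command queuing can push momentary per-second counter peaks far higher, and 1-second perfmon samples are exactly the kind of measurement that catches such bursts.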

    Read the article

  • How to get the maximum file size of a VZFS partition?

    - by Nulldevice
    I have a VPS hosting with a VZFS file system. How can I determine the maximum file size on a VZFS partition? UPD: Free space (or total space) is not what I need. Sometimes a file cannot occupy a whole partition volume - FAT16 with its 2 GB limit is a good example. I need to use a large database file (say, 64 GB) and so I need to know if the file system of the VPS hosting will cope with it. It is easy to calculate for an ext3 filesystem using tune2fs, but the VPS uses VZFS by Virtuozzo, and it is poorly documented. Is there any generic way to calculate the maximum file size for a filesystem in Linux?
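
    If no documented limit turns up, one hedged, purely empirical check is to create a sparse file of the target size: it allocates almost no real blocks, and if the filesystem (or the VPS quota layer) cannot represent a 64 GB file, creating it or writing to its end fails immediately.

      truncate -s 64G bigfile     # or: dd if=/dev/zero of=bigfile bs=1 count=1 seek=64G
      ls -lh bigfile              # apparent size
      du -h bigfile               # blocks actually allocated
      rm bigfile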

    Read the article

  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor downloaded 100MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). Would like to avoid using a proxy (like Squid) if possible.
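
    A hedged sketch of what stock iptables can do: the quota match keeps one static byte counter per rule, so it only works cleanly if the client IPs can be enumerated (one ACCEPT/DROP pair per address) and the rules are reloaded on a schedule to get the "per day" reset; truly dynamic per-visitor quotas usually end up needing xt_quota2, an accounting script, or an application-level limit. The address and the reload script below are placeholders.

      # cap traffic served to one client at 100 MB, then drop
      iptables -A OUTPUT -p tcp --sport 80 -d 203.0.113.10 \
               -m quota --quota 104857600 -j ACCEPT
      iptables -A OUTPUT -p tcp --sport 80 -d 203.0.113.10 -j DROP

      # crontab entry: reload the rules at midnight so the counters start over
      0 0 * * * root /usr/local/sbin/reload-quota-rules.sh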

    Read the article

  • Burn more than one ISO to one DVD?

    - by Doug
    I just turned 11 CDs with a total of 1 GB of stuff into ISOs. Is it possible for me to just burn all the ISO videos onto one DVD? Any alternative way for me to do it? Edit: I need it to be playable on any DVD player; it's videos for my grandparents. Update: I read that DVD Shrink should work (re-authoring software), but I didn't try it because when I imported the VIDEO_TS folders into the software, my video wasn't playing in widescreen, and I don't know how to fix it.

    Read the article

  • TTL and traceroute showing different values for same domain

    - by Cray XT3
    Why am I getting two different outputs for tracert and ping? The ping result shows a total of 20 hops and tracert shows 8. The default TTL value on my Linux machine is 64 and the ICMP echo reply TTL value is 44; 64 - 44 = 20, but tracert is showing only 8 hops. What can be the reason? If tracert is implemented using TTL, then why am I getting different values for the same domain, no matter how many times I try? For Google and Google services the TTL value and tracert agree, but for other domains they differ.
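
    Two hedged observations that may explain part of the gap: the 64 - 44 = 20 arithmetic assumes the remote host also starts its replies at TTL 64 (many systems use 128 or 255) and it counts the return path, while traceroute counts the forward path, and the two need not be the same length; also, Linux traceroute probes with UDP by default while ping uses ICMP, so routers and firewalls can treat them differently. Comparing an ICMP-based trace against ping makes the test more like-for-like:

      traceroute example.com       # default UDP probes
      traceroute -I example.com    # ICMP echo probes, closer to what ping sees
      ping -c 3 example.com        # note the TTL field in the replies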

    Read the article

  • Determine server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given: max load will be roughly 2400 (40*60) HTTP requests (mixed GET / POST) per hour; 15% of these calls are JSON calls (iOS); the average request will make between 5-10 database calls; 500-800 SQL INSERTs per day; webpages are fairly simple (no images, just text); the average webpage is 15 requests (css/js/etc) and the total size is 35-45 KB. More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance (small/medium) and utilization (light/medium/heavy)? Thanks!
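
    Rough arithmetic from the numbers given (an estimate only, ignoring spikes and background jobs):

      2400 dynamic requests/hour      = ~0.7 requests/second at peak
      0.7 req/s x 5-10 DB calls       = ~4-7 queries/second
      500-800 INSERTs/day             = well under 1 write/second
      2400/hour x 35-45 KB            = ~25-30 KB/s of page traffic

    On paper that load is tiny, so the deciding factors are usually RAM for the Rails worker processes and the MySQL buffer pool, plus how much headroom is wanted for uptime, rather than raw request volume.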

    Read the article

  • Missing disk space in Windows XP

    - by Jørn Schou-Rode
    On my mother's Lenovo laptop, Windows XP claims that the hard drive is almost full. According to the properties window, 52.7 out of 55.2 GB is in use. By deleting temp files from Internet Explorer, System Restore, the Recycle Bin, Windows Update and System Cleanup, I managed to free up about one GB. That's still 50 GB in use, which is still a lot more than I expected. Hence, I gave good old WinDirStat a spin, and its first output line says that the total amount of disk space in use on drive C is 24.3 GB. So Windows claims usage of 52.7 GB and WinDirStat can only account for 24.3 GB. Where is the other half of that disk space being used? I hope someone has an answer, or some tricks or tips to do further research. UPDATE: The laptop in question has an SSD hard drive. I am aware that these disks (at least the earlier ones) have a limited lifetime. Could the symptoms described be caused by wear and tear on the SSD?

    Read the article

  • OCZ Agility 3 SSD - Incorrect capacity displayed

    - by Chris
    Just installed a 60GB OCZ Agility 3 SSD and put Windows and various other applications on there. All working fine. However, when I look at the drive in Windows 7, it says that I have 1.5GB free, but when I select all folders on the drive and view the properties to see the combined file size, it says that the total is 28.9GB. So I'm effectively losing half of my capacity!! Any ideas on what this could be? PC spec: Windows 7, 60GB OCZ Agility 3 SSD. Thanks, Chris
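
    A likely, but unconfirmed, explanation: selecting folders in Explorer skips hidden and system files such as pagefile.sys, hiberfil.sys and the System Volume Information folder, which on a small SSD can easily add up to tens of GB. A quick way to check from an elevated Command Prompt:

      rem /a includes hidden and system entries that a folder selection misses
      dir /a C:\
      dir /a C:\pagefile.sys C:\hiberfil.sys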

    Read the article

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables. 50 of these are small 2-field 'id-data' lookup tables. Several of these DB's are hosted on a shared server. I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to too many tables. My question is: Could/Should the 50 2-Field lookup tables be combined into a single 3-field table with 'id-field_name-data' Fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the DB's to a dedicated server at much higher hosting cost. I don't believe my 200 table DB's are actually causing any performance issues on this shared hosting server, at least not from the user application standpoint. There are never more than 10 of these tables joined in any single query; although I have seen some very-slow queries generated by phpmyadmin on these DB's.
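
    If the tables do get merged, a minimal sketch of the combined schema (all names hypothetical, and the storage engine should match whatever the existing tables use):

      CREATE TABLE lookup (
          field_name VARCHAR(64)  NOT NULL,   -- which old lookup table the row belongs to
          id         INT UNSIGNED NOT NULL,
          data       VARCHAR(255) NOT NULL,
          PRIMARY KEY (field_name, id)
      );

      -- what used to be  SELECT data FROM country WHERE id = 5  becomes:
      SELECT data FROM lookup WHERE field_name = 'country' AND id = 5;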

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

      root@ad57rs0b# ls -l 15000PN5AIA3I6_B
      total 232
      drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
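
    Two hedged points worth checking (the device name is a placeholder): on ext3 a directory file never shrinks once it has grown, and if the dir_index feature is not enabled, name lookups in a big directory are linear scans, so creates get slower as the directory grows.

      tune2fs -l /dev/sdXN | grep -i features   # is dir_index in the feature list?
      tune2fs -O dir_index /dev/sdXN            # enable hashed (b-tree) directories for new dirs
      e2fsck -fD /dev/sdXN                      # with the fs unmounted: reindex/compact existing dirs

    Even with dir_index, recreating the directory (or rotating to a fresh one) is the usual way to shed the accumulated size.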

    Read the article

  • Apache requests failing

    - by Josh
    I'm trying to figure out why the client sometimes fails to load objects/requests from a dynamic page served from an Apache/MySQL/Debian machine. Let's say 13 objects are to be loaded for a total of 185.3 KB, with no external objects (no DNS lookups) and no other traffic at the same time; randomly, some of those objects do not load. However, if I refresh, sometimes all of them load or some might fail again. I only have 1 Mbps up, and my DNS is hosted externally (EveryDNS). What could be the reason for this issue? Any comments will be appreciated.

    Read the article

  • Laptop overheating within minutes of start up

    - by Spik330
    I have a Dell laptop running Windows 7 Home Premium with an i7-720QM. More information on the computer can be found here: http://www.dell.com/support/home/us/en/04/product-support/servicetag/51CVCN1/configuration The problem I am having is that the computer overheats unnaturally fast. By the time I can run my diagnostic tools after boot, which takes about two minutes, the CPU temp is 86°C; after a few more minutes the CPU temp reaches 100°C and the computer black-screens and shuts down. In total the laptop can only run for 3-5 minutes before completely shutting off. During this time there is nothing intensive running. After the laptop shuts down you have to wait for it to cool down, or it will shut off even faster - sometimes 7-15 seconds, while still in the boot screen. Does anyone know what the problem could be? Maybe a sensor, or is the computer fried?

    Read the article

  • Production Instance : CLOSE_WAIT Connection Issue

    - by rajnikant
    I am using 10 EC2 instances behind 1 ELB, with the ELB configured to forward ports 80 and 443 to port 8080. All 10 EC2 instances have Apache Tomcat installed, and the total requests on the ELB are around 8,000 to 10,000 per minute. I am facing a problem with CLOSE_WAIT connections on the 10 EC2 instances running Apache Tomcat. EC2 instance type: m1.xlarge. When we restart Apache Tomcat, all the CLOSE_WAIT connections go away, but that is not a proper way to operate production instances. Please help me out.
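
    For what it's worth, CLOSE_WAIT means the peer (here, typically the ELB closing an idle connection) has closed its side and the local application has not yet called close(), so large counts usually point at the application or connector not reaping closed connections rather than at the ELB itself. A hedged one-liner to see how many are piling up and toward which peers:

      netstat -ntp | awk '/CLOSE_WAIT/ {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -rn | head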

    Read the article

  • Optimal Disk Setup for OLTP SQL Server

    - by Chris
    We have a high-transaction (lots of reads and writes) database server (running SQL 2005) that is currently set up with a RAID 1 OS partition (C:) and a RAID 5 data/log/tempdb partition (D:). The C: has 2 drives and the D: has 4 drives. The server has around 300 databases ranging from 10MB to 2GB in size. I have been reading up on best practices for partitioning the disks, but would like some opinions on our setup since we are so limited in the number of disks. It seems like RAID 10 is popular, but I don't think we could use it with only 6 total disks to work with. Thanks. Update: I went with 3 RAID 1 arrays (2 disks each): Partition 1: OS, TempDB, Backups; Partition 2: Logs; Partition 3: Data.

    Read the article

  • Subversion installation on CentOS 5.8

    - by user57221
    I am trying to install Subversion on CentOS 5.8 using yum install subversion and it is throwing the error below.

      .....
      ....
      Total size: 7.3 M
      Is this ok [y/N]: y
      Downloading Packages:
      Running rpm_check_debug
      ERROR with rpm_check_debug vs depsolve:
      libapr-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
      libaprutil-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
      libapr-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
      apr is needed by (installed) httpd-2.2.22-12051516.x86_64
      /usr/lib64/libapr-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
      libaprutil-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
      apr-util is needed by (installed) httpd-2.2.22-12051516.x86_64
      /usr/lib64/libaprutil-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
      Complete!
      (1, [u'Please report this error in http://bugs.centos.org/yum5bug'])

    How do I resolve this?

    Read the article

  • MRTG: Switch Port Throughput

    - by amazinghorse24
    I currently have MRTG running on a Debian box. It polls a Netgear switch for the speeds of 7 or so ports and then graphs them, but it only records bits/sec. I would like to set up MRTG to record and display the total amount of data that has gone through each port, not just the speed. I am somewhat new to MIBs and SNMP, so I need some help. The switch is a Netgear GS748AT and I am not quite sure where to find the MIBs for it, or which MIBs I need to accomplish my task. Any and all help is appreciated!
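
    One hedged pointer: MRTG is normally already polling the standard IF-MIB octet counters, which are cumulative byte counts per interface (the 32-bit versions wrap at 2^32), so totals can be derived from the same data MRTG collects; reporting them usually means post-processing MRTG's log files or moving to an rrdtool-based frontend. To look at the raw counters directly (community string and hostname are placeholders):

      snmpwalk -v2c -c public switch01 1.3.6.1.2.1.2.2.1.2     # ifDescr: map port names to indexes
      snmpwalk -v2c -c public switch01 1.3.6.1.2.1.2.2.1.10    # ifInOctets: cumulative bytes in
      snmpwalk -v2c -c public switch01 1.3.6.1.2.1.2.2.1.16    # ifOutOctets: cumulative bytes out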

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: serious write speed problems:

      root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
      146+1 records in
      146+1 records out
      307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s
      real    0m23.680s
      user    0m0.000s
      sys     0m0.932s

    When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:

      Device:  rrqm/s   wrqm/s    r/s      w/s    rsec/s    wsec/s   avgrq-sz  avgqu-sz   await    svctm   %util
      sda        0.00  1589.50   0.00    54.00     0.00   13148.00    243.48      0.60    11.17     0.46    2.50
      sdb        0.00  1627.50   0.00    16.50     0.00    9524.00    577.21    144.25  1439.33    60.61  100.00
      md1        0.00     0.00   0.00     0.00     0.00       0.00      0.00      0.00     0.00     0.00    0.00
      md2        0.00     0.00   0.00  1602.00     0.00   12816.00      8.00      0.00     0.00     0.00    0.00
      md0        0.00     0.00   0.00     0.00     0.00       0.00      0.00      0.00     0.00     0.00    0.00

    And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is also different between the 2 members of the array:

      root [~]# smartctl --attributes /dev/sda | grep -i wear
      177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180
      root [~]# smartctl --attributes /dev/sdb | grep -i wear
      177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005

    The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as shown by the first iteration of iostat (the averages since reboot):

      Device:  rrqm/s   wrqm/s     r/s      w/s     rsec/s    wsec/s   avgrq-sz  avgqu-sz  await  svctm  %util
      sda       10.44    51.06   790.39   125.41   8803.98   1633.11      11.40      0.33   0.37   0.06   5.64
      sdb        9.53    58.35   322.37   118.11   4835.59   1633.11      14.69      0.33   0.76   0.29  12.97
      md1        0.00     0.00     1.88     1.33     15.07     10.68       8.00      0.00   0.00   0.00   0.00
      md2        0.00     0.00  1109.02   173.12  10881.59   1620.39       9.75      0.00   0.00   0.00   0.00
      md0        0.00     0.00     0.41     0.01      3.10      0.02       7.42      0.00   0.00   0.00   0.00

    What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but nothing. I have checked the alignment and I believe they are aligned correctly (1 MB boundary, listing below). I have checked /proc/mdstat and the array is Optimal (second listing below).

      root [~]# fdisk -ul /dev/sda
      Disk /dev/sda: 512.1 GB, 512110190592 bytes
      255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00026d59

      Device Boot      Start        End     Blocks  Id System
      /dev/sda1         2048    4196351    2097152  fd Linux raid autodetect
      Partition 1 does not end on cylinder boundary.
      /dev/sda2   *  4196352    4605951     204800  fd Linux raid autodetect
      Partition 2 does not end on cylinder boundary.
      /dev/sda3      4605952  814106623  404750336  fd Linux raid autodetect

      root [~]# fdisk -ul /dev/sdb
      Disk /dev/sdb: 512.1 GB, 512110190592 bytes
      255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0003dede

      Device Boot      Start        End     Blocks  Id System
      /dev/sdb1         2048    4196351    2097152  fd Linux raid autodetect
      Partition 1 does not end on cylinder boundary.
      /dev/sdb2   *  4196352    4605951     204800  fd Linux raid autodetect
      Partition 2 does not end on cylinder boundary.
      /dev/sdb3      4605952  814106623  404750336  fd Linux raid autodetect

    /proc/mdstat:

      root # cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdb2[1] sda2[0]
            204736 blocks super 1.0 [2/2] [UU]
      md2 : active raid1 sdb3[1] sda3[0]
            404750144 blocks super 1.0 [2/2] [UU]
      md1 : active raid1 sdb1[1] sda1[0]
            2096064 blocks super 1.1 [2/2] [UU]
      unused devices:

    Running a read test with hdparm:

      root [~]# hdparm -t /dev/sda
      /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
      root [~]# hdparm -t /dev/sdb
      /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

      root [~]# hdparm --direct -t /dev/sda
      /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
      root [~]# hdparm --direct -t /dev/sdb
      /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both tests increase, but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test, with dd this time, and it confirms the hdparm test:

      root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
      10+0 records in
      10+0 records out
      10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s
      root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
      10+0 records in
      10+0 records out
      10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE: Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!

    Read the article

  • Why can't I open my Access application in design mode?

    - by mmyers
    I have been given an Access 2007 application (mainly VB code) that I need to modify. It has been locked down for production, so the toolbars and so forth are not visible. However, it is a .mdb file, not .mde, so in theory it should be possible to get into design mode by holding Shift while opening it. But that method has only worked a total of three times out of the (probably) 60 or 70 times I've tried. I realize now that I should have enabled the toolbars while I had it open, but unfortunately hindsight doesn't get me anywhere now. Does anyone know what might be causing the problem? Is it my own fault, or the application's, or Access's?

    Read the article
