Search Results

Search found 22162 results on 887 pages for 'limit size'.

Page 61 of 887

  • max_binlog_size & log-bin size

    - by waza123
    I have a problem with a full disk:
      -rw-rw---- 1 mysql mysql 1073741982 2012-07-03 18:14 mysql-bin.000034
      -rw-rw---- 1 mysql mysql 1073741890 2012-07-04 14:39 mysql-bin.000035
      -rw-rw---- 1 mysql mysql 1073741988 2012-07-05 09:16 mysql-bin.000036
      -rw-rw---- 1 mysql mysql 1073741964 2012-07-06 00:04 mysql-bin.000037
      -rw-rw---- 1 mysql mysql 1073741974 2012-07-06 21:45 mysql-bin.000038
      -rw-rw---- 1 mysql mysql 1073741923 2012-07-07 19:05 mysql-bin.000039
      -rw-rw---- 1 mysql mysql 356167680 2012-07-07 23:47 mysql-bin.000040
    my.cnf:
      max_binlog_size = 1073741824
      log-bin = mysql-bin
      max_relay_log_size = 1G
      relay_log_space_limit = 2G
    MySQL keeps creating mysql-bin.xxxxx files and my disk is full after several days. How can I make MySQL delete the old logs?
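
    A minimal cleanup sketch, assuming the old binlogs are safe to discard (i.e. no replica still needs them); the 7-day retention is a hypothetical value:

      # One-off purge of binlogs older than 7 days:
      mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"

      # To keep them from piling up again, add to the [mysqld] section of my.cnf
      # and restart mysqld:
      #   expire_logs_days = 7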

  • FreeBSD Listen Queue Overflows - can't increase max queue size

    - by Harry
    I have a fairly high-traffic FreeBSD Nginx server, and I'm starting to see a large number of listen queue overflows:
      [root@svr ~]# netstat -sp tcp | fgrep listen
      80361931 listen queue overflows
      [root@svr ~]# netstat -Lan | grep "*.80"
      tcp4 192/0/128 *.80
      [root@svr ~]# sysctl kern.ipc.somaxconn
      kern.ipc.somaxconn: 12288
    However, I can't seem to increase the maximum listen queue length past 128. I've increased kern.ipc.somaxconn, but the maximum isn't changing. Am I missing something? Thanks!
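
    One thing worth noting: the 128 reported by netstat -L is the backlog the application itself passed to listen(2); kern.ipc.somaxconn is only the upper cap. A hypothetical nginx-side fix (4096 is an arbitrary example value):

      # in nginx.conf, inside the server block:
      #   listen 80 backlog=4096;
      # then reload/restart nginx so the new backlog takes effect:
      nginx -s reload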

  • Officially announced maximum RAM not reachable on one of two twin rigs with just one difference

    - by Deniz
    It will take a while to describe my situation, but here goes the story: In January 2009 we bought (as OEM parts) two similar systems with just one difference: one had a Phenom X4 CPU and the other (mine) a Phenom X3. From the beginning we had problems powering on both systems with all of their RAM slots populated. We decided to install the systems with just 2 slots populated and try to add the remaining sticks later. Both systems did end up supporting 3 sticks. We tried many different procedures to make the systems work with the fourth RAM slot populated: we waited for new BIOS updates and flashed the boards when they became available, we tried different RAM sticks with different frequencies, etc. One day, while we were trying to install the fourth stick, the X4 machine accepted it. The other one did not. The most mind-boggling part is that after one of my attempts the X3 system stopped working even with the third slot populated. Our boards have AMD 770 chipsets, and we even swapped the X3 machine's board for another 770-chipset board. Now my questions are: Should we change the CPU? What is causing the X3 system to reject the fourth (and now the third) RAM stick? The manufacturers' sites claim these boards accept 4 RAM sticks (but they only tested them with certain RAM brands and models). What are the limitations for maximum RAM configurations on motherboards? Are there any rules of thumb beyond the frequency, voltage, and chip-type considerations we already checked? Our boards are: Gigabyte GA-MA770-DS3 and Sapphire PC-AM2RX780 PURE CrossFireX 770.

  • Estimate compressed file size in tar.gz

    - by liori
    I've got a set of .tar.gz files, which are duplicity backup files (either full or incremental backups). I'd like to work out which directories take up the most space in the backups. This will most probably differ from which directories take up the most space on the live filesystem, because I need to account for how often files change (and therefore take up space in incremental backups) and how compressible they are. I know that while many other archive formats store compressed files as separate entities inside the archive, .tar.gz files do not, so it is impossible to get the exact amount of storage a single file occupies in the archive after compression. Are there any tools that can calculate at least an estimate?
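
    A rough do-it-yourself sketch, assuming GNU tar/gzip and no spaces in file names: scale each member's uncompressed size by the archive's overall compression ratio and aggregate per top-level directory.

      ARCHIVE=backup.tar.gz      # hypothetical file name
      # overall ratio = compressed bytes / uncompressed bytes
      # (note: gzip -l reports the uncompressed size modulo 4 GiB, so the ratio
      #  can be off for very large archives)
      ratio=$(gzip -l "$ARCHIVE" | awk 'NR==2 {print $1/$2}')
      tar -tzvf "$ARCHIVE" | awk -v r="$ratio" \
        '{split($NF, p, "/"); est[p[1]] += $3 * r}
         END {for (d in est) printf "%12.0f  %s\n", est[d], d}' | sort -rn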

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since High Definition video arrived on all the online sites, the default aspect ratio of the player has changed from 4:3 to 16:9. This means that people posting SD video have to resize some of their videos to get them to fit properly. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height), whereas low-end High Definition (720i/p) is 1280x720. Anyway, now that the video players are built for HD, uploading standard-definition videos results in videos that are "letterboxed", i.e. they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but to get a 720x480 video to fit a box designed for HD, the best practice would be to crop some of it off so that it fits as 720x404, since: 16/9 = 1.78 (1.7777777777778), 720/405 = 1.78, and 405 x 1.78 = 720.9. The same would apply to 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letterboxing/display problems.
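
    For what it's worth, a hypothetical ffmpeg invocation that does the crop described above (input.mp4 and output.mp4 are placeholder names; the crop filter centers the window by default):

      # crop a 720x480 source to 720x404 (~16:9), discarding the excess height
      ffmpeg -i input.mp4 -vf "crop=720:404" output.mp4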

  • SUnreclaim size increasing in sync with a TCP CLOSE_WAIT until application restart

    - by maver1k
    I recently found a behaviour where my application had a connection stuck in the TCP CLOSE_WAIT state until the app was restarted (after about 5 hours). During this period the SUnreclaim space was also increasing constantly, and it went down on restart. The application is running on a RHEL 5 OS and I'm not very familiar with the memory management system. I would appreciate it if someone could tell me what exactly the SUnreclaim space is and why it increases in sync with the CLOSE_WAIT connection. Thanks.
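
    For context: SUnreclaim in /proc/meminfo is the part of kernel slab memory that cannot be reclaimed under memory pressure, which plausibly includes structures pinned by a socket that is never closed. A couple of hypothetical checks to see which slab caches are growing:

      grep -i sunreclaim /proc/meminfo
      slabtop -o -s c | head -20    # slab caches sorted by cache size, one-shot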

  • Win'08 - Extend volume size on SAN attached storage in a failover cluster

    - by user53207
    Running Windows Server 2008, I'd like to extend a volume on a SAN-attached drive that is part of a failover cluster. The SAN team has allocated additional drive space, which is visible in Windows Storage Manager. However, the "Extend Volume" option is disabled, as is the ability to convert the disk to a dynamic disk. Is the ability to extend a volume disabled or unavailable when the volume is part of a failover cluster on SAN-attached storage?
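
    A hypothetical thing to try from the node that currently owns the disk: the Disk Management GUI sometimes greys out "Extend Volume" in cases where diskpart can still extend a basic volume into adjacent free space ("3" is a placeholder volume number).

      diskpart
      DISKPART> list volume
      DISKPART> select volume 3
      DISKPART> extend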

  • Tool to monitor file size, file existence, parse xml, etc

    - by Artur Carvalho
    I'm trying to find a tool that helps me monitor several things. The requirements:
      - Shows results on a web page
      - Checks existence of files/folders
      - Checks sizes of files/folders
      - Can parse XML files
      - Can report different statuses depending on, for instance, whether it's after 9pm
      - Pings workstations/servers to ensure they are on or off
      - Creates daily/weekly/monthly reports (PDF, HTML, CSV)
      - Shows daily/weekly/monthly scheduled tasks
      - Checks whether specific users are logged in on a machine
      - Checks which users are logged in on a machine
    I've looked into some solutions but could not find what I wanted. Tools like Nagios are usually more focused on servers, and Spiceworks is not specific enough. At this point I'm using a little PowerShell script that covers several of these items, but before losing more time (probably reinventing the wheel): what tools are out there? Thank you in advance.

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out whether my NFSv3 deployment is performing SAFE asynchronous writes. I suspect it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in write calls and to look for 'commit' calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:
      dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512
    However, with smaller block sizes, writes appear to be performed strictly synchronously:
      dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360
    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be written asynchronously?
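
    If it helps narrow this down, two hypothetical client-side checks for the negotiated write size and mount flags, since the negotiated wsize and a 'sync' mount option are two things that can force writes to be sent with the stable flag set:

      nfsstat -m             # per-mount NFS options, including rsize/wsize
      grep nfs /proc/mounts  # raw mount options as the kernel sees them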

  • SQL Server Log File Size Management

    - by Rob
    I have all my databases in full recovery and have log backups happening every 15 minutes, so my log files are usually pretty small. My question: if a nightly operation generates lots of transactions and causes my log files to grow, should I shrink them back down afterward? Does having a gigantic log file negatively affect database performance? Disk space isn't an issue at this time.
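
    If a one-off shrink after such a nightly job is wanted, a hypothetical sqlcmd invocation (MyServer, MyDb and MyDb_log are placeholder names; 1024 is a target size in MB):

      sqlcmd -S MyServer -d MyDb -Q "DBCC SHRINKFILE (MyDb_log, 1024);"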

  • Recommended boot partition size for Windows 7

    - by dwj
    I started using One Big Partition for everything, separating data out with folders, when I got my current computer years ago. I'm preparing to upgrade my system from Windows XP to Windows 7, and I thought I might go back to putting my data on a separate partition. Most likely I'll just use the default OS install. My current Program Files tree holds ~16 GB of stuff. Thinking ahead, though: I've had XP installed for years, and who knows what apps I'm going to install down the line? This, of course, raises the question: how big should I make my Windows 7 install partition?

  • Buying Dual Monitors of different size and resolution

    - by rutherford
    I'm about to choose a dual-monitor setup:
      1) Is there any reason I can't just walk out and buy the two TFT screens I like (a widescreen and a 'portrait' screen) and combine them? The widescreen would mainly be for gaming, and the portrait one for browsing. I'd want the desktop stretching from one to the other (i.e. drag the pointer and apps from one screen to the other).
      2) Do I need a separate graphics card for each monitor, or can one drive both? Is there any performance cost?
      3) Can I have separate background images for each, seeing as they'll be different resolutions?

  • NTFS partition size not recognized after disaster recovery clone

    - by djechelon
    I'm in the middle of a disaster recovery of a 250GB hard disk that was "clicking". Obviously I didn't have a backup copy. I managed to salvage all the files thanks to GParted Live, which was able to read the disk without a single "click". So I cloned the partition to a new 500GB drive. Unfortunately, the GParted process went into some kind of infinite loop, disk I/O stopped, and after a couple of hours I interrupted the clone process I had started. Now the problem: when cloning the partition I also chose to expand the 250GB to the full 500GB of the target disk. Windows sees the partition as 500GB in Computer Management, but Windows Explorer only sees 250GB. chkdsk e: /f says the filesystem is OK. How can I repair the filesystem and let Windows see the full 500GB of the new partition? An alternative idea is to deep-copy the files from the backup disk to a newly formatted disk; that should definitely fix it. Any other ideas?
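
    A hypothetical repair from the GParted Live environment (since the partition is already 500GB but the NTFS filesystem inside it is still 250GB): let ntfsresize grow the filesystem to fill the partition; /dev/sdX1 is a placeholder for the actual device.

      ntfsresize --info /dev/sdX1   # dry-run report on current vs. possible size
      ntfsresize /dev/sdX1          # with no size given, grows to the partition size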

  • Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following line (line break inserted for better readability):
      2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"
    When stepping through the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType(), where $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif and $_SERVER['REQUEST_URI'] is http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL):
      list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);
    The requested filename is a URL pointing back to the same server that is executing the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. Once the above line executes, no subsequent request to the NGINX server gets a response any more. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash anything. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
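
    One possible explanation for the symptom (everything blocked until a timeout several minutes later) is that the looped-back request is queued to the same PHP FastCGI pool that issued it, so a small worker pool can end up waiting on itself. Two hypothetical checks, assuming a standard PHP FastCGI setup behind the fastcgi://127.0.0.1:9024 upstream:

      php -i | grep default_socket_timeout   # how long getimagesize() blocks on a URL
      ps ax | grep -c "[p]hp"                # how many PHP FastCGI workers are running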

  • Size of bad data on an HD in Windows 7

    - by acidzombie24
    While using my external hard drive (NTFS) I got a CRC32 error. Now I would like to see how much data is corrupted. If it's a few KB I won't mind, but if it's a few MB I should consider getting a new hard drive. How can I check this using Windows 7?
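
    A hypothetical way to get a number (X: is a placeholder for the external drive's letter): a surface scan with chkdsk reports the total "KB in bad sectors" in its summary.

      chkdsk X: /r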
