Search Results

Search found 21107 results on 845 pages for 'size optimization'.

  • apache/nginx html file size limit

    - by Daniel
    When serving/sending HTML files to a user's browser, where can I reconfigure this size limit? I want to send extremely large HTML files to users via Apache and nginx. The files are being truncated in Apache/nginx; what setting determines the maximum file size?
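
    A hedged starting point, assuming the truncation happens while nginx is proxying to a backend: response-buffering limits and a full or unwritable proxy temp directory are common culprits (check error.log for failed writes to the proxy temp path). The directives below are real nginx options, but the upstream name and values are illustrative only:

        location / {
            proxy_pass http://backend;       # hypothetical upstream
            proxy_buffer_size 64k;           # in-memory buffer for the response header
            proxy_buffers 16 64k;            # in-memory buffers for the response body
            proxy_max_temp_file_size 2048m;  # cap on temp-file buffering for large replies
        }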

  • Exchange 2007 | Mailbox DB Size 180GB

    - by rihatum
    Hi all, I have an Exchange 2007 SP1 server running on Windows 2008 with 6 hard drives: the OS, DB, and logs each sit on their own RAID-1 pair. The mailbox database is 183GB and increasing, and we only have a First Storage Group and a Second Storage Group. There is no more space in the server to install new physical disks and create another storage group. Q - Can I resize the RAID-1 partition where the DB is? Q - Any other suggestions as to how I can decrease the mailbox DB size? I will be grateful for your suggestions on this. Kind regards
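
    A hedged sketch of the standard approach (the database and file names below are assumptions; substitute your own): find the largest mailboxes from the Exchange Management Shell, move or archive them, then reclaim the whitespace with an offline defragmentation. Note that eseutil /d needs scratch space roughly the size of the database, and the store must be dismounted while it runs:

        # Exchange Management Shell: largest mailboxes first
        Get-MailboxStatistics -Database "First Storage Group\Mailbox Database" |
            Sort-Object TotalItemSize -Descending |
            Select-Object DisplayName, TotalItemSize, ItemCount

        # after moving/purging mailboxes, compact the dismounted database
        eseutil /d "D:\ExchData\Mailbox Database.edb"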

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks of APC, but something is still not quite right: no matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following

        echo '2147483648' > /proc/sys/kernel/shmmax

    to increase my system's shared memory to 2G (half of my 4G box), then ran ipcs -lm, which returns

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf, then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                 3.1.9
        APC Debugging           Disabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apc.bPS7rB
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             On
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    apc.php shows the following graph no matter how long the server runs: the cache size fluctuates and hovers at just under 32MB (see http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by refreshing the apc.php page: the cached file counts move up and down, implying that the cache is not holding onto all of its files. Does anyone have an idea of how to get APC to use more than 32MB for its cache size? Note that identical behavior occurs with eAccelerator, XCache, and APC. I read at http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
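
    One hedged sanity check before blaming shared memory: confirm which ini files the PHP SAPI actually loads and what apc.shm_size resolves to at runtime, since a stray file such as /etc/php.d/apc.ini can silently override php.ini (the paths below are typical CentOS locations, not certainties):

        php --ini                                      # ini files the CLI loads; compare with phpinfo() for the web SAPI
        php -r 'var_dump(ini_get("apc.shm_size"));'    # effective value as PHP sees it
        grep -R "shm_size" /etc/php.ini /etc/php.d/    # hunt for competing settings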

  • How to log size of cookies in request header with apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header sizes are, specifically for the cookies. I've created an Apache log format that captures the cookies sent on each request:

        LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have Apache log just the size (or just the length of the string) per request?
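
    To my knowledge mod_log_config cannot compute a string length itself, so one hedged workaround is to keep logging the raw header and measure it offline (assuming the format above writes to cookies.log):

        # bytes in the Cookie header, one number per request
        awk '{ print length($0) }' cookies.log | sort -n | tail   # largest offenders

        # quick average across all requests
        awk '{ total += length($0) } END { if (NR) print total / NR }' cookies.log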

  • File size limit exceeded in bash

    - by yboren
    I have tried this shell script on a SUSE 10 server (kernel 2.6.16.60, ext3 filesystem), and it fails:

        cat file | awk '{print $1" "$2" "$3}' | sort -n > result

    The file's size is about 3.2G, and I get this error message: File size limit exceeded. In this shell, ulimit -f is unlimited. After I changed the script to

        cat file | awk '{print $1" "$2" "$3}' > tmp
        sort -n tmp > result

    the problem was gone. I don't know why; can anyone help me with an explanation?
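
    A hedged place to start: "File size limit exceeded" is the shell reporting SIGXFSZ, which the kernel raises when a process writes past its RLIMIT_FSIZE, so check the limits in the exact environment the script runs under (cron, su, etc.), not just your interactive shell. Also note that sort writes temporary files (see its -T option), so the temp directory's filesystem matters too:

        #!/bin/sh
        # print the limits this script itself sees
        ulimit -a
        ulimit -f    # file-size limit, in 512-byte blocks or "unlimited"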

  • Size of a sharepoint web application

    - by Indra
    How do you figure out the current size of a SharePoint web application, or better yet, of a site collection or a subsite? I am planning to move a site collection from one farm to another, and I need to plan the storage capacity first.
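
    For MOSS 2007 / WSS 3.0, one hedged approach is stsadm: as far as I recall, the enumsites output includes a storage-used figure for each site collection (the URL below is a placeholder):

        stsadm -o enumsites -url http://yourwebapp > sites.xml
        rem each <Site> element should carry a StorageUsedMB attribute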

  • Automatic picture size adjustment

    - by CChriss
    Does anyone know of a free utility that lets you paste a graphics file into it (any type would work for me: jpg, bmp, png, etc.) and sizes the file to fit within a preset boundary? For instance, if I preset it to resize files to a maximum of 400 wide by 300 tall, and I paste in a 500x500 file, it would shrink the file to fit within the 300-tall limit. Thanks.
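
    Not paste-based, but as a hedged command-line alternative, ImageMagick's convert fits an image inside a bounding box while preserving the aspect ratio:

        convert input.png -resize 400x300 output.png      # a 500x500 input comes out 300x300
        convert input.png -resize "400x300>" output.png   # the > flag shrinks only, never enlarges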

  • Changes in gcc/persistence of optimization flags gcc/C

    - by gnometorule
    Just curious. Using gcc/gdb under Ubuntu 9.10. I am reading a C book that often shows the disassembly of the object file. When I read it in January, my disassembly looked a lot like the book's; now it's quite different, possibly more optimized (I notice some rearrangements in the assembly code now that, at least in the files I checked, look optimized). I have used optimization options -O1 to -O3 with gcc between the first and second read, but not before the first.

    (1) Is the use of optimization options persistent? That is, if you use them once, will they stay in effect until switched off? That would be weird (I browsed the man page and saw nothing to that effect). In the unlikely case that it is true, how do you switch them off?
    (2) Has gcc's assembly output changed through any recent upgrade?
    (3) Does gcc sometimes produce (significantly) different assembly code even though the same compile options are chosen?

    Thanks much.
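
    On (1): optimization flags apply per invocation only; gcc keeps no state between runs. A quick hedged way to see what flags do is to emit assembly at two levels and diff it (foo.c stands in for your source):

        gcc -O0 -S foo.c -o foo-O0.s
        gcc -O3 -S foo.c -o foo-O3.s
        diff foo-O0.s foo-O3.s    # same input, different flags, different code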

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there is the following tradeoff:

    1. Serialize the entire file into a transmission object and transmit it at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted.
    2. If the file is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server. In addition to the complexity of this, it's slower because of the continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time.

    The ideal transmission method (when weighing negotiation latency against failure rate) is somewhere in between, and I'm wondering how to find the best size for a particular client. Should I have some dynamic tuning step in my transmission that watches the current bytes/second average and raises the transmission size until the speed starts to drop (failures overwhelm the negotiation cost)? Or is there some other method for determining the ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), but just what to consider as I create this process.
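
    A hedged sketch of the dynamic-tuning idea, borrowed from TCP's additive-increase/multiplicative-decrease: grow the chunk size while chunks keep succeeding and cut it sharply on failure. Every name and constant below is illustrative, a starting point to tune empirically rather than a known-good policy:

        using System;

        // illustrative AIMD chunk sizing
        public class ChunkSizer
        {
            private const int MinChunk = 4 * 1024;        // 4 KB floor
            private const int MaxChunk = 4 * 1024 * 1024; // 4 MB ceiling
            private int chunk = 64 * 1024;                // starting size

            public int CurrentChunkSize
            {
                get { return chunk; }
            }

            public void OnChunkSucceeded()
            {
                // additive increase: creep upward while the link is healthy
                chunk = Math.Min(MaxChunk, chunk + 16 * 1024);
            }

            public void OnChunkFailed()
            {
                // multiplicative decrease: back off fast after a failure
                chunk = Math.Max(MinChunk, chunk / 2);
            }
        }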

  • ICC vs GCC - Optimization and CPU architecture

    - by Rayne
    Hi all, I'm interested in knowing how GCC differs from Intel's ICC in terms of optimization levels and catering to specific processor architectures. I'm using GCC 4.1.2 20070626 and ICC v11.1 for Linux. How do ICC's optimization levels (O1 to O3) differ from GCC's, if they differ at all? ICC can cater specifically to different architectures (IA-32, Intel 64, and IA-64). I've read that GCC has the -march compiler option, which I think is similar, but I can't find a list of the values it accepts. I'm using an Intel Xeon X5570, which is 64-bit. Are there any other GCC compiler options I could use to tailor my applications to 64-bit Intel CPUs? Thank you. Regards, Rayne
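
    A hedged pointer: gcc can print its target-specific options itself, and on a gcc as old as 4.1 the closest -march value for a 64-bit Xeon is probably nocona (-march=native and newer CPU names only arrived in later releases):

        gcc --target-help                      # lists target options, including -march values
        gcc -O3 -m64 -march=nocona -c app.c    # app.c is a placeholder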

  • Query optimization

    - by Lijo
    Hi team, I am trying to learn SQL query optimization using SQL Server 2005. However, I have not found practical examples where performance improves by tweaking the query alone. Can you please list some example queries, before and after optimization? It has to be query tuning: not adding an index, not creating a covering index, not making any alteration to the table. No change is supposed to be made to tables or indexes. (I understand that indexes are important; this is only for learning purposes.) It would be great if you could explain using a temp table instead of referring to sample databases like AdventureWorks. Expecting your support... Thanks, Lijo
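
    A hedged classic that needs no sample database: making a predicate sargable. Wrapping the column in a function forces the function to run for every row (and defeats any index that might exist later); expressing the same logic as a plain range does not. The table here is illustrative:

        CREATE TABLE #orders (order_id INT, order_date DATETIME);
        -- imagine #orders populated with a few million rows

        -- before: function applied to the column, evaluated per row
        SELECT order_id FROM #orders WHERE YEAR(order_date) = 2005;

        -- after: identical result set, predicate left as a bare range
        SELECT order_id FROM #orders
        WHERE order_date >= '20050101' AND order_date < '20060101';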

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP, yet whenever I try to get the response an exception is thrown. See the error details in the catch block in the code below.

        string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = cred;
        sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
        sizeReq.UseBinary = proj.ServerConfig.UseBinary;   //true
        sizeReq.KeepAlive = proj.ServerConfig.KeepAlive;   //false

        long size;
        try
        {
            // Exception thrown here when I try to get the response
            using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
            {
                size = fileSizeResponse.ContentLength;
            }
        }
        catch (WebException exp)
        {
            FtpWebResponse resp = (FtpWebResponse)exp.Response;
            MessageBox.Show(exp.Message);  // "The remote server returned an error: (550) File unavailable (e.g., file not found, no access)."
            MessageBox.Show(exp.Status.ToString());             // ProtocolError
            MessageBox.Show(resp.StatusCode.ToString());        // ActionNotTakenFileUnavailable
            MessageBox.Show(resp.StatusDescription.ToString()); // "550 SIZE: Operation not permitted\r\n"
        }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says the operation is "not permitted". Could it be that my office FTP just won't allow querying a file's size? I've also tried listing the directory details, which will return the size, and noticed that my office FTP reports the directory details in a different format than my personal FTP. Maybe this is the problem?

        //work ftp ListDirectoryDetails
        -rw-r--r-- 1 (?) user 12345 Nov 16 20:28 some file name.xxx

        //personal ftp ListDirectoryDetails
        -rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx

    From reading this blog post I think my personal FTP is returning a Unix-formatted response while my work FTP returns a Windows-formatted one. Maybe this is unrelated, but I thought I'd mention it.

  • sizeof abuse : get the size of a const table

    - by shodanex
    When declaring a const table, it is possible to get the size of the table using sizeof. However, once you stop using the symbol name, it no longer works. Is there a way to have the following program output the correct size for table A, instead of 0?

        #include <stdio.h>

        struct mystruct {
            int a;
            short b;
        };

        const struct mystruct tableA[] = {
            { .a = 1, .b = 2, },
            { .a = 2, .b = 2, },
            { .a = 3, .b = 2, },
        };

        const struct mystruct tableB[] = {
            { .a = 1, .b = 2, },
            { .a = 2, .b = 2, },
        };

        int main(int argc, char *argv[])
        {
            int tbl_sz;
            const struct mystruct *table;

            table = tableA;
            tbl_sz = sizeof(table) / sizeof(struct mystruct);
            printf("size of table A : %d\n", tbl_sz);

            table = tableB;
            tbl_sz = sizeof(tableB) / sizeof(struct mystruct);
            printf("size of table B : %d\n", tbl_sz);
            return 0;
        }

    Output is:

        size of table A : 0
        size of table B : 2
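
    A hedged note on why this happens and the usual workaround: once the array is assigned to a pointer, sizeof sees only the pointer, so the element count must be captured while the symbol still has array type, typically with a macro, and passed along explicitly:

        #include <stdio.h>

        /* classic element-count macro: valid only on true arrays, never on pointers */
        #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

        struct mystruct { int a; short b; };

        /* inside here the size information is gone, so it is passed in */
        static void print_table(const struct mystruct *table, size_t count)
        {
            printf("first entry a=%d, %zu entries\n", table[0].a, count);
        }

        int main(void)
        {
            const struct mystruct tableA[] = { {1, 2}, {2, 2}, {3, 2} };
            print_table(tableA, ARRAY_SIZE(tableA)); /* count taken while still an array */
            return 0;
        }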

  • function taking in an input image and different kernel size

    - by drifterOcean19
    I have this filtering function that takes an input image, performs convolution using a given kernel, and returns the resulting image. However, I can't seem to work out how to make it take different kernel sizes. For example, instead of the pre-defined 3x3 kernel below, it could take 5x5 or 7x7, and the user could input the type of kernel/filter they want (depending on the intended effect). I can't seem to get my head around it; I'm quite new to MATLAB.

        function [newImg] = kernelFunc(imgB)
            img = imread(imgB);
            figure, imshow(img);
            img2 = zeros(size(img) + 2);
            newImg = zeros(size(img));
            for rgb = 1:3
                for x = 1:size(img, 1)
                    for y = 1:size(img, 2)
                        img2(x+1, y+1, rgb) = img(x, y, rgb);
                    end
                end
            end
            for rgb = 1:3
                for i = 1:size(img2, 1) - 2
                    for j = 1:size(img2, 2) - 2
                        window = zeros(9, 1);
                        inc = 1;
                        for x = 1:3
                            for y = 1:3
                                window(inc) = img2(i+x-1, j+y-1, rgb);
                                inc = inc + 1;
                            end
                        end
                        kernel = [1; 2; 1; 2; 4; 2; 1; 2; 1] / 16;
                        med = window .* kernel;
                        disp(med);
                        med = sum(med);
                        med = floor(med);
                        newImg(i, j, rgb) = med;
                    end
                end
            end
            newImg = uint8(newImg);
            figure, imshow(newImg);
        end
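
    A hedged sketch of the generalization: accept the kernel itself as a parameter and let the built-in conv2 replace the hand-rolled loops, so any kernel size works. The names here are illustrative (fspecial needs the Image Processing Toolbox):

        function newImg = kernelFunc2(imgB, kernel)
            % kernel: any square matrix, e.g. ones(5)/25 or fspecial('gaussian', 7)
            img    = double(imread(imgB));
            newImg = zeros(size(img));
            for rgb = 1:size(img, 3)
                % 'same' keeps the output the size of the input (zero-padded borders)
                newImg(:, :, rgb) = conv2(img(:, :, rgb), kernel, 'same');
            end
            newImg = uint8(newImg);
            figure, imshow(newImg);
        end

    Called as, say, kernelFunc2('photo.jpg', ones(5)/25) for a 5x5 box blur.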

  • How to adjust font size with Cufon?

    - by user77413
    I am using a condensed font in Cufon. When the page loads, my menu is too wide and wraps; then Cufon replaces the font and it looks fine. To reduce the visual distraction, I want to set the font size smaller and then have Cufon change the font size when it displays. Currently the font size is set by the div containing the menu. Here is the CSS for the menu container:

        .header_menu_block {
            display: block;
            margin: 0px 0px 0px 0px;
            padding: 0px 0px 0px 0px;
            margin-top: 3px;
            /*margin-left: 238px; ie 6 can't handle, see margin block below*/
            float: left;
            text-align: left;
            font-weight: normal;
            font-size: 14px;
            color: #FFFFFF;
            height: 41px;
            width: 991px;
        }

    The Cufon replacement code looks like this:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' }
            });
        </script>

    I've tried setting the CSS font size to 12px and then using the following Cufon code, but it does not work:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' },
                font-size: '14px'
            });
        </script>

    Does anyone know how to do this?
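
    A hedged guess at the fix: in a JavaScript object literal, font-size is not a legal unquoted property name (the last snippet would be a syntax error), and Cufon spells its option fontSize. Something like:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu', {
                color: '#ffffff',
                hover: { color: '#204966' },
                fontSize: '14px'   // camelCase, value quoted
            });
        </script>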

  • small string optimization for vector?

    - by BuschnicK
    I know several (all?) STL implementations use a "small string" optimization: instead of storing the usual three pointers for begin, end, and capacity, a string stores the actual character data in the memory used for the pointers whenever sizeof(characters) <= sizeof(pointers). I am in a situation where I have lots of small vectors with an element size <= sizeof(pointer). I cannot use fixed-size arrays, since the vectors need to resize dynamically and may potentially grow quite large. However, the median (not mean) size of the vectors will be only 4-12 bytes. So a "small string" optimization adapted to vectors would be quite useful to me. Does such a thing exist? I'm thinking about rolling my own by simply brute-force converting a vector to a string, i.e. providing a vector interface to a string. Good idea?
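
    Hedged pointers to existing implementations rather than rolling your own: LLVM ships llvm::SmallVector, and newer Boost releases ship boost::container::small_vector, both of which keep the first N elements inline and only touch the heap beyond that:

        #include <boost/container/small_vector.hpp>

        int main()
        {
            // the first 8 ints live inside the object itself; the heap is
            // only used once the vector outgrows that inline buffer
            boost::container::small_vector<int, 8> v;
            for (int i = 0; i < 20; ++i)
                v.push_back(i);
            return 0;
        }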

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a website with heavy real-time updating. The amount of resources needed per user is quite high; I'll give you an example.

    Setup. The application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL. The website needs to update in real time, so every second (any more than a second would just be too long) there is an AJAX call that refreshes the page. Resources: jQuery, Apache, PHP, MySQL.

    Average spending for a single user (spending one minute, visiting 3 pages):
    Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    PHP: +/- 63 requests/responses
    MySQL: +/- 63 requests/responses
    jQuery: +/- 60 requests/responses

    Optimization. I want to optimize this process, but I suspect it might end up just the same. Before implementing and testing (which will take weeks) I wanted some second opinions from you guys. For every visit, I want to start by putting nginx in front as a proxy to deliver the static content (resources: dynamic: Apache, PHP, MySQL; static: nginx). This will take a lot of load off Apache. For the script that runs every second, I want to set up Node.js server-side JavaScript with nginx in front: jQuery would make a request once a minute, and Node.js would stream the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL.

    Average spending for a single user after optimization (one minute, 3 pages):
    nginx: 4 requests/responses serving mostly static content (img, css, js)
    Apache: 3 requests, only the pages
    PHP: 3 requests, only the pages
    Node.js: 1 request / 60 responses
    jQuery: 1 request / 60 responses
    MySQL: 63 requests/responses

    As you can see, the load from Apache and PHP is lifted and placed on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still two extra programs loaded in memory consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up, I would like to know whether it will be worth the while.
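
    For what it's worth, a minimal hedged sketch of the request-once-a-minute, respond-every-second idea using only Node's core http module (the data line is a placeholder for your MySQL-backed payload):

        var http = require('http');

        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });

            var ticks = 0;
            var timer = setInterval(function () {
                res.write('update ' + (++ticks) + '\n');  // one chunk per second
                if (ticks >= 60) {                        // after a minute, the client re-requests
                    clearInterval(timer);
                    res.end();
                }
            }, 1000);

            req.on('close', function () { clearInterval(timer); });
        }).listen(8080);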

  • Drop caches in Linux

    - by BerserkEVA
    I have an embedded Geode-based application server with 512MB RAM, and I am trying to maximize free memory under the application's workload, which uses a MySQL 5.5 database with InnoDB quite aggressively. As part of the overall optimization I want to introduce

        echo 1 > /proc/sys/vm/drop_caches
        sleep 5
        echo 0 > /proc/sys/vm/drop_caches

    in crontab. How often is it safe to execute something like this? Any other observation/suggestion is welcome.
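
    One hedged correction from the kernel documentation: drop_caches is a one-shot trigger, not a toggle (1 drops the page cache, 2 drops dentries and inodes, 3 drops both), so the trailing echo 0 does nothing useful. The docs also suggest syncing first so dirty pages can actually be reclaimed:

        sync
        echo 3 > /proc/sys/vm/drop_caches    # page cache + dentries/inodes, once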

  • Big O, how do you calculate/approximate it?

    - by Sven
    Most people with a degree in CS will certainly know what Big O stands for. It helps us measure how (in)efficient an algorithm really is, and if you know what category the problem you are trying to solve lies in, you can figure out whether it is still possible to squeeze out that little extra performance.* But I'm curious: how do you calculate or approximate the complexity of your algorithms?

    *: but as they say, don't overdo it; premature optimization is the root of all evil, and optimization without a justified cause deserves that name as well.
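
    A hedged rule of thumb to get started: count how many times the innermost work executes as a function of the input size; nesting multiplies the counts. A small example:

        /* each loop runs n times and they are nested: n * n iterations, so O(n^2) */
        int count_pairs(const int *a, int n)
        {
            int count = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    count += (a[i] == a[j]);  /* constant-time body */
            return count;
        }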

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on CompactFlash media. I have created an ext4 fs with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte blocks). According to my reading, ext4 should allow such large block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
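
    A hedged explanation of the failure: at mount time Linux requires the filesystem block size to be no larger than the CPU page size, which is 4096 bytes on i386 (exactly what the mkfs warning said), so a 65536-byte-block ext4 can be created but not mounted on this machine. You can confirm the page size with:

        getconf PAGE_SIZE    # 4096 here, so 4096 is the largest mountable block size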

  • Counting point size based on chart area during zooming/unzooming

    - by Gacek
    Hi folks. I have quite a simple task. I know (I suppose) it should be easy, but for reasons I cannot understand I have been trying to solve it for 2 days and I don't know where I'm making the mistake. The problem is as follows: we have a chart with some points; the chart starts with some known area and the points have a known size; we would like to emulate a zooming effect, so that when we zoom in on some part of the chart, the size of the points gets proportionally bigger. In other words, the smaller the part of the chart we select, the bigger the points should get.

    So, we have something like this. We know these two parameters:

        float initialArea; // initial area: area of the whole chart, counted as width*height
        float initialSize; // initial size of the points

    Now let's assume we are handling some kind of OnZoom event. We selected some part of the chart and would like to compute the current size of the points:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is counted for us
            float currentSize = initialSize * initialArea / currentArea;
            return currentSize;
        }

    And it works, but the rate of change is too fast; the points get really big too soon. So I would like currentSize to be inversely proportional to currentArea, but with some scaling coefficient, and I created this second function:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is counted for us
            // assume we want the point size to change ten times slower than the chart's area
            float currentSize = initialSize + 0.1f * initialSize * ((initialArea / currentArea) - 1);
            return currentSize;
        }

    Let's do some calculations in our heads. If currentArea is smaller than initialArea, then initialArea/currentArea > 1 and we add something small and positive to initialSize. Checked; it works. Now let's check what happens when we un-zoom: currentArea will equal initialArea, so the right side becomes 0 (1 - 1) and the new size should equal initialSize. Right? Yeah. So let's check it... and it doesn't work. My question is: where is the mistake? Or maybe you have other ideas for computing this scaled size from the current area?
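
    One hedged observation on the "too fast" part: point size is a linear measure while area is quadratic, so scaling size by the raw area ratio squares the zoom effect. Scaling by the square root of the ratio makes the point size track the linear zoom factor instead (a sketch; pass in the freshly computed area):

        #include <math.h>

        float CountSizeOnZoomSqrt(float initialSize, float initialArea, float currentArea)
        {
            /* sqrt of the area ratio == the linear zoom factor */
            return initialSize * sqrtf(initialArea / currentArea);
        }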

  • how to see the optimized code in c

    - by sganesh
    I can examine the optimization using a profiler, the size of the executable file, and the time taken for execution; that shows me the result of the optimization. But I have these questions: How do I get the optimized C code? Which algorithms or methods does the C compiler use to optimize the code? Thanks in advance.
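
    A hedged note: gcc does not emit optimized C as such, but it can show the optimized program at two other levels, the final assembly and (via dump flags) its internal GIMPLE form, which reads roughly like low-level C. foo.c is a placeholder, and the dump file's exact name varies by gcc version:

        gcc -O2 -S foo.c -o foo.s                # optimized assembly
        gcc -O2 -fdump-tree-optimized -c foo.c   # writes something like foo.c.XXXt.optimized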

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): 3x WDC 1 TB. We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to initialize the disk as either Master Boot Record (MBR) or GUID Partition Table (GPT). We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command, except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried the diskpart command-line tool instead. First we look for our RAID-5 volume, which is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs    Type       Size     Status     Info
          ----------  ---  -----------  ----  ---------  -------  ---------  ------
          Volume 0     E   VMs (Raid5)  NTFS  RAID-5     1863 GB  Failed Rd
          Volume 1     D                      DVD-ROM       0 B   No Media
          Volume 2         System Rese  NTFS  Partition   100 MB  Healthy    System
          Volume 3     C                NTFS  Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B   *
          Disk 1    Online    931 GB   931 GB   *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B   *
          Disk M0   Missing     0 B      0 B   *

    The disk with 931 GB free, Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines in /etc/mysql/my.cnf after converting one database to the InnoDB engine:

        innodb_buffer_pool_size = 2560M
        innodb_log_file_size = 256M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 16
        innodb_flush_method = O_DIRECT

    But restarting mysqld raises "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query", and the MySQL error log shows the following:

        InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!
        100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error.
        100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB
        100118 20:52:52 [ERROR] Aborting

    So I commented out this line:

        # innodb_log_file_size = 256M

    and MySQL restarted successfully. I wonder what the "5242880 bytes" log file in the error is. This is the first database on the InnoDB engine on this server, so when and where was that log file created? And in that case, how can I enable the innodb_log_file_size directive in my.cnf?

    EDIT: I tried to delete /var/lib/mysql/ib_logfile0 and restart mysqld, but it still failed, now showing the following in the error log:

        100118 21:27:11  InnoDB: Log file ./ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file ./ib_logfile0 size to 256 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200
        InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!

    Resolution: it works now, after deleting both ib_logfile0 and ib_logfile1 in /var/lib/mysql.
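
    A hedged note for anyone repeating this: 5242880 bytes is InnoDB's old 5MB default (the two ib_logfileN files are created the first time InnoDB ever starts, which is why they predate your first InnoDB table). The safe sequence is to remove both log files only after a fully clean shutdown, so no committed changes are left behind in the logs:

        mysql -e "SET GLOBAL innodb_fast_shutdown = 0"   # flush everything on shutdown
        /etc/init.d/mysql stop
        rm /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
        /etc/init.d/mysql start                          # logs are recreated at the new size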

  • virtualbox snapshot size

    - by intuited
    I've started using Windows 7 under VirtualBox on an Ubuntu 10.10 host. I took about 6 snapshots over the course of setting up the VM from the Windows restore image that came with the computer. My installations were more or less limited to Windows updates, antivirus, and the VirtualBox Guest Additions; I uninstalled much more than I installed. The VM ran for about 24 hours total. The snapshots increased in size at a worrisome rate, even when the machine was idle: the snapshot .vdi file for the period between 11:22 PM and 9:02 AM is 6 GB, and during that time very little happened. The other .vdi files are between 0.5 and 3 GB, most between 1 and 2 GB; the corresponding .sav files are between 0.5 and 1 GB. The Internet connection where I was doing this is limited to 30KB/s download, which, constantly saturated, works out to less than 3 GB per 24-hour period. Is this normal? Is there something that can be done to make snapshots more practical?

    Update: on starting the VM again, I noticed that mscorsvw is using significant processing time. Apparently this process [precompiles .NET assemblies]. This may have been going on during the period when I was taking snapshots, which might explain some of the snapshot growth, but I would be somewhat surprised to learn that it could be responsible for over 10 GB of additional disk usage, or that it would run for roughly 24 hours. Is this possible?
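
    A hedged way to see where the space is going: each snapshot adds a differencing image that grows with every write inside the guest, and VBoxManage can list the images so you can inspect their actual usage:

        VBoxManage list hdds            # every disk image, including snapshot children
        VBoxManage showhdinfo <uuid>    # details for one image; look for its size on disk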
