Search Results

Search found 22986 results on 920 pages for 'allocation unit size'.

Page 62/920 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • IIS Application Pool Memory Size Problem

    - by Roni
    I increased my application pool memory size from the default to 500 MB, and I have IIS 7.5. My server sometimes goes down (Service Unavailable) and I don't know the reason. I made a couple of changes on the same day that I changed the memory size in IIS, and since that day one of my servers has had this problem. Can anybody tell me the right way to increase the memory limit, and what problems it can cause? Thanks, Roni

    Read the article

  • Exchange 2010 EMS - Total size of users mailboxes within a particular OU

    - by Moif Murphy
    I'm doing some massive DB cleanups at the moment. We have two DBs, both approaching 400 GB, and I want to split them by department. To do that I need to know the total size of the mailboxes within an OU. I've run this: http://stackoverflow.com/questions/9796101/exchange-listing-mailboxes-in-an-ou-with-their-mailbox-size but it only gives me a list; I need a combined TotalItemSize so I know how big the new DBs need to be. Thanks

    Read the article

  • apache/nginx html file size limit

    - by Daniel
    When serving HTML files to a user's browser, where can I configure this size limit? I want to send extremely large HTML files to users via Apache and nginx, but the files are being truncated. Which setting determines the maximum file size?

    Read the article

  • Exchange 2007 | Mailbox DB Size 180GB

    - by rihatum
    Hi all, I have an Exchange 2007 SP1 server running on Windows 2008 with six hard drives in RAID-1; the OS, DB, and logs are on separate RAID-1 disks. The mailbox database is 183 GB and growing, and we only have a First Storage Group and a Second Storage Group. There is no more space in the server to install new physical disks and create another storage group. Q - Can I resize the RAID-1 partition where the DB is? Q - Any other suggestions as to how I can decrease the mailbox DB size? I will be grateful for your suggestions on this. Kind regards

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks with APC, but something is still not quite right: no matter what settings I change, APC never actually caches more than 32 MB, and I'm trying to bump it up to 256 MB. 32 MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following to increase my system's shared memory to 2G (half of my 4G box):

        echo '2147483648' > /proc/sys/kernel/shmmax

    Then ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf and ran sysctl -p to make the settings stick on the server, and rebooted for good measure. In my APC settings I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                 3.1.9
        APC Debugging           Disabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apc.bPS7rB
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             On
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    apc.php shows the following graph no matter how long the server runs (the cache size fluctuates and hovers at just under 32 MB): http://i.stack.imgur.com/2bwMa.png. You can see that the cache is trying to allocate 256 MB, but the brown piece of the pie keeps getting recycled at 32 MB. This is confirmed by refreshing the apc.php page, which shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32 MB for its cache size? Note that identical behavior occurs for eAccelerator, XCache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.

    Read the article

  • How to log size of cookies in request header with apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header size is, specifically of the cookies. I've created an Apache log that captures the cookies sent with each request:

        LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have Apache log just the size (or just the length of the string) per request?
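
    If nothing built in turns up, one low-tech fallback is to keep the %{Cookie}i log above and post-process it into sizes offline. A minimal sketch of that post-processing (not an Apache feature; the log file name is hypothetical):

        // cookie_sizes.cpp -- print the byte length of each logged Cookie header
        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            std::ifstream log("cookies.log");   // hypothetical path to the custom cookie log
            std::string line;
            long long total = 0, requests = 0;
            while (std::getline(log, line)) {
                std::cout << line.size() << '\n';              // cookie bytes for this request
                total += static_cast<long long>(line.size());
                ++requests;
            }
            if (requests > 0)
                std::cerr << "average cookie header size: " << (total / requests) << " bytes\n";
            return 0;
        }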

    Read the article

  • File size limit exceeded in bash

    - by yboren
    I have tried this shell script on a SUSE 10 server (kernel 2.6.16.60, ext3 filesystem). The script has a problem with this line:

        cat file | awk '{print $1" "$2" "$3}' | sort -n > result

    The file's size is about 3.2G, and I get this error message: File size limit exceeded. In this shell, ulimit -f is unlimited. After I change the script to this, the problem is gone:

        cat file | awk '{print $1" "$2" "$3}' > tmp
        sort -n tmp > result

    I don't know why; can anyone help me with an explanation?

    Read the article

  • Size of a sharepoint web application

    - by Indra
    How do you figure out the current size of a SharePoint web application? Better yet, the size of a site collection or a subsite? I am planning to move a site collection from one farm to another, and I need to plan the storage capacity first.

    Read the article

  • Automatic picture size adjustment

    - by CChriss
    Does anyone know of a free utility that lets you paste a graphics file into it (any type would work for me: jpg, bmp, png, etc.) and resizes it to fit within a preset size boundary? For instance, if I preset it to resize files to a maximum of 400 wide by 300 tall and I paste in a 500x500 file, it would shrink the file to fit within the 300-pixel height limit. Thanks.

    Read the article

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there is the following tradeoff:
    - Serialize the entire file into a transmission object and transmit it at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted.
    - If the file is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server. In addition to the complexity of this, it's slower because of the continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time.
    The ideal transmission method (when taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering how to find the best size for a particular client. Do I have some dynamic tuning step in my transmission that looks at the current bytes/second average and raises the transmission size until the speed starts to drop (failures overwhelm negotiation cost)? Or is there some other method for determining the ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), just what to consider as I create this process.
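
    The dynamic-tuning step described above can be sketched as a small feedback controller: grow the chunk size while measured throughput keeps improving, and back off after a failed transfer. The names, thresholds, and sample numbers below are illustrative only and not part of any .NET API:

        #include <cstddef>
        #include <cstdio>

        struct ChunkTuner {
            std::size_t chunk = 64 * 1024;        // starting chunk size in bytes
            double      best_throughput = 0.0;    // best bytes/sec observed so far

            // Decide the next chunk size from the last transfer's outcome.
            std::size_t next(double throughput, bool transfer_failed) {
                if (transfer_failed) {
                    if (chunk > 8 * 1024) chunk /= 2;   // shrink and re-test
                } else if (throughput >= best_throughput) {
                    best_throughput = throughput;
                    chunk *= 2;                          // still improving: keep growing
                }
                // otherwise throughput dropped: keep the current size
                return chunk;
            }
        };

        int main() {
            ChunkTuner tuner;
            // Pretend throughput measurements from successive transfers:
            double samples[] = { 1.0e6, 1.8e6, 2.5e6, 2.4e6 };
            for (double s : samples)
                std::printf("next chunk: %zu bytes\n", tuner.next(s, /*transfer_failed=*/false));
            return 0;
        }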

    Read the article

  • What is the difference between NSString alloc/initWithCString: and stringWithUTF8String:?

    - by mobibob
    I thought these two methods were (memory allocation-wise) equivalent; however, I was seeing "out of scope" and "NSCFString" in the debugger if I used what I thought was the convenience method (commented out below), and when I switched to the more explicit method my code stopped crashing! Notice that I am getting the string that is being stored in my container from a sqlite3 query.

        p = (char*) sqlite3_column_text (queryStmt, 1);
        // GUID = (NSString*) [NSString stringWithUTF8String: (p!=NULL) ? p : ""];
        GUID = [[NSString alloc] initWithCString:(p!=NULL) ? p : "" encoding:NSUTF8StringEncoding];

    Also note that if I looked at the values in the debugger and printed them with NSLog, they looked correct. However, I don't think new memory was allocated and the value copied. Instead, the memory pointer was stored - went out of scope - was referenced later - crash!

    Read the article

  • Efficient algorithm for creating an ideal distribution of groups into containers?

    - by Inshim
    I have groups of students that need to be allocated into classrooms of a fixed capacity (say, 100 chairs in each). Each group must be allocated to a single classroom, even if it is larger than the capacity (i.e. there can be an overflow, with students standing up). I need an algorithm that makes the allocations with minimal overflow and minimal under-capacity classrooms. A naive algorithm for this allocation is horrendously slow with ~200 groups, about half of them under 20% of the classroom size. Any ideas where I can find at least a good starting point for making this algorithm lightning fast? Thanks!
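
    A minimal sketch of one common starting point for this kind of packing problem: best-fit decreasing, borrowed from bin packing. Sort groups largest-first, put each group into the room with the least remaining capacity that still fits it, and when nothing fits, use the emptiest room and accept the overflow. The group sizes and room count below are made up:

        #include <algorithm>
        #include <cstdio>
        #include <functional>
        #include <vector>

        int main() {
            const int capacity = 100;
            std::vector<int> groups = { 95, 60, 40, 30, 120, 15, 15, 10 };  // sample group sizes
            const int rooms_available = 4;

            std::sort(groups.begin(), groups.end(), std::greater<int>());   // largest first
            std::vector<int> used(rooms_available, 0);                       // seats taken per room

            for (int g : groups) {
                int best = -1;
                for (int r = 0; r < rooms_available; ++r) {
                    bool fits = used[r] + g <= capacity;
                    if (fits && (best == -1 || used[r] > used[best]))
                        best = r;                                            // tightest room that fits
                }
                if (best == -1)                                              // nothing fits: overflow
                    best = (int)(std::min_element(used.begin(), used.end()) - used.begin());
                used[best] += g;
                std::printf("group of %3d -> room %d (now %d/%d)\n", g, best, used[best], capacity);
            }
            return 0;
        }

    This is a heuristic rather than an optimal assignment, but it runs in roughly O(groups x rooms) and is usually a reasonable baseline to refine from.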

    Read the article

  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires a memory allocation at each iteration, where a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?
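
    To make the per-step allocation concrete, here is a minimal list zipper sketched with immutable, shared nodes (written in C++ purely for illustration, not in a functional language): moving the focus right shares the entire old suffix but allocates one new cell for the reversed prefix, which is exactly the per-iteration cost being asked about.

        #include <cstdio>
        #include <memory>

        // Immutable cons cell; nodes are shared between versions, never mutated.
        struct Node {
            int head;
            std::shared_ptr<const Node> tail;
        };
        using List = std::shared_ptr<const Node>;

        static List cons(int head, List tail) {
            return List(new Node{head, std::move(tail)});   // one heap allocation per cell
        }

        // A list zipper: the elements before the focus (reversed) and from the focus onward.
        struct Zipper {
            List before;   // reversed prefix
            List after;    // the focus is after->head
        };

        // Moving right reuses the suffix (pointer chasing) but allocates one prefix cell.
        static Zipper right(const Zipper& z) {
            return Zipper{cons(z.after->head, z.before), z.after->tail};
        }

        int main() {
            List xs = cons(1, cons(2, cons(3, nullptr)));
            Zipper z{nullptr, xs};
            z = right(z);   // one allocation
            z = right(z);   // one more
            std::printf("focus = %d\n", z.after->head);   // prints 3
            return 0;
        }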

    Read the article

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP, yet whenever I try to get the response an exception is thrown. See the error details in the catch block in the code below.

        string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = cred;
        sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
        sizeReq.UseBinary = proj.ServerConfig.UseBinary;   //true
        sizeReq.KeepAlive = proj.ServerConfig.KeepAlive;   //false

        long size;
        try
        {
            //Exception thrown here when I try to get the response
            using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
            {
                size = fileSizeResponse.ContentLength;
            }
        }
        catch (WebException exp)
        {
            FtpWebResponse resp = (FtpWebResponse)exp.Response;
            MessageBox.Show(exp.Message);                        // "The remote server returned an error: (550) File unavailable (e.g., file not found, no access)."
            MessageBox.Show(exp.Status.ToString());              // ProtocolError
            MessageBox.Show(resp.StatusCode.ToString());         // ActionNotTakenFileUnavailable
            MessageBox.Show(resp.StatusDescription.ToString());  // "550 SIZE: Operation not permitted\r\n"
        }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says that the operation is "not permitted". Could it be that my office FTP just won't allow querying a file's size? I've also tried listing the directory details, which will return the size, and have noticed that my office FTP reports the directory details in a different format than my personal FTP. Maybe this is the problem?

        //work ftp ListDirectoryDetails
        -rw-r--r-- 1 (?) user 12345 Nov 16 20:28 some file name.xxx
        //personal ftp ListDirectoryDetails
        -rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx

    From reading this blog post I think that my personal FTP is returning a Unix-formatted response, but my work FTP is returning a Windows-formatted one. Maybe this is unrelated, but I thought I'd mention it.

    Read the article

  • sizeof abuse : get the size of a const table

    - by shodanex
    When declaring a const table, it is possible to get its size using sizeof. However, once you stop using the symbol name, it does not work anymore. Is there a way to have the following program output the correct size for table A, instead of 0?

        #include <stdio.h>

        struct mystruct {
            int a;
            short b;
        };

        const struct mystruct tableA[] = {
            { .a = 1, .b = 2, },
            { .a = 2, .b = 2, },
            { .a = 3, .b = 2, },
        };

        const struct mystruct tableB[] = {
            { .a = 1, .b = 2, },
            { .a = 2, .b = 2, },
        };

        int main(int argc, char * argv[])
        {
            int tbl_sz;
            const struct mystruct * table;

            table = tableA;
            tbl_sz = sizeof(table)/sizeof(struct mystruct);
            printf("size of table A : %d\n", tbl_sz);

            table = tableB;
            tbl_sz = sizeof(tableB)/sizeof(struct mystruct);
            printf("size of table B : %d\n", tbl_sz);
            return 0;
        }

    Output is:

        size of table A : 0
        size of table B : 2
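
    A minimal sketch of the usual workaround, assuming the count can be captured where the array names are still visible: sizeof only knows the element count through the array's own name, so take the count there and carry pointer + count together from then on. The ARRAY_SIZE and table_view names are made up for the example:

        #include <stdio.h>

        struct mystruct { int a; short b; };

        /* Capture the element count while the array's name is still in scope. */
        #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

        struct table_view {
            const struct mystruct *items;
            size_t count;
        };

        static const struct mystruct tableA[] = { {1, 2}, {2, 2}, {3, 2} };
        static const struct mystruct tableB[] = { {1, 2}, {2, 2} };

        int main(void)
        {
            struct table_view view = { tableA, ARRAY_SIZE(tableA) };
            printf("size of table A : %zu\n", view.count);   /* 3 */

            view.items = tableB;
            view.count = ARRAY_SIZE(tableB);
            printf("size of table B : %zu\n", view.count);   /* 2 */
            return 0;
        }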

    Read the article

  • Embedded Linux: Memory Fragmentation

    - by waffleman
    In many embedded systems, memory fragmentation is a concern, particularly for software that runs for long periods of time (months, years, etc.). For many projects the solution is simply not to use dynamic memory allocation such as malloc/free and new/delete: global memory is used whenever possible, and memory pools for types that are frequently allocated and deallocated are a good strategy for avoiding dynamic memory management. How is this addressed in embedded Linux? I see many libraries use dynamic memory. Is there a mechanism the OS uses to prevent memory fragmentation? Does it clean up the heap periodically? Or should one avoid using these libraries in an embedded environment?
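
    For reference, the memory-pool strategy mentioned above can be as small as this sketch: a statically reserved set of equal-sized blocks handed out through a free list, so the pool itself can never fragment and allocate/release are O(1). The block size, count, and class name are illustrative:

        #include <cstddef>
        #include <cstdio>

        template <std::size_t BlockSize, std::size_t BlockCount>
        class BlockPool {
            union Block { Block *next; unsigned char bytes[BlockSize]; };
            Block  storage_[BlockCount];   // reserved up front, no heap use
            Block *free_head_;
        public:
            BlockPool() : free_head_(storage_) {
                for (std::size_t i = 0; i + 1 < BlockCount; ++i)
                    storage_[i].next = &storage_[i + 1];
                storage_[BlockCount - 1].next = nullptr;
            }
            void *allocate() {
                if (!free_head_) return nullptr;    // pool exhausted
                Block *b = free_head_;
                free_head_ = b->next;
                return b;
            }
            void release(void *p) {
                Block *b = static_cast<Block *>(p);
                b->next = free_head_;
                free_head_ = b;
            }
        };

        int main() {
            static BlockPool<64, 128> pool;   // 128 blocks of 64 bytes each
            void *a = pool.allocate();
            void *b = pool.allocate();
            pool.release(a);
            pool.release(b);
            std::printf("pool exercised: %p %p\n", a, b);
            return 0;
        }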

    Read the article

  • What's the purpose of having a separate "operator new[]" ?

    - by sharptooth
    It looks like operator new and operator new[] have exactly the same signature:

        void* operator new( size_t size );
        void* operator new[]( size_t size );

    and do exactly the same thing: either return a pointer to a big enough block of raw (not initialized in any way) memory, or throw an exception. Also, operator new is called internally when I create an object with new, and operator new[] when I create an array of objects with new[]. Still, the above two special functions are called by C++ internally in exactly the same manner, and I don't see how the two calls can have different meanings. What's the purpose of having two different functions with exactly the same signatures and exactly the same behavior?
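
    One concrete reason the two hooks exist separately is that a class can treat single-object and array allocations differently (different pools, different accounting). A minimal sketch, with logging standing in for a real allocation strategy:

        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>
        #include <new>

        struct Widget {
            double payload[4];

            static void* operator new(std::size_t size) {
                std::printf("operator new    : %zu bytes\n", size);
                void* p = std::malloc(size);
                if (!p) throw std::bad_alloc();
                return p;
            }
            static void* operator new[](std::size_t size) {
                // size may include extra bookkeeping space for the element count
                std::printf("operator new[]  : %zu bytes\n", size);
                void* p = std::malloc(size);
                if (!p) throw std::bad_alloc();
                return p;
            }
            static void operator delete(void* p) noexcept   { std::free(p); }
            static void operator delete[](void* p) noexcept { std::free(p); }
        };

        int main() {
            Widget* one  = new Widget;      // goes through Widget::operator new
            Widget* many = new Widget[8];   // goes through Widget::operator new[]
            delete one;
            delete[] many;
            return 0;
        }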

    Read the article

  • Function taking an input image and a variable kernel size

    - by drifterOcean19
    I have a filtering function that takes an input image, performs convolution using a given kernel, and returns the resulting image. However, I can't work out how to make it take different kernel sizes. For example, instead of the pre-defined 3x3 kernel below, it could take 5x5 or 7x7, and the user could then specify the type of kernel/filter they want (depending on the intended effect). I can't seem to get my head around it; I'm quite new to MATLAB.

        function [newImg] = kernelFunc(imgB)
            img=imread(imgB);
            figure,imshow(img);
            img2=zeros(size(img)+2);
            newImg=zeros(size(img));
            for rgb=1:3
                for x=1:size(img,1)
                    for y=1:size(img,2)
                        img2(x+1,y+1,rgb)=img(x,y,rgb);
                    end
                end
            end
            for rgb=1:3
                for i=1:size(img2,1)-2
                    for j=1:size(img2,2)-2
                        window=zeros(9,1);
                        inc=1;
                        for x=1:3
                            for y=1:3
                                window(inc)=img2(i+x-1,j+y-1,rgb);
                                inc=inc+1;
                            end
                        end
                        kernel=[1;2;1;2;4;2;1;2;1]/16;
                        med=window.*kernel;
                        disp(med);
                        med=sum(med);
                        med=floor(med);
                        newImg(i,j,rgb)=med;
                    end
                end
            end
            newImg=uint8(newImg);
            figure,imshow(newImg);
        end
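
    The question is MATLAB-specific, but the generalization is the same in any language: derive a half-width from the kernel's size and loop over -h..+h offsets instead of hard-coding the 3x3 window. A sketch of that indexing in C++ (single channel, zero padding at the borders, odd kernel sizes; all names are illustrative):

        #include <cstdio>
        #include <vector>

        using Image = std::vector<std::vector<double>>;

        Image convolve(const Image &img, const Image &kernel) {
            int rows = (int)img.size(), cols = (int)img[0].size();
            int k    = (int)kernel.size();   // kernel is k x k, k odd (3, 5, 7, ...)
            int h    = k / 2;                // half-width replaces the hard-coded 3x3 offsets
            Image out(rows, std::vector<double>(cols, 0.0));
            for (int i = 0; i < rows; ++i)
                for (int j = 0; j < cols; ++j) {
                    double acc = 0.0;
                    for (int di = -h; di <= h; ++di)
                        for (int dj = -h; dj <= h; ++dj) {
                            int r = i + di, c = j + dj;
                            if (r < 0 || r >= rows || c < 0 || c >= cols) continue;  // zero-pad
                            acc += img[r][c] * kernel[di + h][dj + h];
                        }
                    out[i][j] = acc;
                }
            return out;
        }

        int main() {
            Image img(5, std::vector<double>(5, 1.0));
            Image box3(3, std::vector<double>(3, 1.0 / 9.0));   // a 5x5 or 7x7 kernel works too
            Image result = convolve(img, box3);
            std::printf("center pixel: %f\n", result[2][2]);    // 1.0
            return 0;
        }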

    Read the article

  • How to adjust font size with Cufon?

    - by user77413
    I am using a condensed font in Cufon. When the page loads, my menu is too wide and wraps. Then Cufon replaces the font and it looks fine. To reduce the visual distraction, I want to set the font size smaller and then have Cufon change the font size when it displays. Currently the font size is set by the div containing the menu. Here is the CSS for the menu container:

        .header_menu_block {
            display: block;
            margin: 0px 0px 0px 0px;
            padding: 0px 0px 0px 0px;
            margin-top: 3px;
            /*margin-left: 238px; ie 6 can't handle, see margin block below*/
            float: left;
            text-align: left;
            font-weight: normal;
            font-size: 14px;
            color: #FFFFFF;
            height: 41px;
            width: 991px;
        }

    The Cufon replacement code looks like this:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' }
            });
        </script>

    I've tried setting the CSS font size to 12px and then using the following Cufon code, but it does not work:

        <script type="text/javascript">
            Cufon.replace('.header_menu_block_col_menu ', {
                color: '#ffffff',
                hover: { color: '#204966' },
                font-size: '14px'
            });
        </script>

    Does anyone know how to do this?

    Read the article

  • Where does memory dynamically allocated reside?

    - by Summer_More_More_Tea
    Hello everyone: We know that malloc() and the new operator allocate memory from the heap dynamically, but where does the heap reside? Does each process have its own private heap in its address space, or does the OS have a global one shared by all processes? What's more, I read in a textbook that once a memory leak occurs, the leaked memory cannot be reused until the next time we restart the computer. Is this claim right? If the answer is yes, how can we explain it? Thanks for your reply. Regards.

    Read the article

  • strategy to allocate/free lots of small objects

    - by aaa
    Hello, I am toying with a certain caching algorithm, which is somewhat challenging. Basically, it needs to allocate lots of small objects (double arrays, < 256 elements), with objects accessible through a mapped value: map[key] = array. The time to initialize an array may be quite large, generally more than 10 thousand CPU cycles. By lots I mean around a gigabyte in total. Objects may need to be popped/pushed as needed, generally in random places, one object at a time. The lifetime of an object is generally long (minutes or more); however, an object may be allocated and deallocated several times during the program's run. What would be a good strategy to avoid memory fragmentation, while still maintaining reasonable allocate/deallocate speed? I am using C++, so I can use new and malloc. Thanks. I know there are similar questions on the site, e.g. http://stackoverflow.com/questions/2156745/efficiently-allocating-many-short-lived-small-objects, but mine is somewhat different: thread safety is not an immediate issue for me.
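
    A minimal sketch of one strategy that fits this shape of workload, assuming the 256-element bound is firm: carve fixed-size slots out of large chunks and recycle freed slots through a free list, so every allocation is the same size and cannot fragment, and reuse avoids repeated heap traffic. The class name, chunk size, and map usage below are made up for illustration:

        #include <cstddef>
        #include <unordered_map>
        #include <vector>

        class SlotPool {
            static constexpr std::size_t kSlot = 256;            // doubles per slot (upper bound)
            static constexpr std::size_t kSlotsPerChunk = 4096;  // ~8 MB of doubles per chunk
            std::vector<std::vector<double>> chunks_;            // big backing blocks, never shrunk
            std::vector<double *> free_slots_;                   // recycled slots ready for reuse
        public:
            double *acquire() {
                if (free_slots_.empty()) {
                    chunks_.emplace_back(kSlot * kSlotsPerChunk);        // grow by one chunk
                    double *base = chunks_.back().data();
                    for (std::size_t i = 0; i < kSlotsPerChunk; ++i)
                        free_slots_.push_back(base + i * kSlot);
                }
                double *slot = free_slots_.back();
                free_slots_.pop_back();
                return slot;
            }
            void release(double *slot) { free_slots_.push_back(slot); }  // back on the free list
        };

        int main() {
            SlotPool pool;
            std::unordered_map<int, double *> map;   // the key -> array mapping
            map[42] = pool.acquire();                // expensive initialization happens here
            map[42][0] = 3.14;
            pool.release(map[42]);                   // the slot is reused, not returned to the heap
            map.erase(42);
            return 0;
        }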

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of ext4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (which is already mounting ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb 5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb 5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb 5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug 4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail  or so

    Read the article
