Search Results

Search found 23939 results on 958 pages for 'block size'.

Page 48 of 958

  • Default Window Size

    - by muntoo
    I'm running Vista Home Premium 32-bit, though I doubt that matters in this case. So, how do you set the default window size? Is there a registry tweak for this? Going to Tools -> Folder Options... -> View and using Apply to Folders or Reset Folders doesn't work. Whenever I open a new window onto a random folder I haven't opened before, I don't want it to take up 75% of the screen. Is there any way I could make it open smaller?

    Read the article

  • MediaShield RAID 5 is showing up as 760GB when the actual size is 2.7TB

    - by Ilya Volodin
    I just finished setting up Windows 2003 Server on my new server and started setting up a RAID 5 for it. I have 4x1TB hard drives. In the MediaShield RAID utility (at boot time) the RAID size is displayed as 2.7TB, and Linux also shows it as 2.7TB. However, in Windows, everything (including Windows Disk Management as well as the Windows-based MediaShield utility) reports only 760GB. I already tried converting the partition table from MBR to GUID (GPT), because I read somewhere that Windows can only handle MBR tables up to 2TB, but that didn't help much. I tried searching for partitioning utilities that I could use, but couldn't find anything free. When I formatted the disk as an NTFS partition from within Linux, it stopped showing up in Windows altogether; even the MediaShield Windows utility isn't showing it anymore. Windows is installed on a separate 500GB hard drive that's set up not to use RAID. Any ideas?
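
    A hedged back-of-the-envelope note on the 2TB figure mentioned above: an MBR partition table stores sector counts in 32-bit fields, so with 512-byte sectors anything limited to MBR addressing tops out at 2^32 x 512 bytes (about 2.2 TB), and a tool that wraps at that boundary would report roughly the array capacity minus that limit, which is in the same ballpark as the 760GB Windows shows. A small sketch of the arithmetic (the ~3 TB usable figure is an assumption based on 4x1TB RAID 5, not a measurement from the question):

        SECTOR = 512
        MBR_LIMIT = 2**32 * SECTOR           # 32-bit LBA field -> ~2.2 TB addressable
        USABLE = 3 * 10**12                  # assumed usable capacity of a 4x1TB RAID 5

        wrapped = USABLE - MBR_LIMIT         # what a 32-bit-limited tool might report
        print(f"MBR limit : {MBR_LIMIT / 10**9:.0f} GB")
        print(f"usable    : {USABLE / 10**9:.0f} GB")
        print(f"wrapped   : {wrapped / 10**9:.0f} GB")   # ~800 GB, same ballpark as the reported value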

    Read the article

  • How to determine the used size of a device's associated buffer

    - by dubbaluga
    Hi, when mounting a device without the "sync" option, e.g. by invoking "mount -o async /dev/sdc1 /mnt", a buffer is associated with the device to optimize (speed up) read/write operations. Is there a way to determine the size of this buffer? Another question that comes to mind is whether it's possible to find out how much of it is currently in use. This would be useful for estimating the time it would take to "sync" or "umount" slow devices, such as flash-based media. Thanks in advance for your answers, Rainer
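
    A hedged note on the second part of the question: the page cache is global rather than strictly per-device, but the system-wide amount of data still waiting to be written back can be read from the Dirty and Writeback counters in /proc/meminfo, which at least gives an upper bound on how much a sync/umount would have to flush. A minimal Python sketch:

        # Report system-wide dirty page-cache data awaiting writeback (values in kB).
        def meminfo():
            fields = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    fields[key] = int(rest.strip().split()[0])
            return fields

        info = meminfo()
        print(f"Dirty: {info['Dirty']} kB, Writeback: {info['Writeback']} kB")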

    Read the article

  • Logrotate Successful, original file goes back to original size

    - by drewrockshard
    Has anyone had any issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings:

    Logrotate script:

        /var/log/mylogfile.log {
            rotate 7
            daily
            compress
            olddir /log_archives
            missingok
            notifempty
            copytruncate
        }

    Verbose output of logrotate:

        copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
        truncating /var/log/mylogfile.log
        compressing log with: /bin/gzip
        removing old log /log_archives/mylogfile.log.8.gz

    Log file after the truncate happens:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

    Literally seconds later:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

    RHEL version:

        [root@server ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Logrotate version:

        [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
        logrotate-3.7.1-10.RHEL4

    A few notes: the service can't be restarted on the fly, which is why I'm using copytruncate. Logs are rotating every night, judging by the olddir directory having log files in it from each night.
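
    A hedged aside on this symptom: with copytruncate, if the writing process keeps its file descriptor at the old offset (i.e. it is not writing in append mode), the very next write lands at the previous ~3.5 GB offset and the truncated file reappears at its old apparent size as a sparse file. A quick way to check whether the regrown file is sparse is to compare its apparent size with the blocks actually allocated; a minimal Python sketch (path taken from the question):

        import os

        # If the regrown log is sparse, its apparent size (st_size) will be far
        # larger than the space actually allocated on disk (st_blocks * 512).
        st = os.stat("/var/log/mylogfile.log")
        apparent = st.st_size
        allocated = st.st_blocks * 512
        print(f"apparent: {apparent} bytes, allocated: {allocated} bytes")
        if allocated < apparent:
            print("file is sparse - the writer is probably seeking back to its old offset")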

    Read the article

  • How to choose a size for a cloud server (Rackspace)

    - by Emil
    We're going to test the Rackspace cloud next week to see how it works with our web app. It's a LAMP environment with a lot of MySQL databases. How do I choose the "right" server size? On Rackspace I can choose slices with memory of 256, 512, 1024, 2048, 4096, etc. Right now we don't have a lot of traffic (approx. 1000 visitors/day), but I thought the whole "cloud" idea was to not be limited and to auto-scale. Update: What I'm looking for now is a specification of what I need; I know it's a complex question. I'm looking for examples, case studies, etc. It would be interesting to hear something like "Yes, we're serving 10,000 daily requests without spikes on a LAMP stack with only one slice with 2 GB RAM".

    Read the article

  • Outlook new message size nearly 1 MB

    - by Yossi Dahan
    I've been using Outlook 2010 for several weeks with no issues. Suddenly, a few days ago, the size of my outgoing messages got huge. Looking at this, it appears that a huge CSS style is being created, with around 14,000 definitions for list items, making the message almost 1 MB before I've even typed one word. Emails before that point were very small. Needless to say I can't remember changing anything, nor can anyone around here provide any possible explanation... Any ideas?

    Read the article

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (> 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like a likely culprit. Thanks ...

    Read the article

  • Reducing Windows 7 size

    - by Sejanus
    My Windows 7 install uses around 16 GB of hard disk space, while Windows XP uses only around 4 GB. Seems weird. I use Windows only for gaming, so I don't need a lot of the stuff it has to offer. What is the best way to reduce Windows 7's size? What can I delete / uninstall, and how? Also, I'd like to reduce RAM and processor usage as much as possible (as long as it doesn't hurt game performance)... turn off all that fancy stuff and so on. What can I turn off, and how? Thanks in advance!

    Read the article

  • Trying to make changes to the size of the events buffer in prelude-ids auditd plugin

    - by tharris
    I am running systems using the prelude-ids plugin for auditd. When the manager is up, everything works fine; however, I have a requirement that when the clients can't talk to the manager they should store no more than 250MB of messages, and when they hit that point they should start deleting the oldest events. All I can find is that audispd can be set to an overflow action of ignore, syslog, suspend, single, or halt, none of which meet my requirement and several of which I really cannot use. Does anyone know a way to do this? I know the events get stored in /var/spool/prelude/auditd/global, but I can't find anything about configuring how things are stored there. There are usually several files in the global directory, but only 2 of them ever go above 0 in size: data0 and data0.journal.

    Read the article

  • Maximum MTU size

    - by user192702
    I thought one of the issues I'm experiencing in the following question was due to MTU, and rightfully so: ESXi 5 VM Putty session hangs, vSphere client timing out. However, when I tried testing the maximum MTU size, it seems there's just no limit. I thought Ethernet only allows a maximum MTU, but I'm up to 54450:

        ping -l 54450 192.168.10.7

        Pinging 192.168.50.7 with 54450 bytes of data:
        Reply from 192.168.10.7: bytes=54450 time=1081ms TTL=62
        Reply from 192.168.10.7: bytes=54450 time=1079ms TTL=62
        Reply from 192.168.10.7: bytes=54450 time=1079ms TTL=62
        Reply from 192.168.10.7: bytes=54450 time=1079ms TTL=62

        Ping statistics for 192.168.10.7:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1079ms, Maximum = 1081ms, Average = 1079ms
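
    A hedged note on why such large pings appear to work: ping -l sets the ICMP payload size, not the on-wire frame size, and without the don't-fragment flag the IP layer simply splits the datagram into MTU-sized fragments, so this test doesn't really probe the link MTU. A rough arithmetic sketch of what a 54450-byte payload becomes on a standard 1500-byte Ethernet MTU:

        import math

        MTU = 1500                     # assumed standard Ethernet MTU
        IP_HEADER = 20                 # bytes, no IP options
        ICMP_HEADER = 8                # echo request header
        payload = 54450                # from the ping -l example above

        datagram = ICMP_HEADER + payload       # total ICMP message handed to IP
        per_fragment = MTU - IP_HEADER         # data carried per fragment (1480 bytes)
        fragments = math.ceil(datagram / per_fragment)
        print(f"{datagram} bytes -> {fragments} IP fragments on a {MTU}-byte MTU link")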

    Read the article

  • System has reached the maximum size allowed for the system part of the registry

    - by Bob Denny
    To be precise: "System has reached the maximum size allowed for the system part of the registry. Additional storage requests will be ignored." WinXP/64 has been running fine for 2 years (no /3GB switch); this just started happening. I used ntregopt and the problem went away, at least temporarily. However, looking before and after in Windows\System32\Config, I see that my System file was reduced by only 10% and is still 170+ MB. According to my rather extensive research with Google, this is "huge" and should be more like 10-20 MB. The system runs fine. There is a System.bak that is only 11 MB and has the date when I ran ntregopt. That's what I know. Now my question: is there anything I can do to reduce or rebuild the System registry hive, given the above info?

    Read the article

  • Web Folder size/quota reporting tool?

    - by nctrnl
    I am currently using a Visual Basic script to determine how big the web folders are and what quota is assigned to each folder. The quota is in no way a physical limit, just a value inserted by me to decide whether a user is using too much space or not. The script does the job quite neatly and sends an HTML file by mail on a regular basis. The problem is that it's such a hassle to insert new quotas, since I have to fiddle around with the code. A central "control panel" with an overview and the ability to insert new quotas would be more suitable. Is there any software that can do the following?

    - Scan a specified folder and its subfolders
    - Report the file size and present it in some sort of interface (could be a PHP/MySQL solution)
    - Specify a quota and see the difference

    It is really important that the quota handling is kept simple, so that a non-technician can handle it.
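
    As a hedged illustration of the scan/report loop being asked about (a minimal sketch only, not a recommendation of specific software; the folder paths and quota values are placeholders, not taken from the question):

        import os

        # Walk each web folder, sum file sizes, and compare against a quota (in MB).
        quotas_mb = {
            r"D:\web\site1": 500,      # placeholder paths and quotas
            r"D:\web\site2": 1000,
        }

        def folder_size(path):
            total = 0
            for root, _dirs, files in os.walk(path):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass               # skip unreadable files
            return total

        for folder, quota in quotas_mb.items():
            used_mb = folder_size(folder) / 1024 / 1024
            status = "OVER" if used_mb > quota else "ok"
            print(f"{folder}: {used_mb:.1f} MB of {quota} MB ({status})")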

    Read the article

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN end-point and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps it topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.
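
    For reference, both the current fill level and the pool ceiling that the 3585 figure is bumping against can be read from procfs on typical Linux kernels; a minimal sketch:

        # Report the kernel entropy pool's current level and its ceiling (in bits).
        def read_int(path):
            with open(path) as f:
                return int(f.read().strip())

        avail = read_int("/proc/sys/kernel/random/entropy_avail")
        poolsize = read_int("/proc/sys/kernel/random/poolsize")
        print(f"entropy: {avail} / {poolsize} bits")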

    Read the article

  • When copying a VM filesystem over netcat, dd copies double the disk size

    - by JivanAmara
    I'm attempting to copy the disk of a working headless VirtualBox VM (VM1) on one server to a new VM (VM2) on a vCloud server. I don't have access to the host of VM2. The OS is Windows Server 2003 (32-bit).

    - I start both VMs with a live Knoppix image.
    - On VM2 I run: nc -l | dd of=/dev/sda bs=512
    - On VM1 I run: dd if=/dev/sda bs=512 | nc

    I previously did this with another Windows VM and it worked fine. VM1 has a disk of size ~70GB (verified with fdisk); however, the amount of data dd reports read/written is ~139GB. Of course the target machine doesn't work properly: I get a Windows splash screen, then a blue error screen with general "system not working" information. I'm at a loss as to what could cause this. Any ideas?

    Read the article

  • Maximizing after moving an RDC window between different-size monitors

    - by msorens
    My Win7 system has two monitors of different sizes. When I open a Remote Desktop Connection on one monitor set to use full screen, both the RDC window and the remote system's desktop fill the monitor. If I then move the window onto my second monitor (1: Restore Down to make it movable; 2: drag the window to the other monitor; 3: Maximize to fill the monitor), the RDC window fills the monitor, but the remote system's desktop remains the same size it was before. Thus, if I move from the larger to the smaller monitor I get scrollbars to see the whole remote desktop, while if I move from the smaller to the larger monitor the remote desktop occupies only a portion of the screen. My workaround is to close the RDC window completely and then re-establish it on the other monitor. Is there a way to avoid this overhead and just resize the remote desktop to fit?

    Read the article

  • TCP window size won't go above 130048

    - by Roger
    I have 2 servers set up with about 80ms latency between them. Both are CentOS 6 and run a Java app that transfers data from one location to another. Both are on 1 Gbps connections. I have been trying different sysctl settings and different send and receive buffer settings in Java, but no matter what I set them to, I cannot get the TCP window size to go above 130048 in the tcpdump captures. This equates to roughly 13 Mbps, which is the actual throughput I am getting.
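
    The numbers above line up with the usual bandwidth-delay-product arithmetic: throughput is bounded by window size divided by round-trip time, so a ~130 KB window over 80 ms gives roughly 13 Mbps, and filling a 1 Gbps path at that latency needs a window on the order of 10 MB. A small worked sketch:

        rtt = 0.080                          # seconds, from the question
        window = 130048                      # bytes, the observed cap

        throughput_mbps = window / rtt * 8 / 1e6
        print(f"max throughput with this window: {throughput_mbps:.1f} Mbps")          # ~13 Mbps

        link_bps = 1e9                       # 1 Gbps link
        needed_window = link_bps / 8 * rtt   # bandwidth-delay product
        print(f"window needed to fill 1 Gbps at 80 ms: {needed_window / 1e6:.0f} MB")  # ~10 MB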

    Read the article

  • Download size of streaming videos

    - by Excalibur2000
    I would like to know: if a website advertises a streaming download as, say, 100 MB, would my download to my computer be 100 MB? Would there be streaming control packets that a service provider would charge for over and above the 100 MB of content? Assume the latest RealPlayer viewer. The rub for me is that I have downloaded MIT lectures, and according to my file manager the file sizes have matched up to the download sizes on YouTube. However, my ISP seems to think that the streams were larger and charged me for more than the file size of the download. I am left wondering where the extra data came from.
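
    A hedged rough estimate of where a billing gap can come from: traffic is usually metered at the IP layer, so every TCP segment carries header bytes on top of the content, plus any retransmissions, handshakes and acknowledgements. For full-size segments on a 1500-byte MTU that per-packet overhead is only a few percent:

        MTU = 1500                    # assumed Ethernet MTU
        headers = 20 + 20             # IPv4 + TCP headers, no options
        payload = MTU - headers       # 1460 bytes of content per full segment

        content_mb = 100              # the advertised 100 MB download
        on_wire_mb = content_mb * MTU / payload
        print(f"per-packet overhead: {headers / MTU:.1%}")                     # ~2.7%
        print(f"100 MB of content -> ~{on_wire_mb:.1f} MB on the wire (before retransmits)")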

    Read the article

  • Excel Document Size is at 0KB, can't be opened

    - by Bassam
    After I saved an Excel document, I remembered that I needed to change something in it, so I went back to open it and it said: "Excel cannot open the file, because the file format or the file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file." I know that when I saved it, around 2 hours ago, it worked just fine. The document size is at 0 KB now. How do I recover this document? It's crucial for my business!

    Read the article

  • Default Window Size and Positions

    - by muntoo
    I'm running Vista Home Premium 32-bit, though I doubt that matters in this case. So, how do you set the default window size/positions? Is there a registry tweak for this? Going to Tools -> Folder Options... -> View and using Apply to Folders or Reset Folders doesn't work. Whenever I open a new window onto a random folder I haven't opened before, I don't want it to take up 75% of the screen. Is there any way I could make it open smaller? (So it opens correctly the first time.)

    Read the article

  • Very small computer for note-taking, with full-size keyboard

    - by Reid
    I am looking for a very small, lightweight computer with a full-size keyboard for taking text notes. Ideally it would be 500g or less including batteries for 16 hours of use. And writing text is the only use - a typewriter, if I could find one light enough, would be just fine. [I realize this is not the place for product recommendations, and that's not what I'm looking for. Rather, I have no experience in this space, so what I'd like is to understand what kinds of equipment are available and what are the right keywords to plug into Google/eBay/etc. In other words, help me learn enough to do a worthwhile search.]

    Read the article

  • Reducing the size of the EDB file

    - by Toby
    I have hit an issue on an MS SBS machine where every morning the datastore for the Exchange mailboxes dismounts itself. We believe the issue is that it has grown too large over time and needs to be cut down a bit. As part of this we have removed (purged) some mail files that were no longer needed, which should have given us a saving of roughly 3 GB (more than enough for what we need). So I deleted the mailboxes, then purged them, and noticed that the .edb file was still reporting the same size. I dismounted and remounted it to see if that would have any effect, but it did not. Am I missing a step? I have read online that you can run an offline defrag on the file, but that seems to only recover a small amount of whitespace. Any help would be greatly appreciated.

    Read the article

  • Transient mysqlcheck errors about "size of datafile" (file too small)

    - by Adam Backstrom
    Running mysqlcheck on a live database is giving me transient errors like this one:

        mydatabase.mytable
        error : Size of datafile is: 500719688  Should be: 501000484
        error : Corrupt

    When I run the command again or check the table one-off using mysql, it's listed as OK. Is this just a side effect of running checks on live tables? Is it possible that data is not flushed, hence the strange discrepancy? We moved several databases this morning by shutting down mysqld on the source and rsyncing files across to the new server, but these are all MyISAM tables so I don't believe the two things are related. (But I mention it just in case.)

    Read the article

  • zLib on iPhone stops at the first BLOCK

    - by cedric
    I am trying to call the iPhone zlib to decompress the zlib stream from our HTTP-based server, but the code always stops after finishing the first zlib block. Obviously, the iPhone SDK is using the standard open zlib. My suspicion is that the parameter for inflateInit2 is not appropriate here. I spent lots of time reading the zlib manual, but it isn't that helpful. Here are the details; your help is appreciated.

    (1) The HTTP request:

        NSURL *url = [NSURL URLWithString:@"http://192.168.0.98:82/WIC?query=getcontacts&PIN=12345678&compression=Y"];

    (2) The data I get from the server looks something like this (once decompressed). The stream was compressed by the C# zlib class DeflateStream:

        $REC_TYPE=SYS
        Status=OK
        Message=OK
        SetID=
        IsLast=Y
        StartIndex=0
        LastIndex=6
        EOR
        ......
        $REC_TYPE=CONTACTSDISTLIST
        ID=2
        Name=CTU+L%2EA%2E
        OnCallEnabled=Y
        OnCallMinUsers=1
        OnCallEditRight=
        OnCallEditDLRight=D
        Fields=
        CL=
        OnCallStatus=
        EOR

    (3) However, I only ever get the first block. The decompression code on the iPhone (copied from a code piece from somewhere here) is as follows. The loop between lines 23~38 always breaks the second time through.

        + (NSData *) uncompress: (NSData*) data {
        1     if ([data length] == 0) return nil;
        2     NSInteger length = [data length];
        3     unsigned full_length = length;
        4     unsigned half_length = length / 2;
        5     NSMutableData *decompressed = [NSMutableData dataWithLength: 5*full_length + half_length];
        6     BOOL done = NO;
        7     int status;
        8     z_stream strm;
        9     length = length - 4;
        10    void* bytes = malloc(length);
        11    NSRange range;
        12    range.location = 4;
        13    range.length = length;
        14    [data getBytes: bytes range: range];
        15    strm.next_in = bytes;
        16    strm.avail_in = length;
        17    strm.total_out = 0;
        18    strm.zalloc = Z_NULL;
        19    strm.zfree = Z_NULL;
        20    strm.data_type = Z_BINARY;
        21    // if (inflateInit(&strm) != Z_OK) return nil;
        22    if (inflateInit2(&strm, (-15)) != Z_OK) return nil;  // It won't work if I change -15 to a positive number.
        23    while (!done)
        24    {
        25        // Make sure we have enough room and reset the lengths.
        26        if (strm.total_out >= [decompressed length])
        27            [decompressed increaseLengthBy: half_length];
        28        strm.next_out = [decompressed mutableBytes] + strm.total_out;
        29        strm.avail_out = [decompressed length] - strm.total_out;
        30
        31        // Inflate another chunk.
        32        status = inflate (&strm, Z_SYNC_FLUSH);  // Z_SYNC_FLUSH --> Z_BLOCK won't work either
        33        if (status == Z_STREAM_END) {
        34
        35            done = YES;
        36        }
        37        else if (status != Z_OK) break;
        38    }
        39    if (inflateEnd (&strm) != Z_OK) return nil;
        40    // Set real length.
        41    if (done)
        42    {
        43        [decompressed setLength: strm.total_out];
        44        return [NSData dataWithData: decompressed];
        45    }
        46    else return nil;
        47 }
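
    A hedged note on the windowBits argument, which is the usual stumbling block here: positive values (e.g. 15) tell inflate to expect a zlib-wrapped stream (2-byte header plus Adler-32 trailer), while negative values (e.g. -15) expect a raw deflate stream with no wrapper, which is what C#'s DeflateStream produces. Python's zlib module exposes the same wbits convention as the C library, so a small sketch of the distinction:

        import zlib

        data = b"block size " * 100

        zlib_stream = zlib.compress(data)     # 2-byte zlib header + deflate data + Adler-32
        raw_deflate = zlib_stream[2:-4]       # strip the wrapper -> raw deflate

        # wbits = -15 : raw deflate, no header/trailer (inflateInit2(&strm, -15) in C)
        d = zlib.decompressobj(-15)
        assert d.decompress(raw_deflate) + d.flush() == data

        # wbits = +15 : expects the zlib wrapper (plain inflateInit in C)
        d = zlib.decompressobj(15)
        assert d.decompress(zlib_stream) + d.flush() == data

        print("both streams decoded")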

    Read the article

  • Feedback on Optimizing a C# .NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet format is always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // Password sent to be authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());

            str.Clear();
            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to the authentication request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1));          // Packet size
            writer.Write((int)(requestid));  // Mirror RequestID if authenticated, -1 if failed
            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same way (int/int/int/str/str) but with different values. I could probably create a packet class, but that is a bit outside my scope of knowledge as far as how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
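
    For comparison, the same int32/int32/int32/string/string framing can be expressed compactly with a pack/parse helper; the sketch below follows the Source RCON layout described on the linked page (little-endian 32-bit fields, two null-terminated strings, and a size field that excludes itself), but treat it as an illustration rather than a drop-in implementation:

        import struct

        def build_packet(request_id, packet_type, body):
            payload = struct.pack("<ii", request_id, packet_type) + body.encode("utf-8") + b"\x00\x00"
            return struct.pack("<i", len(payload)) + payload       # size field excludes itself

        def parse_packet(data):
            size, request_id, packet_type = struct.unpack_from("<iii", data, 0)
            body = data[12:size + 2].decode("utf-8")                # drop the two trailing nulls
            return size, request_id, packet_type, body

        pkt = build_packet(1, 3, "hunter2")    # type 3 is the auth request in the linked spec
        print(parse_packet(pkt))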

    Read the article
