Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 175/670 | < Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >

  • Booting 11.10 from USB stick on MacBook Pro 5,1 fails

    - by Helge Stenström
    I've created a bootable memory stick on a Windows computer and tested it on an HP PC. It's made from a 64-bit image of Ubuntu 11.10, downloaded from http://www.ubuntu.com/download/ubuntu/download. When I boot from this memory stick, there is some kind of boot menu where I can choose to run Ubuntu from the memory stick or to install it. I select "Run from memory stick" (the wording may not be exact; I'm quoting from memory). From this point on, the screen is black (but backlit), and I can't do anything but turn off the computer. It gets hot, too. Has anyone been more successful than I have? Are there known issues? The computer is a 15 inch MacBook Pro 5,1 (unibody, late 2008) with 4 GB of memory.

    Read the article

  • Computer won't reboot without waiting for a while

    - by Benjamin
    I've got an unusual problem with my computer. Whenever I reboot it, it won't boot: I get a few beeps from the BIOS and nothing else. However, if I wait for a few minutes, the computer boots perfectly. I tried to count the beeps and I get around 7-9 of them; the first two are noticeably closer together than the rest. [Edit: I'm now reasonably confident it's 1 long followed by 8 short beeps. That would be a display-related issue: http://www.bioscentral.com/beepcodes/amibeep.htm] My BIOS is from American Megatrends Inc., version P1.80, and the motherboard is an ASRock X58 Extreme (both according to dmidecode). Here's the output from lspci; I'm not sure what else might be useful, but I can provide whatever's asked.

      00:00.0 Host bridge: Intel Corporation 5520/5500/X58 I/O Hub to ESI Port (rev 13)
      00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
      00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
      00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
      00:14.0 PIC: Intel Corporation 5520/5500/X58 I/O Hub System Management Registers (rev 13)
      00:14.1 PIC: Intel Corporation 5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
      00:14.2 PIC: Intel Corporation 5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
      00:14.3 PIC: Intel Corporation 5520/5500/X58 I/O Hub Throttle Registers (rev 13)
      00:1a.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
      00:1a.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
      00:1a.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
      00:1a.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
      00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
      00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
      00:1c.1 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 2
      00:1c.5 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 6
      00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
      00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
      00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
      00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
      00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
      00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
      00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
      00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
      01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
      02:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series Firewire Controller
      02:00.1 IDE interface: VIA Technologies, Inc. VT6415 PATA IDE Host Controller (rev a0)
      03:00.0 SATA controller: JMicron Technology Corp. JMB360 AHCI Controller (rev 02)
      05:00.0 VGA compatible controller: nVidia Corporation GT200b [GeForce GTX 285] (rev a1)
      ff:00.0 Host bridge: Intel Corporation Xeon 5500/Core i7 QuickPath Architecture Generic Non-Core Registers (rev 05)
      ff:00.1 Host bridge: Intel Corporation Xeon 5500/Core i7 QuickPath Architecture System Address Decoder (rev 05)
      ff:02.0 Host bridge: Intel Corporation Xeon 5500/Core i7 QPI Link 0 (rev 05)
      ff:02.1 Host bridge: Intel Corporation Xeon 5500/Core i7 QPI Physical 0 (rev 05)
      ff:03.0 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller (rev 05)
      ff:03.1 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Target Address Decoder (rev 05)
      ff:03.4 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Test Registers (rev 05)
      ff:04.0 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Control Registers (rev 05)
      ff:04.1 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Address Registers (rev 05)
      ff:04.2 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Rank Registers (rev 05)
      ff:04.3 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 0 Thermal Control Registers (rev 05)
      ff:05.0 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Control Registers (rev 05)
      ff:05.1 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Address Registers (rev 05)
      ff:05.2 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Rank Registers (rev 05)
      ff:05.3 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 1 Thermal Control Registers (rev 05)
      ff:06.0 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Control Registers (rev 05)
      ff:06.1 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Address Registers (rev 05)
      ff:06.2 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Rank Registers (rev 05)
      ff:06.3 Host bridge: Intel Corporation Xeon 5500/Core i7 Integrated Memory Controller Channel 2 Thermal Control Registers (rev 05)

    Update: OK, I installed lm-sensors and here's the output.

      coretemp-isa-0000
      Adapter: ISA adapter
      Core 0:     +58.0°C  (high = +80.0°C, crit = +100.0°C)
      Core 1:     +59.0°C  (high = +80.0°C, crit = +100.0°C)
      Core 2:     +58.0°C  (high = +80.0°C, crit = +100.0°C)
      Core 3:     +57.0°C  (high = +80.0°C, crit = +100.0°C)

      it8720-isa-0a10
      Adapter: ISA adapter
      in0:        +0.93 V  (min = +0.00 V, max = +4.08 V)
      in1:        +0.06 V  (min = +0.00 V, max = +4.08 V)
      in2:        +3.25 V  (min = +0.00 V, max = +4.08 V)
      +5V:        +2.91 V  (min = +0.00 V, max = +4.08 V)
      in4:        +3.04 V  (min = +0.00 V, max = +4.08 V)
      in5:        +2.94 V  (min = +0.00 V, max = +4.08 V)
      in6:        +2.14 V  (min = +0.00 V, max = +4.08 V)
      5VSB:       +2.96 V  (min = +0.00 V, max = +4.08 V)
      Vbat:       +3.28 V
      fan1:      1869 RPM  (min = 0 RPM)
      fan2:         0 RPM  (min = 0 RPM)
      fan3:         0 RPM  (min = 0 RPM)
      fan4:      1106 RPM  (min = -1 RPM)
      fan5:    225000 RPM  (min = -1 RPM)
      temp1:      +39.0°C  (low = +0.0°C, high = +127.0°C)  sensor = thermistor
      temp2:      +56.0°C  (low = +0.0°C, high = +127.0°C)  sensor = thermistor
      temp3:     +127.0°C  (low = +0.0°C, high = +127.0°C)  sensor = thermistor
      cpu0_vid:  +1.650 V
      intrusion0: ALARM

    If it helps, here's the summary from sensors-detect:

      Driver `it87':
        * ISA bus, address 0xa10
          Chip `ITE IT8720F Super IO Sensors' (confidence: 9)
      Driver `adt7475':
        * Bus `NVIDIA i2c adapter 3 at 5:00.0'
          Busdriver `nvidia', I2C address 0x2e
          Chip `Analog Devices ADT7473' (confidence: 5)
      Driver `coretemp':
        * Chip `Intel digital thermal sensor' (confidence: 9)

    Read the article

  • disable intel gpu in ubuntu 12.04

    - by small_potato
    I am wondering if there is any way to disable the Intel GPU on Ubuntu 12.04. I want to be able to set up dual monitors using nvidia-settings. It seems the Intel GPU is being used for the display, as suggested by sudo lshw -c display; the output is:

      *-display
           description: VGA compatible controller
           product: NVIDIA Corporation
           vendor: NVIDIA Corporation
           physical id: 0
           bus info: pci@0000:01:00.0
           version: a1
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
           configuration: driver=nvidia latency=0
           resources: irq:16 memory:c0000000-c0ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:4000(size=128) memory:a2000000-a207ffff
      *-display
           description: VGA compatible controller
           product: Haswell Integrated Graphics Controller
           vendor: Intel Corporation
           physical id: 2
           bus info: pci@0000:00:02.0
           version: 06
           width: 64 bits
           clock: 33MHz
           capabilities: msi pm vga_controller bus_master cap_list rom
           configuration: driver=i915 latency=0
           resources: irq:47 memory:c2000000-c23fffff memory:b0000000-bfffffff ioport:5000(size=64)

    I have a Lenovo Y410 with a GT750M. It seems there is no way to turn off the Intel GPU in the BIOS either. Help please. Thanks.

    Read the article

  • The Collatz Sequence problem

    - by Gandalf StormCrow
    I'm trying to solve this problem. It's not a homework question; it's just code I'm submitting to uva.onlinejudge.org so I can learn Java better through examples. Here is the problem's sample input:

      3 100
      34 100
      75 250
      27 2147483647
      101 304
      101 303
      -1 -1

    Here is the sample output:

      Case 1: A = 3, limit = 100, number of terms = 8
      Case 2: A = 34, limit = 100, number of terms = 14
      Case 3: A = 75, limit = 250, number of terms = 3
      Case 4: A = 27, limit = 2147483647, number of terms = 112
      Case 5: A = 101, limit = 304, number of terms = 26
      Case 6: A = 101, limit = 303, number of terms = 1

    The thing is, this has to execute within a 3-second time limit, otherwise the submission won't be accepted as a solution. Here is what I've come up with so far. It works as it should, but the execution time is not within 3 seconds. Here is the code:

      import java.util.Scanner;

      class Main {
          public static void main(String[] args) {
              Scanner stdin = new Scanner(System.in);
              int start;
              int limit;
              int terms;
              int a = 0;
              while (stdin.hasNext()) {
                  start = stdin.nextInt();
                  limit = stdin.nextInt();
                  if (start > 0) {
                      terms = getLength(start, limit);
                      a++;
                  } else {
                      break;
                  }
                  System.out.println("Case " + a + ": A = " + start + ", limit = " + limit + ", number of terms = " + terms);
              }
          }

          public static int getLength(int x, int y) {
              int length = 1;
              while (x != 1) {
                  if (x <= y) {
                      if (x % 2 == 0) {
                          x = x / 2;
                      } else {
                          x = x * 3 + 1;
                      }
                      length++;
                  } else {
                      length--;
                      break;
                  }
              }
              return length;
          }
      }

    And yes, here is how it's meant to be solved. An algorithm given by Lothar Collatz produces sequences of integers, and is described as follows:

      Step 1: Choose an arbitrary positive integer A as the first item in the sequence.
      Step 2: If A = 1 then stop.
      Step 3: If A is even, then replace A by A / 2 and go to step 2.
      Step 4: If A is odd, then replace A by 3 * A + 1 and go to step 2.

    My question is: how can I make it run inside the 3-second time limit?
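
    Two things commonly push this kind of submission over the limit, and a hedged sketch of both is below, written in Python for brevity (the same ideas port directly to Java). First, the working value can overflow a 32-bit int: 3 * A + 1 exceeds Integer.MAX_VALUE once A is larger than about 715 million while still being under a limit of 2147483647, which can send the loop wandering through negative values (in Java, do the arithmetic in a long). Second, reading tokens one at a time through Scanner is slow compared with reading the input in bulk (in Java, a BufferedReader). The names below are illustrative.

      import sys

      def terms_within_limit(a, limit):
          # Count Collatz terms starting at a; stop once the value exceeds limit
          # or reaches 1. Python integers cannot overflow, which sidesteps the
          # 3*a + 1 hazard of 32-bit ints.
          length = 1
          while a != 1 and a <= limit:
              a = a // 2 if a % 2 == 0 else 3 * a + 1
              length += 1
          return length if a <= limit else length - 1

      def main():
          tokens = sys.stdin.read().split()   # read all input up front
          case = 0
          for i in range(0, len(tokens) - 1, 2):
              a, limit = int(tokens[i]), int(tokens[i + 1])
              if a <= 0:                      # same termination rule as the code above
                  break
              case += 1
              print("Case %d: A = %d, limit = %d, number of terms = %d"
                    % (case, a, limit, terms_within_limit(a, limit)))

      if __name__ == "__main__":
          main()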

    Read the article

  • AS3 - Unloaded AVM1 swfs trace out as unloaded but memory is not freed for the AVM2 machine

    - by puppbits
    I have a large project built in AS3. Part of its main functionality is to load and unload various AS2 swfs. The problem is that the memory isn't freed up once they are unloaded. I have access to the AS2 swfs' code base and have destroyed all objects, stopped and killed timers and listeners, removed everything from the stage, and destroyed all the MovieClip.prototypes that were created. They look to be clean, as far as the AS2 debugger shows: no remnants of the objects remain after the destroy function is run. In AS3 I've closed the local connection, cleaned all references/listeners to the AVM1Movie and run Loader.unloadAndStop(). The trace output in Flex says the swf was unloaded, but looking at Windows Task Manager the memory usage never drops back to where it was before the AS2 swf was loaded. Each AS2 swf can take up to 80 megs each time it's run, so memory gets eaten up fast after loading and unloading a few AS2 files. At this point, if the AS2 swfs really are unloaded, the only things I can assume could be left are MovieClip.prototype and/or _global or _root variables added during the AS2's run time. But I've gone through those and can't find anything else that might be sticking around. Has anyone ever seen problems before with the AVM1 machine not freeing up its memory?

    Read the article

  • Do ctypes Structures and POINTERs automatically free the memory when the Python object is deleted?

    - by jsbueno
    When using Python ctypes there are Structures, which allow you to clone C structures on the Python side, and POINTER objects, which create a sophisticated Python object from a memory address value and can be used to pass objects by reference back and forth to C code. What I could not find in the documentation or elsewhere is what happens when a Python object containing a Structure that was dereferenced from a pointer returned by C code (that is, the C function allocated the memory for the structure) is itself deleted. Is the memory for the original C structure freed? If not, how do I do that? Furthermore, what if the Structure itself contains pointers to other data that was also allocated by the C function? Does deleting the Structure object free the pointers in its members? (I doubt it.) If not, how do I do that? Trying to call the system "free" from Python for the pointers in the Structure crashes Python for me. In other words, I have this structure, filled in by a C function call:

      class PIX(ctypes.Structure):
          """Comments not generated """
          _fields_ = [
              ("w", ctypes.c_uint32),
              ("h", ctypes.c_uint32),
              ("d", ctypes.c_uint32),
              ("wpl", ctypes.c_uint32),
              ("refcount", ctypes.c_uint32),
              ("xres", ctypes.c_uint32),
              ("yres", ctypes.c_uint32),
              ("informat", ctypes.c_int32),
              ("text", ctypes.POINTER(ctypes.c_char)),
              ("colormap", ctypes.POINTER(PIXCOLORMAP)),
              ("data", ctypes.POINTER(ctypes.c_uint32))
          ]

    And I want to free the memory it is using up from Python code.
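
    For what it's worth, ctypes only ever frees memory that ctypes itself allocated (for example, a Structure instantiated in Python or a buffer from create_string_buffer); it never calls free() on pointers handed back by C code, so deleting the Python wrapper releases nothing on the C side, and member pointers are not touched either. The usual pattern is to hand the pointer back to the library's own deallocator, which also explains why calling the C runtime's free() on memory the library allocated with its own routines can crash. A minimal sketch follows; it assumes this struct is Leptonica's PIX and that the library exposes pixRead and pixDestroy(PIX **) as allocator and deallocator, so treat the library name, function names and signatures as assumptions to verify against your own headers.

      import ctypes

      # Assumed library name and entry points; verify against your headers.
      lept = ctypes.CDLL("liblept.so")
      lept.pixRead.restype = ctypes.POINTER(PIX)                       # allocator returns PIX*
      lept.pixRead.argtypes = [ctypes.c_char_p]
      lept.pixDestroy.restype = None                                   # deallocator takes PIX**
      lept.pixDestroy.argtypes = [ctypes.POINTER(ctypes.POINTER(PIX))]

      pix_ptr = lept.pixRead(b"example.png")    # C side allocates the PIX and its members
      print(pix_ptr.contents.w, pix_ptr.contents.h)

      # Hand the pointer back to the library that allocated it; it frees the
      # struct and everything hanging off it, then nulls our pointer.
      lept.pixDestroy(ctypes.byref(pix_ptr))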

    Read the article

  • how can I save/keep-in-sync an in-memory graph of objects with the database?

    - by Greg
    Question: what is a good best-practice approach for saving/keeping-in-sync an in-memory graph of objects with the database?

    Background: Say I have the classes Node and Relationship, and the application is building up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is no doubt good for performance (e.g. traverse the graph from Node X to find the root parents). The graph does, however, need to be persisted into a database with tables NODES and RELATIONSHIPS. Therefore, what is a good best-practice approach for saving/keeping-in-sync an in-memory graph of objects with the database? Ideal requirements would include:

      - build up changes in-memory and then 'save' afterwards (mandatory)
      - when saving, apply updates to the database in the correct order to avoid hitting any database constraints (mandatory)
      - keep the persistence mechanism separate from the model, for ease of changing the persistence layer if needed, e.g. don't just wrap an ADO.net DataRow in the Node and Relationship classes (desirable)
      - a mechanism for doing optimistic locking (desirable)

    Or is the overhead of all this for a smallish application just not worth it, so I should just hit the database each time for everything? (Assuming the response times were acceptable; I would still like to avoid it, if it isn't too much extra overhead, to remain somewhat scalable performance-wise.)
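
    One common shape for the "build up changes in memory, save afterwards" requirement is the Unit of Work pattern: the model objects stay persistence-ignorant, a tracker records what was added, changed and removed, and a single commit flushes those changes in an order that respects the foreign-key constraints (insert nodes before the relationships that reference them, delete relationships before nodes). The sketch below is illustrative Python with made-up names for the database helper; it is one way to satisfy the mandatory requirements, not the only one.

      class UnitOfWork:
          """Tracks in-memory changes to Nodes and Relationships and flushes
          them to the database in a constraint-safe order."""

          def __init__(self, db):
              self.db = db                      # assumed helper exposing a transaction() context
              self.new_nodes, self.dirty_nodes, self.removed_nodes = [], [], []
              self.new_rels, self.removed_rels = [], []

          def register_new_node(self, node):
              self.new_nodes.append(node)

          def register_dirty_node(self, node):
              self.dirty_nodes.append(node)

          def register_removed_node(self, node):
              self.removed_nodes.append(node)

          def register_new_relationship(self, rel):
              self.new_rels.append(rel)

          def register_removed_relationship(self, rel):
              self.removed_rels.append(rel)

          def commit(self):
              with self.db.transaction() as tx:
                  for n in self.new_nodes:
                      tx.insert_node(n)
                  for n in self.dirty_nodes:
                      tx.update_node(n)         # compare a version column here for optimistic locking
                  for r in self.new_rels:
                      tx.insert_relationship(r)
                  for r in self.removed_rels:
                      tx.delete_relationship(r)
                  for n in self.removed_nodes:
                      tx.delete_node(n)         # last, so nothing still references them
              self.__init__(self.db)            # reset tracking after a successful flush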

    Read the article

  • Correct way to switch between UIView with ARC. My way leads to memory leaks :( (iOS)

    - by Andrei Golubev
    I use Xcode 4.4 with ARC on. I have dynamically created UIViews in ViewController.m:

      UIView *myviews[10];

    Then in the - (void)viewDidLoad method I fill each of them with the pictures I need:

      myviews[viewIndex] = [[UIView alloc] initWithFrame:myrec];
      UIImage *testImg;
      UIImageView *testImgView;
      testImg = [UIImage imageNamed:[NSString stringWithFormat:@"imgarray%d.png", viewIndex]];
      testImgView.image = testImg;
      viewIndex++;

    So far all seems to be fine. When I want to jump from one view to another, I do this with two buttons (next/previous):

      [self.view addSubview:views[viewIndex]];
      CATransition *animation = [CATransition animation];
      [animation setDelegate:self];
      [animation setDuration:1.0f];
      [animation setType:@"rippleEffect"];
      [animation setSubtype:kCATransitionFromTop];
      //[animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
      [self.view.layer addAnimation:animation forKey:@"transitionViewAnimation"];

    Nothing seems to be wrong, but the memory consumption grows very quickly when I switch between views, and then I get a low-memory warning or sometimes the application just crashes. I have tried using a UIViewController array and switching between the controllers instead: nothing changes, and the low-memory warning is still what I end up with. Maybe I need to clean up the memory somehow? But how? ARC does not allow the use of release and so on. The last thing I tried (sorry, maybe not very professional), before adding the new subview, is this:

      NSArray *viewsToRemove = [self.view subviews];
      for (UIView *views in viewsToRemove) {
          [views removeFromSuperview];
      }

    But this does not help either. Please don't judge too harshly, I am still new to iOS and Objective-C.

    Read the article

  • Is it possible to limit outside connections to a subdomain with .htaccess or similar?

    - by digidave0205
    I host a web application. This application serves static html pages that are refreshed at various intervals. Some as often as every 30 secs. At this time I have about 300 unique pages that are accessed via 300 unique subdomains. Some clients have at most 50 visitors to their unique page and it refreshes every 30 secs, no problem. Other clients have up to 1000 or more visitors to their page. These clients are killing my server. There was no predefined limit upon signup but now I have to impose such a limit to remain afloat financially. I would like to define a finite number of connections allowed for each individual subdomain in my hosting account. Connections attempted out of range of this finite value would either be rejected or redirected. I have access to .htaccess and php.ini. Is something of this nature possible? Oh, I have a dedicated/managed server at 1and1.

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

      SELECT a.*, c.*
      FROM (SELECT * FROM accounts LIMIT 100) a
      LEFT JOIN `comments` c ON c.account_id = a.id
      ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way, this might be the absolute simplest way to do it, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way that could potentially compete with the above statement in terms of speed.

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory-efficient and speed-efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64-bit, is to use a mechanism that allows me to have a large array spanning multiple smaller memory fragments. I don't want an allocation per element, as that is very memory-inefficient, so the plan is to write a class that overrides the [] operator and selects an appropriate element based on the index, as sketched below. Is there already a decent class out there to do this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2GB. Now, assuming I've got 2GB installed, and various other processes and services are hogging about 400MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.
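
    Here is the chunked-indexing idea from the question as a sketch, written in Python for brevity; in C++ the same thing is an operator[] over a vector of fixed-size blocks, which is essentially how std::deque stores its elements, so that container is worth benchmarking before rolling your own. All names and the block size are illustrative.

      class ChunkedArray:
          """A large logical array stored as many modest fixed-size blocks,
          so no single allocation needs a huge contiguous region."""

          def __init__(self, length, chunk_size=1 << 16, fill=0.0):
              self.length = length
              self.chunk_size = chunk_size
              n_chunks = (length + chunk_size - 1) // chunk_size
              self.chunks = [[fill] * chunk_size for _ in range(n_chunks)]

          def __getitem__(self, i):
              if not 0 <= i < self.length:
                  raise IndexError(i)
              return self.chunks[i // self.chunk_size][i % self.chunk_size]

          def __setitem__(self, i, value):
              if not 0 <= i < self.length:
                  raise IndexError(i)
              self.chunks[i // self.chunk_size][i % self.chunk_size] = value

      a = ChunkedArray(10_000_000)
      a[9_999_999] = 42.0
      print(a[9_999_999])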

    Read the article

  • float** allocation limit + serialized struct problem. Need advice!

    - by jmgunn
    Basically, I'm getting an allocation limit error/warning when I create a float** array. The function I am calling to fill the float** retrieves data from a struct loaded from a file. The function works fine when I use one object, but when I load 2 objects into memory I get the limit error. I am pretty sure this has to do with byte alignment or something similar, because my struct is saved with a float** member, which I am fairly sure you are not supposed to do!?! Please confirm this! The next question I have is how to save/serialize the float** member of this struct. I can't really afford to put an upper bound on the array, i.e. "float [10000][3]", because I need/want to use this structure as a base for many other types of objects that may have well under the upper bound. Stroking my chin here! Any help/advice will receive my highest gratitude. BTW, these struct objects will be used in a game/graphics package; the float** is an array of float[3]s for storing the vertices of a model. Much thanks in advance.
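
    On the serialization half of the question, the usual rule is that pointer values themselves never go to disk, because they are meaningless in the next run of the program; instead you write a count followed by the flattened data the pointers refer to, and rebuild the float** (or a contiguous float[3] array) from that count on load, which also removes the need for a fixed upper bound. A sketch of such a file layout, in Python only to keep it short (the file name and exact format are illustrative, not a prescription for the real C/C++ code):

      import struct

      def save_vertices(path, vertices):
          # File layout: a 4-byte count, then count * 3 little-endian floats.
          with open(path, "wb") as f:
              f.write(struct.pack("<I", len(vertices)))
              for x, y, z in vertices:
                  f.write(struct.pack("<3f", x, y, z))

      def load_vertices(path):
          with open(path, "rb") as f:
              (count,) = struct.unpack("<I", f.read(4))
              return [struct.unpack("<3f", f.read(12)) for _ in range(count)]

      save_vertices("model.verts", [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)])
      print(load_vertices("model.verts"))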

    Read the article

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple http connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds). What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as total number of concurrently connected http clients). The method I've come up with is to track http requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again. I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command), and checking the table size with the DBSIZE command. This worked but the performance was horrible. nginx and redis ended up using lots of cpu and the machine could only handle a very small number of concurrent requests. The new stick-table and tracking counters that were added to Haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, for nginx or haproxy or even for something else not mentioned here that I haven't thought of yet.
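
    As a concrete restatement of the admission rule described above, here is the logic in plain Python (the names and the cap are illustrative); it is the same thing the Redis SETEX/DBSIZE version already does, just spelled out, and a haproxy 1.5 stick-table would hold the equivalent per-IP state.

      import time

      # Remember each client IP for TTL seconds; admit a request if its IP is
      # already known, or if the number of currently-remembered IPs is below
      # the cap. Otherwise reject or redirect it.
      TTL_SECONDS = 20
      MAX_CLIENTS = 500

      _seen = {}  # ip -> expiry timestamp

      def admit(ip, now=None):
          now = now if now is not None else time.monotonic()
          # Drop expired entries (Redis does this for free via SETEX).
          for known_ip, expires in list(_seen.items()):
              if expires <= now:
                  del _seen[known_ip]
          if ip in _seen or len(_seen) < MAX_CLIENTS:
              _seen[ip] = now + TTL_SECONDS   # admit and refresh the TTL
              return True
          return False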

    Read the article

  • How does a hard drive compare to Flash memory working as a hard drive in terms of speed?

    - by Jian Lin
    In some experiments I did, a hard drive gave read/write speeds of about 40MB/s read and 10MB/s write, while a USB flash drive managed around 10MB/s read and 5MB/s write. Also, if I put a virtual hard drive .vhd file on a hard drive or on a USB flash drive and run a virtual machine from it, the one using the hard drive is quite fast, while the one using the USB flash drive is close to unusable. So I wonder: some early netbooks used 4GB or 8GB of flash memory as the hard drive, and even the Apple MacBook Air has an option of using flash memory instead of a hard drive. In those situations, will the speed be slower than using a hard drive, as in the case of a USB flash drive?

    Read the article

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/

    The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me:

      Memory for crash kernel (0x0 to 0x0) notwithin permissible range
      disabling kdump

    Here's the line from grub.conf:

      kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M

    How do I make kdump use a permissible range of memory for the crash kernel?

    Read the article

  • how much more memcache memory do i need to get 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running that has a 90% hit ratio. How can I estimate how much more memory it needs to get to a 95% hit ratio?

    Edit: This question was blocked, but I do not think it is impossible to answer. After all, anyone who has used a caching system HAS answered this question, most likely with trial & error & luck. I can look at my usage patterns. I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?
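
    One crude way to turn the "change the memory, watch the hit rate" experiment into a number: assume the miss rate falls off as a power of the cache size (an empirical rule of thumb for many cache workloads, not a law), fit the exponent from two measurements, and extrapolate to the target hit rate. The sketch below does exactly that; the sizes and hit rates in the example are made up, and the whole approach is only as good as the power-law assumption, so re-measure as the cache grows.

      import math

      def required_cache_size(size1, hit1, size2, hit2, target_hit):
          """Extrapolate the cache size needed for target_hit, assuming
          miss_rate ~ C * size**(-k) fitted from two (size, hit) samples."""
          m1, m2, mt = 1 - hit1, 1 - hit2, 1 - target_hit
          k = math.log(m1 / m2) / math.log(size2 / size1)
          c = m1 * size1 ** k
          return (c / mt) ** (1 / k)

      # Hypothetical measurements: 90% hits at 4 GB, 92% hits at 6 GB.
      print(required_cache_size(4, 0.90, 6, 0.92, 0.95))   # rough GB estimate for 95%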

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though:

      130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok
      130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized
      localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok

    This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, which looks quite similar to the example cupsd.conf. I could be wrong but it looks like that final <Limit All> block ought to allow the actions the logs complain about.

      MaxLogSize 2000000000

      # Log general information in error_log - change "info" to "debug" for
      # troubleshooting...
      LogLevel info
      #AccessLog syslog
      #ErrorLog syslog
      #PageLog syslog

      # Administrator user group...
      SystemGroup sys root lp

      # Only listen for connections from the local machine.
      Listen 0.0.0.0:631
      Listen :::631
      Listen /var/run/cups/cups.sock
      ServerName <snipped>

      # Show shared printers on the local network.
      Browsing Off
      BrowseOrder allow,deny
      # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.)
      BrowseAllow @LOCAL

      # Default authentication type, when authentication is required...
      DefaultAuthType Basic

      # Restrict access to the server...
      <Location />
        Order allow,deny
        Allow all
      </Location>

      # Restrict access to the admin pages...
      <Location /admin>
        AuthType Default
        Require user @SYSTEM
        Encryption Required
        Order allow,deny
        Allow all
      </Location>

      # Restrict access to configuration files...
      <Location /admin/conf>
        AuthType Default
        Require user @SYSTEM
        Encryption Required
        Order allow,deny
        Allow all
      </Location>

      # Set the default printer/job policies...
      <Policy default>
        # Job-related operations must be done by the owner or an administrator...
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        <Limit All>
          Order allow,deny
        </Limit>
      </Policy>

    Read the article

  • Small 64-bit linux server distro with low memory footprint?

    - by djangofan
    I am looking for a linux server distro with a low memory footprint. I usually use Ubuntu but I need something with a smaller footprint in order to run a large Java JVM service inside of it and also run X-windows. Any ideas? The Java service needs to handle a 3GB memory heap and so I require a 64-bit OS and JRE. http://en.wikipedia.org/wiki/Comparison%5Fof%5FLinux%5Fdistributions I am thinking that ArchLinux is the only one that I can find right now. It uses 250MB out of the box (without X-win). Any better suggestions?

    Read the article

  • mysql 5.1 - innodb - query_cache_size - 9,418,108 queries have been removed from the query cache due to lack of memory

    - by Tom C
    Currently running on a 16GB system - Ubuntu 64-bit. The InnoDB buffer pool is set to 10GB. tuning-primer shows the following:

      QUERY CACHE
      Query cache is enabled
      Current query_cache_size = 512 M
      Current query_cache_used = 501 M
      Current query_cache_limit = 4 M
      Current Query cache Memory fill ratio = 97.87 %
      Current query_cache_min_res_unit = 4 K
      However, 9418108 queries have been removed from the query cache due to lack of memory
      Perhaps you should raise query_cache_size

    That is over 9 million queries removed. System uptime is 8 days. Should I remove the query cache altogether? Our DB is always under heavy I/O. TIA

    Read the article

  • Is there any way that I can set the 'real' memory usage value while running my Java code?

    - by vira
    I'm running code on a server to generate a 10,000x10,000 matrix and save each value into a table (MySQL). I was informed by the administrator that I can use up to 32g of the physical memory of our server, but I have no idea how to do it. I googled around and so far have only found information about setting the virtual memory using -Xmx. I tried it anyway, and using the top command I got this:

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       3981 gv        35  15 32.4g 304m  10m S    1  0.5   9:54.84 java

    So it shows that -Xmx sets the VIRT and not the RES value. Is there any way that I can set the RES value to 32g?

    Read the article

  • Need a set based solution to group rows

    - by KM
    I need to group a set of rows based on the Category column, and also limit the combined rows so that the SUM(Number) within each group is less than or equal to the @Limit value. For each distinct Category value I need to identify "buckets" that are <= @Limit. If the SUM(Number) of all the rows for a Category value is <= @Limit, then there will be only 1 bucket for that Category value (like 'CCCC' in the sample data). However, if the SUM(Number) > @Limit, then there will be multiple bucket rows for that Category value (like 'AAAA' in the sample data), and each bucket must be <= @Limit. There can be as many buckets as necessary. Also, look at Category value 'DDDD': its one row is greater than @Limit all by itself, and gets split into two rows in the result set. Given this simplified data:

      DECLARE @Detail table (DetailID int primary key, Category char(4), Number int)
      SET NOCOUNT ON
      INSERT @Detail VALUES ( 1, 'AAAA',100)
      INSERT @Detail VALUES ( 2, 'AAAA', 50)
      INSERT @Detail VALUES ( 3, 'AAAA',300)
      INSERT @Detail VALUES ( 4, 'AAAA',200)
      INSERT @Detail VALUES ( 5, 'BBBB',500)
      INSERT @Detail VALUES ( 6, 'CCCC',200)
      INSERT @Detail VALUES ( 7, 'CCCC',100)
      INSERT @Detail VALUES ( 8, 'CCCC', 50)
      INSERT @Detail VALUES ( 9, 'DDDD',800)
      INSERT @Detail VALUES (10, 'EEEE',100)
      SET NOCOUNT OFF

      DECLARE @Limit int
      SET @Limit=500

    I need one of these result sets:

      DetailID Bucket  |  DetailID Category Bucket
      -------- ------  |  -------- -------- ------
          1       1    |      1     'AAAA'    1
          2       1    |      2     'AAAA'    1
          3       1    |      3     'AAAA'    1
          4       2    |      4     'AAAA'    2
          5       3    OR     5     'BBBB'    1
          6       4    |      6     'CCCC'    1
          7       4    |      7     'CCCC'    1
          8       4    |      8     'CCCC'    1
          9       5    |      9     'DDDD'    1
          9       6    |      9     'DDDD'    2
         10       7    |     10     'EEEE'    1
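
    To pin down the bucketing rule behind the second (per-category) result set, here is the greedy procedure it corresponds to, written out in Python. This is only a procedural restatement of the expected output to make the spec concrete, not the requested set-based T-SQL solution, and the names are illustrative.

      from itertools import groupby
      from math import ceil

      def assign_buckets(rows, limit):
          """rows: (DetailID, Category, Number) tuples, ordered by Category.
          Greedily fill per-category buckets up to `limit`; a single row larger
          than `limit` is split across as many buckets as it needs."""
          out = []
          for category, group in groupby(rows, key=lambda r: r[1]):
              bucket, used = 1, 0
              for detail_id, _, number in group:
                  if number > limit:                      # oversized row, e.g. 'DDDD'
                      for _ in range(ceil(number / limit)):
                          out.append((detail_id, category, bucket))
                          bucket += 1
                      used = 0
                      continue
                  if used + number > limit:               # start a new bucket
                      bucket += 1
                      used = 0
                  out.append((detail_id, category, bucket))
                  used += number
          return out

      detail = [
          (1, 'AAAA', 100), (2, 'AAAA', 50), (3, 'AAAA', 300), (4, 'AAAA', 200),
          (5, 'BBBB', 500), (6, 'CCCC', 200), (7, 'CCCC', 100), (8, 'CCCC', 50),
          (9, 'DDDD', 800), (10, 'EEEE', 100),
      ]
      for row in assign_buckets(detail, 500):
          print(row)        # matches the (DetailID, Category, Bucket) result above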

    Read the article

  • Memory footprint of a parsed XML file in Classic ASP?

    - by Pete Duncanson
    Does anyone know of a way to find out the amount of memory/size of an XMLDocument once it has parsed an XML file? I've been doing "beer mat" calculations so far, but have been asked to come up with some more legitimate numbers through monitoring somehow. I need to create about 1500 XML files (via the FreeThreadedXML-DOM object), which range between 3-9K in size, and store them in Application variables, but our sysadmin is worried about us gobbling up too much memory. Other than the crude method of booting up a fresh IIS instance, loading everything in, and monitoring memory usage before and after in Task Manager, I can't think of a way of doing it with a bit more accuracy.

    Read the article

  • How do I assign a non-persistent (in-memory) cookie in ASP.NET?

    - by Jørn Schou-Rode
    The following code will send a cookie to the user as part of the response:

      var cookie = new HttpCookie("theAnswer", "42");
      cookie.Expires = DateTime.Now.AddDays(7);
      Response.Cookies.Add(cookie);

    The cookie is of the persistent type, which most browsers will write to disk and use across sessions. That is, the cookie is still on the client's PC tomorrow, even if the browser and the PC have been closed in between. After a week, the cookie will be deleted (due to line #2). Non-persistent/in-memory cookies are another breed of cookie, whose lifespan is determined by the duration of the client's browsing session. Usually, such cookies are held in memory, and they are discarded when the browser is closed. How do I assign an in-memory cookie from ASP.NET?

    Read the article

  • Why does my program occasionally segfault when out of memory rather than throwing std::bad_alloc?

    - by Bradford Larsen
    I have a program that implements several heuristic search algorithms and several domains, designed to experimentally evaluate the various algorithms. The program is written in C++, built using the GNU toolchain, and run on a 64-bit Ubuntu system. When I run my experiments, I use bash's ulimit command to limit the amount of virtual memory the process can use, so that my test system does not start swapping. Certain algorithm/test instance combinations hit the memory limit I have defined. Most of the time, the program throws an std::bad_alloc exception, which is printed by the default handler, at which point the program terminates. Occasionally, rather than this happening, the program simply segfaults. Why does my program occasionally segfault when out of memory, rather than reporting an unhandled std::bad_alloc and terminating?

    Read the article
