Search Results

Search found 10215 results on 409 pages for 'ram usage'.


  • Our embedded Linux system won't recognize a USB device if it is plugged in before power-up. Suggestions?

    - by Blaine
    We are developing on a small embedded device: a Gumstix Overo board running OpenEmbedded Linux. Our development is almost completely done, and we have run into the strangest of bugs, one we can't figure out.

    We have a USB device (a spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the device detects the USB connection, it boots up and enables the light source and fan, and the host can then use it.

    The problem is that if the device is plugged into the Gumstix before we turn the Gumstix on, the USB device apparently is never probed by the system (and hence never turns on). Under a normal situation, when the connection is initialized by plugging in the USB cable, the spectrometer turns itself on and becomes available to the system (this can typically be seen via "lsusb"). Neither of these things is happening: there is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in.

    The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted: it turns on, shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter whether the host system is on or off when we plug in the spectrometer. This is what I would consider "normal" behavior: the USB system is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately that isn't possible in our final product -- everything comes on at once.

    Additional info:

    1) We have tried a flash drive attached to the system while the system is turned off. Booting up the system brings the flash drive online, as expected.
    2) There are no messages regarding the spectrometer or USB device in dmesg. "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in.
    3) We have tried a brand-new image from Gumstix and an older image from last year. Both images have this problem, and the problem exists on all 3 Gumstix devices we use.

    Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the USB subsystem that fully emulates unplugging and replugging a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectrometer. It seems to be a kernel issue, or at least an issue in how the kernel initializes the USB subsystem, but I'm not really sure. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine

    Output, etc.:

        $ uname -a
        Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux

    When the system is up and running and the spectro is plugged in (working as intended), this is lsusb:

        Bus 001 Device 116: ID 2457:1022
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x2457
          idProduct          0x1022
          bcdDevice            0.02
          iManufacturer           1 USB4000 1.01.11
          iProduct                2 Ocean Optics USB4000
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           46
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower            400mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           4
              bInterfaceClass       255 Vendor Specific Class
              bInterfaceSubClass      0
              bInterfaceProtocol      0
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x01 EP 1 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x86 EP 6 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81 EP 1 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
        Device Qualifier (for other device speed):
          bLength                10
          bDescriptorType         6
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          bNumConfigurations      1
        Device Status:     0x0000
          (Bus Powered)

    dmesg output:

        usb usb1: usb auto-resume
        hub 1-0:1.0: hub_resume
        usb usb2: usb auto-resume
        ehci-omap ehci-omap.0: resume root hub
        hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000
        hub 2-0:1.0: hub_resume
        hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000
        hub 1-0:1.0: hub_suspend
        usb usb1: bus auto-suspend
        hub 2-0:1.0: hub_suspend
        usb usb2: bus auto-suspend
        ehci-omap ehci-omap.0: suspend root hub
        usb usb2: usb resume
        ehci-omap ehci-omap.0: resume root hub
        hub 2-0:1.0: hub_resume
        ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT
        hub 2-0:1.0: port 2: status 0501 change 0001
        hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000
        hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: new high speed USB device using ehci-omap and address 2
        ehci-omap ehci-omap.0: port 2 high speed
        ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT
        usb 2-2: default language 0x0409
        usb 2-2: udev 2, busnum 2, minor = 129
        usb 2-2: New USB device found, idVendor=2457, idProduct=1022
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        usb 2-2: Product: Ocean Optics USB4000
        usb 2-2: Manufacturer: USB4000 1.01.11
        usb 2-2: uevent
        usb 2-2: usb_probe_device
        usb 2-2: configuration #1 chosen from 1 choice
        usb 2-2: uevent
        usb 2-2: adding 2-2:1.0 (config #1, interface 0)
        usb 2-2:1.0: uevent
        drivers/usb/core/inode.c: creating file '002'

    dmesg has nothing to say, and lsusb simply lists nothing but the two default USB controllers / hubs if we plug the device in before the system is turned on.
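
    One workaround worth experimenting with (a sketch only, not a verified fix): after boot, unbind and rebind the OMAP EHCI controller through sysfs, which makes the kernel re-run port discovery much as a physical replug would. The driver name ehci-omap and device name ehci-omap.0 are taken from the dmesg output above; the exact sysfs paths are an assumption and should be checked on the target.

        # Sketch: force the EHCI host controller to rediscover attached devices
        # by unbinding and rebinding its platform driver via sysfs.
        # ASSUMPTION: /sys/bus/platform/drivers/ehci-omap/ exists on this kernel
        # and the controller device is named "ehci-omap.0" (as seen in dmesg).
        import time

        DRIVER = "/sys/bus/platform/drivers/ehci-omap"
        DEVICE = "ehci-omap.0"

        def rebind():
            with open(DRIVER + "/unbind", "w") as f:
                f.write(DEVICE)
            time.sleep(1)  # let the bus settle before re-binding
            with open(DRIVER + "/bind", "w") as f:
                f.write(DEVICE)

        if __name__ == "__main__":
            rebind()  # run once from an init script, after the device has power

    If the spectrometer enumerates after such a rebind, that would narrow the problem to the initial boot-time probe rather than the device itself.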

    Read the article

  • Fast multi-window rendering with C#

    - by seb
    I've been searching for and testing different kinds of rendering libraries for C# for many weeks now. So far I haven't found a single library that works well on multi-window rendering setups.

    The requirement is to be able to run the program on 12+ monitor setups (financial charting) without latencies on a fast computer. Each window needs to update multiple times every second. While doing this the CPU needs to handle lots of intensive, time-critical tasks, so some of the burden has to be shifted to the GPU. That's where hardware rendering steps in, in other words DirectX or OpenGL.

    I have tried GDI+ with Windows Forms and figured it's way too slow for my needs. I have tried OpenGL via OpenTK (on a Windows Forms control), which seemed decently quick (I still have some tests to run on it) but painfully difficult to get working properly (it's hard to find good text-rendering libraries, or to write one). Recently I tried DirectX9, DirectX10 and Direct2D with Windows Forms via SharpDX. I tried both a separate device for each window and a single device with multiple swap chains. All of these resulted in very poor performance on multiple windows. For example, if I set the target FPS to 20 and open 4 full-screen windows on different monitors, the whole operating system starts lagging very badly. The rendering is simply clearing the screen to black; no primitives are rendered. CPU usage on this test was about 0% and GPU usage about 10%, so I don't understand what the bottleneck is here. My development computer is very fast (i7 2700K, AMD HD7900, 16 GB RAM), so the tests should definitely run on it.

    In comparison, I did some DirectX9 tests in C++/Win32 API with one device and multiple swap chains, and I could open 100 windows spread all over the 4-monitor workspace (with a 3D teapot rotating in each) and still have a perfectly responsive operating system (FPS on the rendering windows was of course dropping quite badly, to around 5, which is what I would expect with 100 simultaneous renderings).

    Does anyone know any good ways to do multi-window rendering in C#, or am I forced to rewrite my program in C++ to get that performance (major pain)? I guess I'm giving OpenGL another shot before I go the C++ route... I'll report any findings here.
    Test methods, for reference: for the C# DirectX one-device / multiple-swapchain test I used the method from this excellent answer: Display Different images per monitor directX 10

    Direct3D10 version. I created the D3D10 device and DXGIFactory like this:

        D3DDev = new SharpDX.Direct3D10.Device(SharpDX.Direct3D10.DriverType.Hardware,
                                               SharpDX.Direct3D10.DeviceCreationFlags.None);
        DXGIFac = new SharpDX.DXGI.Factory();

    Then I initialized the rendering windows like this:

        var scd = new SwapChainDescription();
        scd.BufferCount = 1;
        scd.ModeDescription = new ModeDescription(control.Width, control.Height,
                                                  new Rational(60, 1), Format.R8G8B8A8_UNorm);
        scd.IsWindowed = true;
        scd.OutputHandle = control.Handle;
        scd.SampleDescription = new SampleDescription(1, 0);
        scd.SwapEffect = SwapEffect.Discard;
        scd.Usage = Usage.RenderTargetOutput;
        SC = new SwapChain(Parent.DXGIFac, Parent.D3DDev, scd);
        var backBuffer = Texture2D.FromSwapChain<Texture2D>(SC, 0);
        _rt = new RenderTargetView(Parent.D3DDev, backBuffer);

    The drawing command executed on each rendering iteration is simply:

        Parent.D3DDev.ClearRenderTargetView(_rt, new Color4(0, 0, 0, 0));
        SC.Present(0, SharpDX.DXGI.PresentFlags.None);

    The DirectX9 version is very similar. Device initialization:

        PresentParameters par = new PresentParameters();
        par.PresentationInterval = PresentInterval.Immediate;
        par.Windowed = true;
        par.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
        par.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
        par.EnableAutoDepthStencil = true;
        par.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
        // firsthandle is the handle of the first rendering window
        D3DDev = new SharpDX.Direct3D9.Device(new Direct3D(), 0, DeviceType.Hardware, firsthandle,
                                              CreateFlags.SoftwareVertexProcessing, par);

    Rendering window initialization:

        if (parent.D3DDev.SwapChainCount == 0)
        {
            SC = parent.D3DDev.GetSwapChain(0);
        }
        else
        {
            PresentParameters pp = new PresentParameters();
            pp.Windowed = true;
            pp.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
            pp.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
            pp.EnableAutoDepthStencil = true;
            pp.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
            pp.PresentationInterval = PresentInterval.Immediate;
            SC = new SharpDX.Direct3D9.SwapChain(parent.D3DDev, pp);
        }

    Code for the drawing loop:

        SharpDX.Direct3D9.Surface bb = SC.GetBackBuffer(0);
        Parent.D3DDev.SetRenderTarget(0, bb);
        Parent.D3DDev.Clear(ClearFlags.Target, Color.Black, 1f, 0);
        SC.Present(Present.None, new SharpDX.Rectangle(), new SharpDX.Rectangle(), HWND);
        bb.Dispose();

    The C++ DirectX9/Win32 API test with multiple swap chains and one device is here: http://pastebin.com/tjnRvATJ (a modified version of Kevin Harris's nice example code).

    Read the article

  • Tomcat virtual hosting - name / IP / port based

    - by lisak
    Hey, what are the usage scenarios for these kinds of virtual hosting?

    Name-based: the typical Tomcat virtual hosting, one HOME instance with many contexts, each as an individual host.

    IP-based / port-based: multiple instances of Tomcat (how is that for performance and memory consumption?) running on IP aliases (virtual IPs) of one network adapter, usually behind an Apache HTTP server that can do name-based virtual hosting. Otherwise I can't figure out how I would forward requests in iptables / the firewall based on IP address, since there is just one.

    How is IP-based virtual hosting done with Tomcat and multiple instances? I'd like to hear some usage scenarios from your experience: how are you running your applications? There are applications that have their own modified classloader and are developed to run alone within a Tomcat instance, and then there are trivial applications which can run within one instance without problems. Many thanks
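
    For reference, name-based virtual hosting inside a single Tomcat instance is configured with multiple Host elements under one Engine in conf/server.xml. A minimal sketch (the host names and appBase paths below are placeholders, not from any real setup):

        <!-- Sketch: two name-based virtual hosts in one Tomcat instance. -->
        <Engine name="Catalina" defaultHost="www.example-a.com">
          <Host name="www.example-a.com" appBase="webapps-a"
                unpackWARs="true" autoDeploy="true"/>
          <Host name="www.example-b.com" appBase="webapps-b"
                unpackWARs="true" autoDeploy="true"/>
        </Engine>

    IP- or port-based hosting, by contrast, usually means separate Connector elements (each bound via its own address/port attributes) or entirely separate Tomcat instances.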

    Read the article

  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derived from QGraphicsObject, added a couple of properties to my items, and looked at the memory usage of the running app. I created 1000 items of each flavor and didn't notice anything more than 10K of additional memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds weight if you are using signals/slots?

    Read the article

  • Armchair Linguists: 'code' vs. 'codes'--or why I write 'code' and my manager asks for 'codes'

    - by Ukko
    I wanted to tap into the collective wisdom here to see if I can get some insight into one of my pet peeves: people who treat "code" as a countable noun. Let me also preface this by saying that I am not talking about anyone who speaks English as a second language; this is a native phenomenon.

    For those of us who slept through grammar class, there are two classes of nouns which basically refer to things that are countable and non-countable (sometimes referred to as count and noncount). For instance, 'sand' is a non-count noun and 'apple' is count. You can talk about "two apples" but "two sands" does not parse. The bright students would then point out a word like "beer" where it looks like this is violated. Beer as a substance is certainly a non-count noun, but I can ask for "two beers" without offending the grammar police. The reason is that there are actually two words tied up in that one utterance: definition #1 is a yummy golden substance and definition #2 is a colloquial term for a container of said substance. #1 is non-count and #2 is countable.

    This gets to my problem with "codes" as a countable noun. In my mind the code that we programmers write is non-count: "I wrote some code today." When used in the plural, as in "Have you got the codes?", I can only assume that you are asking whether I have the cryptographically significant numbers for launching a missile or the like. Every time my peer in marketing asks when we will have the new codes ready, I have a vision of rooms of code breakers poring over the latest Enigma-encoded message.

    I corrected the usage in all the documents I was asked to review, but then I noticed that our customer was also using the word "codes" when they meant "code". At this point I have realized that there is a significant sub-population that uses "codes", and they seem to be impervious to what I see as the dominant "correct" usage.

    This is the part I want some help on: has anyone else noticed this phenomenon? Do you know what group it is associated with -- old Fortran programmers, perhaps? Is it a regionalism? I have become quick to change my terms when I notice a customer's usage, but it would be nice to know, if I am sending a proposal somewhere, what style they expect. I would hate to get canned with a review of "Ha, these guys must be morons, they don't even know 'code' is plural!"

    Read the article

  • Minimum Hardware requirements for Android development

    - by vishwanath
    I need information about the minimum hardware required for a decent Android development experience. My current configuration is: P4 3.0 GHz, 512 MB of RAM. I started with Hello Android development on my machine, using Eclipse Helios, and the experience was sluggish: the emulator takes a long time to start, and so does running a program. Do I need to upgrade my machine for development purposes, or is there something else I am missing on my machine (like heavy processing by some other application I might have installed)? And if I do need to upgrade, do I need to upgrade the processor too (which really amounts to a new machine, something I am not in favor of), or will upgrading the RAM alone suffice?

    Read the article

  • How can you toggle between two sets of values per data series in flot?

    - by Jedidja
    flot has built-in support for multiple data series (sample code) and also dual axes (sample code). Assuming multiple data series (water, electricity, etc.) that each have an amount (usage) and a dollar value (the charge for that usage), what would be the best way to use flot to display either the amounts or the dollar values for all the data series, while still supporting toggling the display of each individual series? The idea is to send down all the data in one GET request and then let the client take care of everything else in JavaScript. Ideally we could use triplets somehow {date, amount, charge}, and then possibly split them into two arrays for flot.

    Read the article

  • How do I format positional argument help using Python's optparse?

    - by cdleary
    As mentioned in the docs, optparse.OptionParser uses an IndentedHelpFormatter to output the formatted option help, for which I found some API documentation. I want to display similarly formatted help text for the required, positional arguments in the usage text. Is there an adapter or a simple usage pattern that can be used for similar positional-argument formatting?

    Clarification: preferably only using the stdlib. Optparse does great except for this one formatting nuance, which I feel like we should be able to fix without importing whole other packages. :-)
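
    One stdlib-only sketch (not a canonical pattern): subclass OptionParser and append an indented "Arguments:" section to whatever format_help() returns, mimicking the two-column option layout. The positionals keyword below is invented for this sketch; it is not an optparse feature:

        import optparse

        class PositionalHelpParser(optparse.OptionParser):
            """Sketch: an OptionParser that lists positional arguments in --help.

            `positionals` is a hypothetical extra argument, a list of
            (name, help_text) pairs; it is not part of optparse itself.
            """
            def __init__(self, *args, **kwargs):
                self.positionals = kwargs.pop("positionals", [])
                optparse.OptionParser.__init__(self, *args, **kwargs)

            def format_help(self, formatter=None):
                text = optparse.OptionParser.format_help(self, formatter)
                if self.positionals:
                    width = max(len(name) for name, _ in self.positionals)
                    rows = ["  %-*s  %s" % (width, name, help_text)
                            for name, help_text in self.positionals]
                    text += "\nArguments:\n" + "\n".join(rows) + "\n"
                return text

        parser = PositionalHelpParser(
            usage="%prog [options] SRC DST",
            positionals=[("SRC", "file to read"), ("DST", "path to write to")])
        parser.add_option("-v", "--verbose", action="store_true",
                          help="print progress messages")
        parser.print_help()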

    Read the article

  • Recommended parser for XML in Java (absolute beginner to XML)

    - by poeschlorn
    Hi pros, which parser (Java) would you recommend for parsing GPX data? I'm looking for one that is very intuitive to use and shouldn't need too much RAM (it seems that DOM requires too much, doesn't it?). I have no idea about parsing XML, so it is time for me to learn ;-) My documents are not very huge and are always read twice (a point for DOM), but I want to keep as few things as possible in RAM. What would you do in this situation? Which one would you choose, and why?

    Read the article

  • Java Heap Overflow, Forcing Garbage Collection

    - by Nicholas
    I've created a trie tree with an array of children. When deleting a word, I set the children to null, which I would assume deletes the nodes (delete being a relative term). I know that this doesn't actually delete the child, just drops the reference, and when using a large number of words this causes the heap to overflow. Running top on Linux, I can see my memory usage spike to 1 GB pretty quickly, but if I force garbage collection after the delete (Runtime.gc()) the memory usage goes to 50 MB and never above that. From what I'm told, Java by default runs garbage collection before a heap overflow happens, but I can't seem to make that happen.

    Read the article

  • Measuring device drivers CPU/IO utilization caused by my program

    - by Lior Kogan
    Sometimes code can utilize device drivers up to the point where the system becomes unresponsive. Lately I optimized some Win32/VC++ code which had made the system almost unresponsive, even though its CPU usage was very low. The reason was thousands of creations and destructions of GDI objects (pens, brushes, etc.). Once I refactored the code to create all objects only once, the system became responsive again. This leads me to the question: is there a way to measure the CPU/IO usage that device drivers (GPU, disk, etc.) incur on behalf of a given program / function / line of code?

    Read the article

  • How to correctly calculate address spaces?

    - by user337308
    Below is an example of a question given on my last test in a Computer Engineering course. Would anyone mind explaining to me how to get the start/end addresses of each region? I have listed the correct answers at the bottom.

    The MSP430F2410 device has an address space of 64 KB (the basic MSP430 architecture). Fill in the table below if we know the following. The first 16 bytes of the address space (starting at the address 0x0000) are reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.), the next 240 bytes are reserved for 8-bit peripheral devices, and the next 256 bytes are reserved for 16-bit peripheral devices. The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100. At the top of the address space is 56 KB of flash memory reserved for code and the interrupt vector table.

        What                                     Start Address   End Address
        Special Function Registers (16 bytes)    0x0000          0x000F
        8-bit peripheral devices (240 bytes)     0x0010          0x00FF
        16-bit peripheral devices (256 bytes)    0x0100          0x01FF
        RAM memory (2 Kbytes)                    0x1100          0x18FF
        Flash Memory (56 Kbytes)                 0x2000          0xFFFF
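
    The arithmetic is just consecutive placement: each region's end address is start + size - 1, and the flash start falls out of anchoring its end at the top of the 64 KB space. A quick sketch to check the numbers:

        # Each region ends at start + size - 1; regions are placed back to back.
        def end(start, size):
            return start + size - 1

        print(hex(end(0x0000, 16)))        # 0xf     special function registers
        print(hex(end(0x0010, 240)))       # 0xff    8-bit peripherals
        print(hex(end(0x0100, 256)))       # 0x1ff   16-bit peripherals
        print(hex(end(0x1100, 2 * 1024)))  # 0x18ff  RAM
        # Flash is anchored at the TOP of the 64 KB space: it ends at 0xFFFF,
        # so it starts 56 KB below the 0x10000 boundary.
        flash_start = 0x10000 - 56 * 1024
        print(hex(flash_start))                  # 0x2000
        print(hex(end(flash_start, 56 * 1024)))  # 0xffff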

    Read the article

  • Different treatment of __VA_ARGS__ in VS 2008 and GCC

    - by liuliu
    I am trying to identify a problem caused by an unusual usage of variadic macros. Here are the macros in question:

        #define va(c, d, ...)  c(d, __VA_ARGS__)
        #define var(a, b, ...) va(__VA_ARGS__, a, b)

        var(2, 3, printf, "%d %d %d\n", 1);

    With gcc, the preprocessor outputs:

        printf("%d %d %d\n", 1, 2, 3);

    but with VS 2008 the output is:

        printf, "%d %d %d\n", 1(2, 3);

    I suspect the difference is caused by the different treatment of __VA_ARGS__: gcc first expands the expression to va(printf, "%d %d %d\n", 1, 2, 3) and treats 1, 2, 3 as the __VA_ARGS__ of macro va, while VS 2008 first treats b as the __VA_ARGS__ of macro va and then does the expansion. Which interpretation is correct for C99 variadic macros, or does my usage fall into undefined behavior?

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linked to error-log and performance-counter databases to compare usage with errors, and so on. Having implemented this for just one system, and only for the last 2-3 weeks, we already have a 5 GB database with around 10 million records. This is making any queries against the database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that would be more efficient for such logs? I'd be particularly interested in any experience with Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries -- COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Criteria for selecting software for embedded device

    - by Suresh Kumar
    We are currently evaluating Web servers for an embedded device. We have laid down evaluation criteria for things like HTTP version, security, compression, etc. On the embeddable side, we have identified the following criteria:

    - Memory footprint
    - Memory management (support for plugging in a custom memory manager)
    - CPU usage
    - Thread usage (support for a thread pool)
    - Portability

    What I want input on is:

    1. Are there any other criteria that embeddable software should meet?
    2. What exactly does it mean when someone says that a piece of software is designed for embeddable use?

    We have currently zeroed in on two Web servers: AppWeb and Lighttpd (lighty). Feature-wise, both seem to be on par. However, it is claimed that AppWeb is designed for embedded use while Lighttpd is not. To choose between the two, what criteria should I be looking at?

    Read the article

  • StockTrader RI > Controllers, Presenters, WTF?

    - by SandRock
    I am currently learning how to make advanced use of WPF via the Prism (Composite WPF) project. I have watched many videos and examples, and the demo application StockTraderRI makes me ask: what is the exact role of each of the following parts?

    - SomethingService: OK, this is something that manages data.
    - SomethingView: OK, this is what's displayed.
    - SomethingPresentationModel: OK, this contains the data and commands for the view to bind to (equivalent to a ViewModel).
    - SomethingPresenter: I don't really understand its purpose.
    - SomethingController: Don't understand this one either.

    I saw that a Presenter and a Controller are not strictly necessary, but I would like to understand why they are here. Can someone tell me their roles and when to use them?

    Read the article

  • How to set a status

    - by ejah85
    Hello guys, I have a problem where I want to set a status to either approved or rejected. The conditions are: if the admin selects the registration number and driver name, the status is approved; otherwise, if the admin fills in the reason, the request is rejected. Here is the code that sets the status:

        if ($reason == 'null') {
            $query2 = "UPDATE usage SET status ='APPROVED' WHERE '$bookingno'=bookingno";
            $result2 = @mysql_query($query2);
        } elseif (($regno == 'null') && ($d_name == 'null')) {
            $query3 = "UPDATE usage SET status ='REJECT' WHERE '$bookingno'=bookingno";
            $result3 = @mysql_query($query3);
        }

    When I save the data, the status field is not updated.

    Read the article

  • Gradual memory leak in loop over contents of QTMovie

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString *movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of current frame to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, in order to prevent very large memory buildups from the various transparently autoreleased variables in the loop, we create, and flush, an autorelease pool on every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases. Instruments does not detect any memory leaks per se, but the object trace shows certain General Data blocks increasing in size. [Edited out reference to slowdown as it doesn't seem to be as much of a problem as I thought.]

    Edit: let's knock out some parts of the code inside the loop and see what we find out.

    Test 1:

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }

    Here we simply loop over the whole movie, creating the filename and logging it. Memory characteristics: usage remains at 15 MB for the duration.

    Test 2:

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            [sourceMovie stepForward];
            [arp release];
        }

    Here we add back in the creation of the NSImage from the current frame. Memory characteristics: gradually increasing memory usage; RSIZE is at 60 MB by frame 200, 75 MB by frame 300.

    Test 3:

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            NSData *imageData = [image TIFFRepresentation];
            [sourceMovie stepForward];
            [arp release];
        }

    We've added back in the creation of an NSData object from the NSImage. Memory characteristics: memory usage is again increasing (62 MB at frame 200, 75 MB at frame 300), in other words largely identical. To me this looks like a memory leak in the underlying system QTMovie uses for currentFrameImage.

    Read the article

  • Limiting the maximum number of concurrent requests django/apache

    - by Johan
    Hi, I have a Django site that demonstrates the usage of a tool. One of my views takes a file as input, runs some fairly heavy computation through an external Python script, and returns some output to the user. The tool runs fast enough to return the output within the same request, though. I would, however, like to limit how many concurrent requests this URL/view can serve, to keep the server from getting congested. Any tips on how I would go about doing this? The page itself is very simple and usage will be low.
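
    One per-process sketch (assumes a single Django process; under Apache with several worker processes each process gets its own counter, so a shared counter in memcached or the database would be needed instead). The limit_concurrency decorator is invented for this sketch, not a Django API:

        import threading
        from functools import wraps

        from django.http import HttpResponse

        _slots = threading.BoundedSemaphore(3)  # at most 3 concurrent executions

        def limit_concurrency(view):
            """Reject requests with a 503 when all slots are taken (sketch)."""
            @wraps(view)
            def wrapped(request, *args, **kwargs):
                if not _slots.acquire(False):  # non-blocking acquire
                    return HttpResponse("Busy, please retry shortly.", status=503)
                try:
                    return view(request, *args, **kwargs)
                finally:
                    _slots.release()
            return wrapped

        @limit_concurrency
        def run_tool(request):
            ...  # the heavy-computation view goes here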

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 tables referred to by primary/foreign keys. I am not using joins, as they are not scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5. For each lookup I do 6 inserts, one into each table and its corresponding arrays.

    The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)

    (and similarly, five more queries.) Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records:              143,700,000
        Records per file:                     143,700,000 / 20 = 7,185,000
        Records in each file (x 5 arrays):    7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM and an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours, and the number of records written per file so far is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments.

    Please suggest how I could improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks
    Sree aurovindh V
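
    On the export side, one pattern that usually helps (a sketch under stated assumptions, not a drop-in fix): stream each query through a psycopg2 server-side (named) cursor so Postgres never materializes the full result set in client RAM, and append to the PyTables table in large batches instead of row by row. The schema, DSN, and column names below are placeholders, and the method names follow PyTables 3.x:

        import psycopg2
        import tables

        class TrainRow(tables.IsDescription):
            # Placeholder schema -- replace with the real x.train columns.
            tr_id = tables.Int32Col()
            q_t   = tables.Float64Col()

        conn = psycopg2.connect("dbname=x")     # placeholder DSN
        cur = conn.cursor(name="train_stream")  # named => server-side cursor
        cur.execute("SELECT tr_id, q_t FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("train.h5", mode="w")
        tbl = h5.create_table("/", "train", TrainRow)

        while True:
            batch = cur.fetchmany(50000)  # one round trip per 50k rows, not per row
            if not batch:
                break
            tbl.append(batch)             # bulk append of tuples, not row-by-row
        tbl.flush()

        h5.close()
        cur.close()
        conn.close()

    Batching like this tends to shift the cost from per-row query latency to sequential scan throughput, which is usually the cheaper resource in a bulk export.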

    Read the article

  • Generate a list of file names based on month and year arithmetic

    - by MacUsers
    How can I list the numbers 01 to 12 (one for each of the 12 months) in such a way that the current month always comes last and the oldest one comes first? In other words, if a number is greater than the current month, it's from the previous year. E.g. 02 is Feb 2011 (the current month right now), 03 is March 2010 and 09 is Sep 2010, but 01 is Jan 2011. In this case, I'd like to have [09, 03, 01, 02]. This is what I'm doing to determine the year:

        for inFile in os.listdir('.'):
            if inFile.isdigit():
                month = months[int(inFile)]
                if int(inFile) <= int(strftime("%m")):
                    year = strftime("%Y")
                else:
                    year = int(strftime("%Y")) - 1
                mnYear = month + ", " + str(year)

    I don't have a clue what to do next. What should I do here?

    Update: I think I had better post the entire script for better understanding.

        #!/usr/bin/env python
        import os, sys
        from time import strftime
        from calendar import month_abbr

        vGroup = {}
        vo = "group_lhcb"
        SI00_fig = float(2.478)
        months = tuple(month_abbr)

        print "\n%-12s\t%10s\t%8s\t%10s" % ('VOs', 'CPU-time', 'CPU-time', 'kSI2K-hrs')
        print "%-12s\t%10s\t%8s\t%10s" % ('', '(in Sec)', '(in Hrs)', '(*2.478)')
        print "=" * 58

        for inFile in os.listdir('.'):
            if inFile.isdigit():
                readFile = open(inFile, 'r')
                lines = readFile.readlines()
                readFile.close()
                month = months[int(inFile)]
                if int(inFile) <= int(strftime("%m")):
                    year = strftime("%Y")
                else:
                    year = int(strftime("%Y")) - 1
                mnYear = month + ", " + str(year)
                for line in lines[2:]:
                    if line.find(vo) == 0:
                        g, i = line.split()
                        s = vGroup.get(g, 0)
                        vGroup[g] = s + int(i)
                        sumHrs = ((vGroup[g] / 60) / 60)
                        sumSi2k = sumHrs * SI00_fig
                        print "%-12s\t%10s\t%8s\t%10.2f" % (mnYear, vGroup[g], sumHrs, sumSi2k)
                        del vGroup[g]

    When I run the script, I get this:

        [root@serv07 usage]# ./test.py

        VOs          CPU-time    CPU-time    kSI2K-hrs
                     (in Sec)    (in Hrs)    (*2.478)
        ==========================================================
        Jan, 2011    211201372   58667       145376.83
        Dec, 2010    5064337     1406        3484.07
        Feb, 2011    17506049    4862        12048.04
        Sep, 2010    210874275   58576       145151.33

    As I said in the original post, I'd like the results to be printed in this order instead:

        Sep, 2010    210874275   58576       145151.33
        Dec, 2010    5064337     1406        3484.07
        Jan, 2011    211201372   58667       145376.83
        Feb, 2011    17506049    4862        12048.04

    The files in the source directory read like this:

        [root@serv07 usage]# ls -l
        total 3632
        -rw-r--r-- 1 root root 1144972 Feb  9 19:23 01
        -rw-r--r-- 1 root root  556630 Feb 13 09:11 02
        -rw-r--r-- 1 root root  443782 Feb 11 17:23 02.bak
        -rw-r--r-- 1 root root 1144556 Feb 14 09:30 09
        -rw-r--r-- 1 root root  370822 Feb  9 19:24 12

    Did I give a better picture now? Sorry for not being very clear in the first place. Cheers!!

    Update: this is the result from Mark Ransom's suggestion:

        [root@serv07 usage]# ./test.py

        VOs          CPU-time    CPU-time    kSI2K-hrs
                     (in Sec)    (in Hrs)    (*2.478)
        ==========================================================
        Dec, 2010    5064337     1406        3484.07
        Sep, 2010    210874275   58576       145151.33
        Feb, 2011    17506049    4862        12048.04
        Jan, 2011    211201372   58667       145376.83

    As I said before, I'm looking for the results to be printed in this order: Sep, 2010 - Dec, 2010 - Jan, 2011 - Feb, 2011. Cheers!!
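
    For the ordering itself, one compact option (a sketch; the file list is hard-coded here for illustration) is to sort the filenames by how many months back they lie from the current month, so the current month always sorts last:

        from time import strftime

        cur = int(strftime("%m"))          # current month, e.g. 2 for February
        files = ['01', '02', '09', '12']   # as returned by os.listdir, filtered

        # Key: 0 for the oldest possible month (11 months ago),
        # up to 11 for the current month itself.
        files.sort(key=lambda m: (int(m) - cur - 1) % 12)
        print(files)   # with cur == 2: ['09', '12', '01', '02']

    This pairs naturally with the year logic already in the script: any month number greater than the current one belongs to the previous year.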

    Read the article
