Search Results

Search found 12282 results on 492 pages for 'memory deallocation'.

Page 103/492 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance?

    - by Varun
    Question: OBI already has a caching mechanism in the presentation layer and the BI Server layer. How is the new in-memory caching better for performance?
    Answer: OBI caching only speeds up queries that have been seen before. The in-memory data structure generated by the Summary Advisor is optimized to provide maximum value by accounting for the expected broad usage and drilldowns. It is possible to adapt the in-memory data to seasonality by running the Summary Advisor on specific workloads. Moreover, the in-memory data is created in an analytic database, providing maximum performance for the large amount of memory available.

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480s (one at x16 and one at x8) and one Nvidia GTX 460 running at x8. Is there some way, either by a function call in C or an option to lspci, by which I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks!
    Note: lspci -vvv dumps out the following for the two GTX 480s. I don't see any differences that pertain to bus speed.
      03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
          Subsystem: eVga.com. Corp. Device 1480
          Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0
          Interrupt: pin A routed to IRQ 16
          Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
          Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
          Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
          Region 5: I/O ports at df00 [disabled] [size=128]
          [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
          Capabilities: <access denied>
          Kernel driver in use: nvidia
          Kernel modules: nvidia, nvidiafb, nouveau
      03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
          Subsystem: eVga.com. Corp. Device 1480
          Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Interrupt: pin B routed to IRQ 5
          Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
          Capabilities: <access denied>
      04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
          Subsystem: eVga.com. Corp. Device 1480
          Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0
          Interrupt: pin A routed to IRQ 16
          Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
          Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
          Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
          Region 5: I/O ports at cf00 [size=128]
          [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
          Capabilities: <access denied>
          Kernel driver in use: nvidia
          Kernel modules: nvidia, nvidiafb, nouveau
      04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
          Subsystem: eVga.com. Corp. Device 1480
          Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0, Cache Line Size: 64 bytes
          Interrupt: pin B routed to IRQ 5
          Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
          Capabilities: <access denied>
    And the only differences I see relate specifically to the memory mapping:
      myComputer:~> diff card1 card2
      3c3
      < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
      ---
      > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
      7,11c7,11
      < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
      < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
      < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
      < Region 5: I/O ports at df00 [disabled] [size=128]
      < [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
      ---
      > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
      > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
      > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
      > Region 5: I/O ports at cf00 [size=128]
      > [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
      18c18
      < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
      ---
      > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
      19a20
      > Latency: 0, Cache Line Size: 64 bytes
      21c22
      < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
      ---
      > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
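    One thing worth noting: the Capabilities sections above show <access denied> because lspci was run without root; running sudo lspci -vvv prints the LnkCap/LnkSta lines, and LnkSta contains the negotiated link width (x16 or x8). On reasonably recent Linux kernels the same information is also exposed through sysfs, which can be read from C or any scripting language. A minimal sketch (Python purely for brevity; the device addresses are the ones from the dump above, and the current_link_width/current_link_speed attributes are assumed to be present, which depends on kernel version):
      # Sketch: read the negotiated PCIe link width/speed from sysfs on Linux.
      import os

      DEVICES = ["0000:03:00.0", "0000:04:00.0"]  # the two GTX 480s from the dump

      def read_attr(dev, attr):
          path = os.path.join("/sys/bus/pci/devices", dev, attr)
          try:
              with open(path) as f:
                  return f.read().strip()
          except IOError:
              return "n/a"  # attribute not exposed by this kernel

      for dev in DEVICES:
          width = read_attr(dev, "current_link_width")  # e.g. "16" or "8"
          speed = read_attr(dev, "current_link_speed")  # e.g. "5 GT/s"
          print("%s: x%s @ %s" % (dev, width, speed))
    From C, the same files can simply be opened and read, and the CUDA program can then pick whichever device reports the wider link before calling cudaSetDevice.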

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. My only idea would be to use the NHibernate second-level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
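    For contrast with the write-through-and-patch approach described above, the pattern most Redis/Memcached deployments follow is a read-through cache with a bounded lifetime plus invalidation on write, which limits how stale an entry can get even when an invalidation is missed. A minimal sketch of that pattern (Python purely for brevity; the loader and writer callbacks stand in for whatever data access the services already have):
      import time

      CACHE_TTL = 300   # seconds; staleness is bounded even if an invalidation is missed
      _cache = {}       # customer_id -> (expires_at, value)

      def get_customer(customer_id, loader):
          """Read-through: serve from cache, fall back to the database on miss/expiry."""
          entry = _cache.get(customer_id)
          if entry is not None and entry[0] > time.time():
              return entry[1]
          value = loader(customer_id)   # hit the database
          _cache[customer_id] = (time.time() + CACHE_TTL, value)
          return value

      def update_customer(customer_id, new_value, writer):
          """Write to the database first, then drop (not patch) the cached entry."""
          writer(customer_id, new_value)
          _cache.pop(customer_id, None)  # the next read repopulates from the source of truth
    Dropping entries instead of patching them in place is what keeps the cache from drifting away from the database as the business rules grow; an external store like Redis or Memcached enforces much the same discipline because the cached copy lives outside the process.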

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog post.
    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections
    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor
    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while the counter log runs, and it is not until you need to view the MSAS counters that you find they are not displayed because the log ran under the default account that has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

    The counters used (Object\Counter, instance, and justification):
    - System\Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
    - System\Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec\(_Total) counter on your machine shows a similar unexplained increase.
    - Process\% Processor Time (sqlservr): Should definitely be used if Processor\% Processor Time\(_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process\% Processor Time (msmdsrv): Should definitely be used if Processor\% Processor Time\(_Total) is maxing at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process\Working Set (sqlservr): If the Memory\Available Bytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process\Working Set (msmdsrv): If the Memory\Available Bytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Processor\% Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If multi-processor, be mindful that only an average is provided.
    - Processor\% Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor\% User Time (_Total): To see how the applications are interacting from a processor perspective; a high percentage utilisation suggests that the server is dealing with too many applications and may require increasing the hardware or scaling out.
    - Processor\Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System\Context Switches/sec counter.
    - Memory\Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and is the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to check the configuration of the page file on the server.
    - Memory\Available MBytes (N/A): The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk\Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you've got poor response time for your disk.
    - Physical Disk\% Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval; if you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk, which is unable to service these requests in a timely fashion.
    - Physical Disk\Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface\Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase/decrease in network utilisation.
    - Network Interface\Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory\Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory\Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory\Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory\File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query\Queries from Cache Direct/sec (N/A): Displays the rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query\Queries from Cache Filtered/sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query\Queries from File/sec (N/A): Displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query\Average time/query (N/A): Displays the average time of a query.
    - MSAS 2005: Connection\Current connections (N/A): Displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection\Requests/sec (N/A): Displays the rate of query requests per second.
    - MSAS 2005: Locks\Current Lock Waits (N/A): Displays the number of connections waiting on a lock.
    - MSAS 2005: Threads\Query pool job queue length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations\Temp file bytes written/sec (N/A): Shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations\Temp file rows written/sec (N/A): Shows the number of rows of data processed in a temporary file.

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by ming yeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size, and my memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: we are on Ruby on Rails, the database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data as JSON blobs, while indexing only the primary keys.
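    InnoDB does not offer a way to pin specific tables in memory; it keeps recently and frequently touched pages in its buffer pool automatically, so the practical levers are sizing innodb_buffer_pool_size generously and confirming that the user-facing working set actually stays resident. A quick way to check the latter is to compare the buffer pool's logical reads with the reads that had to go to disk. A sketch (Python with the MySQLdb driver; the connection parameters are placeholders):
      # Sketch: check whether the hot working set is being served from the
      # InnoDB buffer pool. Connection parameters here are made up.
      import MySQLdb

      conn = MySQLdb.connect(host="db-host", user="monitor", passwd="secret")
      cur = conn.cursor()
      cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
      status = dict(cur.fetchall())

      requests = float(status["Innodb_buffer_pool_read_requests"])  # logical reads
      misses = float(status["Innodb_buffer_pool_reads"])            # reads that hit disk
      if requests > 0:
          print("buffer pool hit rate: %.4f" % (1.0 - misses / requests))
    If the hit rate stays very close to 1 under a user-facing workload, those hot tables are effectively always in memory already; if it does not, a larger buffer pool (or a shard whose hot set fits within the 8GB) is usually the answer rather than any per-table setting.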

    Read the article

  • Memory leak with Microsoft.JScript.Eval.JScriptEvaluate?

    - by Dmi
    I'm evaluating some javascript with Microsoft.JScript.Eval.JScriptEvaluate() and I noticed from my memory profiler that there are a large number of System.String left allocated by Microsoft.JScript.HashtableEntry. Microsoft.JScript.Eval is a static class, does anyone know what class is holding these instances and how I can clear them?

    Read the article

  • Memory management for NSURLConnection

    - by eman
    Sorry if this has been asked before, but I'm wondering what the best memory management practice is for NSURLConnection. Apple's sample code uses -[NSURLConnection initWithRequest:delegate:] in one method and then releases in connection:didFailWithError: or connectionDidFinishLoading:, but this spits out a bunch of analyzer warnings and seems sort of dangerous (what if neither of those methods is called?). I've been autoreleasing (using +[NSURLConnection connectionWithRequest:delegate:]), which seems cleaner, but I'm wondering: in this case, is it ever possible for the NSURLConnection to be released before the connection has closed?

    Read the article

  • Missing processor/memory counters in the Windows XP Performance Monitor application (perfmon)

    - by Jader Dias
    Perfmon is a Windows utility that helps the developer find bottlenecks in his applications by measuring system counters. I was reading a Perfmon tutorial, and from this list of essential counters I have found the following ones on my machine: PhysicalDisk\Bytes/sec (_Total) and Network Interface\Bytes Total/sec (nic name). But I haven't found the following counters anywhere: Processor\% Processor Time (_Total), Process\Working Set (_Total), Memory\Available MBytes. Where do I find them? Note that my Windows is pt-BR (instead of en-US). Where do I find language-specific documentation for Windows tools like Perfmon?

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers. So each query is really 2 queries, and they look like this: (1) fetch the correct DB server, (2) execute the query. We do this so we can take DB servers online and offline as necessary, as well as for load balancing. But the first query seems like it could be cached in memory, so it only actually hits the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
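    A minimal way to get the 5-to-10-minute behaviour described above is to memoize the server lookup with a timestamp and only re-run the query once the entry is older than the TTL. A sketch (the lookup_db_server callback is a stand-in for the existing "fetch the correct DB server" query):
      import time

      _TTL_SECONDS = 300          # re-check the pool roughly every 5 minutes
      _cached = {"server": None, "fetched_at": 0.0}

      def get_db_server(lookup_db_server):
          """Return the cached server choice, refreshing it when the TTL expires."""
          now = time.time()
          if _cached["server"] is None or now - _cached["fetched_at"] > _TTL_SECONDS:
              _cached["server"] = lookup_db_server()   # the real pool query
              _cached["fetched_at"] = now
          return _cached["server"]
    The one caveat with any TTL here is failover: if a server is taken offline, you either wait out the TTL or clear the cached entry explicitly when a connection attempt fails.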

    Read the article

  • Decoding bitmaps in Android with the right size

    - by hgpc
    I decode bitmaps from the SD card using BitmapFactory.decodeFile. Sometimes the bitmaps are bigger than what the application needs or that the heap allows, so I use BitmapFactory.Options.inSampleSize to request a subsampled (smaller) bitmap. The problem is that the platform does not enforce the exact value of inSampleSize, and I sometimes end up with a bitmap either too small, or still too big for the available memory. From http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inSampleSize: Note: the decoder will try to fulfill this request, but the resulting bitmap may have different dimensions that precisely what has been requested. Also, powers of 2 are often faster/easier for the decoder to honor. How should I decode bitmaps from the SD card to get a bitmap of the exact size I need while consuming as little memory as possible to decode it?
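    A common way around this is to treat inSampleSize as a coarse first pass only: decode just the bounds (inJustDecodeBounds = true), pick the largest power of two that still leaves both dimensions at least as large as the target, decode with that, and then do a single exact scale (e.g. Bitmap.createScaledBitmap) to the final size. The only subtle part is the power-of-two arithmetic; a language-neutral sketch of it (written in Python purely for brevity):
      def choose_sample_size(src_w, src_h, target_w, target_h):
          """Largest power-of-two sample size that keeps both dimensions >= the target."""
          sample = 1
          while src_w // (sample * 2) >= target_w and src_h // (sample * 2) >= target_h:
              sample *= 2
          return sample

      # e.g. a 2048x1536 photo decoded for a 512x384 view:
      print(choose_sample_size(2048, 1536, 512, 384))  # -> 4
    Decoding at that sample size keeps peak memory close to the minimum the decoder allows, and the final exact scale then only ever operates on the already-subsampled bitmap.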

    Read the article

  • Boost::Interprocess Container Container Resizing No Default Constructor

    - by CuppM
    Hi, After combing through the Boost::Interprocess documentation and Google searches, I think I've found the reason for, and workaround to, my issue. Everything I've found, as I understand it, seems to be hinting at this, but doesn't come out and say "do this because...". But if anyone can verify this I would appreciate it.
    I'm writing a series of classes that represent a large lookup of information that is stored in memory for fast performance in a parallelized application. Because of the size of the data and the multiple processes that run at a time on one machine, we're using Boost::Interprocess for shared memory to have a single copy of the structures. I looked at the Boost::Interprocess documentation and examples, and they typedef classes for shared memory strings, string vectors, int vector vectors, etc. And when they "use" them in their examples, they just construct them passing the allocator and maybe insert one item that they've constructed elsewhere. Like on this page: http://www.boost.org/doc/libs/1_42_0/doc/html/interprocess/allocators_containers.html
    So following their examples, I created a header file with typedefs for shared memory classes:
      namespace shm {
          namespace bip = boost::interprocess;

          // General/Utility Types
          typedef bip::managed_shared_memory::segment_manager segment_manager_t;
          typedef bip::allocator<void, segment_manager_t> void_allocator;

          // Integer Types
          typedef bip::allocator<int, segment_manager_t> int_allocator;
          typedef bip::vector<int, int_allocator> int_vector;

          // String Types
          typedef bip::allocator<char, segment_manager_t> char_allocator;
          typedef bip::basic_string<char, std::char_traits<char>, char_allocator> string;
          typedef bip::allocator<string, segment_manager_t> string_allocator;
          typedef bip::vector<string, string_allocator> string_vector;
          typedef bip::allocator<string_vector, segment_manager_t> string_vector_allocator;
          typedef bip::vector<string_vector, string_vector_allocator> string_vector_vector;
      }
    Then for one of my lookup table classes, it's defined something like this:
      class Details {
      public:
          Details(const shm::void_allocator & alloc)
              : m_Ids(alloc), m_Labels(alloc), m_Values(alloc) { }
          ~Details() {}

          int Read(BinaryReader & br);

      private:
          shm::int_vector m_Ids;
          shm::string_vector m_Labels;
          shm::string_vector_vector m_Values;
      };

      int Details::Read(BinaryReader & br)
      {
          int num = br.ReadInt();
          m_Ids.resize(num);
          m_Labels.resize(num);
          m_Values.resize(num);
          for (int i = 0; i < num; i++) {
              m_Ids[i] = br.ReadInt();
              m_Labels[i] = br.ReadString().c_str();
              int count = br.ReadInt();
              m_Values[i].resize(count);
              for (int j = 0; j < count; j++) {
                  m_Values[i][j] = br.ReadString().c_str();
              }
          }
          return num;
      }
    But when I compile it, I get the error:
      'boost::interprocess::allocator<T,SegmentManager>::allocator' : no appropriate default constructor available
    And it's due to the resize() calls on the vector objects. Because the allocator types do not have an empty constructor (they take a const segment_manager_t &), the resize is trying to create a default-constructed object for each location. So in order for it to work, I have to get an allocator object and pass a default value object on resize. Like this:
      int Details::Read(BinaryReader & br)
      {
          shm::void_allocator alloc(m_Ids.get_allocator());

          int num = br.ReadInt();
          m_Ids.resize(num);
          m_Labels.resize(num, shm::string(alloc));
          m_Values.resize(num, shm::string_vector(alloc));
          for (int i = 0; i < num; i++) {
              m_Ids[i] = br.ReadInt();
              m_Labels[i] = br.ReadString().c_str();
              int count = br.ReadInt();
              m_Values[i].resize(count, shm::string(alloc));
              for (int j = 0; j < count; j++) {
                  m_Values[i][j] = br.ReadString().c_str();
              }
          }
          return num;
      }
    Is this the best/correct way of doing it? Or am I missing something? Thanks!

    Read the article

  • Unload image of UIImageView that's offscreen

    - by ludo
    Hi, I'm coding an application on iPad. At a certain point in my application I present a view controller with presentModalViewController. My view controller contains a UIScrollView that takes the width of the modal view, and inside it I display some images; I enable pagingEnabled so I can page through all my images inside the scroll view. Sometimes I have to display more than 10 images inside the scroll view, and then I get RECEIVE MEMORY WARNING LEVEL=1, after that RECEIVE MEMORY WARNING LEVEL=2, and finally the debugger exits due to signal 10 (SIGBUS). What can I do? Is there a way to unload the images that are offscreen? Or other things to do? Thanks,

    Read the article

  • Monitor RAM usage on CentOS and restart Apache at a certain usage

    - by Chris
    Hi, I'm running a CentOS 5.3 server with a basic LAMP stack. I've optimized LAMP and my code to run as efficiently as possible, but Apache has a memory leak somewhere that kills my server every hour or so. What is the best way to write a script that will monitor the memory usage and, if it peaks over, say, 450MB, kill all the Apache processes and restart Apache? I know C++/PHP and basic Linux server administration, but I'm not familiar with Perl or bash scripting. I'd be open to learning any solution, though, as a temporary measure while I find the underlying issue.
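    A sketch of that kind of watchdog, run from cron every minute or so (assumptions: CentOS's httpd service name, and that summing the resident set sizes of the httpd processes is an acceptable definition of "Apache memory usage"); Python is used here only to avoid bash, which the question mentions being unfamiliar with:
      #!/usr/bin/env python
      # Watchdog sketch: restart Apache if its total resident memory exceeds a threshold.
      import subprocess

      THRESHOLD_MB = 450

      def httpd_rss_mb():
          """Sum the RSS (in MB) of all httpd processes, as reported by ps."""
          out = subprocess.Popen(
              ["ps", "-C", "httpd", "-o", "rss="],   # RSS in kilobytes, one line per process
              stdout=subprocess.PIPE).communicate()[0]
          kb = sum(int(line) for line in out.split() if line.strip())
          return kb / 1024.0

      if __name__ == "__main__":
          if httpd_rss_mb() > THRESHOLD_MB:
              # "service httpd restart" is the CentOS 5 idiom; a graceful restart is gentler.
              subprocess.call(["service", "httpd", "restart"])
    Running it from cron keeps it simple; a Perl or bash equivalent would be just as workable once the leak itself is tracked down.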

    Read the article

  • Problem while finding iPhone memory programmatically

    - by sneha
    I am facing a strange problem with my iPhone. It shows the available memory as 278 MB in Settings and also in iTunes. But when I find it programmatically like this:
      NSDictionary *fileSystemAttributes = [[NSFileManager defaultManager] attributesOfFileSystemForPath:NSHomeDirectory() error:&error];
      double availableSpace = [[fileSystemAttributes objectForKey:NSFileSystemFreeSize] floatValue];
    I am getting it as 458.0 MB. Can anyone help me out with why there is so much difference between the two values, when they should be the same? Thanks in advance.

    Read the article

  • C# or C++ game: many 16 color images loaded into RAM. Efficient solution?

    - by user560639
    I am in the planning stages of creating a fighting game and am unsure how to handle one issue relating to memory. Background info:
    - Still debating whether to use C# (XNA) or C++. We do not want to commit to either until we have explored how to solve this problem in both languages.
    - Using a max of 256MB RAM would be great if possible.
    - Two characters will be present at a time, and these characters can only change between battles. There is time to load/free memory between battles, but the game needs to run at a constant 60 drawn frames per second during combat. Each frame is 16.67ms.
    - The total number of images per character is in the low hundreds. Each image is roughly 200x400 pixels. Only one image from each character will be displayed at any given moment.
    Uncompressed, each image takes roughly 300kb by my calculations; upwards of 100MB for a whole character. This is pushing too close to the 256MB limit, given that memory will be needed for some other resources as well. Since each image can be made with a total of 16 colors, theoretically I should be able to use 1/8th the space if I can take advantage of this. I've looked around but haven't found any word of native support for paletted images (storing each pixel using fewer bits that each map to a 32-bit RGBA color). I was considering making my own file format with 4 bits per pixel (and some extra palette info), loading all the images of this new format into RAM before battle, and then, when drawing any specific image, decompressing only that image into a raw image so it can be rendered properly. I don't know if it's realistic to perform so many assignment operations (approx. 200x400 for each character = 160k) each frame. It sounds very hacky to me. Does anyone have advice on whether my solution sounds reasonable, and if there is perhaps a better one available? Thanks so much! (I also attempted to use an image with only 1 channel, then use a shader to perform a series of if statements to translate various values into other colors. Unfortunately, there were too many lines of code for the shader. It is also rather hacky and does not scale well.)
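    To make the arithmetic concrete, here is a sketch of the proposed 4-bits-per-pixel scheme: two palette indices per byte plus a 16-entry table of 32-bit RGBA colors, which is where the roughly 8x saving comes from. It is written in Python purely as an illustration of the data layout; the real implementation would be the C#/C++ equivalent, expanding one image at a time into a 32-bit buffer for the renderer:
      # Sketch: pack 16-color (4 bits/pixel) image data and expand it back to 32-bit RGBA.
      # 'pixels' holds palette indices (0-15); 'palette' holds up to 16 RGBA tuples.

      def pack_4bpp(pixels):
          """Two palette indices per byte: high nibble first."""
          packed = bytearray()
          for i in range(0, len(pixels), 2):
              hi = pixels[i] & 0x0F
              lo = pixels[i + 1] & 0x0F if i + 1 < len(pixels) else 0
              packed.append((hi << 4) | lo)
          return bytes(packed)

      def unpack_to_rgba(packed, palette, pixel_count):
          """Expand packed indices into a flat RGBA byte buffer for rendering."""
          out = bytearray()
          for byte in bytearray(packed):
              for index in ((byte >> 4) & 0x0F, byte & 0x0F):
                  if len(out) // 4 >= pixel_count:
                      break
                  out.extend(palette[index])
          return bytes(out)

      # 200x400 pixels: 80,000 indices -> 40,000 packed bytes vs 320,000 bytes at 32 bpp.
      palette = [(i * 16, i * 16, i * 16, 255) for i in range(16)]
      packed = pack_4bpp([i % 16 for i in range(200 * 400)])
      print(len(packed), len(unpack_to_rgba(packed, palette, 200 * 400)))
    Expanding a single 200x400 frame is 80,000 index lookups, which is on the order of the per-frame work a 2D renderer already does, while only the packed copies (about 40 KB each instead of roughly 312 KB) need to stay resident for the whole battle.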

    Read the article

  • img_data_lock iphone - imageNamed vs imageWithContentsofFile

    - by franz
    I am noticing a surge in memory, and the responsible caller as listed in Instruments is img_data_lock; the responsible library is CoreGraphics. I have been reading that the issue relates to cached vs. non-cached image loading (http://stackoverflow.com/questions/316236/uiimage-imagenamed-vs-uiimage-imagewithdata). Currently my app loads a series of images via "imageNamed:"; replacing the "imageNamed:" call with "imageWithContentsOfFile" seems to solve the issue. Does anybody have any information about the img_data_lock caller? Why would someone use "imageNamed" if it takes such a toll on memory?

    Read the article

  • XCode 3.2.1 and Instruments: Useless Stack Trace

    - by Jason George
    I've reached the stage where it's time to start tracking down memory leaks and, to my dismay, Instruments is giving me very little to go on (other than the fact that I definitely have leaks). My stack trace contains no information other than memory addresses. Since I'm working on a new project and I've transitioned to version 3.2.1 of XCode in tandem, I'm not sure if it's my program configuration or XCode that's causing the problem. I have found one reference to the issue coupled with a post on the dyld leak that seems to be prevalent with the 3.2.1 release. Since I haven't been able to find much on the problem I'm guessing it's something I've created rather than a systematic issue with XCode. If someone has any idea where I might have thrown a wrench in the works, I would love some pointers. Also, if someone could just verify that the stack trace is indeed functioning properly in 3.2.1 that would be useful as well.

    Read the article

  • Testing a Doctrine2 Entity with assertEquals results in fatal out-of-memory error

    - by Matt
    I have a PHPUnit test that's using a Doctrine2 custom repository and Doctrine Fixtures. I wanted to test that a query gave me back an expected entity from my fixture. But when I try $this->assertEquals($expectedEntity, $result);, I get Fatal error: out of memory. I'm guessing it is recursing into all the relations and the entity manager and whatnot. Is there a good way to test this equality? Should I just assertEquals on the IDs of the entities?

    Read the article

  • Freezing Eclipse

    - by Radek Šimko
    I use Eclipse for programming in PHP and Java (Android), and sometimes Python; unfortunately, Eclipse freezes much more often these days. Often when I type the bracket "[" to define an array in PHP, Eclipse just freezes and I have to close it manually and start it again. I've also noticed that Eclipse consumes a lot of my RAM: 200-300 MiB of my available memory is nothing special. :-( Is there any way to check what is consuming the memory in Eclipse and why it's freezing? I'm running on Windows Vista with 3GB RAM.

    Read the article
