Search Results

Search found 12445 results on 498 pages for 'memory fragmentation'.

Page 369 of 498

  • CPU and Data alignment

    - by MS
    Dear all, pardon me if you feel this has been answered numerous times, but I need answers to the following queries. Why does data have to be aligned (on 2-byte/4-byte/8-byte boundaries)? My doubt is this: when the CPU has address lines Ax Ax-1 Ax-2 ... A2 A1 A0, it is quite possible to address memory locations sequentially, so why is there a need to align data at specific boundaries? How do I find the alignment requirements when I am compiling my code and generating the executable? If, for example, the data alignment is a 4-byte boundary, does that mean each consecutive byte is located at a modulo-4 offset? That is, if data is 4-byte aligned and one byte is at 1004, is the next byte at 1008 (or at 1005)? Your thoughts are much welcome. Thanks in advance! /MS

    Read the article

  • When getting a substring in .NET, does the new string reference the same original string data or does it make a copy?

    - by Elan
    Assuming I have the following strings: string str1 = "Hello World!"; string str2 = str1.Substring(6, 5); // "World" I am hoping that in the above example str2 does not copy "World", but simply ends up being a new string that points to the same memory, only starting at an offset of 6 with a length of 5. In actuality I am dealing with some potentially very long strings and am interested in how this works behind the scenes, for performance reasons. I am not familiar enough with IL to look into this myself.

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development for 32-bit systems (be it Linux, Mac OS or Windows; PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so: int *pInt = reinterpret_cast<int *>(0xDEADBEEF); (To save typing and stay DRY, the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value, it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behavior depends on the underlying hardware (fetching a 4-byte integer from the odd address 0xDEADBEEF in a user process may be perfectly valid), but the crashing has been 100% reliable on all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Red Hat Pentium, Windows GUI Pentium, Windows console Pentium). For instance, on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?

    Read the article

  • How can I modify objects inside frames with AS3 in a permanent way?

    - by curro
    I have a MovieClip symbol created with Flash in a .fla file library. There is a textfield in frame one of this movieclip's timeline, and there is another frame in the movieclip's timeline. There is a custom class definition for this symbol: it is a flipping card in a memory game. I access the textfield by going to frame 2 (gotoAndStop(2)) and setting the textfield's text property (this.field.text = "hello"). However, if I go to frame 1 and then return to frame 2, the text reverts to the original one in the library's symbol, and I have to modify the text property again in a showFace method I've written. Besides, I cannot pass parameters in the constructor, because it is a symbol in the library and that would give errors. I find this behaviour of Flash extremely weird. Is there a way I can set properties inside frames permanently? Thank you

    Read the article

  • Slow page unload in IE

    - by ForYourOwnGood
    I am developing a site which creates many table rows dynamically. The total number of rows right now is 187. Everything works fine when creating the rows, but in IE, when I leave the page, there is a large amount of lag. I do not know whether this is somehow related to the heavy DOM manipulation I am doing in the page. I do not create any function closures when building the dynamic content's event handlers, so I do not believe this problem is related to memory leaks. Any insight is much appreciated.

    Read the article

  • ORM solutions (JPA/Hibernate) vs. JDBC

    - by Grasper
    I need to be able to insert/update objects at a consistent rate of at least 8000 objects every 5 seconds in an in-memory HSQL database. I have done some comparative performance testing between Spring/Hibernate/JPA and pure JDBC, and I have found a significant difference in performance with HSQL. With Spring/Hib/JPA, I can insert 3000-4000 of my 1.5 KB objects (with a one-many and a many-many relationship) in 5 seconds, while with direct JDBC calls I can insert 10,000-12,000 of those same objects. I cannot figure out why there is such a huge discrepancy. I have tweaked the Spring/Hib/JPA settings a lot, trying to get close in performance, without luck. I want to use Spring/Hib/JPA for future purposes and expandability, and because the foreign-key relationships (one-many and many-many) are difficult to maintain by hand; but the performance requirements seem to point towards using pure JDBC. Any ideas why there would be such a huge discrepancy?
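
    A minimal sketch of the kind of batched insert that gives raw JDBC its edge; the table and column names are hypothetical, and it assumes the HSQLDB driver is on the classpath and the table already exists. One batch costs one round trip instead of one per row:

        import java.sql.*;

        public class BatchInsertSketch {
            public static void main(String[] args) throws SQLException {
                // In-memory HSQLDB; one commit, one batch round trip per 500 rows.
                Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
                con.setAutoCommit(false);
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO asset (id, payload) VALUES (?, ?)");
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "payload-" + i);
                    ps.addBatch();
                    if (i % 500 == 499) ps.executeBatch(); // flush every 500 rows
                }
                ps.executeBatch();
                con.commit();
                con.close();
            }
        }

    On the ORM side, enabling Hibernate's JDBC batching (hibernate.jdbc.batch_size) and periodically flushing and clearing the session usually narrows this gap, though IDENTITY-style ID generators prevent Hibernate from batching inserts at all.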

    Read the article

  • Performance monitor shows 4294967293 sessions active

    - by TGnat
    I have an ASP.NET 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves fewer than ten users at any given time. The server has 4 GB of memory and shows that 3+ GB is available while the site is active. Just minutes after restarting the web application, Performance Monitor shows a whopping 4,294,967,293 active sessions! I am fairly certain that this number is incorrect; at the time of this reading there had been only 100 requests to the website. Has anyone else experienced this kind of odd behavior from Performance Monitor? Any ideas on how to get an accurate reading? UPDATE: After running for about an hour, the number of active sessions has dropped by 4, so it does seem to be responding to sessions timing out.

    Read the article

  • Why do AlarmManager broadcasts get cancelled when app gets killed?

    - by skooter
    OK, so I have two BroadcastReceivers registered. When the app is closed they both fire at the appropriate times and do the appropriate things. If the app is closed and then killed (say with an app killer), the receivers never receive their broadcasts, and nothing happens. Presumably the same thing happens if the parent app is killed due to low memory, so how do I ensure those broadcasts are fired/received? The API states that the alarm should fire even if the app is killed. Does anyone else have experience with this situation? If it helps, my manifest is: <!-- receivers for AlarmManager --> <receiver android:exported="true" android:label="Shift roster updating calendar." android:name="com.skooter.shiftroster.backend.service.UpdateCalendar" > </receiver> <receiver android:exported="true" android:label="Shift roster checking alarm." android:name="com.skooter.shiftroster.backend.service.SetWakeup" > </receiver> and nothing esoteric is going on in the AlarmManager/BroadcastReceivers.
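
    For context, a minimal sketch of how such an alarm is typically scheduled (the receiver class comes from the manifest above; the one-minute trigger is a placeholder). Note that alarms never survive a reboot, and force-stopping an app (which is what most app killers do) can clear its pending alarms as well, so they generally have to be re-registered, e.g. from a BOOT_COMPLETED receiver:

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;
        import com.skooter.shiftroster.backend.service.UpdateCalendar;

        public class AlarmScheduler {
            /** Schedules the UpdateCalendar receiver to fire in one minute. */
            public static void schedule(Context context) {
                AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
                Intent intent = new Intent(context, UpdateCalendar.class);
                PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent, 0);
                am.set(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + 60 * 1000L, pi);
            }
        }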

    Read the article

  • Groovy Grails: How do you stream or buffer a large file in a controller's response?

    - by Julian Noye
    Hi guys, I have a controller that makes a connection to a URL to retrieve a CSV file. I am able to send the file in the response using the following code, and this works fine: def fileURL = "www.mysite.com/input.csv" def thisUrl = new URL(fileURL); def connection = thisUrl.openConnection(); def output = connection.content.text; response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}" response.contentType = 'text/csv' response.outputStream << output response.outputStream.flush() However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write it to the response chunk by chunk. Any ideas?
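
    A sketch of one chunked approach, assuming the same URL as above: copy the connection's input stream to the response output stream through a fixed-size buffer instead of materializing the whole text. (In Groovy, response.outputStream << connection.inputStream, i.e. the stream rather than .text, performs a similar buffered copy.)

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.URL;

        public class ChunkedDownload {
            /** Copies the remote CSV to the response in 8 KB chunks (buffer size is arbitrary). */
            public static void copy(OutputStream responseOut) throws Exception {
                InputStream in = new URL("http://www.mysite.com/input.csv").openConnection().getInputStream();
                try {
                    byte[] buffer = new byte[8192];
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        responseOut.write(buffer, 0, n); // only one chunk held in memory at a time
                    }
                    responseOut.flush();
                } finally {
                    in.close();
                }
            }
        }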

    Read the article

  • How would I UPDATE these table entries with SQL?

    - by CT
    I am working on an asset database problem. I enter assets into a database. Every object is an asset and has variables within the asset table. An object is also a specific type of asset; in this example the type is server. Here is the query to retrieve all necessary data: SELECT asset.id ,asset.company ,asset.location ,asset.purchaseDate ,asset.purchaseOrder ,asset.value ,asset.type ,asset.notes ,server.manufacturer ,server.model ,server.serialNumber ,server.esc ,server.warranty ,server.user ,server.prevUser ,server.cpu ,server.memory ,server.hardDrive FROM asset LEFT JOIN server ON server.id = asset.id WHERE asset.id = '$id' How would I write a query to update an asset?
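
    Since each asset spans two tables, one portable sketch is to update both rows under the same id in a single transaction (shown here with JDBC; the columns and values are hypothetical placeholders):

        import java.sql.*;

        public class AssetUpdateSketch {
            public static void updateAsset(Connection con, String id) throws SQLException {
                con.setAutoCommit(false);
                try {
                    PreparedStatement asset = con.prepareStatement(
                            "UPDATE asset SET location = ?, notes = ? WHERE id = ?");
                    asset.setString(1, "HQ");
                    asset.setString(2, "reassigned");
                    asset.setString(3, id);
                    asset.executeUpdate();

                    PreparedStatement server = con.prepareStatement(
                            "UPDATE server SET manufacturer = ?, model = ? WHERE id = ?");
                    server.setString(1, "Dell");
                    server.setString(2, "R710");
                    server.setString(3, id);
                    server.executeUpdate();

                    con.commit(); // both updates succeed or neither does
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }

    Some databases also allow a joined multi-table UPDATE (e.g. MySQL's UPDATE asset LEFT JOIN server ON server.id = asset.id SET ...), but the two-statement transaction works everywhere.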

    Read the article

  • Using memtables in SQL: when is it reasonable, and is it safe?

    - by Spiros
    I was just reading an update from a friend's project which mentioned using memtables to store data temporarily and then flush it to a table on disk. Up to now I have never faced a situation where I would use a memtable, or where I would think the use of a memtable would be beneficial, so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temporary data? There is always the limitation of available physical memory.

    Read the article

  • What's the problem with int *p; *p=23;?

    - by piemesons
    Yesterday in my interview I was asked this question. (At the time I was under high pressure from so many abrupt questions.) int *p; *p=23; printf("%d",*p); Is there any problem with this code? I explained to him that you are trying to assign a value through a pointer for which no memory has been allocated. But from the way he reacted, it seemed like I was wrong. Although I got the job, afterwards he said, "Mohit, think about this question again." I don't know what he was trying to say. Please let me know: is there any problem with my answer?

    Read the article

  • Lightweight web browser for testing

    - by Ghostrider
    I have a very specific test setup in mind. I would like to start a web browser that understands JavaScript and can use an HTTP proxy, point it to a URL (ideally by specifying it on the command line along with the proxy config), wait for the page to load while observing (in the proxy) the requests generated as the page is rendered and JavaScript is executed, then kill the whole thing and restart. I don't care at all about how the page renders graphically. Which browser or tool should I use for this? Ideally it should be something self-contained that doesn't require installation (just an EXE file that runs from the command line). Lynx would have been ideal but for the fact that it doesn't support JS. It should have as small a memory footprint as possible.

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand whether this statement has any truth to it or whether it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes the "200 billion lines" figure is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!).

    In particular in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 60s and 70s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand results in a few tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and data for the program, how many lines of code would have fit into a main memory of half a megabyte?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers would we have needed to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In a time-frame of 20 to 30 years, this would mean that there must have been two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as there are in China today, wouldn't it?

    Question 5: What about the competition? So far I have come up with two things here: 1) IBM had its own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol; however, all other things being equal, I wonder whether IBM marketing had really pushed its own development off the market in favor of Cobol on its machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board, in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD funded its own language in order to increase cost-effectiveness and reduce the proliferation of programming languages, which led to the use of Ada. Would they really have worried about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.

    Read the article

  • iPad crashes that aren't happening on iPhone or iPod Touch

    - by alyoshak
    Has anyone had difficulty getting what has otherwise been a solid iPhone app working on the iPad? I was under the impression that iPhone apps would run without problems on the iPad. We are experiencing crashes (not intermittent; same place, at the same time) that we've never had on the iPhone or iPod Touch. I have become suspicious that the crashes are memory-management related, but even if so, why only on the iPad? 2010-05-17 10:19:06.474 ASSIST[82:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UISectionRowData 0x6041480> valueForUndefinedKey:]: this class is not key value coding-compliant for the key deliveryDate.' 2010-05-17 10:19:06.481 ASSIST[82:207] Stack: ( 852041337, 861292157, 852040861, 850755255, 850750995, 850758945, 81279, 123007, 126693, 149141, 851599725, 827486573, 827486477, 827486431, 827485745, 827487359, 827454123, 851903137, 851590065, 851588321, 819339483, 819339655, 827151561, 827144691, 9461, 9324 ) terminate called after throwing an instance of 'NSException' Program received signal: “SIGABRT”.

    Read the article

  • Which Python XML library should I use?

    - by PulpFiction
    Hello. I am going to handle XML files for a project. I had earlier decided to use lxml, but after reading the requirements, I think ElementTree would be better for my purpose. The XML files that have to be processed are: small in size (typically < 10 KB), with no namespaces, and a simple XML structure. Given the small XML size, memory is not an issue. My only concern is fast parsing. What should I go with? Mostly I have seen people recommend lxml, but given my parsing requirements, do I really stand to benefit from it, or would ElementTree serve my purpose better?

    Read the article

  • Why should I use EJB?

    - by Nitesh Panchal
    Hello. As we all know, EJBs in 3.0 or 3.1 are simple POJOs: you just add annotations here and there and a simple class gets turned into an EJB. So now my question is: why should I use EJBs at all? Can I not do without them? In .NET I created a class library and got things done; I never felt the need for anything like EJB, since simple classes were enough. Then why do people in Java put so much stress on EJBs? What is the difference between a simple POJO and an EJB in terms of execution and memory? Furthermore, which functions should I write in an EJB and which in a simple class? Should I dump every function into EJBs only, or is there some kind of strategy? Does EJB provide anything special?
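
    For contrast, a minimal sketch of what the annotations buy you: a hypothetical stateless session bean that gets instance pooling, thread safety, dependency injection and a container-managed transaction per business method, none of which a plain class library provides for free:

        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class OrderService {

            @PersistenceContext
            private EntityManager em; // injected by the container

            // Runs inside a container-managed transaction; Order is a hypothetical JPA entity.
            public void placeOrder(Order order) {
                em.persist(order);
            }
        }

    A rule of thumb that follows from this: put functions that need those container services (transactions, security, remoting, pooling) behind an EJB boundary, and keep pure logic in plain classes that the beans call.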

    Read the article

  • How to store images without taking up huge amounts of RAM

    - by Sheeo
    I'm working on a Silverlight project where users get to create their own collages. The problem: when loading a bunch of images using the BitmapImage class, Silverlight hogs huge, unreasonable amounts of RAM. 150 pictures, of which a single one is at most 4.5 MB, take up about 1.6 GB of RAM, which ends up throwing memory exceptions. I'm loading them through streams, since the users select their own photos. What I'm looking for: a class, method or some process to eliminate the huge amount of RAM being used. I've tried using a WriteableBitmap to render the images into, but I find this method forces me to reinvent the wheel when it comes to drag/drop and other things I want users to be able to do with the images.

    Read the article

  • Simple File-based Record Storage with Fast Text Searching for Compact Framework and Silverlight

    - by Eric Farr
    I have a single table with lots of records (over 100k) that I need to be able to index and search on several text fields. The easiest searches will have the first part of the string specified (e.g., LIKE 'ABC%' in SQL). The tougher searches will need to find any substring within the text fields (e.g., LIKE '%ABC%' in SQL). I need to run on the Compact Framework. SQL Compact is a memory hog and overkill for my one table. Besides, I'd like to be able to run on Silverlight 4 eventually. The file and indexes can be generated on the full .NET Framework, and I only need read capability on the Compact Framework. My records are not especially large and can be expressed in a fixed-length format. I'm looking for some existing code or libraries to avoid having to write a file-based B-tree implementation from scratch.

    Read the article

  • Fast, lightweight XML parser

    - by joe90
    I have a specific-format XML document that will get pushed to me. This document will always be the same type, so it's very strict. I need to parse it so that I can convert it into JSON (well, a slightly bastardized version, so someone else can use it with Dojo). My question is: shall I use a very fast, lightweight XML parser (no need for SAX, etc.; any ideas?), or write my own, basically reading into a StringBuffer and spinning through the array? Basically, under the covers I assume all XML parsers will spin through the string (or memory buffer) and parse, producing output on the way through. Thanks. Edit: Thanks for the responses so far :) The XML will be between 3 or 4 lines and about 50 lines at the extreme.
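
    For documents this small, a pull parser is usually fast enough that a hand-rolled one buys little. A minimal StAX sketch (the element names are hypothetical):

        import java.io.StringReader;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StaxSketch {
            public static void main(String[] args) throws Exception {
                String xml = "<order><id>42</id><total>9.99</total></order>"; // hypothetical input
                XMLStreamReader r = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new StringReader(xml));
                while (r.hasNext()) {
                    if (r.next() == XMLStreamConstants.START_ELEMENT
                            && !r.getLocalName().equals("order")) {
                        // getElementText() reads a simple element's text and skips to its end tag
                        System.out.println(r.getLocalName() + " = " + r.getElementText());
                    }
                }
                r.close();
            }
        }

    From there, emitting JSON is just a matter of writing the name/value pairs out in the target shape.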

    Read the article

  • Java - volatile keyword

    - by Tiyoal
    Say I have two threads and an object. One thread assigns the object: public void assign(MyObject o) { myObject = o; } Another thread uses the object: public void use() { myObject.use(); } Does the variable myObject have to be declared as volatile? I am trying to understand when to use volatile and when not, and this is puzzling me. Is it possible that the second thread keeps a reference to an old object in its local memory cache? If not, why not? Thanks a lot.
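
    A minimal sketch of the visibility problem being asked about. Without volatile, the Java memory model allows the reader thread to cache the reference and spin on the stale null forever; declaring the field volatile guarantees the write becomes visible:

        public class VolatileSketch {
            private static volatile String message; // stand-in for myObject

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    while (message == null) { /* spin until the write is visible */ }
                    System.out.println("saw: " + message);
                });
                reader.start();
                Thread.sleep(100);
                message = "hello"; // guaranteed visible to the reader because the field is volatile
                reader.join();
            }
        }

    volatile also gives a happens-before edge: everything the writer did before the assignment is visible to the reader once it observes the new reference. It does not, however, make compound operations like check-then-act atomic.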

    Read the article

  • A hooked DirectX 9 program crashes on window resize (texture-related)

    - by Ben
    I'm using EasyHook and SlimDX to overlay some graphics using SlimDX's Sprite and Texture classes. When I resize windows, some programs are fine, but others crash; Winamp's MilkDrop 2, for example, gives me an ambiguous memory error. I expect this is due to the aftermarket Texture I created. The question is: which VTable function should I hook, and/or how and when do I dispose of and recreate the Texture? Reset, perhaps? If it isn't obvious, I don't know much about DirectX.

    Read the article

  • ASP.NET MVC Session usage

    - by Ben
    Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application. However, in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null). It would be good to persist these in something with a longer lifespan, namely session. Similarly, in an order-processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then, when the order gets to a certain state, save it. So it would seem that session is the best place for this? Or would you recommend, in the case of the order, retrieving it from the database on each request rather than using session? Thoughts and suggestions appreciated. Thanks, Ben

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:
    - The class hierarchy is complex and not obvious.
    - Some classes don't have a well-defined purpose (they do a number of unrelated things).
    - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
    - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
    - My memory management/pooling system is kind of clunky and less than transparent.
    These look like excellent reasons to refactor and clean up the code, aiding future maintenance and extension, but that could be quite time-consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code or work on features?

    Read the article

  • How to create a custom ADO Multidimensional catalog with no database

    - by Alan Clark
    Does anyone know of an example of how to dynamically define and build ADO MD (ActiveX Data Objects Multidimensional) catalogs and cube definitions from a set of data other than a database? Background: we have a huge amount of data in our application that we export to a database and then query using the usual SQL joins, groups, sums, etc., to produce reports. The data in the application is originally in objects and arrays. The problem is that the amount of data is so large the export can take two hours. So I am trying to figure out a good way of querying the objects in memory, either with a custom OLAP algorithm or library, or with ADO MD. But I haven't been able to find an example of using ADO MD without a database behind it. We are using Delphi 2010, so we would use the ADO ActiveX, but I imagine ADO.NET MD is similar. I realize that if the application data were already stored in a database, the problem would solve itself. Also, if Delphi had LINQ capability, I could query the objects and arrays that way.

    Read the article
