Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 563/626

  • What does it mean for an OS to "execute within user processes"? Do any modern OSes use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange.

    First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes, if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D)

    Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran a std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses the operator < between the elements to compare.

    Forgetting about the list for a moment, my actual question is: how do these operators (>, <, >=, <=) work on pointers in C++ and C? Do they simply compare the actual memory addresses?

        char* p1 = (char*) 0xDAB0BC47;
        char* p2 = (char*) 0xBABEC475;

    e.g. on a 32-bit, little-endian system, p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...
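
    For illustration, a minimal sketch (not from the question) of the std::list case: the default sort orders by pointer value, and a comparator that looks at the pointed-to characters restores lexicographic order. Note that the standards only fully define relational comparisons between pointers into the same array or object; on typical flat-memory platforms they behave as raw address comparisons, which matches the test above.

        // Sorting const char* elements: by address (default) vs. by content.
        #include <cstring>
        #include <iostream>
        #include <list>

        int main() {
            std::list<const char*> words = {"pear", "apple", "orange"};

            words.sort();   // default operator<: compares the pointer values,
                            // i.e. wherever the literals happen to live in memory

            words.sort([](const char* a, const char* b) {
                return std::strcmp(a, b) < 0;   // compare the strings themselves
            });

            for (const char* w : words) std::cout << w << '\n';  // apple orange pear
        }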

  • python crc32 woes

    - by lazyr
    I'm writing a python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no?

    The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits.

    But hours of coding and testing have left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
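
    For reference, the CRC-32 variant behind binascii.crc32 (the zlib/ISO 3309 one) is not the bare polynomial division of the Wikipedia derivation: it initializes the shift register to 0xFFFFFFFF and XORs the final remainder with 0xFFFFFFFF, which is exactly why a lone zero byte yields a nonzero checksum. A minimal bit-by-bit sketch of the reflected form (polynomial 0xEDB88320):

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>

        // CRC-32 as computed by zlib/binascii.crc32: reflected polynomial
        // 0xEDB88320, initial register 0xFFFFFFFF, final XOR with 0xFFFFFFFF.
        // The pre/post conditioning is what makes crc32("\x00") nonzero.
        std::uint32_t crc32(const std::uint8_t* data, std::size_t len) {
            std::uint32_t crc = 0xFFFFFFFFu;             // initial value, NOT zero
            for (std::size_t i = 0; i < len; ++i) {
                crc ^= data[i];
                for (int bit = 0; bit < 8; ++bit)
                    crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
            }
            return crc ^ 0xFFFFFFFFu;                    // final XOR
        }

        int main() {
            std::uint8_t zero = 0;
            // prints D202EF8D, not 00000000
            std::printf("%08X\n", static_cast<unsigned>(crc32(&zero, 1)));
        }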

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    * Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
    * There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    * Lastly, there are some operations that should work offline, with changes synced later.

    To make it more concrete:

    * We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    * We need to sync server changes to vendor and product information.
    * Searching the full catalog requires users to be online.
    * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

    * Is there a way to use the RESTAdapter in combination with LocalStorage?
    * Is there some Socket.IO support? (Happy to do this part manually.)
    * Is there queueing support? Ideally at the Ember Data level.

    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF Listbox's ItemsSource, and that's the end of it; it works fine. Note that the Listbox is virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree.

    However, the caveat right now is that when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke's is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio.

    I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised into some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.

  • Calling a WPF application and modifying exposed properties?

    - by Justin
    I have a WPF keyboard application, developed in such a way that another application can call it and modify its properties to adapt the keyboard to its needs. Right now I have a file *.Keys.Set which tells the application (on open) to style itself according to that new style. I know this file could be passed as a command line argument into the application; that would not be a problem.

    My concern is: is there a way, via a managed environment, to change the properties of the executable as long as they are exposed properly? An example:

        'Creates a new instance of the Keyboard Application
        Dim e_key As New WpfApplication("C:\egt\components\keyboard.exe")
        'Sets the style path
        e_key.SetStylePath("c:\users\joe\apps\me\default.keys.set")
        e_key.Refresh()       'Applies the style
        e_key.HideMenu()      'Hides the menu
        e_key.ShowDeck("PIN") 'Shows the custom "deck" of keyboard keys the developer
                              'created in the style application.

        ''work with events and response

        'Clear the instance from memory
        e_key.Close()
        e_key.Dispose()
        e_key = Nothing

    This would allow my application to become easily accessible to other touch screen application developers, allowing them to use my keyboard and keep the functionality they need. It seems like it might be possible, because (name of executable).application shows all the exposed functions, properties, and values. I just have never done this before. Any help would be appreciated, thank you in advance.

  • List iterator not dereferencable?

    - by Roderick
    Hi All, I get the error "list iterator not dereferencable" when using the following code:

        bool done = false;
        while (!_list_of_messages.empty() && !done)
        {
            // request the next message to create a frame
            // DEBUG: ERROR WHEN NEXT LINE IS EXECUTED:
            Counted_message_reader reader = *(_list_of_messages.begin());
            if (reader.has_more_data())
            {
                _list_of_frames.push_back(new Dlp_data_frame(reader, _send_compressed_frames));
                done = true;
            }
            else
            {
                _list_of_messages.pop_front();
            }
        }

    (The line beginning with "Counted_message_reader" is the one giving the problem.)

    Note that the error doesn't always occur, but seemingly at random times (usually when there's lots of buffered data). _list_of_messages is declared as follows:

        std::list<Counted_message_reader> _list_of_messages;

    In the surrounding code we do pop_front, push_front and size, empty or end checks on _list_of_messages, but no erase calls. I've studied the STL documentation and can't see any glaring problems. Is there something wrong with the above code, or do I have a memory leak somewhere? Thanks! Appreciated!
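
    One observation, offered as a guess rather than a diagnosis: intermittent failures under heavy load are the classic signature of another thread mutating the list between this thread's empty() check and its begin() dereference. If the list really is shared across threads, a minimal sketch of serializing access (the mutex name is illustrative, and int stands in for Counted_message_reader):

        #include <list>
        #include <mutex>

        std::mutex list_mutex;              // hypothetical: guards the shared list
        std::list<int> _list_of_messages;   // element type simplified for the sketch

        void consume_front() {
            std::lock_guard<std::mutex> guard(list_mutex);
            if (!_list_of_messages.empty()) {
                int reader = *_list_of_messages.begin();  // safe: emptiness checked under the same lock
                // ... process 'reader' ...
                _list_of_messages.pop_front();
            }
        }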

  • How to get address of va_arg?

    - by lionbest
    I'm hacking on some old C API and I got a compile error with the following code:

        void OP_Exec(OP* op, ...)
        {
            int i;
            va_list vl;
            va_start(vl, op);
            for (i = 0; i < op->param_count; ++i)
            {
                switch (op->param_type[i])
                {
                case OP_PCHAR:
                    op->param_buffer[i] = va_arg(vl, char*); // ok, it works
                    break;
                case OP_INT:
                    op->param_buffer[i] = &va_arg(vl, int); // error here
                    break;
                // ... more here
                }
            }
            op->pexec(op);
            va_end(vl);
        }

    The error with gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) was:

        main.c|55|error: lvalue required as unary '&' operand

    So why exactly is it not possible to get a pointer to the argument here? How can it be fixed? This code is executed very often with different OP*, so I prefer not to allocate extra memory. Is it possible to iterate over a va_list knowing only the size of the arguments?
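
    The root cause is that va_arg() expands to an rvalue: it yields the argument's value, not a named object, so & has nothing whose address it could take. A minimal sketch of one common workaround, under the assumption that OP can carry a small side array for the integer values (the int_storage field is invented for illustration):

        #include <stdarg.h>

        enum { OP_PCHAR, OP_INT };

        typedef struct OP {
            int    param_count;
            int   *param_type;
            void **param_buffer;
            int   *int_storage;   /* hypothetical: one int slot per parameter */
        } OP;

        void OP_Exec(OP* op, ...) {
            va_list vl;
            va_start(vl, op);
            for (int i = 0; i < op->param_count; ++i) {
                switch (op->param_type[i]) {
                case OP_PCHAR:
                    op->param_buffer[i] = va_arg(vl, char*);
                    break;
                case OP_INT:
                    op->int_storage[i] = va_arg(vl, int);      /* copy the value out... */
                    op->param_buffer[i] = &op->int_storage[i]; /* ...then point at our copy */
                    break;
                }
            }
            va_end(vl);
        }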

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    Now, there's a query that, when we run it, gives these durations (first we clear the cache): 300ms, 20ms, 15ms, 17ms, ... The first time it takes longer, but from then on it is cached.

    At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try that tomorrow): 2500ms, 2600ms, 2400ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause?

    * Not enough memory?
    * Fragmentation?
    * Physical storage?

    I know it can be a lot of things, but maybe some of you have got some info for me on how to go on with this issue...

  • [C++][OpenMP] Proper use of "atomic directive" to lock STL container

    - by conradlee
    I have a large number of sets of integers, which I have, in turn, put into a vector of pointers. I need to be able to update these sets of integers in parallel without causing a race condition.

    More specifically, I am using OpenMP's "parallel for" construct. For dealing with shared resources, OpenMP offers a handy "atomic directive", which allows one to avoid a race condition on a specific piece of memory without using locks. It would be convenient if I could use the "atomic directive" to prevent simultaneous updating of my integer sets; however, I'm not sure whether this is possible. Basically, I want to know whether the following code could lead to a race condition:

        vector< set<int>* > membershipDirectory(numSets, new set<int>);

        #pragma omp for schedule(guided, expandChunksize)
        for (int i = 0; i < 100; i++)
        {
            set<int>* sp = membershipDirectory[5];
            #pragma omp atomic
            sp->insert(45);
        }

    (Apologies for any syntax errors in the code---I hope you get the point.)

    I have seen a similar example of this for incrementing an integer, but I'm not sure whether it works when working with a pointer to a container as in my case.
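
    For what it's worth, a sketch of the usual alternative (my own toy code, not the asker's): omp atomic only accepts simple scalar updates (x++, x += expr, and the like), so it cannot guard a member-function call such as insert(); a named critical section, or one omp_lock_t per set, is the standard tool. Note also that the fill constructor above evaluates new set<int> once and copies that single pointer into every slot, which the sketch below deliberately avoids:

        #include <omp.h>
        #include <set>
        #include <vector>

        int main() {
            // One distinct set per slot (the fill constructor would alias a single set).
            std::vector<std::set<int>*> directory(8);
            for (auto& p : directory) p = new std::set<int>;

            #pragma omp parallel for
            for (int i = 0; i < 100; i++) {
                std::set<int>* sp = directory[i % 8];
                #pragma omp critical(membership)   // serializes the container update
                sp->insert(i);
            }

            for (auto p : directory) delete p;
        }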

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar.

    By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do

        $CPPFLAGS += " -DRICE"

    to add additional flags. But how do I remove things without editing the Makefile directly?

    A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!

  • MS Access Bulk Insert exits app

    - by Brij
    In the web service, I have to bulk insert data into an MS Access database. At first there was a single complex insert-select query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler, using LINQ to remove the complexity, and now I am inserting records using a for loop. There are approx 10000 records. Again, same problem. I put a breakpoint after the loop, but no hit. I have used try-catch, but no error is found.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation of MS Access?

  • What's the best way to store Logon User information for a Web Application?

    - by Morgan Cheng
    I was once on a project of a web application developed on ASP.NET. For each logon user, there is an object (let's call it UserSessionObject here) created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So, this UserSessionObject is pretty important.

    This design brings several problems, found later:

    1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer for sticky connections. That is, HTTP requests in a single session are always sent to the same web server behind it. This limits scalability and maintainability.

    2) This UserSessionObject is accessed in every HTTP request. To keep consistency, there is an exclusive lock for the UserSessionObject. Only one HTTP request can be processed at any given time, because it must obtain the lock first. Performance and response time are affected.

    Now, I'm wondering whether there is a better design to handle such logon user cases. It seems a shared-nothing architecture would help: logon user info would be retrieved from the database on each request. I'm afraid that would hurt performance. Is there any design pattern for logon-user web apps? Thanks.

  • Rewriting jQuery to plain old JavaScript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product).

    What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers

    Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel" - it was just for experience and personal improvement. Just because something exists, doesn't mean you shouldn't explore it in greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks; I would much rather use my own code and iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

  • Explanation of `self` usage during dealloc?

    - by Greg
    I'm trying to lock down my understanding of proper memory management within Objective-C. I've gotten into the habit of explicitly writing self.myProperty rather than just myProperty, because I was encountering occasional scenarios where a property would not be set to the reference that I intended. Now, I'm reading Apple documentation on releasing IBOutlets, and they say that all outlets should be set to nil during dealloc. So, I put this in place as follows and experienced crashes as a result:

        - (void)dealloc {
            [self.dataModel close];
            [self.dataModel release], self.dataModel = nil;
            [super dealloc];
        }

    So, I tried taking out the "self" references, like so:

        - (void)dealloc {
            [dataModel close];
            [dataModel release], dataModel = nil;
            [super dealloc];
        }

    This second version seems to work as expected. However, it has me a bit confused. Why would self cause a crash in that case, when I thought self was a fairly benign reference used more as a formality than anything else? Also, if self is not appropriate in this case, then I have to ask: when should you include self references, and when should you not?

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with an OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM is connected to the I2C bus of the micro, which works as permanent memory. The PC software can read from and write to this EEPROM.

    Consider two numbers, B and C, each a two-byte integer. B is known to both the PC software and the micro and is a constant. C will be a number so close to B that B-C will fit in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and stored into the EEPROM of the micro for later use. Now the micro can store C in two ways:

    1. The micro can store the whole two bytes representing C.
    2. The micro can store B-C as a one-byte signed integer, and can later derive C from B and B-C.

    I think that two's complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement.

    Should I get rid of the headache that negative numbers can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick to the two-byte solution because I don't need to deal with negative numbers? Which one would you prefer, and why?
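
    To make option 2 concrete, a worked sketch (all values illustrative): routing the delta through an unsigned byte pins down the bit pattern on the wire, so both sides agree on what the EEPROM byte means even if their compilers treat plain char differently.

        #include <cstdint>
        #include <cstdio>

        int main() {
            const int16_t B = 1000;    // constant known to both PC and micro
            int16_t C = 1100;          // measured value, close to B

            int8_t  delta = static_cast<int8_t>(B - C);    // -100, fits in one byte
            uint8_t wire  = static_cast<uint8_t>(delta);   // 0x9C, the byte written to EEPROM

            // ... EEPROM write/read of 'wire' would happen here ...

            // The narrowing back to int8_t is implementation-defined before C++20,
            // but wraps two's-complement style on every mainstream compiler.
            int16_t recovered = B - static_cast<int8_t>(wire);  // 1000 - (-100) = 1100
            std::printf("wire byte 0x%02X -> C = %d\n", wire, recovered);
        }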

  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads?

    - by deroby
    Hi all, being new to C# I've run into this conundrum when passing a SqlDataReader around between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper task run through it record by record, doing some stuff based on the contents. There is no feedback to the recordset; it simply wades through until no records are left. This works fine, but given the nature of the job at hand, it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance).

    The question then becomes: when I pass this recordset as a SqlDataReader, do I have to use ref or not? It kind of boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk having the record position moved forward while not all fields have been fully read yet? The latter seems more like a data-racing issue and probably is covered by the lock()ing mechanism (or not?).

    My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref keeps me from applying a Using() construction, which isn't very nice either.

    I thus created a "basic" project that tackles the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not?

    You can find the test project here, as it seems I can't upload it to this question directly. PS: it's just a test project and I'm new to C#, so please be gentle on me when breaking down the code =P

  • C char question about signed/unsigned encoding

    - by drigoSkalWalker
    Hi guys. I read that C does not define whether char is signed or unsigned, and the GCC page says it can be signed on x86 and unsigned on PowerPC and ARM. OK; I'm writing a program with GLib, which defines char as gchar (nothing more than that, just a standardized name).

    My question is: what about UTF-8? Does it use more than one byte per character? Say that I have a variable

        unsigned char *string = "My string with UTF-8 encoding ~ çã";

    See, if I declare my variable as unsigned, will I have only 127 values (so my program would need more bytes to store the string), or do the UTF-8 bytes become negative too? Sorry if I can't explain it correctly, but I think it is a bit complex.

    NOTE: Thanks for all the answers. I don't understand how it is interpreted normally. I think that, like ASCII, if I have a signed and an unsigned char in my program, the strings have different values, and that leads to confusion; imagine it with UTF-8, then.
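
    A small sketch of what actually happens (assuming the source file itself is UTF-8 encoded): the bytes in memory are identical whether char is signed or unsigned; only the numeric interpretation of code units above 0x7F changes, turning negative when viewed as signed. Casting to unsigned char recovers the 0-255 value:

        #include <cstdio>

        int main() {
            const char* s = "ça";   // "ç" (U+00E7) encodes as two UTF-8 bytes: 0xC3 0xA7
            for (const char* p = s; *p; ++p) {
                int as_signed   = static_cast<signed char>(*p);    // -61 -89 97: what a signed char sees
                int as_unsigned = static_cast<unsigned char>(*p);  // 195 167 97: the raw code units
                std::printf("signed=%4d  unsigned=0x%02X\n", as_signed, as_unsigned);
            }
        }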

  • Code Golf: Counting XML elements in a file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this:

        <Lists>
          <List>
            <Note/>
            ...
            <Note/>
          </List>
          <List>
            <Note/>
            ...
            <Note/>
          </List>
        </Lists>

    Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible.

    Design parameters:

    * Must be in Java (Android application).
    * Must AVOID allocating memory as much as possible.
    * Must return the total number of Note elements and the number of List elements in the file, regardless of location in the file.

    The number of Lists will typically be small (1-4), and the number of Notes can potentially be very large (upwards of 1000, typically 100) per file. I look forward to your suggestions.

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi Guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-living cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-living cache. Effectively what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-living cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the cache type). I will do my best to try and explain:

        master-cache contains:    [A,B,C,D,E,F]
        temporary-cache created with state [A,B,C,D,E,F]

        1) temporary-cache adds item G:    [A,B,C,D,E,F,G]
        2) temporary-cache removes item B: [A,C,D,E,F,G]

        master-cache contains: [A,B,C,D,E,F]
        3) master-cache adds items [X,Y,Z]: [A,B,C,D,E,F,X,Y,Z]

        temporary-cache contains: [A,C,D,E,F,G]

    Things get even harder when the values in the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones).

    I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList; however, when you get out to about 200,000 items, the system just runs out of memory. I know 200,000 is excessive to iterate, but I am trying to stress my code a bit.

    I had thought that it might be possible to somehow "proxy" the list, so the temporary-cache uses the master-cache and stores all of its changes (effectively a Memento for each change); however, that quickly becomes a nightmare when you want to iterate the temporary-cache or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on the type of the temporary-cache, whether it is "auto-update" or not), I get completely out of my depth.

    Any pointers to techniques or data structures or just general concepts to try and research will be greatly appreciated. Cheers, Aidos

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, this takes time (the loading), and I'd like to keep the bitmap references around between lifecycle changes for quicker access.

    My question is: is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps needing resources, 30-minute restart, etc). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need.

    Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?

  • CreateThread() fails on 64 bit Windows, works on 32 bit Windows. Why?

    - by Stephen Kellett
    Operating system: Windows XP 64-bit, SP2.

    I have an unusual problem. I am porting some code from 32-bit to 64-bit. The 32-bit code works just fine. But when I call CreateThread() in the 64-bit version, the call fails. This happens in three places: two call CreateThread(), and one calls _beginthreadex(), which calls CreateThread(). All three calls fail with error code 0x3E6, "Invalid access to memory location". The problem is, all the input parameters are correct:

        HANDLE h;
        DWORD threadID;
        h = CreateThread(0,            // default security
                         0,            // default stack size
                         myThreadFunc, // valid function to call
                         myParam,      // my param
                         0,            // no flags, start thread immediately
                         &threadID);

    All three calls to CreateThread() are made from a DLL I've injected into the target program at the start of the program's execution (this is before the program has got to the start of main()/WinMain()). If I call CreateThread() from the target program (same params) via, say, a menu, it works. Same parameters, etc. Bizarre.

    If I pass NULL instead of &threadID, it still fails. If I pass NULL as myParam, it still fails. I'm not calling CreateThread from inside DllMain(), so that isn't the problem. I'm confused, and searching on Google etc. hasn't shown any relevant answers.

    If anyone has seen this before or has any ideas, please let me know. Thanks for reading.

  • Usage of autorelease pools for fetch method

    - by Matthias
    Hi, I'm a little bit confused regarding autorelease pools when programming for the iPhone. I've read a lot, and the opinions seem to range from "Do NOT use" to "No problem to use".

    My specific problem is: I would like to have a class which encapsulates SQLite3 access, so I have, for example, the following method:

        -(User*)fetchUserWithId:(NSInteger)userId

    Now, within this method an SQL query is run and a new User object is created with the data from the database and then returned. Within this DB access class I don't need the object anymore, so I could release it; but since the calling method needs it, I would do an autorelease, wouldn't I?

    So, is it okay to use autorelease here, or would it use too much memory if this method is called quite frequently? Some websites say that the autorelease pool is not released until the end of the application; some say at every event (e.g. the user touches something). If I should not use autorelease, how can I make sure that the object is released correctly? Can I do a release in the fetch method and hope that the object is still there until the calling method can do a retain?

    Thanks for your help! Regards, Matthias

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
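
    For approach 3 in C++, yield isn't strictly needed; an ordinary pull interface does the job. A minimal sketch with toy token types (nothing here is from the question), where the parser asks for one token at a time so nothing beyond the current token is buffered:

        #include <cctype>
        #include <string>

        struct Token {
            enum Kind { Number, Plus, End } kind;
            std::string text;
        };

        class Lexer {
            const char* p_;
        public:
            explicit Lexer(const char* src) : p_(src) {}
            Token next() {   // produces exactly one token per call
                while (std::isspace(static_cast<unsigned char>(*p_))) ++p_;
                if (*p_ == '+') { ++p_; return {Token::Plus, "+"}; }
                if (std::isdigit(static_cast<unsigned char>(*p_))) {
                    std::string num;
                    while (std::isdigit(static_cast<unsigned char>(*p_))) num += *p_++;
                    return {Token::Number, num};
                }
                return {Token::End, ""};   // '\0' or anything unexpected
            }
        };

        // A trivial "parser" that pulls tokens on demand: sum("1 + 2 + 3") == 6.
        int sum(const char* src) {
            Lexer lex(src);
            int total = 0;
            for (Token t = lex.next(); t.kind != Token::End; t = lex.next())
                if (t.kind == Token::Number) total += std::stoi(t.text);
            return total;
        }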

  • How to use predicates in LINQ to Entities for Entity Framework objects

    - by user274947
    I'm using LINQ to Entities for Entity Framework objects in my Data Access Layer. My goal is to filter as much as I can in the database, without applying filtering logic to in-memory results. For that purpose, the Business Logic Layer passes a predicate to the Data Access Layer; I mean a Func<MyEntity, bool>. So, if I use this predicate directly, like

        public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
        {
            return _Context.MyEntities.Where(x => isMatched(x));
        }

    I'm getting the exception:

        [System.NotSupportedException] --- {"The LINQ expression node type 'Invoke' is not supported in LINQ to Entities."}

    The solution in that question suggests using the AsExpandable() method from the LINQKit library. But again, using

        public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
        {
            return _Context.MyEntities.AsExpandable().Where(x => isMatched(x));
        }

    I'm getting the exception:

        Unable to cast object of type 'System.Linq.Expressions.FieldExpression' to type 'System.Linq.Expressions.LambdaExpression'

    Is there a way to use a predicate in a LINQ to Entities query for Entity Framework objects, so that it is correctly transformed into a SQL statement? Thank you.
