Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 604 of 670

  • Extra NotifyIcon shown in system tray

    - by Kettch19
    I'm having an issue with an app where my NotifyIcon displays an extra icon. The steps to reproduce it are easy, but the problem is that the extra icon shows up after any of the actual codebehind we've added fires. Put simply, clicking a button triggers execution of method FooBar(), which runs all the way through fine, but its primary duty is to fire a BackgroundWorker to log into another of our apps. It only appears if this particular button is clicked.

    Strangely enough, we have a WndProc method override, and if I step through until the extra NotifyIcon appears, it always appears during this method, so something else beyond the codebehind must be triggering the behavior. Our WndProc method is currently (although I don't think it's caused by the WndProc):

        Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
            'Check for WM_COPYDATA message from other app or drag/drop action and handle message
            If m.Msg = NativeMethods.WM_COPYDATA Then
                'Get the standard message structure from lParam
                Dim CD As NativeMethods.COPYDATASTRUCT = m.GetLParam(GetType(NativeMethods.COPYDATASTRUCT))
                'Set up byte array
                Dim B(CD.cbData) As Byte
                'Copy data from memory into array
                Runtime.InteropServices.Marshal.Copy(New IntPtr(CD.lpData), B, 0, CD.cbData)
                'Get message as string and process
                ProcessWMCopyData(System.Text.Encoding.Default.GetString(B))
                'Empty array
                Erase B
                'Set message result to 'true', meaning message handled
                m.Result = New IntPtr(1)
            End If
            'Pass on result and all messages not handled by this app
            MyBase.WndProc(m)
        End Sub

    The only place in the code where the NotifyIcon in question is manipulated at all is in the following event handler (again, I don't think this is the culprit, but just for more info):

        Private Sub TrayIcon_MouseDoubleClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles TrayIcon.MouseDoubleClick
            If Me.Visible Then
                Me.Hide()
            Else
                PositionBottomRight()
                Me.Show()
            End If
        End Sub

    The BackgroundWorker's DoWork is as follows (just a class call to log in to our other app, but again just for info):

        Private Sub LoginBackgroundWorker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles LoginBackgroundWorker.DoWork
            Settings.IsLoggedIn = _wdService.LogOn(Settings.UserName, Settings.Password)
        End Sub

    Does anyone else have ideas on what might be causing this or how to debug it further? I've been banging my head on this without seeing a pattern, so another set of eyes would be extremely appreciated. :) I've posted this on the MSDN WinForms forums as well and have had no luck there so far either.

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not!

    I have already taken into account the following possibilities for speeding up performance:

    - Pre-normalize all the term vectors
    - Calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

    - If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    - Is there some clever way to only consider vectors which are likely to be close to the query document?
    - Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
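    As a point of reference, here is a minimal sketch (mine, not from the post) of the pre-normalized, per-group approach in C++, with illustrative names throughout. With sparse storage, a never-before-seen term simply contributes nothing to existing dot products, so nothing needs re-computing; an inverted index from term to the groups containing it is the usual way to skip vectors that cannot be close to the query.

        // Sparse term vectors, normalized once so a plain dot product equals cosine similarity.
        #include <cmath>
        #include <string>
        #include <unordered_map>
        #include <vector>

        using SparseVec = std::unordered_map<std::string, double>;

        void normalize(SparseVec& v) {
            double norm = 0.0;
            for (const auto& kv : v) norm += kv.second * kv.second;
            norm = std::sqrt(norm);
            if (norm > 0.0)
                for (auto& kv : v) kv.second /= norm;
        }

        // Dot product of two normalized sparse vectors; iterate over the smaller one.
        double cosine(const SparseVec& a, const SparseVec& b) {
            const SparseVec& smaller = a.size() < b.size() ? a : b;
            const SparseVec& larger  = a.size() < b.size() ? b : a;
            double dot = 0.0;
            for (const auto& kv : smaller) {
                auto it = larger.find(kv.first);
                if (it != larger.end()) dot += kv.second * it->second;
            }
            return dot;
        }

        // Pick the best of the ~10,000 group vectors for one query e-mail.
        int bestGroup(const SparseVec& query, const std::vector<SparseVec>& groups) {
            int best = -1;
            double bestScore = -1.0;
            for (int i = 0; i < (int)groups.size(); ++i) {
                double s = cosine(query, groups[i]);
                if (s > bestScore) { bestScore = s; best = i; }
            }
            return best;
        }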

  • (How) Can I approximate a "dynamic" index (key extractor) for Boost MultiIndex?

    - by Sarah
    I have a MultiIndex container of boost::shared_ptrs to members of class Host. These members contain private arrays bool infections[NUM_SEROTYPES] revealing Hosts' infection statuses with respect to each of 1,...,NUM_SEROTYPES serotypes. I want to be able to determine at any time in the simulation the number of people infected with a given serotype, but I'm not sure how:

    - Ideally, Boost MultiIndex would allow me to sort, for example, by Host::isInfected( int s ), where s is the serotype of interest. From what I understand, MultiIndex key extractors aren't allowed to take arguments.
    - An alternative would be to define an index for each serotype, but I don't see how to write the MultiIndex container typedef ... in such an extensible way. I will be changing the number of serotypes between simulations. (Do experienced programmers think this should be possible? I'll attempt it if so.)
    - There are 2^(NUM_SEROTYPES) possible infection statuses. For small numbers of serotypes, I could use a single index based on this number (or a binary string) and come up with some mapping from this key to actual infection status. Counting is still darn slow.
    - I could maintain a separate structure counting the total numbers of infecteds with each serotype. The synchrony is a bit of a pain, but the memory is fine.

    I would prefer a slicker option, since I would like to do further sorts on other host attributes (e.g., after counting the number infected with serotype s, count the number of those infected who are also in a particular household and have a particular age). Thanks in advance.
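    For what it's worth, the last option (a separate tally kept in sync) can be made fairly painless by funneling every status change through one place. A sketch with invented names, and with the shared_ptr/MultiIndex machinery omitted:

        #include <vector>

        class Host {
            std::vector<bool> infections;  // sized at run time, so serotype count can vary
        public:
            explicit Host(int numSerotypes) : infections(numSerotypes, false) {}
            bool isInfected(int s) const { return infections[s]; }
            void setInfected(int s, bool v) { infections[s] = v; }
        };

        // All infection changes go through this class, so counts[] never drifts
        // out of sync with the hosts themselves.
        class InfectionTally {
            std::vector<long> counts;  // counts[s] == hosts currently infected with serotype s
        public:
            explicit InfectionTally(int numSerotypes) : counts(numSerotypes, 0) {}

            void infect(Host& h, int s) {
                if (!h.isInfected(s)) { h.setInfected(s, true); ++counts[s]; }
            }
            void recover(Host& h, int s) {
                if (h.isInfected(s)) { h.setInfected(s, false); --counts[s]; }
            }
            long infectedWith(int s) const { return counts[s]; }  // O(1) instead of a scan
        };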

  • Is it not possible to make a C++ application "Crash Proof"?

    - by Enno Shioji
    Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something with it. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon a memory access violation, due to invalid input passed by the user (like abnormally short junk data). I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that automatic bounds checks are performed, etc.). When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check every time. If so, why can't one do the same thing in C++ manually?

    UPDATE: By "crash proof" I don't mean that the application never terminates. I mean it should not abruptly terminate without information about what happened (I know it will dump core etc., but is it not possible to display a message like "Argument x was not valid" instead?)
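    A minimal illustration (mine, not from the question) of the bounds-checked style: std::vector::at() throws a catchable std::out_of_range instead of silently reading past the end. The caveat, and presumably the developer's point, is that this only guards indexed access; dereferencing a bad raw pointer is undefined behavior that no standard try/catch reliably intercepts.

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        // Toy parser: the input claims a payload length, then supplies the bytes.
        int parsePayload(const std::vector<unsigned char>& data) {
            int declaredLength = data.at(0);      // throws if data is empty
            int sum = 0;
            for (int i = 1; i <= declaredLength; ++i)
                sum += data.at(i);                // throws on abnormally short input
            return sum;
        }

        int main() {
            std::vector<unsigned char> junk = {200, 1, 2, 3};  // claims 200 bytes, has 3
            try {
                parsePayload(junk);
            } catch (const std::exception& e) {
                std::cerr << "Argument was not valid: " << e.what() << '\n';
            }
        }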

  • Passing C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++, and parsing the input text files is a bottleneck. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run time. The C++ code has some pre-defined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all.

    What I need is for both the algorithm and the function evaluation to remain speedy, so it is advantageous to keep them both as compiled code (and preferably the math functions as C++ function objects). However, I have thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib), and a GUI in Python. I know that most of the time exposing C++ classes can be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call

        f = WrappedCPPGaussianFunctionClass(sigma=0.5)
        WrappedCPPAlgorithm(f)

    in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar... (don't ask me about memory management in this case, though :S) The point is that no callback should be made to Python code in the algorithm. Later I would like to extend this example to also do some simple expression parsing on the Python side, such as sums or products of functions, and return some compound, parse-tree-like C++ object, but let's stay with the basics for now. Sorry for the long post, and thanks for any suggestions in advance.
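    This is roughly the shape Boost.Python handles. A hedged sketch, with the class, function, and module names invented for illustration: the Python-visible objects wrap real C++ instances, so when one is passed back into an exported C++ function, the library hands the algorithm the underlying C++ object with no Python callback in the loop.

        #include <boost/python.hpp>
        #include <cmath>

        class GaussianFunction {
            double sigma_;
        public:
            explicit GaussianFunction(double sigma) : sigma_(sigma) {}
            double operator()(double x) const {
                return std::exp(-x * x / (2.0 * sigma_ * sigma_));
            }
        };

        // The algorithm takes the function object directly; evaluation stays in C++.
        double run_algorithm(const GaussianFunction& f) {
            double acc = 0.0;
            for (int i = 0; i < 1000; ++i) acc += f(i * 0.01);
            return acc;
        }

        BOOST_PYTHON_MODULE(simcore) {
            using namespace boost::python;
            class_<GaussianFunction>("GaussianFunction", init<double>());
            def("run_algorithm", run_algorithm);
        }

    From Python that would look like f = simcore.GaussianFunction(0.5) followed by simcore.run_algorithm(f). Sums and products of functions would then be further exported classes, each holding two sub-function objects, built up on the Python side into the parse tree.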

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.

    The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount.

    From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I'm unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated - the equivalent of Python's multiprocessing would be just fine. So, something like:

        (dopar 3 [i (range 100)] ; repeated 100 times in 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap and could use that, but how do I restrict it to 3 at a time? It looks like it uses 32 at a time - no, the source says 2 + the number of processors - but either way, it seems like this is a basic primitive that should already exist somewhere.

    Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (each one uses a significant chunk of available memory).

    Update: I'm starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java - I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
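    Not Clojure, but as a language-neutral sketch of the pattern being asked for - a fixed pool of exactly three workers draining an index range - here is the shape in C++, with dopar an invented name (on the JVM the analogous ready-made pieces would be a fixed-size ExecutorService or a Semaphore guarding the body):

        #include <atomic>
        #include <iostream>
        #include <thread>
        #include <vector>

        // Run body(i) for every i in [0, n) on exactly `workers` threads.
        // Each thread repeatedly claims the next unclaimed index.
        template <typename Body>
        void dopar(int workers, int n, Body body) {
            std::atomic<int> next{0};
            std::vector<std::thread> pool;
            for (int w = 0; w < workers; ++w)
                pool.emplace_back([&] {
                    for (int i = next++; i < n; i = next++)
                        body(i);
                });
            for (auto& t : pool) t.join();  // block until the range is exhausted
        }

        int main() {
            dopar(3, 100, [](int i) {
                // long-running, memory-hungry work goes here; at most 3 run at once
                std::cout << i << ' ';  // output may interleave across threads
            });
        }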

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game, which consists of:

        List<Event> calendar;

        public class Event {
            /// <summary>
            /// When to process the event
            /// </summary>
            public Int64 when;

            /// <summary>
            /// Which object should process the event
            /// </summary>
            public GameObject who;

            /// <summary>
            /// Type of event
            /// </summary>
            public EventType what;

            public int posX;
            public int posY;
            public int EventID;
        }

        calendar.Add(new Event(...));

    The problem with this code is that even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually need to collect. Garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code. My considerations:

    - Change the Event class into a structure - but the structure is not entirely small, and it takes some time to copy it wherever I need it.
    - Reuse Event objects somehow (add a queue of used events, and when a new event is needed, take one from this queue).

    Does anybody have another idea how to solve the problem? Thanks for suggestions!
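    A sketch of the reuse idea (the second consideration above), written here in C++ with invented names; the same shape carries over to C# with a Queue<Event>. Recycled events mean steady-state play allocates nothing, so the GC (or allocator) has nothing to clean up.

        #include <cstdint>
        #include <queue>

        struct Event {
            std::int64_t when;   // when to process the event
            int what;            // type of event
            int posX, posY;
            int eventId;
        };

        // Hands out recycled Event objects instead of allocating fresh ones.
        class EventPool {
            std::queue<Event*> free_;
        public:
            Event* acquire() {
                if (free_.empty()) return new Event{};  // grow only when the pool runs dry
                Event* e = free_.front();
                free_.pop();
                return e;
            }
            void release(Event* e) { free_.push(e); }   // call after the event is processed
            ~EventPool() {
                while (!free_.empty()) { delete free_.front(); free_.pop(); }
            }
        };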

  • Does IE completely ignore cache control headers for AJAX requests?

    - by Joshua Hayworth
    Hello there, I've got what I would consider a simple test web site: a single page with a single button. (I have a copy of the source I'm working with if you would like to download it and play with it.) When that button is clicked, it creates a JavaScript timer that executes once a second. When the timer function runs, an AJAX call is made to retrieve a text value, and that text value is then placed into the DOM.

    What's my problem? IE caching. Crack open Task Manager and watch what happens to the iexplore.exe process (IE 8.0.7600.16385 for me) while the timer in that page is executing. See the memory and handle count getting larger? Why is that happening when, by all accounts, I have caching turned off? I've got the jQuery cache option set to false in $.ajaxSetup. I've got the Cache-Control header set to no-cache and no-store. The Expires header is set to DateTime.Now.AddDays(-1). The headers are set in both the page code-behind as well as the HTTP handler's response.

    Anybody got any ideas as to how I could prevent IE from caching the results of the AJAX call? I watched the iexplore.exe process in Process Monitor as well, and I believe the activity it shows is exactly what I'm attempting to prevent.

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test downloading this file, I get an out-of-memory exception. The only way I know to download the file is below. The reason I'm using the code below is that the URL is encoded, and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
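    The usual cure is to stream the file in fixed-size chunks instead of materializing it all at once. Sketched below in C++ for the pattern only; in this ASP.NET handler the equivalent would be reading from the FileStream into a small buffer and writing each chunk to context.Response.OutputStream, flushing periodically, so peak memory stays at the buffer size regardless of file size.

        #include <fstream>
        #include <iostream>

        // Copy a file of any size to an output stream through one small buffer.
        void streamFile(const char* path, std::ostream& out) {
            std::ifstream in(path, std::ios::binary);
            char buffer[64 * 1024];                    // 64 KB, reused for every chunk
            while (in) {
                in.read(buffer, sizeof(buffer));
                std::streamsize got = in.gcount();
                if (got > 0) out.write(buffer, got);   // send this chunk, reuse the buffer
            }
        }

        int main() {
            streamFile("huge-download.rar", std::cout); // hypothetical 323 MB file
        }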

  • Why do I get "request for member in something not a struct or union" from this code?

    - by pyroxene
    I'm trying to teach myself C by coding up a linked list. I'm new to pointers and memory management and I'm getting a bit confused. I have this code:

        /* Remove a node from the list and rejiggle the pointers */
        void rm_node(struct node **listP, int index)
        {
            struct node *prev;
            struct node *n = *listP;

            if (index == 0) {
                *listP = *listP->next;
                free(n);
                return;
            }

            for (index; index > 0; index--) {
                n = n->next;
                if (index == 2) {
                    prev = n;
                }
            }
            prev->next = n->next;
            free(n);
        }

    If I want to remove the first node, I still need some way of referring to the list, which is why the listP arg is a double pointer: it can point to the first element of the list and allow me to free the node that used to be the head. However, when I try to dereference listP to access the pointer to the next node, the compiler tells me:

        error: request for member 'next' in something not a structure or union

    What am I doing wrong here? I think I might be hopelessly mixed up..?
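    For the record, this looks like operator precedence: -> binds tighter than unary *, so *listP->next parses as *(listP->next), and listP (a struct node **) has no member next - hence "something not a structure or union". A parenthesized sketch of the head-removal case:

        #include <stdlib.h>

        struct node { struct node *next; };

        /* Corrected head removal: dereference listP first, then follow next. */
        void rm_head(struct node **listP) {
            struct node *n = *listP;
            *listP = (*listP)->next;   /* was: *listP->next, i.e. *(listP->next) */
            free(n);
        }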

  • What was your "aha moment" in understanding delegates?

    - by CM90
    Considering the use of delegates in C#, does anyone know if there is a performance advantage, or is it a convenience to the programmer? If we create an object that holds a method, it sounds as if that object would be a constant in memory to be called on, instead of loading the method every time it is called. For example, consider the following Unity3D-based code:

        public delegate H MixedTypeDelegate<G, H>(G g);

        public class MainParent : MonoBehaviour // Most Unity classes inherit from M.B.
        {
            public static Vector3 getPosition(GameObject g)
            {
                /* GameObject is a Unity class, and Vector3 is a struct from M.B.
                   The "position" component of a GameObject is a Vector3. This method
                   takes the GameObject I pass to it and returns its position. */
                return g.transform.position;
            }

            public static MixedTypeDelegate<GameObject, Vector3> PositionOf;

            void Awake( ) // Awake is the first method called in Unity, always.
            {
                PositionOf = new MixedTypeDelegate<GameObject, Vector3>(getPosition);
            }
        }

        public class GameScript : MainParent
        {
            GameObject g = new GameObject( );
            Vector3 whereAmI;

            void Update( )
            {
                // Now I can say:
                whereAmI = PositionOf(g);
                // Instead of:
                whereAmI = getPosition(g);
            }
        }

    But that seems like an extra step - unless there's that extra little thing that it helps with. I suppose the most succinct way to ask a second question would be: when you had your aha moment in understanding delegates, what was the context/scenario/source? Thank you!

  • Flash browser game - HTTP + PHP vs Socket + Something else

    - by Maurycy Zarzycki
    I am developing a non-real-time browser RPG game (think Kingdom of Loathing) which would be played from within a Flash app. At first I just wanted to handle communication with the server by simply using URLLoader to tell PHP what I am doing, and using $_SESSION to store data needed between requests. But I wonder if it wouldn't be better to base it on a socket connection to an app residing on the server, written in Java or Python. The problem is I have never written such an app, so I have no idea how much I'd have to shift my thinking from simply responding to requests (like PHP) to a continuously running application.

    I won't hide that I am also concerned about the memory and CPU usage of such a server app when, for example, there would be hundreds of users connected. I have tried to do some research, but thanks to my nil knowledge on the subject of sockets I haven't found anything helpful. So, considering the fact that I don't need real-time data exchange, would it be wise to develop the server-side part as a socket server rather than in plain ol' PHP?

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki]

    I am sorry for such an offensive question, but here are my arguments:

    - Most of the progress in "computing" has come from non-programming sources, i.e., people invented faster microprocessors, better routers, and novel memory devices. I don't think that, on average, people are writing more efficient programs than those written 10 years ago, and the newer, popular languages are in fact slower than C - though speed is one of the lesser criteria.
    - Most of the progress came from novel paradigms. The web, the Internet, cloud computing, and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. It did face scalability issues (same with Twitter), but I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability.
    - Even things like MapReduce, column-oriented databases, and probabilistic algorithms (e.g., Bloom filters) came from hardcore algorithms research rather than some programming convention.

    Thus my final point: why is programming skill so overstressed? To point to a recent example, supposedly only 10% of programmers can write code (binary search) without debugging. Isn't it a bit hypocritical, considering that real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

  • Transferring binary files between systems

    - by tim
    Hi guys, I'm trying to transfer my files between 2 UNIX clusters. The data is pure numeric (vectors of double) in binary form. Unfortunately one of the systems is an IBM ppc997 and the other is an AMD Opteron; it seems the format of binary numbers on these systems is different. I have tried 3 ways so far:

    1. Changed my files to ASCII format (i.e., saved a number on each line in a text file), sent them to the destination, and changed them back to binary on the target system (they both are UNIX, so no end-of-line character difference??!)
    2. Sent the pure binaries to the destination
    3. Used uuencode, sent them to the destination, and decoded them

    Unfortunately none of these methods works (my code on the destination system generates garbage, while it works on the first system; I'm 100% sure the code itself is portable). I don't know what else I can do. Do you have any idea? I'm not a professional, please don't use computer scientists' terminology! And: my code is in C, so by binary I mean a one-to-one mapping between memory and hard disk. Thanks.
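    The usual suspect here is byte order: the IBM machine is big-endian and the Opteron little-endian, so the 8 bytes of each double come out reversed even though both use IEEE 754. A sketch (mine, with an invented file name) of reading big-endian doubles on a little-endian machine by reversing each 8-byte group:

        #include <stdio.h>
        #include <string.h>

        /* Reverse the 8 bytes of one double written on a big-endian machine.
           Assumes both systems use IEEE 754 doubles, which ppc and x86-64 do. */
        double from_big_endian(const unsigned char bytes[8]) {
            unsigned char tmp[8];
            double d;
            int i;
            for (i = 0; i < 8; i++) tmp[i] = bytes[7 - i];
            memcpy(&d, tmp, sizeof d);  /* type-pun safely via memcpy */
            return d;
        }

        int main(void) {
            unsigned char raw[8];
            FILE *f = fopen("vectors.bin", "rb");  /* hypothetical input file */
            if (!f) return 1;
            while (fread(raw, 1, 8, f) == 8)
                printf("%f\n", from_big_endian(raw));
            fclose(f);
            return 0;
        }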

  • How to determine the length of a 2D unsigned short pointer array in C++

    - by tuman
    Hello, I am finding it difficult to determine the length of the columns in a 2D unsigned short pointer array. I have done the memory allocation correctly as far as I know, and I can print the values correctly. Please see the following code segment:

        int number_of_array_index_required_for_pointer_abc = 3;
        char A[3][16];
        strcpy(A[0], "Hello");
        strcpy(A[1], "World");
        strcpy(A[2], "Tumanicko");

        cout << number_of_array_index_required_for_pointer_abc * sizeof(unsigned short) << endl;

        unsigned short **pqr = (unsigned short **)malloc(number_of_array_index_required_for_pointer_abc * sizeof(unsigned short));

        for (int i = 0; i < number_of_array_index_required_for_pointer_abc; i++)
        {
            int ajira = strlen(A[i]) * sizeof(unsigned short);
            cout << i << " = " << ajira << endl;
            pqr[i] = (unsigned short *)malloc(ajira);
            cout << "alocated pqr[i]= " << sizeof pqr << endl;
            int j = 0;
            for (j = 0; j < strlen(A[i]); j++)
            {
                pqr[i][j] = (unsigned short)A[i][j];
            }
            pqr[i][j] = '\0';
        }

        for (int i = 0; i < number_of_array_index_required_for_pointer_abc; i++)
        {
            //ln = (sizeof pqr[i]) / (sizeof pqr[0]);
            //cout << "Size of pqr[" << i << "]= " << ln << endl;
            // I want to know the size of the columns, i.e. pqr[i]'s length, instead of finding '\0'
            for (int k = 0; (char)pqr[i][k] != '\0'; k++)
                cout << (char)pqr[i][k];
            cout << endl;
        }
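    For what it's worth, sizeof can never answer this: applied to pqr or pqr[i] it measures the pointer itself, because a raw pointer carries no record of how much was allocated. (Two other things to watch, offered as my reading of the code above: the outer malloc should use sizeof(unsigned short *), not sizeof(unsigned short), and each row needs room for the terminator, i.e. strlen(A[i]) + 1 elements.) The usual fix is to store the length beside the data, sketched here with invented names:

        #include <cstddef>
        #include <cstdlib>

        // A pointer does not remember its allocation size, so carry it along.
        struct UShortRow {
            unsigned short *data;
            std::size_t length;   // number of elements in data
        };

        UShortRow makeRow(std::size_t n) {
            UShortRow r;
            r.data = static_cast<unsigned short *>(std::malloc(n * sizeof(unsigned short)));
            r.length = n;
            return r;
        }

    In idiomatic C++, std::vector<std::vector<unsigned short>> does this bookkeeping for you: pqr[i].size() is exactly the column length being asked for.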

  • C++ ulong to class method pointer and back

    - by Simone Margaritelli
    Hi guys, I'm using a hash table (source code by Google Inc.) to store some method pointers, defined as:

        typedef Object *(Executor::*expression_delegate_t)( vframe_t *, Node * );

    where obviously "Executor" is the class. The function prototype to insert some value into the hash table is:

        hash_item_t *ht_insert( hash_table_t *ht, ulong key, ulong data );

    So basically I'm doing the insert by double-casting the method pointer:

        ht_insert( table, ASSIGN, reinterpret_cast<ulong>( (void *)&Executor::onAssign ) );

    where table is defined as a 'hash_table_t *' inside the declaration of the Executor class, ASSIGN is an unsigned long value, and 'onAssign' is the method I have to map. Now, Executor::onAssign is stored as an unsigned long value, its address in memory I think, and I need to cast the ulong back to a method pointer. But this code:

        hash_item_t* item = ht_find( table, ASSIGN );
        expression_delegate_t delegate = reinterpret_cast < expression_delegate_t > (item->data);

    gives me the following compilation error:

        src/executor.cpp:45: error: invalid cast from type 'ulong' to type 'Object* (Executor::*)(vframe_t*, Node*)'

    I'm using GCC v4.4.3 on an x86 GNU/Linux machine. Any hints?
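    One likely explanation, stated as a general fact about GCC rather than a diagnosis of this exact build: a pointer-to-member-function is not address-sized - under the Itanium ABI that GCC follows it is two words (a function address or vtable offset plus a this-adjustment) - so it cannot round-trip through a ulong, and the compiler refuses the cast. A sketch with stand-in types that sidesteps the problem by storing the member pointers themselves, keyed by the ulong:

        #include <map>

        // Stand-ins for the types in the question.
        class Object; class Node; struct vframe_t;

        class Executor {
        public:
            Object *onAssign(vframe_t *, Node *);
        };

        typedef Object *(Executor::*expression_delegate_t)(vframe_t *, Node *);

        // The map holds full-width member pointers, so nothing is truncated.
        std::map<unsigned long, expression_delegate_t> dispatch;

        void setup(unsigned long assignKey) {
            dispatch[assignKey] = &Executor::onAssign;
        }

        Object *run(Executor &ex, unsigned long key, vframe_t *f, Node *n) {
            return (ex.*dispatch[key])(f, n);  // invoke through the member pointer
        }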

  • refactoring my code. My headers (Header Guard Issues)

    - by numerical25
    I had a post similar to this a while ago, based on an error I was getting. I was able to fix it, but since then I've been having trouble because headers keep blocking other headers from using code. Honestly, these headers are confusing me, and if anyone has any resources that address these types of issues, that would be helpful.

    What I essentially want to do is have rModel.h included inside RenderEngine.h. Every time I add rModel.h to RenderEngine.h, rModel.h is no longer able to use RenderEngine.h (rModel.h has a #include of RenderEngine.h as well). So, in a nutshell, RenderEngine and rModel need to use each other's functionality. On top of all this, Main.cpp needs to use RenderEngine.

    stdafx.h:

        #include "targetver.h"

        #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers

        // Windows Header Files:
        #include <windows.h>

        // C RunTime Header Files
        #include <stdlib.h>
        #include <malloc.h>
        #include <memory.h>
        #include <tchar.h>

        #include "resource.h"

    main.cpp:

        #include "stdafx.h"
        #include "RenderEngine.h"
        #include "rModel.h"

        // Global Variables:
        RenderEngine go;
        rModel *g_pModel;

        ...code...

    rModel.h:

        #ifndef _MODEL_H
        #define _MODEL_H

        #include "stdafx.h"
        #include <vector>
        #include <string>
        #include "rTri.h"
        #include "RenderEngine.h"

        ...code...

    RenderEngine.h:

        #pragma once

        #include "stdafx.h"
        #include "d3d10.h"
        #include "d3dx10.h"
        #include "dinput.h"
        #include "rModel.h"

        ...code...
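    The standard way out of a mutual include like this is a forward declaration: each header declares the other class by name and includes the other's full header only in its .cpp file, which works as long as the header itself uses the other type only through pointers or references. A sketch, simplified from the headers above:

        // RenderEngine.h -- no #include "rModel.h" here
        #pragma once

        class rModel;  // forward declaration: enough for pointers and references

        class RenderEngine {
        public:
            void render(rModel *model);  // fine: rModel used only by pointer
        };

        // rModel.h -- no #include "RenderEngine.h" here
        #pragma once

        class RenderEngine;  // forward declaration the other way

        class rModel {
        public:
            void attach(RenderEngine *engine);
        };

        // rModel.cpp -- include the full headers only where members get called
        // #include "rModel.h"
        // #include "RenderEngine.h"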

  • Pointer and malloc issue

    - by Andy
    I am fairly new to C and am getting stuck with arrays and pointers when they refer to strings. I can ask for input of 2 numbers (ints) and then return the one I want (first number or second number) without any issues. But when I request names and try to return them, the program crashes after I enter the first name, and I'm not sure why. In theory I am looking to reserve memory for the first name and then expand it to include a second name. Can anyone explain why this breaks? Thanks!

        #include <stdio.h>
        #include <stdlib.h>

        void main ()
        {
            int NumItems = 0;

            NumItems += 1;
            char* NameList = malloc(sizeof(char[10])*NumItems);
            printf("Please enter name #1: \n");
            scanf("%9s", NameList[0]);
            fpurge(stdin);

            NumItems += 1;
            NameList = realloc(NameList,sizeof(char[10])*NumItems);
            printf("Please enter name #2: \n");
            scanf("%9s", NameList[1]);
            fpurge(stdin);

            printf("The first name is: %s",NameList[0]);
            printf("The second name is: %s",NameList[1]);

            return 0;
        }
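    The likely crash, offered as a reading of the code rather than a certainty: NameList is a single char *, so NameList[0] is one char, not a string - scanf then treats that character's value as an address and writes through it. With one flat block of 10-byte slots, name i lives at NameList + i * 10. A corrected sketch:

        #include <stdio.h>
        #include <stdlib.h>

        /* Each name occupies a 10-byte slot inside one flat buffer. */
        int main(void)
        {
            int numItems = 2;
            char *nameList = malloc(numItems * 10 * sizeof(char));
            if (nameList == NULL) return 1;

            for (int i = 0; i < numItems; i++) {
                printf("Please enter name #%d:\n", i + 1);
                scanf("%9s", nameList + i * 10);  /* pointer to slot i, not nameList[i] */
            }
            printf("The first name is: %s\n", nameList);
            printf("The second name is: %s\n", nameList + 10);
            free(nameList);
            return 0;
        }

    Declaring the buffer as char (*nameList)[10] = malloc(...) would let you write nameList[i] and have it mean a whole row.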

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LINQ to SQL to query a small, simple SQL Server CE database, and I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LINQ to SQL will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LINQ to SQL so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query it that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250 KB, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one would be the result set I'm looking for. I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM. This is why building a suffix trie with counting data is too space-inefficient to work for me.

    To quote Wikipedia's entry on suffix trees:

        storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures.

    And that was Wikipedia's comment on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g., less than an hour on a modern desktop machine)?

    (Some Wikipedia links, to avoid people posting them as the 'answer': Algorithms on strings and especially the Longest repeated substring problem ;-) )
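    One memory-frugal direction (my sketch, not from the post): the property "some substring of length L occurs twice" is monotone in L, so you can binary-search the answer's length and test each candidate length with a rolling hash, keeping a hash set instead of a suffix structure. Expected O(n log n) work; hash collisions are rare but a production version would verify matches before trusting them.

        #include <cstdint>
        #include <string>
        #include <unordered_set>

        // True iff some substring of length len occurs at least twice
        // (polynomial rolling hash; collision verification omitted for brevity).
        static bool hasRepeatOfLength(const std::string& s, std::size_t len) {
            if (len == 0) return true;
            if (len > s.size()) return false;
            const std::uint64_t base = 1315423911ULL;
            std::uint64_t hash = 0, power = 1;
            for (std::size_t i = 0; i < len; ++i) {
                hash = hash * base + (unsigned char)s[i];
                power *= base;                              // base^len, reduced mod 2^64
            }
            std::unordered_set<std::uint64_t> seen;
            seen.insert(hash);
            for (std::size_t i = len; i < s.size(); ++i) {
                hash = hash * base + (unsigned char)s[i]
                     - power * (unsigned char)s[i - len];   // slide the window one char
                if (!seen.insert(hash).second) return true; // this hash was seen before
            }
            return false;
        }

        // Binary search the largest length that still has a repeat.
        std::size_t longestRepeatLength(const std::string& s) {
            std::size_t lo = 0, hi = s.size();
            while (lo < hi) {
                std::size_t mid = lo + (hi - lo + 1) / 2;
                if (hasRepeatOfLength(s, mid)) lo = mid;
                else hi = mid - 1;
            }
            return lo;
        }

    The hash set is the memory hot spot (several words per position), so at hundreds of megabytes you would bucket positions by hash prefix and make several passes, or move to a disk-backed suffix array.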

  • C++: Vector of objects vs. vector of pointers to new objects?

    - by metamemetics
    Hello, I am seeking to improve my C++ skills by writing a sample software renderer. It takes objects consisting of points in 3D space, maps them to a 2D viewport, and draws circles of varying size for each point in view. Which is better:

        class World {
            vector<ObjectBaseClass> object_list;
        public:
            void generate() {
                object_list.clear();
                object_list.push_back(DerivedClass1());
                object_list.push_back(DerivedClass2());

    or...

        class World {
            vector<ObjectBaseClass*> object_list;
        public:
            void generate() {
                object_list.clear();
                object_list.push_back(new DerivedClass1());
                object_list.push_back(new DerivedClass2());

    Would using pointers in the second example to create new objects defeat the point of using vectors, because vectors automatically call the DerivedClass destructors in the first example but not in the second? Are pointers to new objects necessary when using vectors, given that vectors handle memory management themselves as long as you use their access methods? Now let's say I have another method in World:

        void drawfrom(Viewport& view) {
            for (unsigned int i = 0; i < object_list.size(); ++i) {
                object_list.at(i).draw(view);
            }
        }

    When called, this will run the draw method for every object in the world list. Let's say I want derived classes to be able to have their own versions of draw(). Would the list need to be of pointers then, in order to use the method selector (->)?
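    For reference, a sketch of the pointer variant done with owning smart pointers (names invented beyond those in the question). The key fact is that a vector<ObjectBaseClass> stores objects by value, so push_back(DerivedClass1()) slices the derived part off and virtual dispatch never happens; a vector of (smart) pointers keeps the derived objects intact and still cleans them up automatically:

        #include <iostream>
        #include <memory>
        #include <vector>

        struct Viewport {};  // stand-in for the real viewport type

        class ObjectBaseClass {
        public:
            virtual ~ObjectBaseClass() = default;  // virtual dtor: safe delete via base
            virtual void draw(Viewport&) const { std::cout << "base point\n"; }
        };

        class DerivedClass1 : public ObjectBaseClass {
        public:
            void draw(Viewport&) const override { std::cout << "circle, size 1\n"; }
        };

        class World {
            std::vector<std::unique_ptr<ObjectBaseClass>> object_list;
        public:
            void generate() {
                object_list.clear();  // unique_ptr destroys the old objects here
                object_list.push_back(std::make_unique<DerivedClass1>());
            }
            void drawfrom(Viewport& view) const {
                for (const auto& obj : object_list)
                    obj->draw(view);  // virtual dispatch through -> as expected
            }
        };

        int main() {
            World w;
            Viewport v;
            w.generate();
            w.drawfrom(v);  // prints "circle, size 1"
        }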

  • Session scoped bean as class attribute of Spring MVC Controller

    - by Sotirios Delimanolis
    I have a User class:

        @Component
        @Scope("session")
        public class User {
            private String username;
        }

    and a Controller class:

        @Controller
        public class UserManager {
            @Autowired
            private User user;

            @ModelAttribute("user")
            private User createUser() {
                return user;
            }

            @RequestMapping(value = "/user")
            public String getUser(HttpServletRequest request) {
                Random r = new Random();
                user.setUsername(new Double(r.nextDouble()).toString());
                request.getSession().invalidate();
                request.getSession(true);
                return "user";
            }
        }

    I invalidate the session so that the next time I go to /user, I get another user. I'm expecting a different user because of User's session scope, but I get the same one. I checked in debug mode, and it is the same object ID in memory. My bean is declared like so:

        <bean id="user" class="org.synchronica.domain.User">
            <aop:scoped-proxy/>
        </bean>

    I'm new to Spring, so I'm obviously doing something wrong. I want one instance of User for each session. How?

  • C# cross thread dialogue co-operation

    - by John Attridge
    I am looking at a primarily single-threaded Windows Forms application in .NET 3.0. Recently my boss had a progress dialog added on a separate thread so the user would see some activity when the main thread goes away to do some heavy-duty work and locks up the GUI.

    The above works fine unless the user switches applications or minimizes, as the progress form sits topmost and will not disappear with the main application. This is not so bad if there are lots of little operations, because the event structure of the main form catches up with its events when it gets time, so the minimized and active flags can be checked and the dialog thread can hide or show itself accordingly. But if a long-running SQL operation kicks off, then no events fire.

    I have tried intercepting the WndProc messages, but these also appear queued while a long-running SQL operation is executing. I have also tried picking up the processes, finding the current app, and checking various window states (IsIconic and the like) from inside the progress thread, but until the SQL operation finishes, none of these get updated. Removing the topmost flag causes the dialog to disappear when another app activates, but if the main app is then brought back, it does not appear again.

    So I need a way to find out if the other thread is minimized or no longer active that does not involve querying the actual thread, as that locks until the SQL operation finishes. Now, I know this is not the best way to write this; it would be better to have all the heavy processing on separate threads, leaving the GUI free. But as this is a huge, ancient legacy app, the time to re-write it in that fashion will not be provided, so I have to work with what I have. Any help is appreciated.

  • APC decreasing php performance??? (php 5.3, apache 2.2, windows vista 64bit)

    - by M.M.
    Hi, I have Apache 2.2.15 (VC9) and PHP 5.3.2 (VC9, thread-safe) running as an Apache module on a Vista 64-bit machine, all running fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no DB connection involved. The average (median) Apache response time is about 0.15 seconds.

    After I installed APC (3.1.4-dev, VC9, thread-safe) with standard settings, the response time suddenly rose to 1.3 seconds (!), which is unacceptable... All APC settings looked good throughout (via the apc.php script: enough shm memory, cache never full, fragmentation 0%). The only thing that made a difference was disabling the stat lookup (apc.stat = 0); then the response time dropped to 0.09 seconds, which is finally better than without APC.

    IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't APC still be far more performant than running without the extension at all? Put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/direction in advance. Cheers, m.
