Search Results

Search found 8414 results on 337 pages for 'critical section'.

Page 249/337 | < Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?
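
    To quantify the random-read cost in isolation, a micro-benchmark along these lines (a sketch; the database path and key format are invented placeholders) can be pointed first at a local copy and then at the NFS-mounted copy of the same file; comparing the two runs separates GDBM's own cost from the network filesystem's:

        #include <gdbm.h>
        #include <chrono>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main()
        {
            // Placeholder path: point this at the local copy, then the NFS-mounted copy.
            GDBM_FILE db = gdbm_open((char*)"/mnt/data/app.gdbm", 0, GDBM_READER, 0, 0);
            if (!db) { std::fprintf(stderr, "open failed\n"); return 1; }

            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < 10000; ++i) {
                char buf[32];
                std::snprintf(buf, sizeof buf, "key-%d", (i * 7919) % 10000); // scattered keys
                datum key = { buf, (int)std::strlen(buf) };
                datum val = gdbm_fetch(db, key);  // one random read; dptr is malloc'd
                std::free(val.dptr);              // free(NULL) is safe for missing keys
            }
            std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
            std::printf("10000 fetches in %.2f s\n", dt.count());
            gdbm_close(db);
            return 0;
        }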

    Read the article

  • JavaScript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey when offline and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems to be an obvious choice (we understand that Adobe Flash has offline storage, but we are not sure that is the best way). I am aware of the Dojo Offline JavaScript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is at 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost a year) because they think that HTML5 is the way to go. Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, AND one which will (in the future) switch the underlying engine to HTML5 if the browser supports HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline -- we'd prefer something that will be maintained for some time. Which are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.

    Read the article

  • Why have or haven't you moved to ASP.NET MVC yet?

    - by Jason
    I find myself on the edge of trying out ASP.NET MVC, but there is still "something" holding me back. Are you still waiting to try it, and if so, why? If you finally decided to use it, what helped you get over your hesitation? I'm not worried about it from a technical point of view; I know the pros and cons of web forms vs MVC. My concerns are more on the practical side:

    - Will Microsoft continue to support ASP.NET MVC if they don't reach some critical threshold of developers/customers using it?
    - Are customers willing to try ASP.NET MVC? Have you had to convince a customer to use it? How did that go?
    - Are there major sites using ASP.NET MVC (besides SO)? Could you provide links if you have them?
    - Did you try ASP.NET MVC and find yourself regretting it? If so, what do you regret?
    - If you have any other concerns preventing you from using ASP.NET MVC, what are they? If you had concerns but felt they were addressed and now use it, could you list them as well?

    Thanks

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have programmed embedded software (using C, of course) and now I'm considering ways to improve the running time of the system. The most important single module in my system is one very large nested-for-loop module. That module consists of two nested for loops that loop at most 122,500 times. That's not very much yet, but the problem is that inside that nested for loop I have a call to a function that is in another source file. That specific function consists mostly of two other nested for loops, which always loop 22,500 times. So now I have to make the function call 122,500 times. I have made the function that is to be called a lot lighter and shorter (yet it still works as it should), and now I have started to wonder: would it be faster to remove that function call and write the processing directly inside those first two for loops? The processor in the system is an ARM7TDMI and its frequency is 55 MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable; however, the faster it can process its duties the better. Also, would it be faster to use while loops instead of for loops? Any piece of advice about how to improve the running time is appreciated. -zaplec
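
    One cheap experiment before hand-inlining everything: if the callee can live in a header visible to the hot translation unit, marking it static inline lets the compiler remove the call overhead (branch-and-link, argument setup, stack frame) while keeping the source factored. A sketch with invented names; whether the ARM compiler actually inlines it should be confirmed in the map file or disassembly:

        /* helper.h -- hypothetical helper, included by the hot translation unit */
        static inline int process_block(const int *block)
        {
            int acc = 0;
            /* stands in for the real 150 x 150 = 22,500-iteration inner work */
            for (int i = 0; i < 150; ++i)
                for (int j = 0; j < 150; ++j)
                    acc += block[i * 150 + j];
            return acc;
        }

    Switching for to while, by contrast, normally changes nothing: both compile down to the same compare-and-branch pattern.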

    Read the article

  • Is anyone familiar with SDPT.clsSDPT?

    - by David Stratton
    Normally I wouldn't ask this kind of question here, but I'm desperate at this point. I'm attempting to support a classic ASP app written by a predecessor who is no longer available. Keeping it short: several applications use a DLL to perform encryption of sensitive data. This DLL is named SDPT.dll, and the line of code used to create an object is:

        set objSDPT = server.CreateObject("SDPT.clsSDPT")

    At this point, I am getting errors in a critical app on one of my servers, and I've actually hit a dead end. The error is a standard "Server.CreateObject Failed" message, which I know how to troubleshoot in most cases. However, in this case, all of my usual troubleshooting steps, plus several hours of Google searches, are coming up with nothing that works. At this point, I'm not so much looking for help in troubleshooting the issue as I am in finding any sort of reference on this third-party component. Even finding that is proving to be difficult, so I'm resorting to asking any of the seasoned developers who hang out here if they are familiar with this product, who it was developed by, and whether any documentation on it exists anywhere.

    Read the article

  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update), and I am trying to simulate one via tr1::shared_ptr. Here is the code; as I'm really a newbie in concurrent programming, would some experts help me review it? The basic idea is: a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say it's G2), and only one writer is allowed to enter the critical section. After the updating is done, the writer calls update() to do a swap and make m_data_ptr point to the G2 data. The ongoing readers and the writer now hold the shared_ptr of G1, and either a reader or the writer will eventually deallocate the G1 data. Any new readers would get the pointer to G2, and a new writer would get a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist.

        template <typename T>
        class rcu_protected {
        public:
            typedef T type;
            typedef std::tr1::shared_ptr<type> rcu_pointer;

            rcu_protected() : m_is_writing(0), m_is_swapping(0), m_data_ptr(new type()) {}

            rcu_pointer get_reading_copy()
            {
                spin_until_eq(m_is_swapping, 0);
                return m_data_ptr;
            }

            rcu_pointer get_updating_copy()
            {
                spin_until_eq(m_is_swapping, 0);
                while (!CAS(m_is_writing, 0, 1))
                { /* sleep for back-off when exceeding maximum retry times */ }
                rcu_pointer new_data_ptr(new type(*m_data_ptr));
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a read barrier so that the loading of
                // new_data_ptr is not re-ordered before its construction
                _ReadBarrier();
                return new_data_ptr;
            }

            void update(rcu_pointer new_data_ptr)
            {
                while (!CAS(m_is_swapping, 0, 1)) {}
                m_data_ptr.swap(new_data_ptr);
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a write barrier so that the assignments to
                // m_is_writing/m_is_swapping are not re-ordered before the swap
                _WriteBarrier();
                m_is_writing = 0;
                m_is_swapping = 0;
            }

        private:
            volatile long m_is_writing;
            volatile long m_is_swapping;
            rcu_pointer m_data_ptr;
        };
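
    For context, here is how the class above would be used; Config and its field are hypothetical placeholders:

        struct Config { int timeout_ms; };

        rcu_protected<Config> g_config;

        void reader()
        {
            // Snapshot the current generation; the shared_ptr keeps that
            // generation alive even if a writer publishes a newer one meanwhile.
            rcu_protected<Config>::rcu_pointer snap = g_config.get_reading_copy();
            int t = snap->timeout_ms;  // read freely, no further locking
            (void)t;
        }

        void writer()
        {
            // Copy the current generation, mutate the copy, then publish it.
            rcu_protected<Config>::rcu_pointer copy = g_config.get_updating_copy();
            copy->timeout_ms = 500;
            g_config.update(copy);
        }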

    Read the article

  • What is happening in Crockford's object creation technique?

    - by Chris Noe
    There are only 3 lines of code, and yet I'm having trouble fully grasping this:

        Object.create = function (o) {
            function F() {}
            F.prototype = o;
            return new F();
        };

        newObject = Object.create(oldObject);

    (from Prototypal Inheritance)

    1) Object.create() starts out by creating an empty function called F. I'm thinking that a function is a kind of object. Where is this F object being stored? Globally, I guess.

    2) Next our oldObject, passed in as o, becomes the prototype of function F. Function (i.e., object) F now "inherits" from our oldObject, in the sense that name resolution will route through it. Good, but I'm curious what the default prototype is for an object -- Object? Is that also true for a function-object?

    3) Finally, F is instantiated and returned, becoming our newObject. Is the "new" operation strictly necessary here? Doesn't F already provide what we need, or is there a critical difference between function-objects and non-function-objects? Clearly it won't be possible to have a constructor function using this technique.

    What happens the next time Object.create() is called? Is the global function F overwritten? Surely it is not reused, because that would alter previously configured objects. And what happens if multiple threads call Object.create()? Is there any sort of synchronization to prevent race conditions on F?

    Read the article

  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all, since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project involving implementing a few RESTful web services which consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC templates for the data persistence.

    As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotations way of doing things quite a paradigm shift, and I have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications -- as such, I am often having to dig for information in forums and blogs that is not apparent from going through the reference guide or from the various Spring 3 REST examples on the web.

    For deadline-driven, production-quality and mission-critical applications implementing a RESTful architecture, should I be holding off from Spring 3 and rather be using mature JSR 311 (JAX-RS) compliant frameworks like Restlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using Restlet 1.x in a previous project and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST stuff in Spring 3's MVC framework would make life easier.

    Do any of you out there have any advice to give on this? Does anyone know of any commercial / production-quality projects using, or having successfully delivered with, the new REST stuff in Spring 3's MVC framework?

    Many thanks
    Glen

    Read the article

  • Is learning C++ a good idea?

    - by chang
    The more I hear and read about C++ (e.g. this: http://lwn.net/Articles/249460/), the more I get the impression that I'd be wasting my time learning C++. I once wrote a network routing algorithm in C++ for a simulator, and it was a pain (as expected, especially coming from a Perl/Python/Java background ...). I'm never happy about giving up on some technology, but I would be happy if I could limit my knowledge of C-family languages to just C, C# and Objective-C (even OS X's Cocoa, which is huge and takes a lot of time to learn, looks like a joy compared to C++ ...). Do I need to consider myself dumb or unwilling just because I'm not partial to the pain involved in learning this stuff? Technologies advance, and there will be options other than C++ when deciding on implementation languages -- or not? And for speed: if speed were that critical, I'd go for a plain C implementation instead, or write C extensions for much more productive languages like Ruby or Python ... The one-line version of the above: will C++ stay such a relevant language that every committed programmer should be familiar with it?

    [ edit / thank you very much for your interesting and useful answers so far .. ]
    [ edit / .. I am accepting the top-rated answer; thanks again for all answers! ]

    Read the article

  • PHP Exceptions in Classes

    - by mike condiff
    I'm writing a web application (PHP) for my friend and have decided to use my limited OOP training from Java. My question is: what is the best way to note in my class/application that specific critical things failed, without actually breaking my page?

    Currently my problem is that I have an object "SummerCamper" which takes a camper_id as its argument to load all of the necessary data into the object from the database. Say someone specifies a camper_id in the query string that does not exist: I pass it to my object's constructor and the load fails. Currently I don't see a way for me to just return false from the constructor. I have read that I could possibly do this with exceptions, throwing an exception if no records are found in the database or if some sort of validation fails on input of the camper_id from the application, etc. However, I have not really found a great way to alert my program that the object's load has failed. I tried returning false from within the catch block, but the object still persists in my PHP page. I do understand I could set a variable $is_valid = false if the load fails and then check the object using a get method, but I think there may be better ways.

    What is the best way of achieving the essential termination of an object if a load fails? Should I load data into the object from outside the constructor? Is there some sort of design pattern that I should look into? Any help would be appreciated.

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++, where performance is critical. I need to use a lot of locking while copying small structures between threads; for this I have chosen to use spinlocks. I have done some research and speed testing on this, and I found that most implementations are roughly equally fast:

    - Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units.
    - Implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units.
    - I've also tried some inline assembly with __asm {} using something like this code, and it scores about 70 time units, but I am not sure that a proper memory barrier has been created.

    Edit: The times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times.

    I know this isn't a lot of difference, but as a spinlock is a heavily used object, one would think that programmers would have agreed on the fastest possible way to make one. Googling it leads to many different approaches, however. I would think the aforementioned method would be the fastest if implemented using inline assembly and the instruction CMPXCHG8B instead of comparing 32-bit registers. Furthermore, memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. Lastly, some suggest that busy waits should be accompanied by REP NOP (the PAUSE instruction), which would let Hyper-Threading processors switch to another thread, but I am not sure whether this is true or not.

    From my performance test of different spinlocks, it is seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However, as I have extremely limited experience with assembly language and memory barriers, I would be happy if someone could write the assembly code for the last example I provided, with LOCK CMPXCHG8B and proper memory barriers, in the following template:

        __asm
        {
        spin_lock:
            ; locking code
        spin_unlock:
            ; unlocking code
        }
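
    A minimal sketch, not a vetted implementation: a 32-bit MSVC __asm spinlock built on LOCK CMPXCHG. Since the lock word here is only 32 bits wide, plain CMPXCHG suffices (CMPXCHG8B is for 64-bit operands); on x86 the LOCK prefix already acts as a full memory barrier, and an aligned 32-bit store has release semantics, so the unlock needs no extra fencing. The variable name is invented, and PAUSE may need to be spelled REP NOP on older assemblers:

        volatile long g_lock = 0;   // 0 = free, 1 = taken

        void spin_lock()
        {
            __asm
            {
            spin:
                mov  ecx, 1
                xor  eax, eax                        ; EAX = 0, the "free" value we expect
                lock cmpxchg dword ptr [g_lock], ecx ; if g_lock == EAX: g_lock = ECX, ZF = 1
                jz   acquired
                pause                                ; REP NOP: yield to the sibling hyper-thread
                jmp  spin
            acquired:
            }
        }

        void spin_unlock()
        {
            __asm
            {
                mov dword ptr [g_lock], 0            ; aligned store = release on x86
            }
        }

    A production version would normally spin on a plain read of the lock word and only retry the locked CMPXCHG once it looks free, to reduce bus traffic between cores.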

    Read the article

  • Is there a way to increase the efficiency of shared_ptr by storing the reference count inside the controlled object?

    - by BillyONeal
    Hello everyone :) This is becoming a common pattern in my code for when I need to manage an object that must be noncopyable, because either (a) it is "heavy" or (b) it is an operating-system resource, such as a critical section:

        class Resource;

        class Implementation : public boost::noncopyable
        {
            friend class Resource;
            HANDLE someData;
            Implementation(HANDLE input) : someData(input) {}
            void SomeMethodThatActsOnTheHandle()
            {
                // Do stuff
            }
        public:
            ~Implementation() { FreeHandle(someData); }
        };

        class Resource
        {
            boost::shared_ptr<Implementation> impl;
        public:
            explicit Resource(int argA)
            {
                HANDLE handle = SomeLegacyCApiThatMakesSomething(argA);
                if (handle == INVALID_HANDLE_VALUE)
                    throw SomeTypeOfException();
                impl.reset(new Implementation(handle));
            }
            void SomeMethodThatActsOnTheResource()
            {
                impl->SomeMethodThatActsOnTheHandle();
            }
        };

    This way, shared_ptr takes care of the reference-counting headaches, allowing Resource to be copyable, even though the underlying handle should only be closed once all references to it are destroyed. However, it seems like we could save the overhead of allocating shared_ptr's reference counts and such separately if we could move that data inside Implementation somehow, like boost's intrusive containers do. If this is raising premature-optimization hackles for some people, I actually agree that I don't need this for my current project. But I'm curious whether it is possible.
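
    If the count really should live inside the object, boost::intrusive_ptr is the usual route; a minimal sketch (the count here is a plain long for brevity, so it would need InterlockedIncrement/Decrement or an atomic for cross-thread use):

        #include <boost/intrusive_ptr.hpp>

        class Implementation
        {
            long refs_;  // the reference count lives inside the managed object
            friend void intrusive_ptr_add_ref(Implementation* p) { ++p->refs_; }
            friend void intrusive_ptr_release(Implementation* p)
            {
                if (--p->refs_ == 0) delete p;
            }
        public:
            Implementation() : refs_(0) {}
        };

        // Usage: one allocation, no separate control block:
        // boost::intrusive_ptr<Implementation> impl(new Implementation);

    Where it is available, boost::make_shared achieves much the same saving non-intrusively, by folding the control block and the object into a single allocation.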

    Read the article

  • How to get the most out of a 3 month intern?

    - by firoso
    We've got a software engineering intern coming in who's fairly competent and shows promise. There's one catch: we have him for 3 months full time and can't count on anything past that. He still has a year of school left, which is why we can't say for sure that we have him past 3 months. We have a specific project we're putting him on. How can we maximize his productivity while still giving him a positive learning experience? He wants to learn about development cycles and real-world software engineering. Anything that you think would be critical that you wish you had learned earlier?

    Nearly six months later: He's performed admirably, and even I have learned a lot from him. Thank you all for the input. Now I want to provide feedback to YOU! He has benefited most from sitting down and writing code. However, he has had a nasty history of bad software engineering practices, which I'm trying to replace with good habits (properly finishing a method before moving on, not hacking code together, proper error channeling, etc.). He has also really gained a lot by feeling involved in design decisions, even if most of the time they're related to my own design plans.

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases, that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2 or Oracle 11g, or a document-based solution such as RavenDB, or simply go with a spatial database (GIS)... Any thoughts? Regards

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. There are almost no updates, except in case of mistakes in input. Inserts happen overnight and are not that frequent. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It uses the Scintilla editing component, which is generally good with Unicode but which does not allow non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is at times less important than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.) I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds, so I want to mention that speed and the ability to handle large (up to 150 MB) files are also concerns. Is there a good, fast, free or not-too-expensive text editor, with versions on Windows and Linux (and maybe Mac too), Unicode-aware and capable of displaying invisibles like ZWSP? One that has syntax highlighting, can handle large files, and is customizable enough that I won't tear my hair out in frustration? Thanks, Roger_S

    Read the article

  • Five unique, random numbers from a subset

    - by tau
    I know similar questions come up a lot, and there's probably no definitive answer, but I want to generate five unique random numbers from a subset of numbers that is potentially infinite (maybe 0-20, or 0-1,000,000). The only catch is that I don't want to have to run while loops or fill an array. My current method is to simply generate five random numbers from the subset minus the last five numbers. If any of the numbers match each other, then they go to their respective places at the end of the subset. So if the fourth number matches any other number, it will be set to the 4th-from-last number. Does anyone have a method that is "random enough" and doesn't involve costly loops or arrays? Please keep in mind this is a curiosity, not some mission-critical problem. I would appreciate it if everyone didn't post "why are you having this problem?" answers. I am just looking for ideas. Thanks a lot!
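
    A sketch of the remapping scheme described above (names invented; rand() stands in for whatever generator is at hand). Each draw avoids the last five slots, and a collision with an earlier pick is remapped to the reserved slot for that draw, so the five results are always distinct; the distribution is only approximately uniform, which matches the "random enough" requirement:

        #include <cstdlib>

        // out receives 5 distinct values in [0, n); assumes n >= 10.
        void pick_five(int n, int out[5])
        {
            for (int i = 0; i < 5; ++i) {
                int r = std::rand() % (n - 5);  // never lands in the last 5 slots
                for (int j = 0; j < i; ++j)
                    if (out[j] == r) { r = n - 5 + i; break; }  // collision -> reserved slot
                out[i] = r;
            }
        }

    The reserved slots n-5 .. n-1 can never be drawn directly and each belongs to exactly one of the five picks, which is why no second collision check is needed after a remap.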

    Read the article

  • Blocking error in celery

    - by dmitry
    I have no idea what this is. Python 2.7 + django-1.5.1 + httpd + rabbitmq + django-celery==3.0.17. Tasks are not executed because of some error. Below is Celery's log. Maybe someone has faced it before.

        [2013-06-24 17:10:03,792: CRITICAL/MainProcess] Can't decode message body: AttributeError("'JoinInfo' object has no attribute '__dict__'",)
        (type:u'application/x-python-serialize' encoding:u'binary' raw:'\'\\x80\\x02}q\\x01(U\\x07expiresq\\x02NU\\x03utcq\\x03\\x88U\\x04argsq\\x04cdjango.contrib.auth.models\\nUser\\nq\\x05)\\x81q\\x06}q\\x07(U\\x08usernameq\\x08X\\x19\\x00\\x00\\[email protected]\\nfirst_nameq\\tX\\x05\\x00\\x00\\x00BibbyU\\tlast_nameq\\nX\\x08\\x00\\x00\\x00OffshoreU\\r_client_cacheq\\x0bccopy_reg\\n_reconstructor\\nq\\x0ccbongoregistration.models\\nClient\\nq\\rc__builtin__\\nobject\\nq\\x0eN\\x87Rq\\x0f}q\\x10(h\\nX\\x08\\x00\\x00\\x00OffshoreU\\x1bpurchase_confirmation_emailq\\x11X\\x1f\\x00\\x00\\[email protected]\\x1dpurchase_confirmation_email_1q\\x12X!\\x00\\x00\\[email protected]\\x06_stateq\\x13cdjango.db.models.base\\nModelState\\nq\\x14)\\x81q\\x15}q\\x16(U\\x06addingq\\x17\\x89U\\x02dbq\\x18U\\x07defaultq\\x19ubU\\x0buser_ptr_idq\\x1aJ\\xb4\\xa2\\x03\\x00U\\x08is_staffq\\x1b\\x89U\\x08postcodeq\\x1cX\\x08\\x00\\x00\\x00AB11 5BSU\\x0cdegree_limitq\\x1dK\\x06U\\x07messageq\\x1eX\\xd1E\\x00\\x00<table id="container" style="margin: 0px; padding: 0px; width: 100%; background-color: #ffffff;" cellspacing="0" cellpadding="0"... (22911b)'')

        Traceback (most recent call last):
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/messaging.py", line 556, in _receive_callback
            decoded = None if on_m else message.decode()
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/transport/base.py", line 147, in decode
            self.content_encoding, accept=self.accept)
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 187, in decode
            return decode(data)
          File "/opt/www/MyProject-main/eggs/kombu-2.5.10-py2.7.egg/kombu/serialization.py", line 74, in pickle_loads
            return load(BytesIO(s))
        AttributeError: 'JoinInfo' object has no attribute '__dict__'

    Read the article

  • What should I do for accommodating large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34 "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations in the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The Python snippet for the table definition is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               (","+newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web-based, so time is a critical factor. I have also thought of using Cassandra, but I have little knowledge of it. Please suggest a better way to tackle this problem.

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax, 1000h
        jne  0064F5EC
        or   edx, 1000000h
        ... (repeated 13 times with minor differences in constants)

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible.

    - Can I use some MMX/SSE instructions to process all bits at once?
    - Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.)
    - Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me?
    - Any other ideas on how to make it faster?
    - Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
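
    For reference, a branch-free "binary magic numbers" spread, in the spirit of the interleaving tricks in Sean Anderson's Bit Twiddling Hacks: each step halves the group size and doubles the gap, so the 13 input bits land on even positions with no conditional jumps at all:

        #include <stdint.h>

        /* Spread the low 16 bits of x so that bit i moves to bit 2*i. */
        uint32_t spread_even(uint32_t x)
        {
            x &= 0x0000FFFF;
            x = (x | (x << 8)) & 0x00FF00FF;  /* 8-bit groups, 8 bits apart */
            x = (x | (x << 4)) & 0x0F0F0F0F;  /* 4-bit groups, 4 bits apart */
            x = (x | (x << 2)) & 0x33333333;  /* 2-bit pairs,  2 bits apart */
            x = (x | (x << 1)) & 0x55555555;  /* single bits on even places */
            return x;
        }

    The inverse transformation is the same ladder run backwards with right shifts: mask with 0x55555555 first, then fold with >> 1, >> 2, >> 4, >> 8, masking after each step.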

    Read the article

  • MFC/CCriticalSection: Simple lock situation hangs

    - by raph.amiard
    I have to write a simple threaded program with MFC/C++ for a uni assignment. I have a simple scenario in which I have a worker thread that executes a function along the lines of:

        UINT createSchedules(LPVOID param)
        {
            genProgThreadVal* v = (genProgThreadVal*) param;
            // v->searcherLock is of type CCriticalSection*
            while (1)
            {
                if (v->searcherLock->Lock())
                {
                    // do the stuff, access the shared object, exit clause, etc.
                    v->searcherLock->Unlock();
                }
            }
            PostMessage(v->hwnd, WM_USER_THREAD_FINISHED, 0, 0);
            delete v;
            return 0;
        }

    In my main UI class, I have a CListCtrl that I want to be able to access the shared object (of type std::list). Hence the locking stuff. So this list control has a handler function looking like this:

        void Ccreationprogramme::OnLvnItemchangedList5(NMHDR *pNMHDR, LRESULT *pResult)
        {
            LPNMLISTVIEW pNMLV = reinterpret_cast<LPNMLISTVIEW>(pNMHDR);
            if ((pNMLV->uChanged & LVIF_STATE) && (pNMLV->uNewState & LVNI_SELECTED))
            {
                searcherLock.Lock();
                // do the stuff on the shared object
                searcherLock.Unlock();
                // do some more stuff
            }
            *pResult = 0;
        }

    The searcherLock in both functions is the same object. The worker thread function is passed a pointer to the CCriticalSection object, which is a member of my dialog class. Everything works, but as soon as I click on my list, which triggers the handler function, the whole program hangs indefinitely. I tried using a CMutex. I tried using a CSingleLock wrapped over the critical section object, and none of this has worked. What am I missing?
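
    As an aside on the locking pattern itself: the idiomatic MFC way is to scope the lock with CSingleLock, so that every acquisition is paired with an automatic release. A minimal sketch, assuming the same CCriticalSection member shared by both threads (the function name is invented):

        void doGuardedWork(CCriticalSection* searcherLock)
        {
            CSingleLock guard(searcherLock, TRUE);  // TRUE = acquire immediately
            // ... touch the shared std::list here ...
        }   // guard's destructor releases the critical section, even on early return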

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Or so I thought, but my team member wrote something I find very hard to get into. His pseudo code:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList array  // CPU O(n)
        for (j = 1 to n)                              // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His ideas:

    - transformations of lists to tables
    - a worst-case time complexity of O(n^3), where n is the number of elements in his so-called table
    - an exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbour vertices

    My questions about his ideas:

    - Why waste resources transforming constantly modified lists into a table?
    - Fast? This is the only point where I agree to some extent, but I cannot understand the same upper limit n on each of the for loops -- perhaps he supposed it to be circular.
    - Why does the code take O(m log(n)) time with a finite structure? The term "finite" may be wrong -- "explicit"?
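
    For reference, the inner recurrence is the Floyd-Warshall all-pairs-shortest-path update. In the textbook form (a sketch over a plain distance matrix) the intermediate-vertex index k must be the OUTERMOST loop, otherwise the recurrence reads stale values:

        #include <algorithm>
        #include <vector>

        // adj[i][j] holds the current best known distance from i to j;
        // unreachable pairs should hold a large sentinel that cannot overflow the sum.
        void floyd_warshall(std::vector<std::vector<int> >& adj)
        {
            const int n = static_cast<int>(adj.size());
            for (int k = 0; k < n; ++k)          // intermediate vertex: outermost
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j)
                        adj[i][j] = std::min(adj[i][j], adj[i][k] + adj[k][j]);
        }

    Its O(n^3) bound matches the worst case quoted above; the O(m log(n)) figure, by contrast, resembles single-source Dijkstra with a heap over an adjacency list, which is usually the more natural fit for sparse, frequently modified route data.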

    Read the article

  • To OpenID or not to OpenID? Is it worth it?

    - by Eloff
    Does OpenID improve the user experience?

    Edit: Not to detract from the other comments, but I got one really good reply below that outlined 3 advantages of OpenID in a rational, bottom-line kind of way. I've also heard some whisperings in other comments that you can get access to some details on the user through OpenID (name? email? what?) and that using that it might even be possible to simplify the registration process by not needing to gather as much information. Things that definitely need to be gathered in a checkout process:

    - Full name
    - Email (I'm pretty sure I'll have to ask for these myself)
    - Billing address
    - Shipping address
    - Credit card info

    There may be a few other things that are interesting from a marketing point of view, but I wouldn't ask the user to manually enter anything not absolutely required during the checkout process. So what's possible in this regard? /Edit

    (You may have noticed stackoverflow uses OpenID.) It seems to me it is easier and faster for the user to simply enter a username and password in a signup form they have to go through anyway. I mean, you don't avoid entering a username and password with OpenID either. But you do avoid the confusion of choosing an OpenID provider, and the trip out to and back from an external site.

    With Microsoft making Live ID an OpenID provider (More Info), bringing several hundred million additional accounts to those provided by Google, Yahoo, and others, this question is more important than ever. I have to require new customers to sign up during the checkout process, and it is absolutely critical that the experience be as easy and smooth as possible; every little bit harder it becomes translates into lost sales. No geek factor outweighs cold hard cash at the end of the day :) OpenID seems like a nice idea, but the implementation is of questionable value. What are the advantages of OpenID, and is it really worth it in my scenario described above?

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB, and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social-networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure-RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.

    Read the article

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation; in fact, it was forbidden. I've also read that old-fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space-hardened boards offer some sanity in such a harsh environment; however, I'm wondering (as a C programmer) how I would need to adjust my thinking and code if I were writing something that would run in space. I think the next few years might show more growth in private space companies, and I'd really like to be at least somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic, or (gasp) even a simulator that shows you what happens to a program if radiation, cold or heat bombards a board that sustained damage to its insulation? I think the goal is keeping humans inside of a spacecraft (as far as fixing or swapping stuff goes) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.

    Read the article

  • friend declaration in C++

    - by Happy Mittal
    In Thinking in C++ by Bruce Eckel, there is an example given regarding friend functions:

        // Declaration (incomplete type specification):
        struct X;

        struct Y {
          void f(X*);
        };

        struct X {  // Definition
        private:
          int i;
        public:
          friend void Y::f(X*);  // Struct member friend
        };

        void Y::f(X* x) {
          x->i = 47;
        }

    Now he explained this: "Notice that Y::f(X*) takes the address of an X object. This is critical because the compiler always knows how to pass an address, which is of a fixed size regardless of the object being passed, even if it doesn't have full information about the size of the type. If you try to pass the whole object, however, the compiler must see the entire structure definition of X, to know the size and how to pass it, before it allows you to declare a function such as Y::g(X)."

    But when I tried void f(X); as the declaration in struct Y, it showed no error. Please explain why.

    Read the article
