Search Results

Search found 6324 results on 253 pages for 'nathan fast'.


  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community. We are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(), in which case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually. The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL. The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what the best alternative for us could be. As far as I know, ORMs have advanced in the last 5 years, so there might now be one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42) calls, as they are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?
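    On the "reuse the hand-written SQL" and "no manual rs.getInt(42)" points, most ORMs expose a native-query facility that maps existing SQL to objects by column name. A minimal sketch, assuming Hibernate and a hypothetical Customer entity (the entity and column names are illustrative, not from the original code):

        // Sketch only: reusing a hand-written query through Hibernate's native SQL API.
        Session session = sessionFactory.openSession();
        List<Customer> customers = session
            .createSQLQuery("SELECT c.id AS id, c.name AS name FROM customers c WHERE c.id = :id")
            .addEntity(Customer.class)   // columns are bound to the entity by name,
                                         // so reordering columns no longer breaks anything
            .setParameter("id", customerId)
            .list();
        session.close();

    A second-level cache plugged in at the entity level would then cover the "non-intrusive caching" point without hand-maintained cache invalidation.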

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free. And that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and a persistent data structure never invalidates the previous - and currently valid - structure, write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add read overhead to achieve faster writes, when exactly the opposite trade-off is preferable here. Also, many of those techniques really do end up with multi-versioned trees that aren't strictly immutable, which is something very crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and there should also be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely grow dramatically). A delta-compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, though I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that would happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of an immutable interface like this will grow exponentially, everything is doomed to fail soon, and this whole idea is bad.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
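    For readers tripping over the terminology in that link: "persistent" here means path copying, where an update allocates new nodes only along the root-to-leaf path and shares everything else. A minimal sketch (an unbalanced binary search tree for brevity; red-black rebalancing is omitted):

        // Sketch: path-copying insert on an immutable tree (balancing omitted).
        final class Node {
            final int key;
            final Node left, right;
            Node(int key, Node left, Node right) {
                this.key = key;
                this.left = left;
                this.right = right;
            }
        }

        final class PersistentTree {
            // Returns a new root; the old root stays valid as a consistent snapshot.
            static Node insert(Node root, int key) {
                if (root == null) return new Node(key, null, null);
                if (key < root.key) return new Node(root.key, insert(root.left, key), root.right);
                if (key > root.key) return new Node(root.key, root.left, insert(root.right, key));
                return root; // key already present: share the entire subtree
            }
        }

    Each insert copies O(log n) nodes, and every old root remains a queryable snapshot - which is exactly what makes the temporal-database part nearly free.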

    Read the article

  • Cloning a selector + all its children in jQuery?

    - by HipHop-opatamus
    I'm having trouble getting the following jQuery script to function properly. Its functionality is as follows: 1) hide the content below each headline; 2) upon clicking a headline, substitute "#first-post" with the headline + the hidden content below the headline. I can only seem to get the script to clone the headline itself to #first-post, not the headline + the content beneath it. Any idea why?

        <HTML>
        <HEAD>
        <script src="http://code.jquery.com/jquery-latest.js"></script>
        </HEAD>
        <script>
        $(document).ready(function() {
            $('.title').siblings().hide();
            $('.title').click(function() {
                $('#first-post').replaceWith(
                    $(this).closest(".comment").clone().attr('id', 'first-post'));
                $('html, body').animate({scrollTop: 0}, 'fast');
                return false;
            });
        });
        </script>
        <BODY>
        <div id="first-post">
            <div class="content"><p>This is a test discussion topic</p>
            </div>
        </div>
        <div class="comment">
            <h2 class="title"><a href="#1">1st Post</a></h2>
            <div class="content">
                <p>this is 1st reply to the original post</p>
            </div>
            <div class="test">1st post second line</div>
        </div>
        <div class="comment">
            <h2 class="title"><a href="#2">2nd Post</a></h2>
            <div class="content">
                <p>this is 2nd reply to the original post</p>
            </div>
            <div class="test">2nd post second line</div>
        </div>
        </div>
        <div class="comment">
            <h2 class="title"><a href="#3">3rd Post</a></h2>
            <div class="content">
                <p>this is 3rd reply to the original post</p>
            </div>
            <div class="test">3rd post second line</div>
        </div>
        </div>
        </BODY>
        </HTML>

    Read the article

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code. I'm writing an Android client application which sends data to a Java multithreaded socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile use, as it is currently very power-hungry. When I remove two particular lines from my code, the CPU usage of my device (HTC One X) is totally okay, but then my connection seems to have high ping times or something like that... Here is a server code snippet where I receive the client's data:

        while (true) {
            try {
                ....
                Object obj = in.readObject();
                if (obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if (className.equals("java.lang.String")) {
                        String cmd = (String) obj;
                        if (cmd.equals("dc")) {
                            System.out.println("Client " + id + " disconnected!");
                            Server.connectedClients[id - 1] = false;
                            break;
                        }
                        if (cmd.substring(0, 1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd, id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd, id));
                        }
                    }
                }
            } catch ....

    Here's the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if (client != null) {
                    ....
                    out.writeObject(sendQueue.poll());
                    ....
                }
            } catch ....

    When I write it this way, data is sent on every pass through the loop; when sendQueue is empty, a null "object" is sent. This results in "high" network traffic and "high" CPU usage, BUT all sent commands are received nearly immediately. When I change the code to the following:

        while (true)
            ...
            if (sendQueue.peek() != null) {
                out.writeObject(sendQueue.poll());
            }
            ...

    the CPU usage is totally okay, but I get some lag: the commands do not arrive fast enough. As I said, it works fine (besides the CPU usage) if I send data (with those null objects) on every loop pass, but I'm sure this is very rough coding style, because I'm essentially flooding the network. Any hints? What am I doing wrong? Thanks for your help! Sincerely yours, maaft
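    A common fix for exactly this trade-off (a sketch, assuming sendQueue is changed to a BlockingQueue; the class and method names are illustrative) is to block on the queue instead of polling it, and to disable Nagle's algorithm so small command objects are not held back by the TCP stack:

        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.net.Socket;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        class Sender implements Runnable {
            private final BlockingQueue<Serializable> sendQueue = new LinkedBlockingQueue<>();
            private final ObjectOutputStream out;

            Sender(Socket socket, ObjectOutputStream out) throws Exception {
                this.out = out;
                socket.setTcpNoDelay(true); // send small packets immediately (disable Nagle)
            }

            void send(Serializable cmd) {
                sendQueue.offer(cmd); // called from the UI/game thread
            }

            public void run() {
                try {
                    while (true) {
                        // take() sleeps until data arrives: no CPU spin, no null filler objects
                        Serializable cmd = sendQueue.take();
                        out.writeObject(cmd);
                        out.flush();
                        out.reset(); // drop the stream's back-reference cache between commands
                    }
                } catch (Exception e) {
                    // connection closed or thread interrupted
                }
            }
        }

    take() parks the sending thread in the kernel until an element is offered, so CPU usage stays near zero while latency stays at a single queue hand-off.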

    Read the article

  • How do you remind your Scrum Product Owner about his promises/actions?

    - by Felix Ogg
    ** EDIT: Rephrased the question to re-focus **

    Our Scrum team meets as seldom as possible, but we meet with the product owner every chance we get. We track everyone's agreed action points (particularly his). We are 100% agile, but our product owner lives in a traditional world and we remain off-site, so we facilitate him in crossing over to our fast-paced world. There's not much wrong: the team and the PO are in good spirits, and the PO is present at every meeting and positively energized. Just imagine this person as a 70-year-old, slow grandpa who is forgetful yet kind. In reality he isn't, but he is used to a working environment (public servants) that is much slooooower. Mañana, mañana, etc. It is frustrating for my team to cooperate: the PO lives in a non-prioritized environment, and everyone in it has learned the productivity technique of NGTD (Not Getting Things Done). He WANTS to, it's just that he forgets or 'sinks' somewhere along the way. We have experimented with:

    - A text file, maintained by the Scrum master (low-tech), which he broadcasts by e-mail every day.
    - JIRA, our issue tracker. Turns out this is nice for programmers, but too steep for 'regular people'.

    I Googled for issue-tracking web tools but came up empty-handed: all the tools are aimed at IT issue tracking, instead of meeting action-point tracking/planning for mere mortals. I did find TODO lists like RememberTheMilk, but they don't track comments, and - to be honest - I doubt we could get our product owner to use one (too complicated). We have three requirements:

    - Register action points, assigned to a team member with a deadline.
    - Offer anyone a way to 'comment' on the progress of any action point.
    - Do not build our own tool from scratch.

    We do not need impressive authorization models, multi-project support, workflow, or cross-linking. Is there any trick/tool you use to assist your product owner in 'flying' like the rest of the team?

    Communication before tools: I agree with the general consensus that one should not try to apply technology to a communication problem; in this case I am merely looking for a tool to save me time in setting up prioritized lists. I found www.thymer.com today, which may be what I am looking for. The guys are cool. It is getting rather feature-bloated, though.

    Read the article

  • Multi-threaded Pooled Allocators

    - by Darren Engwirda
    I'm having some issues using pooled memory allocators for std::list objects in a multi-threaded application. The part of the code I'm concerned with runs each thread functor in isolation (i.e. there is no communication or synchronization between threads), and therefore I'd like to set up separate memory pools for each thread, where each pool is not thread-safe (and hence fast). I've tried using a shared thread-safe singleton memory pool and found the performance to be poor, as expected. This is a heavily simplified version of the type of thing I'm trying to do. A lot has been included in a pseudo-code kind of way, sorry if it's confusing.

        /* The thread functor - one instance of MAKE_QUADTREE created for each thread */
        class make_quadtree {
        private:
            /* A non-thread-safe memory pool for int linked list items; let's say that it's
             * something along the lines of BOOST::OBJECT_POOL */
            pooled_allocator<int> item_pool;

            /* The problem! - a local class that would be constructed within each std::list as
             * the allocator but really just delegates to ITEM_POOL */
            class local_alloc {
            public:
                //!! I understand that I can't access ITEM_POOL from within a nested class like
                //!! this, that's really my question - can I get something along these lines to
                //!! work??
                pointer allocate(size_t n) { return item_pool.allocate(n); }
            };

        public:
            make_quadtree()
                : item_pool() // only construct 1 instance of ITEM_POOL per MAKE_QUADTREE object
            {
                /* The kind of data structures - vectors of linked lists.
                 * The idea is that all of the linked lists should share a local pooled allocator. */
                std::vector<std::list<int, local_alloc>> lists;

                /* The actual operations - too complicated to show, but in general:
                 *
                 * - The vector LISTS is grown as a quadtree is built; its size is the number of
                 *   quadtree "boxes".
                 *
                 * - Each element of LISTS (each linked list) represents the IDs of items
                 *   contained within each quadtree box (say they're xy points). As the quadtree
                 *   is grown, a lot of ID pop/push-ing between lists occurs, hence the memory
                 *   pool is important for performance. */
            }
        };

    So really my problem is that I'd like to have one memory pool instance per thread functor instance, but within each thread functor share the pool between multiple std::list objects.

    Read the article

  • jQuery to hide and show divs with an indicator

    - by songdogtech
    Using the jQuery below to toggle the hiding and showing of divs of text: how would I add some sort of indicator - like an up and down arrow as a graphic - to the titles when the divs are either open or closed? What's the best way to do that? Two images? A CSS sprite? And most importantly: how would that be integrated into the JS? I've looked at other jQuery that assigns a random number to each div and then determines which are open and which are closed and toggles one of two images. But I'm using PHP in a WordPress loop to show posts, and that gives problems with incrementing in the loop, so there must be an easier way that doesn't require changes in the name of the title div. Thanks....

    This JS fires the showing and hiding. Each div can be expanded and collapsed independently:

        $(document).ready(function() {
            $('div.demo-show:eq(0) > div').hide();
            $('div.demo-show:eq(0) > h3').click(function() {
                $(this).next().slideToggle('fast');
            });
        });

    This is the HTML it works with:

        <div class="collapser">
            <p class="title">Header-1</p>
            <div class="contents">Lorem ipsum</div>
            <p class="title">Header-2</p>
            <div class="contents">Lorem ipsum</div>
            <p class="title">Header-3</p>
            <div class="contents">Lorem ipsum</div>
        </div>

    The CSS is arbitrary:

        .collapser { margin: 0; padding: 0; width: 500px; }
        .title {
            margin: 1px;
            color: #fff;
            padding: 3px 10px;
            cursor: pointer;
            position: relative;
            background-color: #c30;
        }
        .contents { padding: 5px 10px; background-color: #fafafa; }

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following:

    - You can implement "coroutines" as threads/thread pools behind the scenes.
    - You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
    - The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
    - There are various JNI-based approaches to coroutines also possible.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines: This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation: This solution is more practical, albeit a bit difficult to pull off. It is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to running your code only on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine: The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM, its features aren't going to be available everywhere. Indeed, I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use it for cool experiments but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation: This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So... is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?
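    For concreteness, the thread-based approach dismissed above looks roughly like this (a sketch of a generator backed by a real thread, using a SynchronousQueue hand-off; every yield costs two kernel-mediated context switches, which is precisely the stated objection):

        import java.util.concurrent.SynchronousQueue;

        // Sketch: coroutine-style generator implemented on a full thread.
        class ThreadGenerator {
            private final SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            ThreadGenerator() {
                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; ; i++) {
                            channel.put(i); // "yield i": blocks until the consumer takes it
                        }
                    } catch (InterruptedException e) {
                        // generator shut down
                    }
                });
                producer.setDaemon(true);
                producer.start();
            }

            int next() throws InterruptedException {
                return channel.take(); // resumes the producer for exactly one step
            }
        }

    It behaves like a coroutine from the caller's point of view, but each next() call pays full thread-scheduling cost - which is why this only makes sense as a semantic stand-in, never as a performance answer.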

    Read the article

  • Change the Session Variable Output

    - by user567230
    Hello. I am using Dreamweaver CS5 with ColdFusion 9 to build a dynamic website. I have an MS Access database that stores login information, which includes ID, FullName, FirstName, LastName, Username, Password, and AccessLevels. My question is this: I currently have a session variable that tracks the Username when it is entered on the login page. However, I would like to use that Username to pull the user's FullName, to display throughout the web pages and to use for querying data. How do I change the session handling to do that, given that users enter only their Username and password on the login page, not their FullName? I have listed my login code below; if any additional information is needed, please let me know. This is the path where the FullName values reside: data source "Access", table "Logininfo", field "FullName". I want the FullName to be unique based on the Username submitted from the login page. I apologize in advance for any rookie mistake I may have made - I am new to this but learning fast! Ha.

        <cfif IsDefined("FORM.username")>
            <cfset MM_redirectLoginSuccess="members_page.cfm">
            <cfset MM_redirectLoginFailed="sorry.cfm">
            <cfquery name="MM_rsUser" datasource="Access">
                SELECT FullName, Username, Password, AccessLevels FROM Logininfo
                WHERE Username=<cfqueryparam value="#FORM.username#" cfsqltype="cf_sql_clob" maxlength="50">
                AND Password=<cfqueryparam value="#FORM.password#" cfsqltype="cf_sql_clob" maxlength="50">
            </cfquery>
            <cfif MM_rsUser.RecordCount NEQ 0>
                <cftry>
                    <cflock scope="Session" timeout="30" type="Exclusive">
                        <cfset Session.MM_Username=FORM.username>
                        <cfset Session.MM_UserAuthorization=MM_rsUser.AccessLevels[1]>
                        <!--- Note: the query above already selects FullName, so it could be
                              kept in the session at this point as well, e.g.
                              <cfset Session.MM_FullName=MM_rsUser.FullName[1]> --->
                    </cflock>
                    <cfif IsDefined("URL.accessdenied") AND false>
                        <cfset MM_redirectLoginSuccess=URL.accessdenied>
                    </cfif>
                    <cflocation url="#MM_redirectLoginSuccess#" addtoken="no">
                    <cfcatch type="Lock">
                        <!--- code for handling timeout of cflock --->
                    </cfcatch>
                </cftry>
            </cfif>
            <cflocation url="#MM_redirectLoginFailed#" addtoken="no">
        <cfelse>
            <cfset MM_LoginAction=CGI.SCRIPT_NAME>
            <cfif CGI.QUERY_STRING NEQ "">
                <cfset MM_LoginAction=MM_LoginAction & "?" & XMLFormat(CGI.QUERY_STRING)>
            </cfif>
        </cfif>

    Read the article

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes software great rather than just good or functional? What are some fundamental principles that can shift the way software is used and perceived? What are some of the little finishing touches that help put an application over the top? I'm in the later stages of developing a web app and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make them special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use:

    - Keep the UI as simple as possible. Remove absolutely everything that isn't necessary.
    - Use progressive disclosure when more information can be needed sometimes but isn't needed all the time.
    - Provide inline help and useful error messages.
    - Verbs on buttons wherever possible.
    - Make anything that's clickable obvious.
    - Fast, responsive UI.
    - Accessibility (this is a work in progress).
    - Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places.
    - Intelligent default settings.
    - Auto-focusing forms when filling out the form is the primary action to be taken on the page.
    - Clear metaphors (like tabs) and headings indicating location within the app.
    - Automating repetitive tasks (with the ability to disable the automation).
    - Use standardized or accepted metaphors for icons (like an "x" for delete).
    - Larger text sizes for improved readability.
    - High contrast so that each section is distinct.
    - Making sure that it's obvious on every page what the user is supposed to do, by establishing a clear information hierarchy and drawing the eye to the call to action.
    - Most deletions can be undone.
    - Discoverability - make it easy to learn how to do new tasks.
    - Group similar elements together.

    Read the article

  • manipulate content inserted by ajax, without using the callback

    - by Cody
    I am using Ajax to insert a series of informational blocks via a loop. The blocks each have a title and a long description that is hidden by default. They function like an accordion, only showing one description at a time among all of the blocks. The problem is opening the description on the first block. I would REALLY like to do it with JavaScript right after the loop that creates them is done. Is it possible to manipulate elements created after an Ajax call without using the callback?

        <!-- example code -->
        <style>
            .placeholder, .long_description { display: none; }
        </style>
        </head>
        <body>
        <script>
        /* yes, this script is in the body, don't know if it matters */
        $(document).ready(function() {
            $(".placeholder").each(function() {
                // Use the divs to get the blocks
                var blockname = $(this).html(); // the contents of the div is the ID for the ajax POST
                $.post("/service_app/dyn_block", 'form=' + blockname, function(data) {
                    var divname = '#div_' + blockname;
                    $(divname).after(data);
                    $(this).setupAccrdFnctly(); // not the actual code
                });
            });

            /* THIS LINE IS THE PROBLEM LINE - is it possible to reference the code ajax inserted? */
            /* Display the long description in the first dyn_block */
            $(".dyn_block").first().find(".long_description").addClass('active').slideDown('fast');
        });
        </script>

        <!-- These lines are generated by PHP -->
        <!-- It is POSSIBLE to display the dyn_blocks here, -->
        <!-- but I would really rather not -->
        <div id="div_servicetype" class="placeholder">servicetype</div>
        <div id="div_custtype" class="placeholder">custtype</div>
        <div id="div_custinfo" class="placeholder">custinfo</div>
        <div id="div_businfo" class="placeholder">businfo</div>
        </body>

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all. In Redis (http://code.google.com/p/redis) there are scores associated with elements, so that the elements can be retrieved sorted. These scores are doubles, even though many users actually sort by integers (for instance Unix times). When the database is saved, we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1, sizeof(buf)-1, "%.17g", val);

    Additionally, infinity and not-a-number conditions are checked, so that they can also be represented in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We do have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double could be cast to an integer without loss of data, and then use that function to turn the integer into a string if so. For this to provide a good speedup, the test for integer "equivalence" must of course be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...;
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the double cast above converts the double into a long long, and then back into a double. If the value fits the range, and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number. As I wanted to make sure this would not break things on some system, I joined #c on freenode and got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets - that is, architectures where Linux / Mac OS X / *BSD / Solaris run nowadays? What I can add to make the code saner is an explicit check on the range of the double before attempting the cast at all. Thank you for any help.
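    For what it's worth, the same fast-path test translates directly to managed languages, where the conversion behavior is fully defined. A sketch in Java, purely to illustrate the check (this is not Redis code):

        // Sketch: take the fast integer-formatting path only when the value is
        // integral and the double -> long cast is exact.
        static String formatScore(double x) {
            // NaN fails every comparison; infinities fail the range test; the
            // strict < on Long.MAX_VALUE avoids the inexact 2^63 boundary.
            if (x >= Long.MIN_VALUE && x < Long.MAX_VALUE && x == (long) x) {
                return Long.toString((long) x); // fast integer path
            }
            return String.format("%.17g", x);   // fully general (and slower) path
        }

    The explicit range check mentioned at the end of the question is exactly what makes this well-defined: in C, casting an out-of-range double to long long is undefined behavior, so the bounds test has to happen before the cast, not after.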

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the Export Data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.)

    Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes, logging, and so on, so it's faster. (Of course, creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.)

    Edit 2: The schema changes include (but are not limited to) the following:

    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.

    Read the article

  • Having trouble understanding some code (Ruby on Rails)

    - by user284194
    I posted a question a while ago asking how I could limit the rate at which a form could be submitted from a Rails application. I was helped by a very patient user and their solution works great. The code was for my comments controller, and now I find myself wanting to add this functionality to another controller, my Messages controller. I immediately tried reusing the working code from the comments controller, but I couldn't get it to work. Instead of asking for working code, could someone please help me understand my working comment controller code?

        class CommentsController < ApplicationController
          #...
          before_filter :post_check

          def record_post_time
            cookies[:last_post_at] = Time.now.to_i
          end

          def last_post_time
            Time.at((cookies[:last_post_at].to_i rescue 0))
          end

          MIN_POST_TIME = 2.minutes

          def post_check
            return true if (Time.now - last_post_time) > MIN_POST_TIME
            flash[:warning] = "You are trying to reply too fast."
            @message = Message.find(params[:message_id])
            redirect_to(@message)
            return false
          end

          #...
          def create
            @message = Message.find(params[:message_id])
            @comment = @message.comments.build(params[:comment])
            if @comment.save
              record_post_time
              flash[:notice] = "Replied to \"#{@message.title}\""
              redirect_to(@message)
            else
              render :action => "new"
            end
          end

          def update
            @message = Message.find(params[:message_id])
            @comment = Comment.find(params[:id])
            if @comment.update_attributes(params[:comment])
              record_post_time
              redirect_to post_comment_url(@message, @comment)
            else
              render :action => "edit"
            end
          end
          #...
        end

    My Messages controller is pretty much a standard Rails-generated controller with a few before filters, associated private methods for DRYing up the code, and a redirect for non-existent pages. I'll explain how much of the code I understand. When a comment is created, a cookie is created with a last_post_at value. If the user tries to post another comment, the cookie is checked to see if the last one was made in the last two minutes; if it was, a flash warning is displayed and no comment is recorded. What I don't really understand is how the post_check method works and how I can adapt it for my simpler posts controller. I thought I could reuse all the code in the message controller with the exception of the line:

        @message = Message.find(params[:message_id]) # (don't need the redirect code)

    in the post_check method. But it trips up on the record_post_time call in the create action/method. I really want to understand this. Can someone explain why this doesn't work? I greatly appreciate you reading my lengthy question.
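    The mechanism itself is framework-independent, which may help when porting it between controllers. Restated as a sketch in Java (a hypothetical servlet helper, only to make the moving parts explicit): the throttle is just a cookie holding the last post's Unix time, checked against a minimum interval before the action runs and rewritten after a successful save.

        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: the same cookie-based rate limit outside Rails.
        class PostThrottle {
            static final long MIN_POST_MILLIS = 2 * 60 * 1000; // MIN_POST_TIME = 2.minutes

            // Equivalent of post_check: true means "allowed to post".
            static boolean postCheck(HttpServletRequest req) {
                long lastPostSeconds = 0; // a missing/garbled cookie behaves like `rescue 0`
                if (req.getCookies() != null) {
                    for (Cookie c : req.getCookies()) {
                        if (c.getName().equals("last_post_at")) {
                            try { lastPostSeconds = Long.parseLong(c.getValue()); }
                            catch (NumberFormatException e) { lastPostSeconds = 0; }
                        }
                    }
                }
                return System.currentTimeMillis() - lastPostSeconds * 1000 > MIN_POST_MILLIS;
            }

            // Equivalent of record_post_time: called only after a successful save.
            static void recordPostTime(HttpServletResponse resp) {
                resp.addCookie(new Cookie("last_post_at",
                        Long.toString(System.currentTimeMillis() / 1000)));
            }
        }

    Seen this way, porting it to another controller means carrying over all three pieces together: the before-filter check, the record call in every writing action, and the cookie they share.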

    Read the article

  • thread management in nbody code of cuda-sdk

    - by xnov
    When I read the n-body code in the CUDA SDK, I went through some lines of it and found that it is a little bit different from the paper in GPU Gems 3, "Fast N-Body Simulation with CUDA". My questions are: First, why is blockIdx.x still involved in loading memory from global to shared memory, as written in the following code?

        for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++)
        {
            sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
                multithreadBodies ?
                positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : // this line
                positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x];                     // this line

            __syncthreads();

            // This is the "tile_calculation" function from the GPU Gems 3 article.
            acc = gravitation(bodyPos, acc);

            __syncthreads();
        }

    Isn't it supposed to be like this, according to the paper? I wonder why not:

        sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
            multithreadBodies ?
            positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] :
            positions[WRAP(tile, gridDim.x) * p + threadIdx.x];

    Second, in the multiple-threads-per-body case, why is threadIdx.x still involved? Isn't it supposed to be a fixed value, or not involved at all, because the sum is only over threadIdx.y?

        if (multithreadBodies)
        {
            SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; // this line
            SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; // this line
            SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; // this line

            __syncthreads();

            // Save the result in global memory for the integration step
            if (threadIdx.y == 0)
            {
                for (int i = 1; i < blockDim.y; i++)
                {
                    acc.x += SX_SUM(threadIdx.x, i).x; // this line
                    acc.y += SX_SUM(threadIdx.x, i).y; // this line
                    acc.z += SX_SUM(threadIdx.x, i).z; // this line
                }
            }
        }

    Can anyone explain this to me? Is it some kind of optimization for faster code?

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented, MSSQL and MySQL. Now, when we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot,
               L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    Now, with the MySQL driver, everything runs as it should: the query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    But this didn't help either. There are still long-running calls to sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time, too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to "cache" all results locally (use local cursors) in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data. We take that recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • Counting point size based on chart area during zooming/unzooming

    - by Gacek
    Hi folks. I have a quite simple task. I know (I suppose) it should be easy, but for reasons I cannot understand, I have been trying to solve it for 2 days and I don't know where I'm making the mistake. The problem is as follows: we have a chart with some points. The chart starts with a known area, and the points have a known size. We would like to "emulate" a zooming effect: when we zoom in to some part of the chart, the size of the points gets proportionally bigger. In other words, the smaller the part of the chart we select, the bigger the points should get. So, we have something like that. We know these two parameters:

        float initialArea; // initial area - area of the whole chart, computed as width * height
        float initialSize; // initial size of the points

    Now let's assume we are handling some kind of OnZoom event. We selected some part of the chart and would like to compute the current size of the points:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is computed for us
            float currentSize = initialSize * initialArea / currentArea;
            return currentSize;
        }

    And it works, but the rate of change is too fast. In other words, the points get really big too soon. So I would like currentSize to be inversely proportional to currentArea, but with some scaling coefficient. So I created this second function:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is computed for us
            // Let's assume we want the size of the points to change ten times
            // slower than the area of the chart changes.
            float currentSize = initialSize + 0.1f * initialSize * ((initialArea / currentArea) - 1);
            return currentSize;
        }

    Let's do some calculations in our heads. If currentArea is smaller than initialArea, then initialArea/currentArea > 1, and we add something small and positive to initialSize. Checked: it works. Let's check what happens when we un-zoom. currentArea will be equal to initialArea, so we would have 0 on the right side (1 - 1), so the new size should be equal to initialSize, right? Yeah. So let's check it... and it doesn't work. My question is: where is the mistake? Or maybe you have ideas on how to compute this scaled size from the current area in some other way?
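    One observation worth checking against the symptoms (a suggestion, not from the original post): point size is a linear measure while area is quadratic in the zoom factor. Zooming from area A_0 down to area A magnifies lengths by the square root of the ratio,

        s = s_0 * sqrt(A_0 / A)

    or, in LaTeX notation, $s = s_0 \sqrt{A_0 / A}$. Scaling by A_0/A directly, as in the first CountSizeOnZoom above, makes the points grow quadratically faster than the chart features around them, which matches the "points get really big too soon" description.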

    Read the article

  • Implications of trying to double free memory space in C

    - by SidNoob
    Here's my piece of code:

        #include <stdio.h>
        #include <stdlib.h>

        struct student {
            char *name;
        };

        int main()
        {
            struct student s;
            s.name = malloc(sizeof(char *)); // I hope this is the right way...
                                             // (note: this allocates only sizeof(char *)
                                             // bytes - 4 or 8 - not a string buffer)
            printf("Name: ");
            scanf("%[^\n]", s.name);
            printf("You Entered: \n\n");
            printf("%s\n", s.name);
            free(s.name); // This will cause my code to break
        }

    All I know is that dynamic allocation on the 'heap' needs to be freed. My question is: when I run the program, sometimes the code runs successfully, i.e.

        ./struct
        Name: Thisis Myname
        You Entered:

        Thisis Myname

    I tried reading this, and I've concluded that I'm trying to double-free a piece of memory, i.e. I'm trying to free a piece of memory that is already free. (Hope I'm correct here. If yes, what could be the security implications of a double free?) Meanwhile, it fails sometimes, as it's supposed to:

        ./struct
        Name: CrazyFishMotorhead Rider
        You Entered:

        CrazyFishMotorhead Rider
        *** glibc detected *** ./struct: free(): invalid next size (fast): 0x08adb008 ***
        ======= Backtrace: =========
        /lib/tls/i686/cmov/libc.so.6(+0x6b161)[0xb7612161]
        /lib/tls/i686/cmov/libc.so.6(+0x6c9b8)[0xb76139b8]
        /lib/tls/i686/cmov/libc.so.6(cfree+0x6d)[0xb7616a9d]
        ./struct[0x8048533]
        /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb75bdbd6]
        ./struct[0x8048441]
        ======= Memory map: ========
        08048000-08049000 r-xp 00000000 08:01 288098   /root/struct
        08049000-0804a000 r--p 00000000 08:01 288098   /root/struct
        0804a000-0804b000 rw-p 00001000 08:01 288098   /root/struct
        08adb000-08afc000 rw-p 00000000 00:00 0        [heap]
        b7400000-b7421000 rw-p 00000000 00:00 0
        b7421000-b7500000 ---p 00000000 00:00 0
        b7575000-b7592000 r-xp 00000000 08:01 788956   /lib/libgcc_s.so.1
        b7592000-b7593000 r--p 0001c000 08:01 788956   /lib/libgcc_s.so.1
        b7593000-b7594000 rw-p 0001d000 08:01 788956   /lib/libgcc_s.so.1
        b75a6000-b75a7000 rw-p 00000000 00:00 0
        b75a7000-b76fa000 r-xp 00000000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fa000-b76fc000 r--p 00153000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fc000-b76fd000 rw-p 00155000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fd000-b7700000 rw-p 00000000 00:00 0
        b7710000-b7714000 rw-p 00000000 00:00 0
        b7714000-b7715000 r-xp 00000000 00:00 0        [vdso]
        b7715000-b7730000 r-xp 00000000 08:01 788898   /lib/ld-2.11.1.so
        b7730000-b7731000 r--p 0001a000 08:01 788898   /lib/ld-2.11.1.so
        b7731000-b7732000 rw-p 0001b000 08:01 788898   /lib/ld-2.11.1.so
        bffd5000-bfff6000 rw-p 00000000 00:00 0        [stack]
        Aborted

    So why is it that my code does work sometimes? That is, why is the runtime not always able to detect that I'm trying to free already-corrupted/freed memory? Has it got something to do with my stack/heap size?

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some things in Grails are sexy, there are some serious problems with it. Grails' controllers:

    1) are implemented awfully. They don't seem to be able to extend from superclasses at runtime. I tried this to add base actions and helper methods, and it seems to cause Grails to blow up.
    2) are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer).
    3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code.
    4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies with the basic parameter model.
    5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell is it not trivial to do in Grails?
    6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects.

    Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write, and aren't guaranteed to work when deployed, that I think it discourages fast TDD test cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the controller layer. Suggestions on what I can do?

    Read the article

  • Count seconds and minutes with MCU timer/interrupt?

    - by arynhard
    I am trying to figure out how to create a timer for my C8051F020 MCU. The following code uses the value passed to init_Timer2() according to the formula 65535 - (0.1 / (12 / 2000000)) = 48868. I set up the interrupt handler to count overflows, with ten overflows making one second, based on that formula: 48868, when passed to init_Timer2(), should produce a 0.1-second overflow period. However, when I test the timer it runs a little fast: at 10 seconds the timer reports 11 seconds; at 20 seconds the timer reports 22 seconds. I would like to get as close to a perfect second as I can. Here is my code:

        #include <compiler_defs.h>
        #include <C8051F020_defs.h>

        void init_Clock(void);
        void init_Watchdog(void);
        void init_Ports(void);
        void init_Timer2(unsigned int counts);
        void start_Timer2(void);
        void timer2_ISR(void);

        unsigned int timer2_Count;
        unsigned int seconds;
        unsigned int minutes;

        int main(void)
        {
            init_Clock();
            init_Watchdog();
            init_Ports();
            start_Timer2();
            P5 &= 0xFF;
            while (1);
        }

        //=============================================================
        // Functions
        //=============================================================
        void init_Clock(void)
        {
            OSCICN = 0x04;   // 2 MHz
            //OSCICN = 0x07; // 16 MHz
        }

        void init_Watchdog(void)
        {
            // Disable watchdog timer
            WDTCN = 0xDE;
            WDTCN = 0xAD;
        }

        void init_Ports(void)
        {
            XBR0 = 0x00;
            XBR1 = 0x00;
            XBR2 = 0x40;
            P0 = 0x00;
            P0MDOUT = 0x00;
            P5 = 0x00;       // Set P5 to 1111
            P74OUT = 0x08;   // Set P5 4 - 7 (LEDs) to push-pull (output)
        }

        void init_Timer2(unsigned int counts)
        {
            CKCON = 0x00;    // Set all timers to system clock divided by 12
            T2CON = 0x00;    // Set timer 2 to timer mode
            RCAP2 = counts;
            T2 = 0xFFFF;     // 65535
            IE |= 0x20;      // Enable timer 2 interrupt
            T2CON |= 0x04;   // Start timer 2
        }

        void start_Timer2(void)
        {
            EA = 0;
            init_Timer2(48868);
            EA = 1;
        }

        void timer2_ISR(void) interrupt 5
        {
            T2CON &= ~(0x80);
            P5 ^= 0xF0;
            timer2_Count++;
            if (timer2_Count % 10 == 0)
            {
                seconds++;
            }
            // Note: this test runs on every interrupt (10 times per second),
            // not once per second, so minutes is incremented repeatedly while
            // seconds sits on a multiple of 60.
            if (seconds % 60 == 0 && seconds != 0)
            {
                minutes++;
            }
        }

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks, etc.). Here are some of the performance requirements:

    1. Very fast turnaround time (by this I mean that any new data, such as a new message on a forum, should be available in the search results very soon - less than a minute).
    2. I need to discard old documents on a fairly regular basis to ensure that the search results are not dated.
    3. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps).

    All of the requirements I currently have can be met without using Lucene (and that would let me satisfy all of 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance, etc.) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions:

    a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates. What are the alternatives?
    b. In order to do incremental updates, I need to keep committing new data and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem?
    c. I know that Lucene provides a delete method which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age; one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think the one created by Lucene keeps changing) to delete documents? Is that any faster than comparing dates represented as strings?

    I know these are very open questions, so I am not looking for a detailed answer; I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
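    On (a) and (b), the pattern that usually replaces optimize()-and-reopen is near-real-time search: keep one long-lived IndexWriter, never call optimize(), and periodically refresh a reader taken directly from the writer. A sketch, assuming Lucene 2.9/3.x-era APIs and an indexed numeric "timestamp" field (the field name is illustrative):

        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.NumericRangeQuery;

        // Sketch of near-real-time search plus age-based deletion.
        class NrtSearch {
            private final IndexWriter writer;
            private volatile IndexSearcher searcher;

            NrtSearch(IndexWriter writer) throws Exception {
                this.writer = writer;
                this.searcher = new IndexSearcher(writer.getReader()); // NRT reader
            }

            // Call on a timer (e.g. once a second) to make new documents visible,
            // well inside the "less than a minute" requirement.
            void refresh() throws Exception {
                IndexReader current = searcher.getIndexReader();
                IndexReader newer = current.reopen(); // cheap no-op if nothing changed
                if (newer != current) {
                    searcher = new IndexSearcher(newer);
                    current.close();
                }
            }

            // Age out old documents with a range delete on the timestamp field,
            // instead of range queries over document ids.
            void deleteOlderThan(long cutoffMillis) throws Exception {
                writer.deleteDocuments(
                    NumericRangeQuery.newLongRange("timestamp", 0L, cutoffMillis, true, false));
            }
        }

    Storing the timestamp as a numeric field also speaks to the "dates as strings" worry in (c): numeric range queries run against a trie-encoded representation and are considerably cheaper than string comparison.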

    Read the article

  • Why does the Ternary/Conditional operator seem significantly faster

    - by Jodrell
    Following on from this question, which I have partially answered: I compile this console app in x64 Release mode, with optimizations on, and run it from the command line without a debugger attached.

        using System;
        using System.Diagnostics;

        class Program
        {
            static void Main()
            {
                var stopwatch = new Stopwatch();
                var ternary = Looper(10, Ternary);
                var normal = Looper(10, Normal);
                if (ternary != normal)
                {
                    throw new Exception();
                }

                stopwatch.Start();
                ternary = Looper(10000000, Ternary);
                stopwatch.Stop();
                Console.WriteLine(
                    "Ternary took {0}ms", stopwatch.ElapsedMilliseconds);

                stopwatch.Restart(); // reset before re-timing; Start() alone resumes
                                     // and would accumulate on top of the first measurement
                normal = Looper(10000000, Normal);
                stopwatch.Stop();
                Console.WriteLine(
                    "Normal took {0}ms", stopwatch.ElapsedMilliseconds);

                if (ternary != normal)
                {
                    throw new Exception();
                }
                Console.ReadKey();
            }

            static int Looper(int iterations, Func<bool, int, int> operation)
            {
                var result = 0;
                for (int i = 0; i < iterations; i++)
                {
                    var condition = result % 11 == 4;
                    var value = ((i * 11) / 3) % 5;
                    result = operation(condition, value);
                }
                return result;
            }

            static int Ternary(bool condition, int value)
            {
                return value + (condition ? 2 : 1);
            }

            static int Normal(bool condition, int value)
            {
                if (condition)
                {
                    return 2 + value;
                }
                return 1 + value;
            }
        }

    I don't get any exceptions, and the output to the console is something close to:

        Ternary took 107ms
        Normal took 230ms

    When I break down the CIL for the two logical functions, I get this:

        ... Ternary ...
        {
          : ldarg.1     // push second arg
          : ldarg.0     // push first arg
          : brtrue.s T  // if first arg is true, jump to T
          : ldc.i4.1    // push int32(1)
          : br.s F      // jump to F
          T: ldc.i4.2   // push int32(2)
          F: add        // add either 1 or 2 to second arg
          : ret         // return result
        }

        ... Normal ...
        {
          : ldarg.0     // push first arg
          : brfalse.s F // if first arg is false, jump to F
          : ldc.i4.2    // push int32(2)
          : ldarg.1     // push second arg
          : add         // add second arg to 2
          : ret         // return result
          F: ldc.i4.1   // push int32(1)
          : ldarg.1     // push second arg
          : add         // add second arg to 1
          : ret         // return result
        }

    While the Ternary CIL is a little shorter, it seems to me that the execution path through the CIL for either function takes 3 loads, 1 or 2 jumps, and a return. Why does the Ternary function appear to be twice as fast? I understand that, in practice, they are both very quick - and indeed quick enough - but I would like to understand the discrepancy.

    Read the article

  • circles and triangles problem

    - by Faken
    Hello everyone. I have an interesting problem here that I've been trying to solve for the last little while: I have 3 circles on a 2D xy plane, each with the same known radius. I know the coordinates of each of the three centers (they are arbitrary and can be anywhere). What is the largest triangle that can be drawn such that each vertex of the triangle sits on a separate circle, and what are the coordinates of those vertices?

    I've been looking at this problem for hours and have asked a bunch of people, but so far only one person has been able to suggest a plausible solution (though I have no way of proving it). The solution we have come up with involves first creating a triangle from the three circle centers. Next we look at each circle individually and calculate the equation of the line that passes through the circle's center and is perpendicular to the opposite edge. We then calculate the two intersection points of that line with the circle. This is done for all three circles, with a result of 6 points. We iterate over the 8 possible 3-point triangles that these 6 points create (the restriction being that each vertex of the big triangle must be on a separate circle) and find the maximum size. The results look reasonable (at least when drawn out on paper), and the approach passes the special case of the centers of the circles all falling on a straight line (it gives the known largest triangle). Unfortunately, I have no way of proving whether this is correct or not. I'm wondering if anyone has encountered a problem similar to this, and if so, how did you solve it?

    Note: I understand that this is mostly a math question and not programming; however it is going to be implemented in code and it must be optimized to run very fast and efficiently. In fact, I already have the above solution in code, tested and working; if you would like to take a look, please let me know. I chose not to post it because it's all in vector form and pretty much impossible to figure out exactly what is going on (because it's been condensed to be more efficient). Lastly, yes this is for school work, though it is NOT a homework question/assignment/project. It's part of my graduate thesis (albeit a very, very small part, but still technically part of it). Thanks for your help.
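    For readers who want to experiment with the construction, here is the heuristic exactly as described, sketched in Java (it just enumerates the 8 candidate triangles; it is not a proof of optimality, and it assumes the three centers are not coincident):

        // Sketch of the described heuristic: for each circle, intersect it with the
        // line through its center perpendicular to the opposite edge of the center
        // triangle, then take the best of the 8 candidate triangles.
        final class MaxTriangle {
            // centers: 3 points {x, y}; r: the common radius. Returns 3 vertices.
            static double[][] solve(double[][] centers, double r) {
                double[][][] candidates = new double[3][2][];
                for (int i = 0; i < 3; i++) {
                    double[] a = centers[(i + 1) % 3], b = centers[(i + 2) % 3];
                    double ex = b[0] - a[0], ey = b[1] - a[1];
                    double len = Math.hypot(ex, ey); // 0 only if the other two centers coincide
                    double nx = -ey / len, ny = ex / len; // unit normal of the opposite edge
                    candidates[i][0] = new double[] { centers[i][0] + r * nx, centers[i][1] + r * ny };
                    candidates[i][1] = new double[] { centers[i][0] - r * nx, centers[i][1] - r * ny };
                }
                double bestArea = -1;
                double[][] best = null;
                for (int mask = 0; mask < 8; mask++) { // 2 choices per circle = 8 triangles
                    double[] p = candidates[0][mask & 1];
                    double[] q = candidates[1][(mask >> 1) & 1];
                    double[] s = candidates[2][(mask >> 2) & 1];
                    double area = Math.abs((q[0] - p[0]) * (s[1] - p[1])
                                         - (s[0] - p[0]) * (q[1] - p[1])) / 2.0;
                    if (area > bestArea) {
                        bestArea = area;
                        best = new double[][] { p, q, s };
                    }
                }
                return best;
            }
        }

    Comparing its output against a brute-force sampling of points on the three circles would be a cheap way to gather evidence for (or find a counterexample against) the conjecture.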

    Read the article

  • mysql timeout - c/C++

    - by user1262876
    Guys, I'm facing a problem with this code. The problem is the timeout - by timeout I mean the time it takes the program to tell me whether the server is reachable or not. If I use my localhost I get the answer fast, but when I connect to a host outside my localhost it takes 50 seconds to 1.5 minutes to respond, and the program freezes until it's done. How can I fix the freezing, or set my own timeout - say, if it's still waiting after 50 seconds, report that the connection failed and stop? Please use code in your answers, because I understand that better. Thanks for any help I get. PS: USING MAC

        #include "mysql.h"
        #include <stdio.h>
        #include <stdlib.h>

        // Other Linker Flags: -lmysqlclient -lm -lz
        // just going to input the general details and not the port numbers
        struct connection_details
        {
            char *server;
            char *user;
            char *password;
            char *database;
        };

        MYSQL* mysql_connection_setup(struct connection_details mysql_details)
        {
            // first of all create a mysql instance and initialize the variables within
            MYSQL *connection = mysql_init(NULL);

            /* A connect timeout (in seconds) can be set here, before
             * mysql_real_connect(), so an unreachable server fails fast
             * instead of blocking for the OS default: */
            unsigned int timeout = 5;
            mysql_options(connection, MYSQL_OPT_CONNECT_TIMEOUT, (const char *)&timeout);

            // connect to the database with the details attached.
            if (!mysql_real_connect(connection, mysql_details.server, mysql_details.user,
                                    mysql_details.password, mysql_details.database, 0, NULL, 0))
            {
                printf("Connection error : %s\n", mysql_error(connection));
                exit(1);
            }
            return connection;
        }

        MYSQL_RES* mysql_perform_query(MYSQL *connection, char *sql_query)
        {
            // send the query to the database
            if (mysql_query(connection, sql_query))
            {
                printf("MySQL query error : %s\n", mysql_error(connection));
                exit(1);
            }
            return mysql_use_result(connection);
        }

        int main()
        {
            MYSQL *conn;     // the connection
            MYSQL_RES *res;  // the results
            MYSQL_ROW row;   // the results row (line by line)

            struct connection_details mysqlD;
            mysqlD.server = (char*)"Localhost";  // where the mysql database is
            mysqlD.user = (char*)"root";         // the root user of mysql
            mysqlD.password = (char*)"123456";   // the password of the root user in mysql
            mysqlD.database = (char*)"test";     // the database to pick

            // connect to the mysql database
            conn = mysql_connection_setup(mysqlD);

            // assign the results return to the MYSQL_RES pointer
            res = mysql_perform_query(conn, (char*) "SELECT * FROM me");

            printf("MySQL Tables in mysql database:\n");
            while ((row = mysql_fetch_row(res)) != NULL)
                printf("%s - %s - %s\n", row[0], row[1], row[2]); // <-- rows (assumes 3 columns)

            /* clean up the database result set */
            mysql_free_result(res);
            /* clean up the database link */
            mysql_close(conn);

            return 0;
        }

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am rewriting the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication. Users can take multiple roles within multiple publications. Example table setup:

        "users"        : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id
             FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id
             FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers"       : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id
        FROM link_publishers
        JOIN link_publisher_groups ON lpg.group_id = lp.group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I join on only the relevant roles?

    -------------------------- EDIT ----------------------------------

    [EXPLAIN output screenshot - full resolution available in the original post.] The bottom section, in red, is the "WHERE firstname LIKE '%Jenkz%'" part; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable - unless there is a way to put an index across concatenated fields? The green section at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison for the internals of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers, but "webmasters", "players", "teams", etc.

    Read the article
