Search Results

Search found 10033 results on 402 pages for 'execution speed'.

  • Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking a

    - by surtyaarthoughts
    I unsuccessfully tried using txredis (the non-blocking Twisted API for Redis) for a persistent message queue I'm setting up with a Scrapy project I am working on. I found that although the client was non-blocking, it became much slower than it could have been, because what should have been one event in the reactor loop was split up into thousands of steps. So instead I tried making use of redis-py (the regular blocking Redis API) and wrapping the call in a deferred thread. It works great; however, I want to perform an inner deferred when I make a call to Redis, as I would like to set up connection pooling to speed things up further. Below is my interpretation of some sample code taken from the Twisted docs for a deferred thread, to illustrate my use case:

        #!/usr/bin/env python
        from twisted.internet import reactor, threads
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall():
            print 'doing lookup... this may take a while'
            time.sleep(10)
            return 'results from redis'

        def result(res):
            print res

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = threads.deferToThread(aBlockingRedisCall)
            d.addCallback(result)
            reactor.run()

        if __name__ == '__main__':
            main()

    And here is my alteration for connection pooling, which makes the code in the deferred thread blocking:

        #!/usr/bin/env python
        from twisted.internet import reactor, defer
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall(x):
            if x < 5:  # all connections are busy, try later
                print '%s is less than 5, get a redis client later' % x
                x += 1
                d = defer.Deferred()
                d.addCallback(aBlockingRedisCall)
                reactor.callLater(1.0, d.callback, x)
                return d
            else:
                print 'got a redis client; doing lookup.. this may take a while'
                time.sleep(10)  # this is now blocking.. any ideas?
                d = defer.Deferred()
                d.addCallback(gotFinalResult)
                d.callback(x)
                return d

        def gotFinalResult(x):
            return 'final result is %s' % x

        def result(res):
            print res

        def aBlockingMethod():
            print 'going to sleep...'
            time.sleep(10)
            print 'woke up'

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = defer.Deferred()
            d.addCallback(aBlockingRedisCall)
            d.addCallback(result)
            reactor.callInThread(d.callback, 1)
            reactor.run()

        if __name__ == '__main__':
            main()

    So my question is: does anyone know why my alteration causes the deferred thread to block, and/or can anyone suggest a better solution?
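
    One pattern that keeps the blocking client off the reactor thread while still capping the number of simultaneous connections is Twisted's DeferredSemaphore. This is only a minimal sketch of that idea, in the same Python-2-era Twisted style as the question; blocking_lookup is a placeholder for whatever redis-py call is actually made:

        import time

        from twisted.internet import defer, threads

        pool_slots = defer.DeferredSemaphore(5)  # cap: at most 5 redis calls in flight

        def blocking_lookup(key):
            # placeholder for the real redis-py call; this runs in a worker
            # thread, so it is allowed to block
            time.sleep(1)
            return 'results for %s' % key

        def lookup(key):
            # acquire a slot without blocking the reactor, then push the
            # blocking work to the thread pool; the slot is released
            # automatically when the returned Deferred fires
            return pool_slots.run(threads.deferToThread, blocking_lookup, key)

    The relevant design point: reactor.callLater always runs its callback in the reactor thread, so the time.sleep(10) in the altered version executes on the event loop itself; deferToThread is the only piece here that actually leaves the reactor thread.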

  • Problem with final branch in a parallel activity

    - by Dan Revell
    "The final branch in a parallel activity" might seem like a silly thing to say, so I'll clarify. It's a parallel activity with three branches, each containing a simple create task, on-task-changed, and complete task. The branch containing the task that is last to complete seems to break. Every task works in its own right, but the last one encounters a problem.

    Say the user clicks the final task's link to open the attached InfoPath form and submits it. Execution gets to the event handler for that onTaskChanged, where a taskCompleted variable gets set to true, which should exit the while loop. I've successfully hit a breakpoint on this line, so I know that happens. However, the final activity in that branch, the completeTask, doesn't get hit. When submit is clicked in the final form, the "operation in progress" screen stays up for quite a while before returning to the workflow status page, and the task that was opened and submitted says "Not Started". I can disable any of the branches to leave only two, but the same problem happens with the last one to be completed.

    Earlier in the workflow I do essentially the same thing. I have another 3-branch parallel activity, with each branch containing a task. That one works correctly, which leads me to believe it might be a problem with having two parallel activities in the same sequential workflow.

    I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the task-complete variable is indeed getting set to true but the while loop isn't being exited, then there's a crossed wire with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say that the task is in progress.

    This is a frustrating show-stopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.

  • [NEWVERSION]algorithm || method to write prog[UNSOLVED] [closed]

    - by fatai
    I am a computer science student. Everyone solves problems with a different method, or the same one, but I don't actually know whether they use a method at all, or whether there is a common method for approaching problems. If there is a common method, what is it? If there are different methods, which one do you use?

    Our teachers sometimes give us problems in simple form, but they don't introduce any approach or method(s), so we cannot choose a method, apply it to the problem, find a solution, and then write the code. There's no help from the teacher; we're pushed to find a method to solve the homework ourselves. For example, my friend uses no method; he says, "I start to construct the algorithm while I try to write the program."

    I found my own method after I failed the course. More precisely, my method: when I encounter a problem in a language, I get some paper and then:

    First, the input/output step: my program will take these argument(s) and return, say, X. For example, in C, if the input length is not known and the items are of the same type, I must use a pointer; if the desired output is in the form of a package, I use a structure.

    Second, the execution part: in that step, I write out all the steps that lead to the final output. For example, in Python:

        1. [+, [-, 4, [*, 1, 2]], 5]
        2. [+, [-, 4, 2], 5]
        3. [+, 2, 5]
        4. 7  ==> return 7

    Third, I write test code. For example, in C++:

        input:
            append 3 4 5 6 to vector_x
            remove 0 1
        desired output:
            vector_x holds: 5 6

    Now, my other question is: what other method(s) have been used by other people to construct classes (for C++, Python, Java), to make classes or computers communicate, or to solve embedded-system problems (for C)? Some programmers use a generalized method without considering the programming language (Java, Perl, ...); what is this method?

    Why do I wonder? Because I've learned that if you don't construct the algorithm on paper, you may not achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". So feel free to describe your own method, or a way introduced by someone else that you use and find efficient.

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed you may temporarily see the art image by itself (without the overlay). In this case, the album art looks fine without the jewel-case effect. But in similar situations, I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    - Use some inline CSS so that as certain parts of the DOM load and render, they will immediately have the correct size/position/etc.
    - Immediately render the navigation part of the site, so that if the user wants to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    - Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    - Something quirky, like a cute animated GIF to distract the user during a "loading..." phase.
    - Show useful information as a reference while loading the full UI (something akin to the Gmail inbox preview, etc.).

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you may have seen out in the wild.

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to:

        case MyKeyword of
          'CHIL': (code for CHIL);
          'HUSB': (code for HUSB);
          'WIFE': (code for WIFE);
          'SEX':  (code for SEX);
          else    (code for everything else);
        end;

    Unfortunately the case statement can't be used like that for strings. I could use the straight if/then/else-if construct, e.g.:

        if MyKeyword = 'CHIL' then (code for CHIL)
        else if MyKeyword = 'HUSB' then (code for HUSB)
        else if MyKeyword = 'WIFE' then (code for WIFE)
        else if MyKeyword = 'SEX' then (code for SEX)
        else (code for everything else);

    but I've heard this is relatively inefficient. What I had been doing instead is:

        P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ');
        case P of
          1:  (code for CHIL);
          6:  (code for HUSB);
          11: (code for WIFE);
          17: (code for SEX);
          else (code for everything else);
        end;

    This, of course, is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple and understandable, but also fast? (For reference, I am using Delphi 2009 with Unicode strings.)

    Followup: Toby recommended I simply use the if/then/else construct. Looking back at my examples that used a case statement, I can see how that is a viable answer. Unfortunately, my inclusion of the case inadvertently hid my real question. I actually don't care which keyword it is; that is just a bonus if the particular method can identify it, like the pos method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than:

        if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then

    The if/then/else equivalent does not seem better in this case, being:

        if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or
           (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then

    In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections, and it looks like I should delve into them. My question here would be whether they are built for efficiency, and how using TDictionary would compare, in looks and in speed, to the above two lines.

    In later profiling, I found that the concatenation of strings, as in (' ' + MyKeyword + ' '), is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
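
    For the membership test itself, the language-agnostic answer is a hash-based set, which makes the lookup roughly O(1) and avoids the string concatenation entirely. A rough sketch of the idea in Python (the Delphi analogue would be the TDictionary generic mentioned above; the names here are illustrative):

        KEYWORDS = {'CHIL', 'HUSB', 'WIFE', 'SEX'}  # built once, up front

        def is_keyword(word):
            # a single hash lookup: no concatenation, no linear scan
            return word in KEYWORDS

        print(is_keyword('CHIL'))  # True
        print(is_keyword('NAME'))  # False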

  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange or simplistic question to seasoned SQL developers, but the reason I ask is that so far, all the AGG FUNC examples I have seen are of the simplistic variety, e.g. max(salary) < 100, rather than expressions involving multiple AGG FUNCs (like agg_func1() agg_func2()). The information below should help clarify further. Given tables with the following schemas:

        CREATE TABLE item (id int, length float, weight float);
        CREATE TABLE item_info (item_id, name varchar(32));

    is it legal (ANSI) SQL to write queries of this format?

        SELECT id, name, foo, foobar, fredstats
        FROM A, B
          (SELECT id, foo(123) as foo, foobar('red') as foobar, fredstats('weight') as fredstats
           FROM item
           GROUP BY id
           HAVING [ALGEBRAIC EXPRESSION]
           ORDER BY id AS A),
          item_info AS B
        WHERE item.id = B.id

    Here, ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause, for example:

        ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0))

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCs in the way I have done above, I'd like to know:

    1. Is there a more efficient way to write the above query?
    2. Is there any way I can speed up the query with a judicious choice of indexes on the tables item and item_info?
    3. Is there a performance hit from using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants)?
    4. Can the expression also include 'scaled' AGG FUNCs, for example 2*foo(123) < -3*foobar(456)? Will scaling (i.e. multiplying an AGG FUNC by a number) have an effect on performance?
    5. How can I write the query above using INNER JOINs instead?

  • How to call Ajax to run a PHP file while maintaining PHP & Javascript variables.

    - by Umar
    Hi Stackoverflowers. I'm using the Facebook php-sdk to get the user's name and friends. Right now the loading-friends part takes about 3+ seconds, so I wanted to do it via Ajax: let the document load, then have jQuery call an external PHP script which loads the friends (their names and their profile pictures). So to do this I did:

        $(document).ready(function() {
            var loadUrl = "http://localhost/fb/getFriends.php";
            $("#friends")
                .html("Hold on, your friends are loading!")
                .load(loadUrl);
        });

    But I get a PHP error:

        Fatal error: Call to a member function api() on a non-object

    If I do this in the same PHP file (so I don't use Ajax at all to call it), it works fine. Now I think I understand the reason this is happening, but I don't know how to fix it. In my main index.php file I have a bunch of init and session code, e.g.:

        FB.init({
            appId   : '<?php echo $facebook->getAppId(); ?>',
            session : <?php echo json_encode($session); ?>, // don't refetch the session when PHP already has it
            status  : true, // check login status
            cookie  : true, // enable cookies to allow the server to access the session
            xfbml   : true  // parse XFBML
        });

    So I'm just wondering: what is the best way to treat my new separate PHP file, getFriends.php, so that it has access to all the PHP/JavaScript session data and variables? If you haven't used the Facebook php-sdk, I'll quickly explain what I mean. Say I have index.php and getUsername.php, and from index.php I want to retrieve the getUsername.php file via Ajax using .load. The problem is that getUsername.php needs to access PHP session data and JavaScript init functions which were created in index.php. I'm thinking of ways to solve this (I'm new to PHP, so sorry if this sounds silly): maybe I could do a POST in jQuery Ajax and post the session data? Or maybe I could create a PHP class, something like:

        class getUsername extends index {} /* Yes, I'm a newbie */

    If you have a look at the php-sdk example.php linked at the top, maybe you'd better understand which variables exactly need to be accessed from a new file.

    Also, on a different note, I'm using PHP to work out page rendering times, and it seems that fetching the user's name alone:

        // Session based API call.
        if ($session) {
            try {
                $uid = $facebook->getUser();
                $me = $facebook->api('/me');
            } catch (FacebookApiException $e) {
                error_log($e);
            }
        }

    can take a good 4 seconds. Is this normal? Once I get the user's details, should I cache them or something? Speed isn't as important right now; for now I'm just trying to figure out this Ajax-separating-PHP-files thing. Woah, this is a long post. Thanks very much for your time.

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations, so I decided to parallelize it using Parallel.For/Each. The results were okay for a dual-core machine: CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) in the parallel sections, and I see it employs about 20-30 more threads in those sections compared to the serial sections. Each thread takes more than 1 second to complete, so I see no reason for them not to work in parallel, unless there is a synchronization problem.

    I used the built-in profiler of VS2010, and the results are strange. Even though I use locks in only one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% I/O). The locked code is only a cache (a dictionary) get/add:

        bool esn_found;
        lock (lock_load_esn)
            esn_found = cache.TryGetValue(st, out esn);
        if (!esn_found)
        {
            esn = pData.esa_inv_idx.esa[term_idx];
            esn.populate(pData.esa_inv_idx.datafile);
            lock (lock_load_esn)
            {
                if (!cache.ContainsKey(st))
                    cache.Add(st, esn);
            }
        }

    lock_load_esn is a static member of the class, of type Object. esn.populate reads from a file using a separate StreamReader for each thread.

    However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder: the only lock in the program) as part of the blocking profile with noise level 2%. With noise level at 0% it reports all the functions of the program, and I don't understand why they count as blocking synchronizations.

    So my question is: what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what the problem with the parallel sections of my program really is? Thanks.
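
    For reference, the caching pattern in the snippet (check under a lock, compute outside it, re-check before inserting) is language-agnostic; here is a minimal sketch of the same pattern in Python, with expensive_populate standing in for the file read. The point of the pattern is that the lock is held only around the dictionary access, never around the slow work:

        import threading

        cache = {}
        cache_lock = threading.Lock()

        def expensive_populate(key):
            # placeholder for the slow, lock-free work (e.g. reading a file)
            return 'value for %s' % key

        def lookup(key):
            with cache_lock:                 # short critical section: dict read
                if key in cache:
                    return cache[key]
            value = expensive_populate(key)  # slow part runs without the lock
            with cache_lock:                 # short critical section: dict write
                # another thread may have won the race; keep its entry
                return cache.setdefault(key, value)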

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click handler with code like this:

        int* i = 0;
        *i = 3;

    I'm running the debug version of the program, and when I click on the button, Visual Studio catches focus and reports an "Access violation writing location" exception; the program cannot recover from the error, and all I can do is stop debugging. And this is the right behavior.

    Now I add some OpenGL initialization code in the OnInitDialog() method:

        HDC DC = GetDC(GetSafeHwnd());
        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd
            1,                             // version number
            PFD_DRAW_TO_WINDOW |           // support window
            PFD_SUPPORT_OPENGL |           // support OpenGL
            PFD_DOUBLEBUFFER,              // double buffered
            PFD_TYPE_RGBA,                 // RGBA type
            24,                            // 24-bit color depth
            0, 0, 0, 0, 0, 0,              // color bits ignored
            0,                             // no alpha buffer
            0,                             // shift bit ignored
            0,                             // no accumulation buffer
            0, 0, 0, 0,                    // accum bits ignored
            32,                            // 32-bit z-buffer
            0,                             // no stencil buffer
            0,                             // no auxiliary buffer
            PFD_MAIN_PLANE,                // main layer
            0,                             // reserved
            0, 0, 0                        // layer masks ignored
        };
        int pixelformat = ChoosePixelFormat(DC, &pfd);
        SetPixelFormat(DC, pixelformat, &pfd);
        HGLRC hrc = wglCreateContext(DC);
        ASSERT(hrc != NULL);
        wglMakeCurrent(DC, hrc);

    Of course this is not exactly what I do; it is the simplified version of my code. Well, now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at the *i = 3; and press F11 on it, the handler function halts immediately and focus is returned to the application, which continues to work well. I can click the button again and the same thing will happen. It seems as if someone had handled the access violation exception and silently returned execution to the main message-receiving cycle.

    If I comment out the line wglMakeCurrent(DC, hrc);, everything works fine as before: the exception is thrown, Visual Studio catches it and shows a window with the error message, and the program must be terminated afterwards.

    I experience this problem under Windows 7 64-bit, on an NVIDIA GeForce 8800 with the latest drivers (of 11.01.2010) from the website installed. My colleague has Windows Vista 32-bit and has no such problem: the exception is thrown and the application crashes in both cases.

    Well, I hope good guys will help me :) PS: The problem was originally posted under this topic.

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this?

    The only thing I could think of is rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort.

    I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, though it is generally much less. Is there a better way of doing this?

    As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

        FILE *fp;
        unsigned char *buff;
        unsigned char *realbuff;
        FILE *inputFiles[NUM_INPUT_FILES];

        buff = (unsigned char *) malloc(2048);
        realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

        fp = fopen("postings0.txt", "r");
        if (fp)
        {
            fread(buff, 1, 2048, fp);
            /* for(int i = 0; i < 30; i++)
                   cout << buff[i] << endl; */
            int y = 0;
            int recordcounter = 0;
            //cout << buff;
            for (int i = 0; i < 100; i++)
            {
                if (buff[i] != char(10))
                {
                    realbuff[y] = buff[i];
                    y++;
                    recordcounter++;
                }
                else
                {
                    if (recordcounter < RECORD_SIZE)
                        for (int j = recordcounter; j < RECORD_SIZE; j++)
                        {
                            realbuff[y] = char(0);
                            y++;
                        }
                    recordcounter = 0;
                }
            }
            cout << realbuff << endl;
            cout << buff;
        }
        else
            cout << "sorry";

    Thank you very much, bsg
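
    The pad-to-fixed-width idea itself is small once the splitting is delegated to a library. Here is a minimal sketch of the same technique in Python, assuming newline-delimited records and the RECORD_SIZE from the question (note that over-long records are truncated here, which is exactly the weakness noted above):

        RECORD_SIZE = 30

        def fixed_size_records(chunk):
            # chunk: bytes read from the file; records are newline-delimited
            records = []
            for rec in chunk.split(b'\n'):
                if not rec:
                    continue
                # pad each record to RECORD_SIZE with NULs so a byte-wise
                # sort sees fixed-size records
                records.append(rec[:RECORD_SIZE].ljust(RECORD_SIZE, b'\x00'))
            return records

        with open('postings0.txt', 'rb') as fp:
            buf = fp.read(2048)
            for r in sorted(fixed_size_records(buf)):
                print(r)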

  • Do you feel underappreciated or resent the geek/nerd stigma?

    - by dotnetdev
    At work we have a piece of A4 paper with the number of everyone in the office. The structure of this document is laid out in rectangles, by department. I work for the department that does all the technical stuff: support (bear in mind that the support staff isn't educated in IT but just has experience in PC maintenance and in supporting a system we resell but don't have source code access to), a project manager, a team leader, a network administrator, a product manager, and me, a programmer.

    Anyway, on this paper we are labelled as nerds and geeks. I did take a little offence to this, as much as it is light-hearted (but annoying and old) humour. I have a vivid image that a geek is someone who doesn't go out but codes all day. I code all day at home and at work (when I have something to code...), but I keep balance by going out.

    I don't know why it is only people who work with computers that get such a stigma. No other profession really gets the same stigma, skilled, technical, or otherwise. An account manager (and this is hardly a skilled job) says, "Perhaps [MY NAME HERE] could write some geeky code tomorrow to add this functionality to the website." It is funny how I get such an unfair stigma when I am so pivotal. In fact, if it wasn't for me, the company would have nothing to sell, so the account managers would be redundant! I make systems, they get sold, and this is what pays the wages. It's funny how the account managers get a commission for how many systems they sell, or for how many clients they get to resubscribe, yet I built the thing in the first place!

    On top of that, my brother says all I do is type stuff on a keyboard all day. Surely if I did, I'd be typing at my normal typing speed of 100wpm+, as if I were writing a blog entry. Instead, I plan as I code, on the fly, when commercial pressures and time prohibit proper planning. I never type as if I'm writing normal English. There is more to our jobs than just typing code. And my brother is a pipe fitter with no formal qualifications to his name; I could easily, and perhaps more justifiably, say he just manipulates a spanner or something.

    Do you feel underappreciated, or that the geek/nerd stigma is undeserved or unfair?

  • What is your most productive shortcut with Vim?

    - by Olivier Pons
    I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff, and I'm at best 10 times less productive with Vim.

    The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are:

    - Alternating between left and right hands is the fastest way to use the keyboard.
    - Never touching the mouse is the second way to be as fast as possible. It takes ages for you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place).

    Here are two examples demonstrating why I'm far less productive with Vim.

    Copy/cut & paste: I do it all the time. With all the classical editors you press Shift with the left hand, and you move the cursor with your right hand to select text. Then Ctrl+C copies, you move the cursor, and Ctrl+V pastes. With Vim it's horrible: yy copies one line (you almost never want the whole line!); [number xx]yy copies xx lines into the buffer. But you never know exactly whether you've selected what you wanted, so I often have to do [number xx]dd and then u to undo!

    Another example? Search & replace. In PSPad: Ctrl+F, then type what you want to search for, then press Enter. In Vim: /, then type what you want to search for, then, if there are special characters, put \ before each special character, then press Enter.

    And everything with Vim is like that: it seems I don't know how to handle it the right way. NB: I've already read the Vim cheat sheet :)

    My question is: What is the way you use Vim that makes you more productive than with a classical editor?

  • Managing multiple AJAX calls to PHP scripts

    - by relativelycoded
    I have a set of 5 HTML dropdowns that act as filters for narrowing results returned from a MySQL database. The three pertinent filters are for "Province", "Region", and "City" selection, respectively. I have three functions:

    - findSchools(), which runs when any of the filters (marked with CSS class .filter) are changed, and fetches the results via AJAX from a PHP script. Once that is done, two other functions are called...
    - changeRegionOptions(), which runs upon changing the "Province" filter and updates the available options using the same method as the first function, but posting to a different script.
    - changeCityOptions(), which runs if the "Region" filter was changed, and updates options, again using the same method.

    The problem is that since I want these AJAX functions to run simultaneously, and they by nature run asynchronously, I've tried using $.when to control the execution of the functions, but it doesn't fix the problem. The functions run, but the Region and City filters come back blank (no options); the FireBug report shows absolutely no output, even though the POST request went through. The posted parameter for filter_province gets sent normally, but the one for the region gets cut off at the end: it sends as filter_region= with no value passed. So I'm presuming my logic is wrong somewhere. The code is below:

        // When any of the filters are changed, let's query the database...
        $("select.filter").change(function() {
            findSchools();
        });

        // First, we see if there are any results...
        function findSchools() {
            var sch_province = document.filterform.filter_province.value;
            var sch_region = document.filterform.filter_region.value;
            var sch_city = document.filterform.filter_city.value;
            var sch_cat = document.filterform.filter_category.value;
            var sch_type = document.filterform.filter_type.value;

            $.post("fetch_results.php",
                {
                    filter_province : sch_province,
                    filter_region : sch_region,
                    filter_city : sch_city,
                    filter_category : sch_cat,
                    filter_type : sch_type
                },
                function(data) {
                    $("#results").html("");
                    $("#results").hide();
                    $("#results").html(data);
                    $("#results").fadeIn(600);
                }
            );

            // Once the results are fetched, we want to see if the filter they
            // changed was the one for Province, and if so, update the Region
            // and City options to match that selection...
            $("#filter_province").change(function() {
                $.when(findSchools())
                    .done(changeRegionOptions());
                $.when(changeRegionOptions())
                    .done(changeCityOptions());
            });
        };

    This is just one of the ways I've tried to solve it; I've tried using an IF statement, and I've tried calling the functions directly inside the general select.filter change() function (after findSchools(); ), but they all return the same result. Any help with this would be great!

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MSSQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

    - 1 timestamp
    - 1 measurement
    - 8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
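
    As a sanity check on what the storage layer itself can do, it can help to measure raw batched inserts outside the ORM. Here is a minimal sketch in Python's sqlite3 of the denormalized layout discussed above (one table, 8 dedicated vector columns, one transaction per batch); the schema and batch size are illustrative, not the project's actual ones:

        import sqlite3, time, uuid

        conn = sqlite3.connect('measurements.db')
        conn.execute('CREATE TABLE IF NOT EXISTS measurement ('
                     'id TEXT PRIMARY KEY, ts REAL, '
                     + ', '.join('v%d REAL' % i for i in range(8)) + ')')

        rows = [(str(uuid.uuid4()), time.time()) + tuple(float(i) for i in range(8))
                for _ in range(500)]

        start = time.time()
        with conn:  # one transaction for the whole batch of 500 measurements
            conn.executemany(
                'INSERT INTO measurement VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)',
                rows)
        print('500 denormalized rows in %.3fs' % (time.time() - start))

    Comparing this number against the ORM's throughput on the same machine separates the cost of the 1-plus-9 table layout from the cost of the NHibernate pipeline itself.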

  • drag to pan on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox-like" control, adding some functionality. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be in my OnMouseMove method. If I use the following code I get the drag speed and precision I want, but of course, when I release the mouse button and try to drag again, the image is restored to its original position.

        using System.Drawing;
        using System.Windows.Forms;

        namespace Testing
        {
            public partial class ScrollablePictureBox : UserControl
            {
                private Image image;
                private bool centerImage;

                public Image Image
                {
                    get { return image; }
                    set { image = value; Invalidate(); }
                }

                public bool CenterImage
                {
                    get { return centerImage; }
                    set { centerImage = value; Invalidate(); }
                }

                public ScrollablePictureBox()
                {
                    InitializeComponent();
                    SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
                    Image = null;
                    AutoScroll = true;
                    AutoScrollMinSize = new Size(0, 0);
                }

                private Point clickPosition;
                private Point scrollPosition;

                protected override void OnMouseDown(MouseEventArgs e)
                {
                    base.OnMouseDown(e);
                    clickPosition.X = e.X;
                    clickPosition.Y = e.Y;
                }

                protected override void OnMouseMove(MouseEventArgs e)
                {
                    base.OnMouseMove(e);
                    if (e.Button == MouseButtons.Left)
                    {
                        scrollPosition.X = clickPosition.X - e.X;
                        scrollPosition.Y = clickPosition.Y - e.Y;
                        AutoScrollPosition = scrollPosition;
                    }
                }

                protected override void OnPaint(PaintEventArgs e)
                {
                    base.OnPaint(e);
                    e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
                    if (Image == null) return;
                    int centeredX = AutoScrollPosition.X;
                    int centeredY = AutoScrollPosition.Y;
                    if (CenterImage)
                    {
                        // Something not relevant
                    }
                    AutoScrollMinSize = new Size(Image.Width, Image.Height);
                    e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height));
                }
            }
        }

    But if I modify my OnMouseMove method to look like this:

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                scrollPosition.X += clickPosition.X - e.X;
                scrollPosition.Y += clickPosition.Y - e.Y;
                AutoScrollPosition = scrollPosition;
            }
        }

    ...you will see that the dragging is not smooth as before, and sometimes behaves weirdly (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls in a desperate attempt to solve this issue, haha, but again, it didn't work. Thanks for your time.

  • Is there any way to exclude artifacts inherited from a parent POM?

    - by Miguel
    Artifacts from dependencies can be excluded by declaring an <exclusions> element inside a <dependency>, but in this case it's needed to exclude an artifact inherited from a parent project. An excerpt of the POM under discussion follows:

        <project>
          <modelVersion>4.0.0</modelVersion>
          <groupId>test</groupId>
          <artifactId>jruby</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <parent>
            <artifactId>base</artifactId>
            <groupId>es.uniovi.innova</groupId>
            <version>1.0.0</version>
          </parent>
          <dependencies>
            <dependency>
              <groupId>com.liferay.portal</groupId>
              <artifactId>ALL-DEPS</artifactId>
              <version>1.0</version>
              <scope>provided</scope>
              <type>pom</type>
            </dependency>
          </dependencies>
        </project>

    The base artifact depends on javax.mail:mail-1.4.jar, and ALL-DEPS depends on another version of the same library. Because the mail.jar from ALL-DEPS exists in the execution environment, although not exported, it collides with the mail.jar that exists in the parent, which is scoped as compile. A solution could be to get rid of mail.jar from the parent POM, but most of the projects that inherit base need it (as it is a transitive dependency for log4j). So what I would like to do is to simply exclude the parent's library from the child project, as could be done if base were a dependency and not the parent POM:

        ...
        <dependency>
          <artifactId>base</artifactId>
          <groupId>es.uniovi.innova</groupId>
          <version>1.0.0</version>
          <type>pom</type>
          <exclusions>
            <exclusion>
              <groupId>javax.mail</groupId>
              <artifactId>mail</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        ...

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which is working for the most part (link to PasteBin). The script's job is to start a number of threads, which in turn each start a subprocess with Popen. The output from each subprocess is as follows:

        1
        2
        3
        ...
        n
        Done

    Basically the subprocess is transferring 10M records from tables in one database to different tables in another db, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it will output "Done\n". If there are no more records to select for transfer, it will output "NO DATA\n".

    My intent was to create my script "tableTransfer.py", which would spawn a number of these processes, read their output, and in turn output information such as the number of updates completed, time remaining, time elapsed, and number of transfers per second.

    I started running the process last night and checked in this morning to see it had deadlocked. There were no subprocesses running, there are still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number complete, which is what controls updates to the output. This is running on OS X.

    I am looking for three things:

    1. I would like to get rid of the possibility of this deadlock occurring, so I don't need to check in on it as frequently. Is there some issue with locking? Am I doing this in a bad way (gThreading variable to control looping of spawning additional threads... etc.)?
    2. I would appreciate some suggestions for improving my overall methodology.
    3. How should I handle ctrl-c exit? Right now I need to kill the process, but I assume I should be able to use the signal module or something similar to catch the signal and kill the threads; is that right?

    I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
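
    Without the full script it's hard to say where the lock-up is, but a common shape for this kind of supervisor is: each thread owns one Popen at a time, no locks are held around blocking reads, a lock protects only the shared counter, and a threading.Event plus a SIGINT handler gives a cooperative ctrl-c shutdown. A minimal sketch of that shape (the command, thread count, and counter are illustrative, and workers only notice the stop flag between subprocesses):

        import signal, subprocess, threading

        stop = threading.Event()
        done_count = [0]
        count_lock = threading.Lock()

        def worker(cmd):
            while not stop.is_set():
                proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
                for line in proc.stdout:          # blocking read, no locks held
                    if line.strip() == 'NO DATA':
                        stop.set()                # nothing left: wind everyone down
                proc.wait()
                with count_lock:                  # lock only around the counter
                    done_count[0] += 1

        signal.signal(signal.SIGINT, lambda *a: stop.set())  # ctrl-c -> graceful stop

        threads = [threading.Thread(target=worker, args=(['./transfer.sh'],))
                   for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print('batches completed:', done_count[0])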

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it, and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as happens in memcached clients when a node goes down.
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

  • Problem in Fetching Table contents when adding rows in same table

    - by jasmine
    I'm trying to write a function for adding a category:

        function addCategory() {
            $cname = mysql_fix_string($_POST['cname']);
            $kabst = mysql_fix_string($_POST['kabst']);
            $kselect = $_POST['kselect'];
            $kradio = $_POST['kradio'];
            $ksubmit = $_POST['ksubmit'];
            $id = $_POST['id'];

            if ($ksubmit) {
                $query = "INSERT INTO category VALUES (' ', '{$cname}', '{$kabst}', {$kselect}, {$kradio}, ' ') ";
                $result = mysql_query($query);
                if ($result) {
                    echo "ok";
                } else {
                    echo $query;
                }
            }

            $text .= '<div class="form">
                <h2>ADD new category</h2>
                <form action="?page=addCategory" method="post">
                <ul>
                    <li><label>Category</label></li>
                    <li><input name="cname" type="text" class="inp" /></li>
                    <li><label>Description</label></li>
                    <li><textarea name="kabst" cols="40" rows="10" class="inx"></textarea></li>
                    <li>Published:</li>
                    <li>
                        <select name="kselect" class="ins">
                            <option value="1">Active</option>
                            <option value="0">Passive</option>
                        </select>
                    </li>
                    <li>Show in home page:</li>
                    <li>
                        <input type="radio" name="kradio" value="1" /> yes
                        <input type="radio" name="kradio" value="0" /> no
                    </li>
                    <li>Subcategory of</li>
                    <li>
                        <select>';

            while ($row = mysql_fetch_assoc(mysql_query("SELECT * FROM category"))) {
                $text .= '<option>'.$row['name'].'</option>';
            }

            $text .= '</select>
                    </li>
                    <li><input name="ksubmit" type="submit" value="ekle" class="int"/></li>
                </ul>
                </form>
            ';
            return $text;
        }

    And the error:

        Fatal error: Maximum execution time of 30 seconds exceeded

    What is wrong with my function?

  • SelectList in Asp-mvc and data from the database

    - by George
    Hello guys. I'm having some trouble with SelectList in ASP.NET MVC. Here is the issue: I have a Create view with a ViewModel behind it. The page loads just fine (GET verb), but when posting, something happens: my model is considered invalid, and it cannot insert. Here's what I've tried so far.

        public class DefinitionFormViewModel
        {
            private Repository<Category> categoryRepository = new Repository<Category>();

            public Definition ViewDefinition { get; private set; }
            public SelectList Categories { get; private set; }

            public DefinitionFormViewModel(Definition def)
            {
                ViewDefinition = def;
                // here I wanted to place it directly, as shown in the NerdDinner tutorial:
                // new SelectList(categoryRepository.All(), ViewDefinition.Category);
                Categories = new SelectList(categoryRepository.All(), "CategoryId", "CategoryName", ViewDefinition.CategoryId);
            }
        }

        // page view which inherits DefinitionFormViewModel
        <div class="editor-field">
            <%= Html.DropDownList("Category", Model.Categories) %>
            <%= Html.ValidationMessageFor(model => Model.ViewDefinition.Category) %>
        </div>

        // controller methods
        [Authorize]
        public ActionResult Create()
        {
            Definition definition = new Definition();
            return View(new DefinitionFormViewModel(definition));
        }

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Create(int id, Definition definition)
        {
            Term term = termRepository.SingleOrDefault(t => t.TermId == id);
            if (term == null)
            {
                return View("NotFound", new NotFoundModel("Termo não encontrado", "Termo não encontrado",
                    "Nos desculpe, mas não conseguimos encontrar o termo solicitado.", "Indíce de Termos", "Index", "Term"));
            }
            else
            {
                if (ModelState.IsValid)
                {
                    try
                    {
                        definition.TermId = term.TermId;
                        definition.ResponsibleUser = User.Identity.Name;
                        UpdateModel(definition);
                        term.Definitions.Add(definition);
                        termRepository.SaveAll();
                        return RedirectToAction("Details", "Term", new { id = term.TermId });
                    }
                    catch (System.Data.SqlClient.SqlException sqlEx)
                    {
                        ModelState.AddModelError("DatabaseError", "Houve um erro na inserção desta nova definição");
                    }
                    catch
                    {
                        foreach (RuleViolation rv in definition.GetRuleViolations())
                        {
                            ModelState.AddModelError(rv.PropertyName, rv.ErrorMessage);
                        }
                    }
                }
            }
            return View(new DefinitionFormViewModel(definition));
        }

    I'm sorry about the long post, but I can't figure this out. I get no graphical errors or exceptions; my execution terminates in if (ModelState.IsValid). Thanks for your time. George

  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to set something up for a movie store website (using ASP.NET, EF4, SQL Server 2008). In my scenario, I want to allow a "Member" store to import their catalog of movies stored in a text file containing ActorName, MovieTitle, and CatalogNumber, as follows:

        Actor, Movie, CatalogNumber
        John Wayne, True Grit, 4577-12
        (repeated for each record)

    This data will be used to look up an actor and movie, and create a "MemberMovie" record. My import speed is terrible if I import more than 100 or so records using these tables:

    - Actor table: Fields = {ID, Name, etc.}
    - Movie table: Fields = {ID, Title, ActorID, etc.}
    - MemberMovie table: Fields = {ID, CatalogNumber, MovieID, etc.}

    My methodology for importing data into the MemberMovie table from a text file is as follows (after the file has been uploaded successfully):

    1. Create a context.
    2. For each line in the file, look up the actor in the Actor table.
    3. For each Movie of that actor, look up the matching title.
    4. If a matching Movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges().

    The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and I've got something that times out the browser.

    My question is this: What is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once, rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure?

    A snippet of my loop is roughly this (edited for brevity):

        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);
            string Actor = token[0];
            string Movie = token[1];
            string CatNumber = token[2];

            Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(actor)).FirstOrDefault();
            if (found_actor == null)
                continue;

            Movie found_movie = found_actor.Movies.Where(
                s => s.Title.Equals(title, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault();
            if (found_movie == null)
                continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = CatNumber,
                Movie = found_movie
            });

            try
            {
                ctx.SaveChanges();
            }
            catch
            {
            }
        }

    Any help is appreciated! Thanks, Dennis
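
    The usual cure for per-row database lookups is to load the lookup data into an in-memory index once, resolve every line against dictionaries, and write a single batch at the end. Here is that shape sketched in Python (the table names mirror the question; everything else, including the index layout, is illustrative):

        import csv

        def import_catalog(lines, actors, movies, profile_id):
            # actors: {name: actor_id}, built from ONE query over the Actor table
            # movies: {(actor_id, title.lower()): movie_id}, also built once
            batch = []
            for actor_name, title, catalog_no in csv.reader(lines):
                actor_id = actors.get(actor_name.strip())
                if actor_id is None:
                    continue
                movie_id = movies.get((actor_id, title.strip().lower()))
                if movie_id is None:
                    continue
                batch.append((profile_id, catalog_no.strip(), movie_id))
            return batch  # insert in one transaction / one SaveChanges-equivalent

        # usage sketch (hypothetical db handle and prebuilt indexes):
        # batch = import_catalog(open('catalog.txt'), actor_index, movie_index, 42)
        # db.executemany('INSERT INTO MemberMovie (...) VALUES (?, ?, ?)', batch)

    Two round-trips to build the indexes plus one batched insert replaces thousands of query-plus-save round-trips, which is typically where imports like this lose their time.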

  • Need Help with Consolidating RoR Google Map Results

    - by Kevin
    I have a project that returns geocoded results within 20 miles of the user. I want these results grouped on the map by zip code, and then, within the info window, to show the individual results. The code posted below works, but for some reason it only displays the 1.png icon rather than looking at the results and using the correct .png icon associated with the number. When I look at the info windows, they display the correct png name, like "/images/2.png" or "/images/5.png", but the actual image is always 1.

        @ziptickets = Ticket.find(:all, :origin => coords,
                                  :select => 'DISTINCT zip, lat, lng',
                                  :within => @user.distance_to_travel,
                                  :conditions => "status_id = 1")
        for t in @ziptickets
          zips = Ticket.find(:all, :conditions => ["zip = ?", t.zip])
          currentzip = t.zip.to_s
          tixinzip = zips.size.to_s
          imagelocation = "/images/" + tixinzip + ".png"
          shadowlocation = "/images/" + tixinzip + "s.png"
          @map.icon_global_init(GIcon.new(:image => imagelocation,
                                          :shadow => shadowlocation,
                                          :shadow_size => GSize.new(60,40),
                                          :icon_anchor => GPoint.new(20,20),
                                          :info_window_anchor => GPoint.new(9,2)), "test")
          newicon = Variable.new("test")
          new_marker = GMarker.new([t.lat, t.lng], :icon => newicon,
                                   :title => imagelocation,
                                   :info_window => currentzip)
          @map.overlay_init(new_marker)
        end

    I tried changing the last part of the map icon from:

        :info_window_anchor => GPoint.new(9,2)), "test")
        newicon = Variable.new("test")

    to:

        :info_window_anchor => GPoint.new(9,2)), currentzip)
        newicon = Variable.new(currentzip)

    but the strangest thing is that any string that has numbers in it causes the map to fail to render in the view and just show a blank screen... the same happens if I replace it with:

        :info_window_anchor => GPoint.new(9,2)), "123")
        newicon = Variable.new("123")

    Any advice would be helpful... the code also runs a bit slower than my previous version, which just set up 4 standard icons and used them outside of the loop, so any hints to speed up execution would be appreciated greatly. Thanks!

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application. I have four choices that I am considering as the language:

    a) Java
    b) C++
    c) C#
    d) Python

    Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data-mining methods, including feature-selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).

    Speed is a concern, but I believe that Java can handle the load; I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on the list is that I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are equal. Python is on the list because I've read claims that with JIT compiling (PyPy, Psyco) Python can be optimized to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk.

    To sum up:

    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond- or even second-based trades. The only consideration is whether Java can possibly deal with this kind of load when spread over the necessary number of EC2 servers. Thank you guys so much for your wisdom.

  • Parameters not being passed into template when using the .Net transform classes

    - by Chris F
    I am using the .NET XslCompiledTransform to run some simple XSLT (see below for a simplified example). The example XSLT simply shows the value of the parameter that is passed in to the template. The output is what I expect it to be, i.e.

        <result xmlns:p1="http://www.doesnotexist.com">
          <valueOfParamA>valueA</valueOfParamA>
        </result>

    when I use Saxon 9.0, but when I use XslCompiledTransform (XslTransform) in .NET I get

        <result xmlns:p1="http://www.doesnotexist.com">
          <valueOfParamA></valueOfParamA>
        </result>

    The problem is that the value of paramA is not being passed into the template when I use the .NET classes. I'm completely stumped as to why. When I step through in Visual Studio, the debugger says that the template will be called with paramA='valueA', but when execution switches to the template, the value of paramA is blank. Can anyone explain why this is happening? Is this a bug in the MS implementation, or (more likely) am I doing something that is forbidden in XSLT? Any help greatly appreciated. This is the XSLT that I am using:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:extfn="http://exslt.org/common"
                        exclude-result-prefixes="extfn"
                        xmlns:p1="http://www.doesnotexist.com">
          <!-- Replace msxml with xmlns:extfn="http://exslt.org/common"
               xmlns:extfn="urn:schemas-microsoft-com:xslt" -->
          <xsl:output method="xml" indent="yes"/>

          <xsl:template match="/">
            <xsl:variable name="resultTreeFragment">
              <p1:foo>
              </p1:foo>
            </xsl:variable>
            <xsl:variable name="nodeset" select="extfn:node-set($resultTreeFragment)"/>
            <result>
              <xsl:apply-templates select="$nodeset" mode="AParticularMode">
                <xsl:with-param name="paramA" select="'valueA'"/>
              </xsl:apply-templates>
            </result>
          </xsl:template>

          <xsl:template match="@* | node()" mode="AParticularMode">
            <xsl:param name="paramA"/>
            <valueOfParamA>
              <xsl:value-of select="$paramA"/>
            </valueOfParamA>
          </xsl:template>
        </xsl:stylesheet>

  • ArithmeticException thrown during BigDecimal.divide

    - by polygenelubricants
    I thought java.math.BigDecimal was supposed to be The Answer™ to the need for performing infinite-precision arithmetic with decimal numbers. Consider the following snippet:

        import java.math.BigDecimal;
        //...
        final BigDecimal one = BigDecimal.ONE;
        final BigDecimal three = BigDecimal.valueOf(3);
        final BigDecimal third = one.divide(three);
        assert third.multiply(three).equals(one); // this should pass, right?

    I expected the assert to pass, but in fact the execution doesn't even get there: one.divide(three) causes an ArithmeticException to be thrown!

        Exception in thread "main" java.lang.ArithmeticException:
        Non-terminating decimal expansion; no exact representable decimal result.
            at java.math.BigDecimal.divide

    It turns out that this behavior is explicitly documented in the API:

        In the case of divide, the exact quotient could have an infinitely long
        decimal expansion; for example, 1 divided by 3. If the quotient has a
        non-terminating decimal expansion and the operation is specified to
        return an exact result, an ArithmeticException is thrown. Otherwise,
        the exact result of the division is returned, as done for other
        operations.

    Browsing the API further, one finds that there are in fact various overloads of divide that perform inexact division, i.e.:

        final BigDecimal third = one.divide(three, 33, RoundingMode.DOWN);
        System.out.println(three.multiply(third)); // prints "0.999999999999999999999999999999999"

    Of course, the obvious question now is "What's the point???". I thought BigDecimal was the solution when we need exact arithmetic, e.g. for financial calculations. If we can't even divide exactly, then how useful can this be? Does it actually serve a general purpose, or is it only useful in a very niche application where you fortunately just don't need to divide at all?

    If this is not the right answer, what CAN we use for exact division in financial calculation? (I mean, I don't have a finance major, but they still use division, right???)
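
    For what it's worth, the classic way to get exact division is to step outside decimal expansions entirely and use rational numbers, where 1/3 is representable exactly. A quick illustration of the concept using Python's fractions module (the JDK standard library has no built-in rational type, so this is only a cross-language sketch of the idea, not a BigDecimal replacement):

        from fractions import Fraction

        one = Fraction(1)
        three = Fraction(3)

        third = one / three          # exactly 1/3: no rounding, no exception
        print(third)                 # 1/3
        print(third * three == one)  # True: the round-trip is exact

        # rounding happens only at the edges, when converting for display:
        print(round(float(third), 2))  # 0.33

    The trade-off is that rationals defer rounding rather than eliminate it; a financial system still has to pick a rounding rule at the point where values leave the exact domain.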
