Search Results

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just among the few I'm involved with, quite a few hold 10 billion records, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. Our thought was to write a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL Server over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: security; concurrent access; performance with large amounts of data; the amount of time such a massive rewrite/switch would take; lack of transactions; and the pain of mapping relational data to flat files. I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
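
    For the demo-code part, a minimal sketch of the kind of timing harness described above, in C# (the post's stack); the connection string, UNC path, and Records table are invented stand-ins, not anything from the post:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSqlDemo
        {
            // Hypothetical connection string and UNC path; substitute the real environment.
            const string ConnStr = @"Server=myServer;Database=Demo;Integrated Security=true";
            const string FilePath = @"\\fileserver\share\records.txt";

            static void Main()
            {
                Console.WriteLine("Flat file lookup: {0} ms", Time(() => FlatFileLookup("key12345")));
                Console.WriteLine("SQL lookup:       {0} ms", Time(() => SqlLookup("key12345")));
            }

            static long Time(Action action)
            {
                Stopwatch sw = Stopwatch.StartNew();
                action();
                return sw.ElapsedMilliseconds;
            }

            // A flat file has no index: every lookup is a full scan across the network.
            static string FlatFileLookup(string key)
            {
                using (StreamReader reader = new StreamReader(FilePath))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        if (line.StartsWith(key + ","))
                            return line;
                }
                return null;
            }

            // SQL Server can answer the same question with an index seek.
            static string SqlLookup(string key)
            {
                using (SqlConnection conn = new SqlConnection(ConnStr))
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT Value FROM Records WHERE [Key] = @k", conn))
                {
                    cmd.Parameters.AddWithValue("@k", key);
                    conn.Open();
                    return (string)cmd.ExecuteScalar();
                }
            }
        }

    The same Stopwatch wrapper extends naturally to bulk inserts, updates, and concurrent writers, which are the cases where flat files degrade worst, since the whole file typically has to be locked or rewritten in place.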

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been having is that load times will randomly blow up to 30+ seconds and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it probably happens about once every 15 minutes or so. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case: process recycling is set to happen very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection; the slowdown happens even when simply moving between static pages. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller, and the entry and exit times for a controller action are always appropriately fast. Does anyone have any ideas what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

    Read the article

  • Injecting a runtime exception into a pthread sometimes fails. How can I fix that?

    - by lionbest
    I try to inject an exception into a thread using signals, but sometimes the exception does not get caught. For example, take the following code:

        void _sigthrow(int sig) { throw runtime_error(strsignal(sig)); }
        struct sigaction sigthrow = {{&_sigthrow}};

        void* thread1(void*) {
            sigaction(SIGINT, &sigthrow, NULL);
            try {
                while(1) usleep(1);
            } catch(exception &e) {
                cerr << "Thread1 catched " << e.what() << endl;
            }
        };

        void* thread2(void*) {
            sigaction(SIGINT, &sigthrow, NULL);
            try {
                while(1);
            } catch(exception &e) {
                cerr << "Thread2 catched " << e.what() << endl; //never goes here
            }
        };

    If I execute it like this:

        int main() {
            pthread_t p1, p2;
            pthread_create( &p1, NULL, &thread1, NULL );
            pthread_create( &p2, NULL, &thread2, NULL );
            sleep(1);
            pthread_kill( p1, SIGINT);
            pthread_kill( p2, SIGINT);
            sleep(1);
            return EXIT_SUCCESS;
        }

    I get the following output:

        Thread1 catched Interrupt
        terminate called after throwing an instance of 'std::runtime_error'
          what():  Interrupt
        Aborted

    How can I make the second thread catch the exception? Is there a better idea for injecting exceptions?

    Read the article

  • Null Pointer Exception in Primavera P6 8.1

    - by gwrichard
    I am getting a null pointer exception in a Primavera P6 8.1 installation. The exception only occurs in one section of the web client: Application Settings. P6 Web and the P6 API are deployed on the same WebLogic (10.3.5) node running on a Windows Server 2008 R2 installation. I have done this installation using this same software stack a dozen times and don't have this issue on any of the other installs. Exact error below:

        Match: beginTraversal
        Match: digest selected JREDesc: JREDesc[version 1.6.0_20+, heap=-1--1, args=null, href=http://java.sun.com/products/autodl/j2se, sel=false, null, null], JREInfo: JREInfo for index 0:
            platform is: 1.7
            product is: 1.7.0_17
            location is: http://java.sun.com/products/autodl/j2se
            path is: C:\Program Files (x86)\Java\jre7\bin\javaw.exe
            args is: null
            native platform is: Windows, x86 [ x86, 32bit ]
            JavaFX runtime is: JavaFX 2.2.7 found at C:\Program Files (x86)\Java\jre7\
            enabled is: true
            registered is: true
            system is: true
        Match: ignoring maxHeap: -1
        Match: ignoring InitHeap: -1
        Match: digesting vmargs: null
        Match: digested vmargs: [JVMParameters: isSecure: true, args: ]
        Match: JVM args after accumulation: [JVMParameters: isSecure: true, args: ]
        Match: digest LaunchDesc: http://localhost:7001/p6/action/jnlp/appletsjnlp.jnlp?mainClass=com.primavera.pvapplets.adminpreferences.AdminPreferencesApplet&classPath=adminpreferences.jar,prm-applets-common.jar,forms-1.0.7.jar,prm-guisupport.jar,prm-to.jar,jide.jar,tablesupport.jar,formsupport.jar,applets-bo.jar,commons-lang.jar,prm-common.jar,resource_strings.jar,prm-img.jar,commons-logging.jar&name=AdminPreferences&version=8.1.2.0.0602
        Match: digest properties: []
        Match: JVM args: [JVMParameters: isSecure: true, args: ]
        Match: endTraversal ..
        Match: JVM args final:
        Match: Running JREInfo Version match: 1.7.0.17 == 1.7.0.17
        Match: Running JVM args match: have:<> satisfy want:<>
        Java Plug-in 10.17.2.02
        Using JRE version 1.7.0_17-b02 Java HotSpot(TM) Client VM
        User home directory = C:\Users\gwrichard
        ----------------------------------------------------
        c:   clear console window
        f:   finalize objects on finalization queue
        g:   garbage collect
        h:   display this help message
        l:   dump classloader list
        m:   print memory usage
        o:   trigger logging
        q:   hide console
        r:   reload policy configuration
        s:   dump system and deployment properties
        t:   dump thread list
        v:   dump thread stack
        x:   clear classloader cache
        0-5: set trace level to <n>
        ----------------------------------------------------

    Read the article

  • Unknown and unreproducible crash causes App Store rejection

    - by Daniel Johnson
    After submitting our application several times, we continue to receive the following response: "Thank you for submitting My App to the App Store. We've reviewed your application and determined that we cannot post this version of your iPad application to the App Store because My App is crashing on iPad running iPhone OS 3.2 and Mac OS X 10.6.2. My App crashes upon launch." Unfortunately, crash logs have not been generated. However, re-signing the same build with the Ad Hoc entitlements and loading it onto the device yields no such crash. After a number of attempts, the application simply does not crash as the reviewer reported. Furthermore, the reviewer does not provide any useful logging that may have been generated by SpringBoard, such as an exit status, or even whether it had worked properly on any other device. There are no calls to explicitly exit or quit the application in the code line, and yet the application terminates on startup. What might cause an application to terminate in such a manner? Under what conditions is an application tested that might not arise in a development environment? Could it be the result of a signing issue that the submission validation system is simply unable to catch? Thanks in advance.

    Read the article

  • Is there a standard pattern for scanning a job table and executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If, after reading the question, you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action, and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously, or threading, or whatever. (Just a specific technical thought: I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in some external program code (e.g. a standalone executable, or some action triggered by a web page) which is executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
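
    Since the post says C#/SQL Server is preferred, here is one common sketch of the claim-then-process part: a single UPDATE both flags a row as in progress and returns its key, so the claim is atomic and two workers can never take the same item. The Jobs table and column names are invented for illustration:

        using System;
        using System.Data.SqlClient;

        class JobWorker
        {
            // Hypothetical connection string and schema (Jobs: ID, Completed, InProgress, TimeCompleted).
            const string ConnStr = @"Server=myServer;Database=Demo;Integrated Security=true";

            // Atomically claim one unclaimed row and return its ID, or null if none are left.
            // READPAST lets concurrent workers skip rows another worker has locked.
            static int? ClaimNextJob()
            {
                const string sql =
                    @"UPDATE TOP (1) Jobs WITH (ROWLOCK, READPAST)
                      SET InProgress = 1
                      OUTPUT inserted.ID
                      WHERE Completed = 0 AND InProgress = 0";
                using (SqlConnection conn = new SqlConnection(ConnStr))
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    object id = cmd.ExecuteScalar(); // null when no row qualified
                    return id == null ? (int?)null : (int)id;
                }
            }

            static void MarkCompleted(int id)
            {
                using (SqlConnection conn = new SqlConnection(ConnStr))
                using (SqlCommand cmd = new SqlCommand(
                    "UPDATE Jobs SET Completed = 1, InProgress = 0, TimeCompleted = GETDATE() WHERE ID = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Because the claim is a single statement, it needs no explicit transaction; several worker processes can poll ClaimNextJob in a loop, which covers the scale-out requirement, and a separate sweep can reset InProgress rows whose worker died.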

    Read the article

  • Generation of random numbers in Java

    - by S.PRATHIBA
    Hi all, I want to create 30 tables, each consisting of the following fields. For example:

        Service_ID    Service_Type    consumer_feedback
        75            Computing       1
        35            Printer         0
        33            Printer         -1
        3 rows in set (0.00 sec)

        mysql> select * from consumer2;
        Service_ID    Service_Type    consumer_feedback
        42            data            0
        75            computing       0

        mysql> select * from consumer3;
        Service_ID    Service_Type    consumer_feedback
        43            data            -1
        41            data            1
        72            computing       -1

    As you can infer from the above tables, I am getting the feedback values. I have generated these consumer_feedback values, Service_ID and Service_Type using random numbers. I have used the following:

        int min1 = 31; // printer
        int max1 = 35; // these values are generated if the Service_Type is printer
        int provider1 = (int) (Math.random() * (max1 - min1 + 1)) + min1;

        int min2 = 41; // data
        int max2 = 45;
        int provider2 = (int) (Math.random() * (max2 - min2 + 1)) + min2;

        int min3 = 71; // computing
        int max3 = 75;
        int provider3 = (int) (Math.random() * (max3 - min3 + 1)) + min3;

        int min5 = -1; // feedback values
        int max5 = 1;
        int feedback = (int) (Math.random() * (max5 - min5 + 1)) + min5;

    I need the Service_Types to be distributed uniformly across all 30 tables. Similarly, I need the feedback value 1 to be generated more often than 0 and -1. Please help me.

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS on a local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    - Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • How to select the first property with an unknown name from JSON, and how to select the first item from an array

    - by Oscar Godson
    I actually have two questions; both are probably simple, but for some odd reason I can't figure them out... and I've worked with JSON hundreds of times before! Here is the JSON in question:

        {"69256":{"streaminfo":{"stream_ID":"1025","sourceowner_ID":"2","sourceowner_avatar":"http:\/\/content.nozzlmedia.com\/images\/sourceowner_avatar2.jpg","sourceownertype_ID":"1","stream_name":"Twitter","streamtype":"Social media","appsarray":[]},"item":{"headline":"Charboy","main_image":"http:\/\/content.nozzlmedia.com\/images\/author_avatar173212.jpg","summary":"ate a tomato and avocado for dinner...","nozzl_captured":"2010-05-12 23:02:12","geoarray":[{"state":"OR","county":"Multnomah","city":"Portland","neighborhood":"Downtown","zip":"97205","street":"462 SW 11th Ave","latitude":"45.5219","longitude":"-122.682"}],"full_content":"ate a tomato and avocado for dinner tonight. such tasty foods. just enjoyable.","body_text":"ate a tomato and avocado for dinner tonight. such tasty foods. just enjoyable.","author_name":"Charboy","author_avatar":"http:\/\/content.nozzlmedia.com\/images\/author_avatar173212.jpg","fulltext_url":"http:\/\/twitter.com\/charboy\/statuses\/13889868936","leftovers":{"twitter_id":"tag:search.twitter.com,2005:13889868936","date":"2010-05-13T02:59:59Z","location":"iPhone: 45.521866,-122.682262"},"wordarray":{"0":"ate","1":"tomato","2":"avocado","3":"dinner","4":"tonight","5":"tasty","6":"foods","7":"just","8":"enjoyable","9":"Charboy","11":"Twitter","13":"state:OR","14":"county:Multnomah, OR","15":"city:Portland, OR","16":"neighborhood:Downtown","17":"zip:97205"}}}}

    Question 1: How do I loop through each item (69256) when the number is random? E.g. item 1 is 123, item 2 is 646. A normal JSON feed would have something like:

        {'item':{'blah':'lorem'},'item':{'blah':'ipsum'}}

    and the JS would be like console.log(item.blah) to return lorem then ipsum in a loop. How do I do it when I don't know the first key of the object?

    Question 2: How do I select items from the geoarray object? I tried:

        json.test.item.geoarray.latitude
        json.test.item.geoarray['latitude']

    Read the article

  • iPhone: detect "touch-and-drag" gesture from UIBarButtonItem?

    - by Greg Maletic
    I have an "add" button that's represented by a UIBarButtonItem. Hitting the "add" button adds an object into a list that represents a moment in time. By default, that time is "now"...but I'd like to be able to use dragging behavior to let the user specify earlier times for the object. Here's the behavior I want to implement: If the user touches on the UIBarButtonItem and lets go quickly, an object is added to the list that represents "now." If the user touches on the UIBarButtonItem and drags, a little UI pops up that shows the time that the distance of their drag represents. The further they drag, the further back in time their touch will represent. When they let go, the object representing an earlier time will get added to the list. (Though the description of the behavior is complicated, I'm convinced this will be pretty intuitive for users of the app.) I haven't implemented code for anything but the most simple touches in the past, and I'm at a loss as to the best way to try this. Does anyone have any suggestions, or could point me towards some sample code that implements something like this? Thanks very much.

    Read the article

  • Multiple UIWebViewNavigationTypeBackForward events are not only false but make inferring the actual URL impossible

    - by SG1
    I have a UIWebView and a UITextField for the URL. Naturally, I want the text field to always show the current document URL. This works fine for URLs typed directly into the field, but I also have some buttons attached to the view for reload, back, and forward. So I've added all the UIWebViewDelegate methods to my controller, so it can listen to whenever the webView navigates and change the URL in the text field as needed. Here's how I'm using the shouldStartLoadWithRequest: method:

        - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
        {
            NSLog(@"navigated via %d", navigationType);
            //loads the user cares about
            if ( navigationType == UIWebViewNavigationTypeLinkClicked ||
                 navigationType == UIWebViewNavigationTypeBackForward ) {
                //URL setting
                [self setUrlQuietly:request.URL];
            }
            return YES;
        }

    Now, my problem here is that an actual click will generate a single navigation of type "LinkClicked" followed by a dozen of type "Other" (redirects and ad loads, I assume), which gets handled correctly by the code, but a back/forward action will generate all of its requests as back/forward requests. In other words, a click calls setUrlQuietly: once, but a back/forward calls it multiple times. I am trying to use this method to determine whether the user actually initiated the action (and I'd like to catch page redirects too). But if the method has no way of distinguishing between an actual "back" and a "load initiated as a result of a back", how can I make this assessment? Without this, I am completely stumped as to how I can show only the actual URL and not the intermediate URLs. Thank you!

    Read the article

  • Why is my program printing out the null termination character?

    - by Tyler Pfaff
    When I run this, it will SOMETIMES print out a null termination character. Most of the time it will, and probably one time in five it will print just the characters. (Parts of the code were eaten by the markup; the gaps are marked with /* ... */.)

        void cryptogram::Encrypt(){
            cout<<"encrypt"/* ... */;
            while(/* ... */>>tempS){
                len=tempS.length();
                int a=0;
                for(int j=0;j<len;j++){
                    /* ... */
                    if(j!=len){ //if the word still has more characters
                        j++;
                        a=0;
                    }else{ //if the word is done being scanned
                        cout<<" ";
                    }
                }
            }
        }

    So that's it, and this is the corresponding EXPECTED output that is printed SOMETIMES:

        xvk bkikhxlr wggbtfkj wiylekgbdhx wjjm hko wigbtubxt xvk iwhj uedjkm glctb gvrmdiwhj iebbdielmeggtbx ctb xvtmk gbtubxvk wjjdxdthgbtubodll khvxvk imkbfdik xt xvk bkudth whj gbtfdjk hko tgxdthm whj tggtbxehdxdkm ctb mxejkhxmibdzdhtltur whj pemxdik mxejdkm mxdh cok wbk wlmt gbkgctb cteb hko zdh cgvrmdikjeiwhj qdhkmdtlturzzkjdydtivkzdmxbrw zdh zdjjlkkjeiwhj w jtixtbdh kjeiwjzdhdmxbittgkbodxv mjme whj eimj

    This is what normally prints, though:

        xvkÈ bkikhxlrÈ wggbtfkjÈ wiylekgbdhxÈ wjjmÈ hkoÈ wigbtubxtÈ xvkÈ iwhjÈ uedjkmÈ glctbÈ gvrmdiwhjÈ iebbdielmeggtbxÈ ctbÈ xvtmkÈ gbtubxvkÈ wjjdxdthgbtubodllÈ khvxvkÈ imkbfdikÈ xtÈ xvkÈ bkudthÈ whjÈ gbtfdjkÈ hkoÈ tgxdthmÈ whjÈ tggtbxehdxdkmÈ ctbÈ mxejkhxmibdzdhtlturÈ whjÈ pemxdikÈ mxejdkmÈ mxdhÈ cokÈ wbkÈ wlmtÈ gbkgctbÈ ctebÈ hkoÈ zdhÈ cgvrmdikjeiwhjÈ qdhkmdtlturzzkjdydtivkzdmxbrwÈ zdhÈ zdjjlkkjeiwhjÈ wÈ jtixtbdhÈ kjeiwjzdhdmxbittgkbodxvÈ mjmeÈ whjÈ eimj

    or some variation of an odd character at the end of each word. This is what the cryptogram array is filled with, by the way: wyijkcuvdpqlzhtgabmxefonrs

    Read the article

  • Keyword highlighting in <textarea> (again)

    - by Halst
    Wait, I know! I know this "syntax highlighting in a textarea" question has been raised a million times on Stack Overflow! But please, listen. Off-topic: I'm not a web developer, and technically I'm not a programmer at all. I study mechatronics and deal mostly with control engineering and digital hardware. And I'm so pissed off that whenever I want to share some application (that would be helpful in my field) and embed it in the web, I need to know a crazy number of technologies: HTML, CSS, JavaScript, Flash, etc. That takes time which I could have spent for the benefit of my own field. Right now I'm playing with hardware description languages, and I'm writing some Python libraries to convert one HDL into another. I wanted to embed such a feature on the web: http://xhdl2vhdl.appspot.com/ I wanted to implement some basic syntax highlighting (keyword highlighting alone would be enough) so that the code is readable, but the whole idea of highlighting something in a textarea is not trivial at all. The other difficulty is that the languages I work with are rare, and there are no out-of-the-box solutions for them. I tried to dig into these solutions, but they are too complicated for me: http://www.nicolarizzo.com/gamesroom/experimental/CodeEditor.html and http://marijn.haverbeke.nl/codemirror/jstest.html and there are no clear descriptions of how to use them (for my level of knowledge of web development). So, is there a simple solution, just to highlight a bunch of keywords in a textarea, or to do something equivalent? Thank you.

    Read the article

  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the time consumer, as it can happen billions of times per run. So I thought I'd learn enough C++ to write it and then incorporate it into Python using Cython or ctypes. I've done some searching on Stack Overflow and learned a lot. This code compiles, but it leaves the pointers dangling. I tried 'this_node = new Node(...'. It didn't seem to work. And I'm not at all sure how I'd delete all the references, as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way?

        class Node
        {
            public:
                char *cargo;
                int depth;
                Node *left;
                Node *right;
        };

        Node make_tree(int depth)
        {
            depth--;
            if(depth <= 0)
            {
                Node tthis_node("value", depth, NULL, NULL);
                return tthis_node;
            }
            else
            {
                Node this_node("operator", depth, &make_tree(depth), &make_tree(depth));
                return this_node;
            }
        };

    Read the article

  • XSLT: generate multiple objects by incrementing an attribute and a value

    - by Daniel
    Hi, I have the XML below, which I'd like to copy n times while incrementing one of its elements and one of its attributes. XML input:

        <Person position="1">
            <name>John</name>
            <number>1</number>
        </Person>

    and I'd like something like the output below, with the number of increments as a variable. XML output:

        <Person position="1">
            <name>John</name>
            <number>1</number>
        </Person>
        <Person position="2">
            <name>John</name>
            <number>2</number>
        </Person>
        ....
        <Person position="n">
            <name>John</name>
            <number>n</number>
        </Person>

    Any clue?

    Read the article

  • named_scope + average is causing the table to be specified more than once in the SQL query run on Postgres

    - by hadees
    I have a named scope like so...

        named_scope :gender, lambda { |gender|
          { :joins => { :survey_session => :profile },
            :conditions => { :survey_sessions => { :profiles => { :gender => gender } } } }
        }

    and when I call it everything works fine. I also have this average method I call...

        Answer.average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    which also works fine if I call it like that. However, if I were to call it like so...

        Answer.gender('m').average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    I get...

        ActiveRecord::StatementInvalid: PGError: ERROR: table name "profiles" specified more than once
        : SELECT avg("answers".rating) AS avg_rating, profiles.career AS profiles_career FROM "answers"
          LEFT OUTER JOIN "survey_sessions" survey_sessions_answers ON "survey_sessions_answers".id = "answers".survey_session_id
          LEFT OUTER JOIN "profiles" ON "profiles".id = "survey_sessions_answers".profile_id
          INNER JOIN "survey_sessions" ON "survey_sessions".id = "answers".survey_session_id
          INNER JOIN "profiles" ON "profiles".id = "survey_sessions".profile_id
          WHERE ("profiles"."gender" = E'm')
          GROUP BY profiles.career

    which is a little hard to read, but says I'm including the profiles table twice. If I were to just remove the include from average it works, but that isn't really practical, because average is actually being called inside a method which gets passed the scope. So sometimes gender or average might get called without each other, and if either was missing the profile include it wouldn't work. So either I need to know how to fix this apparent bug in Rails, or I need to figure out a way to know what scopes were applied to an ActiveRecord::NamedScope::Scope object, so that I could check whether they have been applied and, if not, add the include for average.

    Read the article

  • MySQL bindings for Rails 2.3.5 on Mac OS X 10.5.8

    - by lach
    I have a Rails environment which I set up with MacPorts. I recently updated MacPorts, which seems to have had the side effect of breaking Rails. When I try to boot a Rails server I get:

        $ ./script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
        /opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin9/mysql.bundle: dlopen(/opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin9/mysql.bundle, 9): Library not loaded: /opt/local/lib/mysql5/mysql/libmysqlclient.15.dylib (LoadError)
            Referenced from: /opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin9/mysql.bundle
            Reason: image not found - /opt/local/lib/ruby/vendor_ruby/1.8/i686-darwin9/mysql.bundle

    I've tried reinstalling the mysql gem many times using various configurations I've found around the web, but nothing seems to help. Also, when I try to use rake I get:

        $ rake db:migrate
        Rails requires RubyGems >= 1.3.1 (you have 1.0.1). Please `gem update --system` and try again.

    Even though:

        $ gem --version
        1.3.6

    What's going on here?

    Read the article

  • Recursion in prepared statements

    - by Rob
    I've been using PDO and preparing all my statements, primarily for security reasons. However, I have a part of my code that executes the same statement many times with different parameters, and I thought this would be where prepared statements really shine. But they actually break the code... The basic logic of the code is this:

        function someFunction($something) {
            global $pdo;
            $array = array();
            static $handle = null;
            if (!$handle) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);
            if ($handle->execute()) {
                while ($row = $handle->fetch()) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }

    It looked fine to me, but it was missing out a lot of rows. Eventually I realised that the statement handle was being changed (executed with a different param), which means the call to fetch in the while loop will only ever work once; then the function calls itself again and the result set is changed. So I am wondering what's the best way of using PDO prepared statements recursively. One way could be to use fetchAll(), but the manual says that has a substantial overhead, and the whole point of this is to make it more efficient. The other thing I could do is not reuse a static handle and instead make a new one every time. I believe that since the query string is the same, internally the MySQL driver will be using a prepared statement anyway, so there is just the small overhead of creating a new handle on each recursive call. Personally I think that defeats the point. Or is there some way of rewriting this?

    Read the article

  • Canvas scroll animation not working correctly

    - by pedalpete
    I'm building a Gantt-chart-style timeline using the HTML canvas element. I am currently adding the functionality that allows the user to click a next/prev button to have the chart scroll to display earlier or later times. The way I am doing this is to have a span.adjustTime whose id holds a value in seconds for the time adjustment (e.g. 86400 for one day). I am trying to animate the scrolling so it looks like a scroll rather than a jump of one day. I have a small problem in my timing calculation, but the bigger issue is that the script below is not animating: it jumps directly to the final time. I do have the draw function running on a separate setInterval which updates every second, so I'm hoping it isn't an issue of conflicting timers on the same function with the same element and data.

        jQuery('span.adjustTime').click(function() {
            var adjustBy = parseInt(jQuery(this).attr('id').replace('a', ''));
            var data = jQuery('img#logo').data();
            for(var m = 1; m >= 30; m++) {
                gantt.startUnixTime = gantt.startUnixTime + (adjustBy * (m * 0.001));
                var moveTimer = setTimeout(function() {
                    draw(document.getElementById('gantt'), data, gantt);
                }, 1000);
                if (m == 30) {
                    clearTimeout(moveTimer);
                }
            }
        });

    Read the article

  • SQL Server query using a function and a view is slower

    - by Lieven Cardoen
    I have a table with an XML column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,...

    In the Data column there's this XML:

        <data>
            <RRN>...</RRN>
            <DateOfBirth>...</DateOfBirth>
            <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. Now, a DB specialist told me that adding a function and a view and changing the query would make searching for a user with a given RRN much faster. But instead, these are the results when I execute with the changes from the DB specialist: 2584, 2342, 2322, 2383, ... This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100)
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view slower?

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> fluent configuration -> XML -> NHibernate Configuration -> Configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I could detect that my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the serialized configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code flags the configuration as out of date. I can't move the ClassMaps to another assembly, as they are tightly integrated with the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
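
    One way to narrow the timestamp check, sketched below as an untested idea rather than a known recipe: fingerprint only the ClassMap types themselves, using their full names plus the IL bytes of their constructors (which is where fluent mapping definitions live), store the fingerprint beside the serialized configuration, and rebuild when it changes. All names here are illustrative:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Cryptography;
        using System.Text;

        static class MappingFingerprint
        {
            // Hashes every ClassMap<T>-derived type in the assembly: type name plus
            // the IL of its constructors, where the fluent mapping calls are compiled.
            public static string Compute(Assembly mappingAssembly)
            {
                StringBuilder sb = new StringBuilder();
                var mapTypes = mappingAssembly.GetTypes()
                    .Where(t => t.BaseType != null && t.BaseType.IsGenericType &&
                                t.BaseType.GetGenericTypeDefinition().Name.StartsWith("ClassMap"))
                    .OrderBy(t => t.FullName);
                foreach (Type t in mapTypes)
                {
                    sb.Append(t.FullName);
                    foreach (ConstructorInfo ctor in t.GetConstructors(
                        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
                    {
                        sb.Append(Convert.ToBase64String(ctor.GetMethodBody().GetILAsByteArray()));
                    }
                }
                using (SHA1 sha = SHA1.Create())
                    return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
            }
        }

    At startup, compare the stored fingerprint with a freshly computed one and deserialize only on a match. Caveats: IL bytes embed metadata tokens that can shift when unrelated code in the same assembly changes, so this reduces false positives rather than eliminating them, and SubclassMap types would need the same treatment; exporting the fluent mappings to XML and hashing that is exact, but it costs a configuration build.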

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request it receives. Each activity involves going to the database and retrieving or updating some kind of information. The logic for each activity can be generalized and reused across different request types. The activities may need to participate in a transaction for the duration of servicing the request. One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates the activity into a standalone, reusable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't really know how to easily implement the concept of a transaction across the multiple activities here either. The second option is to encapsulate ALL the activities into one big activity and hit the database once. But this does not allow the activities to be reused and configured for different requests. Does anyone have any suggestions or input about the best way to approach my problem? Thanks for any help.
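
    For the transaction requirement specifically, if the stack is .NET (the post doesn't say, so this is an assumption), System.Transactions lets each activity keep its own encapsulated database access while an ambient TransactionScope spans all of them. A sketch with invented names:

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        interface IActivity
        {
            void Execute(int requestId);
        }

        // A hypothetical activity: it owns its own data access, and it enlists in the
        // ambient transaction simply by opening its connection inside the scope.
        class ReserveInventoryActivity : IActivity
        {
            public void Execute(int requestId)
            {
                using (SqlConnection conn = new SqlConnection(
                    @"Server=myServer;Database=Demo;Integrated Security=true"))
                using (SqlCommand cmd = new SqlCommand(
                    "UPDATE Inventory SET Reserved = Reserved + 1 WHERE RequestId = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", requestId);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

        class RequestProcessor
        {
            // Runs the activities configured for this request type as one unit of work.
            public void Process(int requestId, IActivity[] configuredActivities)
            {
                using (TransactionScope scope = new TransactionScope())
                {
                    foreach (IActivity activity in configuredActivities)
                        activity.Execute(requestId);
                    scope.Complete(); // commits only if every activity succeeded
                }
            }
        }

    The trade-off the post describes remains: each activity opening its own connection means multiple round trips, and on some SQL Server versions multiple connections inside one scope escalate to a distributed transaction, so opening one connection per request and passing it into the activities is a common middle ground.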

    Read the article

  • Byte-Pairing for data compression

    - by user1669533
    Question about byte pairing for data compression. If byte pairing converts two byte values to a single byte value, splitting the file in half, then halving a gigabyte file four times shrinks it to 62,500,000 bytes. My question is: is byte pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please.

    Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not implying that the Patent Office was actually considering what I am inquiring about; I was merely commenting on the notion of a "mathematically impossible process." If someone has, in some way, created a method of having a "single" data byte act as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8-to-1 compression method: it is not so much a mathematical impossibility as a series of rules and conditions that can be created. As long as there is the rule of 8- or 16-bit representation for storing data on a medium, there are ways to manipulate data that mirror current methods, or to create something by a new way of thinking.
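
    For reference, the counting argument behind the "mathematically impossible" claim is the pigeonhole principle: no lossless scheme can map every two-byte value to a distinct one-byte value, since

        \[ 2^{16} = 65{,}536 \ \text{possible two-byte inputs} \; > \; 2^{8} = 256 \ \text{possible one-byte outputs}. \]

    So at least two different inputs would have to share an output byte, and decompression could not tell them apart. Byte pairing can still pay off on non-random data, where far fewer than 65,536 of the possible pairs actually occur and the frequent ones can be given the short codes.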

    Read the article

  • MySQL query optimization - distinct, order by and limit

    - by Manuel Darveau
    I am trying to optimize the following query:

        select distinct this_.id as y0_
        from Rental this_
            left outer join RentalRequest rentalrequ1_ on this_.id=rentalrequ1_.rental_id
            left outer join RentalSegment rentalsegm2_ on rentalrequ1_.id=rentalsegm2_.rentalRequest_id
        where this_.DTYPE='B'
            and this_.id<=1848978
            and this_.billingStatus=1
            and rentalsegm2_.endDate between 1273631699529 and 1274927699529
        order by rentalsegm2_.id asc
        limit 0, 100;

    This query is run multiple times in a row for paginated processing of records (with a different limit each time). It returns the ids I need in the processing. My problem is that this query takes more than 3 seconds. I have about 2 million rows in each of the three tables. Explain gives:

        | id | select_type | table        | type   | possible_keys                                       | key           | key_len | ref                                        | rows   | Extra                                        |
        |  1 | SIMPLE      | rentalsegm2_ | range  | index_endDate,fk_rentalRequest_id_BikeRentalSegment | index_endDate | 9       | NULL                                       | 449904 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest              | PRIMARY       | 8       | solscsm_main.rentalsegm2_.rentalRequest_id | 1      | Using where                                  |
        |  1 | SIMPLE      | this_        | eq_ref | PRIMARY,index_billingStatus                         | PRIMARY       | 8       | solscsm_main.rentalrequ1_.rental_id        | 1      | Using where                                  |

    I tried removing the distinct and the query ran three times faster. Explain without the distinct gives:

        | id | select_type | table        | type   | possible_keys                                       | key           | key_len | ref                                        | rows   | Extra                       |
        |  1 | SIMPLE      | rentalsegm2_ | range  | index_endDate,fk_rentalRequest_id_BikeRentalSegment | index_endDate | 9       | NULL                                       | 451972 | Using where; Using filesort |
        |  1 | SIMPLE      | rentalrequ1_ | eq_ref | PRIMARY,fk_rental_id_BikeRentalRequest              | PRIMARY       | 8       | solscsm_main.rentalsegm2_.rentalRequest_id | 1      | Using where                 |
        |  1 | SIMPLE      | this_        | eq_ref | PRIMARY,index_billingStatus                         | PRIMARY       | 8       | solscsm_main.rentalrequ1_.rental_id        | 1      | Using where                 |

    As you can see, "Using temporary" is added when using distinct. I already have an index on all the fields used in the where clause. Is there anything I can do to optimize this query? Thank you very much!

    Read the article

  • Changing Apache 2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, hopefully someone can help here. I recently installed WampServer version 2.0 with Apache 2.2.11. My issue is that I have some large PHP scripts which time out at the default 5-minute (300-second) browser limit (I'm using IE8). It is critical that I get this limit extended. I have tried changing the httpd.conf file to include the following:

        TimeOut 1200

    My objective was to set the timeout to 1200 seconds, or 20 minutes. I had just chosen a random location to place this directive within the httpd.conf file, as I cannot locate any documentation suggesting that it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be found in the WampServer system tray menu, but they have no effect: the browser still times out after 5 minutes. I thought perhaps I had the capitalization wrong, so I changed it to:

        Timeout 1200

    This change had no effect either. Can someone please help? This is very frustrating. Maybe the directive can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this. Regards, Adrian.

    Read the article
