Search Results

Search found 74197 results on 2968 pages for 'part time'.

  • Setting timeout for embedded Lua

    - by skyeagle
    I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take a practically infinite time to complete). Basically, I want to be able to set a time interval, and if the script fails to finish running by the end of that interval, I want to be able to kill the Lua script engine (gracefully, if possible). Does anyone know of a best-practice way to do this?
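
    A common approach, shown here as a minimal C sketch (assuming Lua 5.1 or later; run_with_timeout and the 1000-instruction interval are illustrative choices), is to install a count hook that raises an error once a deadline passes. Note that a single long-running C call (including Lua's own string matching) will not be interrupted, so this bounds script loops rather than every conceivable stall:

        #include <lua.h>
        #include <lauxlib.h>
        #include <lualib.h>
        #include <time.h>

        static time_t deadline;

        /* Count hook: fires every N VM instructions; abort once past deadline. */
        static void timeout_hook(lua_State *L, lua_Debug *ar)
        {
            (void)ar;
            if (time(NULL) > deadline)
                luaL_error(L, "script timed out");  /* longjmps out of the script */
        }

        int run_with_timeout(lua_State *L, const char *script, int seconds)
        {
            int status;
            deadline = time(NULL) + seconds;
            /* Check the wall clock every 1000 VM instructions. */
            lua_sethook(L, timeout_hook, LUA_MASKCOUNT, 1000);
            status = luaL_dostring(L, script);   /* the pcall inside catches the error */
            lua_sethook(L, NULL, 0, 0);          /* remove the hook again */
            return status;
        }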

  • Profiling and Graphics.updateDisplay0()

    - by AD
    I am profiling my BlackBerry application (a game) with the options 'In method only' and 'Time including native method'. Under the 'All methods' node (after a decent run of the game), the single most time-consuming function is Graphics.updateDisplay0(), with over 50% of the run time spent there. I don't quite know what the purpose of that function is, or whether I can do anything to reduce its processing time, so if anyone can shed some light on the matter, I would be grateful. From what I understand of the 'In method only' option, it means there is some processing within that function itself. P.S. There is no information on that function in the documentation.

  • Convert a form_tag select_datetime to SQL datetime

    - by Mitchell
    Hi, I am trying to make a simple search form that uses a startTime and endTime to specify a time range. The db has a datetime field, time, that is compared against. So far, when I try to use params[:startTime] in the controller, I get an array of values, which won't work with :conditions => ['time < ?', params[:endTime]]. Is there a simple solution for parsing the form's datetime into an SQL datetime?

  • Problem implementing a WordPress-like row-delete animation in jQuery

    - by Mac Taylor
    Hey guys, I'm trying to implement a technique to delete stories with a jQuery animation, exactly like WordPress does. This is my script part:

        $(function(){
            $('#jqdelete').click(function() {
                $(this).parents('tr.box')
                    .animate({ backgroundColor: '#cb5555' }, 500)
                    .animate({ height: 0, paddingTop: 0, paddingBottom: 0 }, 500, function() {
                        $(this).css({ 'display' : 'none' });
                    });
            });
        });

    and this is my HTML part:

        <tr class='box'>
            <td><a href=\"edit.php&pid=$pid\">$title</font></a></td>
            <td>$eki</td>
            <td><a href='javascript:void(0)' id='jqdelete'><img src=\"images/delete.gif\"></a></td>
        </tr>

    but it's not working. Am I going wrong in any part of my code? Note: if I use div instead of tr and td it works fine, but I want to use this effect on my table rows. Thanks!
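
    For reference, a hedged sketch of one common workaround (not necessarily the only fix): plain jQuery cannot animate backgroundColor without jQuery UI or the color plugin, table rows resist height animation in several browsers, and repeating the same id on every row is invalid HTML. Wrapping each cell's contents and sliding those up, keyed off a class, sidesteps all three:

        // Sketch: assumes class="jqdelete" on the delete link in each row.
        $(function () {
            $('.jqdelete').click(function () {
                $(this).closest('tr.box')
                    .children('td')
                    .animate({ paddingTop: 0, paddingBottom: 0 }, 500)
                    .wrapInner('<div />')       // divs animate reliably, rows do not
                    .children()
                    .slideUp(500, function () {
                        $(this).closest('tr').remove();  // drop the emptied row
                    });
            });
        });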

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of:

    1) Pricing
    2) Latency, resulting from: slow CPUs, instance creation, JIT compiles, etc.; internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process); communication between cloud and end user
    3) Ease of deployment

    The usage scenario I am expecting:

    - A typical user sends a query (XML, around 1K in size) once every 30 seconds on average.
    - Each query requires a numerical computation averaging 0.2 sec, with a max of 1 sec, on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
    - The delay a user experiences between sending a query and receiving the response should be no more than 2 seconds on average, and in general no more than 5 seconds.
    - A background save of the response to a DB should occur (not time critical).
    - There can be up to 30,000 simultaneous users - i.e., on average 1,000 queries a second, each requiring an average 0.2 sec of calculation, which would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system - past threads, different blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

  • How do I find the install directory of a Windows Service, using C#?

    - by endian
    I'm pretty sure that a Windows service gets C:\winnt (or similar) as its working directory when installed using InstallUtil.exe. Is there any way I can access, or otherwise capture (at install time), the directory from which the service was originally installed? At the moment I'm manually entering that into the app.exe.config file, but that's horribly manual and feels like a hack. Is there a programmatic way, either at run time or install time, to determine where the service was installed from?
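
    For reference, a hedged run-time sketch (one common approach, not the only one): the executing assembly's own location does not depend on the working directory the service control manager assigns, so the install directory can be recovered without any config entry. AppDomain.CurrentDomain.BaseDirectory is an equivalent alternative:

        using System.IO;
        using System.Reflection;

        static class InstallLocator
        {
            // Sketch: resolve the directory the service binary actually runs
            // from, regardless of the working directory (e.g. C:\winnt).
            public static string GetInstallDirectory()
            {
                string exePath = Assembly.GetExecutingAssembly().Location;
                return Path.GetDirectoryName(exePath);
            }
        }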

  • Anything wrong with my cURL code (http status of 0)?

    - by Ilya
    Consistently getting a status of 0, even though if I copy and paste the URL into my browser I get a JSON object right back:

        <?php
        $mainUrl = "https://api.xxxx.com/?";
        $co = "xxxxx";
        $pa = "xxxx";
        $par = "xxxx";
        $part = "xxxx";
        $partn = "xxxx";
        $us = "xxx";

        $fields_string;
        $fields = array(
            'co'    => urlencode($co),
            'pa'    => urlencode($pa),
            'par'   => urlencode($par),
            'part'  => urlencode($part),
            'partn' => urlencode($partn),
            'us'    => urlencode($us)
        );

        foreach ($fields as $key => $value) {
            $fields_string .= $key . '=' . $value . '&';
        }
        $fields_string = rtrim($fields_string, "&");
        $fields_string = "?" . $fields_string;

        $url = "https://api.xxxxx.com/" . $fields_string;
        $request = $url;

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, '3');
        $content = trim(curl_exec($ch));
        $http_status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        print $url;
        print $http_status;
        print $content;
        ?>

  • http connection timeout issues

    - by Mark
    I'm running into an issue when I try to use HttpClient to connect to a URL. The HTTP connection is taking a longer time to time out than the value I set, even after I set a connection timeout:

        int timeoutConnection = 5000;
        HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
        int timeoutSocket = 5000;
        HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);

    It works perfectly most of the time. However, every once in a while the HTTP connection runs forever and ignores setConnectionTimeout, especially when the phone is connected to wifi and the phone was idling. So after the phone has been idling, the first time I try to connect, the HTTP connection ignores setConnectionTimeout and runs forever; after I cancel it and try again, it works like a charm every time. But the one time it doesn't work, it creates a thread timeout error. I tried using a different thread, which works, but then I know that thread is running for a long time. I understand that wifi goes to sleep on idle, but I don't understand why it's ignoring setConnectionTimeout. If anyone can help, I'd really appreciate it.
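
    One hedged workaround, sketched below with the Apache HttpClient classes the question already uses (HardTimeoutFetch and the 10-second cap are illustrative assumptions): the connection and socket timeouts only bound certain phases of a request, so running the call under a Future gives a guaranteed overall deadline, and abort() tears down a stuck connection:

        import java.util.concurrent.*;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class HardTimeoutFetch {
            public static HttpResponse fetch(String url, int seconds) throws Exception {
                ExecutorService pool = Executors.newSingleThreadExecutor();
                final DefaultHttpClient client = new DefaultHttpClient();
                final HttpGet get = new HttpGet(url);
                Future<HttpResponse> future = pool.submit(new Callable<HttpResponse>() {
                    public HttpResponse call() throws Exception {
                        return client.execute(get);
                    }
                });
                try {
                    // Hard upper bound on the whole request, handshake included.
                    return future.get(seconds, TimeUnit.SECONDS);
                } catch (TimeoutException e) {
                    get.abort();      // tears down the stuck connection
                    throw e;
                } finally {
                    pool.shutdown();
                }
            }
        }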

  • Selecting row in SSMS causes Entity Framework 4 to Fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (updlock), no transaction, just a plain SELECT). If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting the row, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable. Unit test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1: part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2: SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

  • Why is Swift 100 times slower than C in this image processing test?

    - by xiaobai
    Like many other developers, I have been very excited about the new Swift language from Apple. Apple has boasted that its speed is faster than Objective-C and that it can be used to write operating systems. From what I have learned so far, it's a very type-safe language that allows precise control over exact data types (like integer length). So it does look like it has good potential for handling performance-critical tasks, like image processing, right? That's what I thought before I carried out a quick test. The result really surprised me. Here is a much simplified image alpha-blending code snippet in C:

    test.c:

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        uint8_t pixels[640*480];
        uint8_t alpha[640*480];
        uint8_t blended[640*480];

        void blend(uint8_t* px, uint8_t* al, uint8_t* result, int size)
        {
            for (int i = 0; i < size; i++) {
                result[i] = (uint8_t)(((uint16_t)px[i]) * al[i] / 255);
            }
        }

        int main(void)
        {
            memset(pixels, 128, 640*480);
            memset(alpha, 128, 640*480);
            memset(blended, 255, 640*480);

            // Test 10 frames
            for (int i = 0; i < 10; i++) {
                blend(pixels, alpha, blended, 640*480);
            }
            return 0;
        }

    I compiled it on my MacBook Air 2011 with the following command:

        gcc -O3 test.c -o test

    The 10-frame processing time is about 0.01 s. In other words, it takes the C code 1 ms to process one frame:

        $ time ./test
        real    0m0.010s
        user    0m0.006s
        sys     0m0.003s

    Then I have a Swift version of the same code:

    test.swift:

        let pixels = UInt8[](count: 640*480, repeatedValue: 128)
        let alpha = UInt8[](count: 640*480, repeatedValue: 128)
        let blended = UInt8[](count: 640*480, repeatedValue: 255)

        func blend(px: UInt8[], al: UInt8[], result: UInt8[], size: Int) {
            for (var i = 0; i < size; i++) {
                var b = (UInt16)(px[i]) * (UInt16)(al[i])
                result[i] = (UInt8)(b/255)
            }
        }

        for i in 0..10 {
            blend(pixels, alpha, blended, 640*480)
        }

    The build command line is:

        xcrun swift -O3 test.swift -o test

    Here I use the same O3-level optimization flag to make the comparison hopefully fair. However, the resulting speed is 100 times slower:

        $ time ./test
        real    0m1.172s
        user    0m1.146s
        sys     0m0.006s

    In other words, it takes Swift ~120 ms to process one frame that takes C just 1 ms. I also verified that the memory initialization time in both test programs is very small compared to the time spent in the blend function. What happened?

  • updating a column in a table only if after the update it won't be negative and identifying all updat

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects, as outlined below. I have four relevant tables: Table A has the price per unit for all resources (I can look up the price using a resource id). Table B has the funds available to a given user. Table C has the resource production information for a given user (including the number of units to produce every day). Table D has the number of units ever produced by any given user (identified by user id and resource id). Having said that, I need to run a nightly batch job to do the following: a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A). b. Start the process to produce resources, and after production is complete, update table D using values from table C. I figured the second part can be done with an UPDATE and a subquery. However, I'm not sure how I should go about doing part a. I can only think of using a cursor to fetch each row, examine it, and update. Is there a single SQL statement that will help me avoid processing each row manually? Additionally, if any rows weren't updated, the part b SQL should not produce resources for that user. Basically, I'm attempting to rewrite the SQL behind this logic, currently in a stored procedure, into something that will run a lot faster (and won't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
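
    For part a, most databases can do the check-and-deduct as one set-based statement. A hedged sketch in SQL Server's UPDATE ... FROM dialect, with hypothetical table and column names standing in for tables A-C:

        -- Sketch (hypothetical names): deduct funds only where they suffice,
        -- in one set-based statement instead of a row-by-row cursor.
        UPDATE b
        SET    b.funds = b.funds - c.units_per_day * a.price_per_unit
        FROM   funds AS b
        JOIN   production AS c ON c.user_id = b.user_id
        JOIN   prices AS a     ON a.resource_id = c.resource_id
        WHERE  b.funds >= c.units_per_day * a.price_per_unit;

    Part b can then be restricted to the users this statement actually touched, for example by capturing their ids with an OUTPUT clause (SQL Server) or RETURNING (PostgreSQL).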

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not every file necessarily has/needs a properties row). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow / how to make it faster?
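
    One hedged observation from the plan, with a sketch of the standard first response: 62 seconds to sequentially scan ~800k rows of width 9 is far beyond normal for a table this size, which usually points at dead-tuple bloat (note the planner also estimates 1,140,941 rows where only 805,270 remain) rather than the join itself:

        -- Sketch: refresh statistics and reclaim dead tuples first.
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;
        -- If the bloat is severe, rewrite the table (takes an exclusive lock;
        -- files_pkey is assumed here to be the primary-key index name):
        -- CLUSTER files USING files_pkey;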

  • Stopping long-running requests in Pylons

    - by Jack
    I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
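
    Pylons has no built-in per-request timer, so here is a hedged, generic sketch of one approach: a SIGALRM-based limit around the calculation. It assumes a Unix host and that the call happens in the main thread (signal handlers do not fire in worker threads); with a threaded WSGI server, pushing the calculation into a subprocess and killing that is the safer variant:

        import signal

        class RequestTimeout(Exception):
            pass

        def _raise_timeout(signum, frame):
            raise RequestTimeout("calculation exceeded time limit")

        def run_with_timeout(func, seconds, *args, **kwargs):
            """Run func, aborting with RequestTimeout after `seconds` (Unix only)."""
            old_handler = signal.signal(signal.SIGALRM, _raise_timeout)
            signal.alarm(seconds)              # schedule the alarm
            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)                # cancel any pending alarm
                signal.signal(signal.SIGALRM, old_handler)

        # Usage inside a controller action (sketch; heavy_calculation is hypothetical):
        # try:
        #     result = run_with_timeout(heavy_calculation, 30, params)
        # except RequestTimeout:
        #     abort(503)    # pylons.controllers.util.abort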

  • load balance timeout SQL connection string

    - by george9170
    It seems that if there is a SQL memory leak somewhere and you don't have time to find it, you can use the Load Balance Timeout option in a SQL connection string to destroy the connection after x seconds. Am I right in assuming I can set the Load Balance Timeout to 30-40 seconds and then hunt for the leak later, while in the meantime the leak will not affect my application too much?
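
    For reference, a hedged example of where the keyword goes (server and database names are placeholders). Note that Load Balance Timeout, a synonym for Connection Lifetime in ADO.NET's SqlClient, is enforced when a connection is returned to the pool, not in the middle of a query:

        // Sketch: pooled connections older than 30 seconds are destroyed
        // when released back to the pool.
        var connStr = "Data Source=myServer;Initial Catalog=myDb;" +
                      "Integrated Security=True;Load Balance Timeout=30;";
        using (var conn = new System.Data.SqlClient.SqlConnection(connStr))
        {
            conn.Open();
            // ... run the queries ...
        }   // Dispose returns the connection to the pool; lifetime is checked here.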

  • Write/read count to txt file

    - by Brian
    Hi, I need a batch file that writes a count number to a txt file. The next time the batch file is run, it should read the current count from the txt file, add 1, and save the new value back to the txt file (nothing else is in the file). When the count reaches 5, it should start from 1 again. Example: the 1st time Count.bat runs, count.txt has no count yet, so Count.bat saves the value 1 in count.txt. The 2nd time Count.bat runs, it reads 1 from count.txt and saves the new value 2. When Count.bat is run the 6th time, it should start over by saving the value 1 in count.txt. I think this should be easy to do, but I'm not used to batch commands, so hopefully someone here can help me.
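
    A minimal sketch in batch, assuming count.txt sits next to the script (note the redirection is written before the echo so a single-digit value is not parsed as a stream number):

        @echo off
        setlocal
        set count=0
        if exist count.txt set /p count=<count.txt
        set /a count+=1
        rem Wrap back to 1 after 5
        if %count% gtr 5 set count=1
        >count.txt echo %count%
        endlocal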

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. This is the basics of the original code:

        for i in range(0, 2000):
            s = socket(AF_INET, SOCK_STREAM)
            result = s.connect_ex((TargetIP, i))
            if result == 0:
                c = "Port %d: OPEN\n" % (i,)
            s.close()

    This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project, so it's nothing too extreme, but I've run the following code for about an hour and get no exceptions, yet no output. Am I just doing the threading wrong, or what?

        import threading
        from socket import *
        import time

        a = 0
        b = 0
        c = ""
        d = ""

        def ScanLow():
            global a
            global c
            for i in range(0, 1000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if result == 0:
                    c = "Port %d: OPEN\n" % (i,)
                s.close()
                a += 1

        def ScanHigh():
            global b
            global d
            for i in range(1001, 2000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if result == 0:
                    d = "Port %d: OPEN\n" % (i,)
                s.close()
                b += 1

        Target = raw_input("Enter Host To Scan:")
        TargetIP = gethostbyname(Target)
        print "Start Scan On Host ", TargetIP
        Start = time.time()
        threading.Thread(target = ScanLow).start()
        threading.Thread(target = ScanHigh).start()
        e = a + b
        while e < 2000:
            f = raw_input()
        End = time.time() - Start
        print c
        print d
        print End
        g = raw_input()
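
    For reference, a hedged sketch of the same scan with the usual pitfalls addressed: in the code above, e = a + b is computed only once (so the loop never sees the threads' progress), and c/d each retain only the last open port found. Joining the threads and appending results avoids both, and a small worker pool draining a shared queue scales better than two fixed threads. Written for Python 2, like the question:

        import threading
        import Queue                      # 'queue' on Python 3
        from socket import socket, AF_INET, SOCK_STREAM, gethostbyname

        open_ports = []                   # shared result list
        lock = threading.Lock()
        ports = Queue.Queue()

        def worker(target_ip):
            while True:
                try:
                    port = ports.get_nowait()
                except Queue.Empty:
                    return
                s = socket(AF_INET, SOCK_STREAM)
                s.settimeout(1.0)          # don't hang on filtered ports
                if s.connect_ex((target_ip, port)) == 0:
                    with lock:
                        open_ports.append(port)
                s.close()

        target_ip = gethostbyname(raw_input("Enter Host To Scan: "))
        for p in range(0, 2000):
            ports.put(p)

        threads = [threading.Thread(target=worker, args=(target_ip,))
                   for _ in range(50)]    # 50 workers instead of 2
        for t in threads:
            t.start()
        for t in threads:
            t.join()                      # wait instead of busy-polling a + b

        for p in sorted(open_ports):
            print "Port %d: OPEN" % p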

  • Understanding Basic Prototyping & Updating Key/Value pairs

    - by JordanD
    First time poster, long time lurker. I'm trying to learn some more advanced features of JavaScript, and have two objectives based on the pasted code below: I would like to add methods to a parent class in a specific way (by invoking prototype), and I intend to update the declared key/value pairs each time I make an associated method call. execMTAction, as seen in TheSuper, will execute on each function call, regardless; this is by design. Here is the code:

        function TheSuper() {
            this.options = {
                componentType: "UITabBar",
                componentName: "Visual Browser",
                componentMethod: "select",
                componentValue: null
            };
            execMTAction(this.options.componentType, this.options.componentName,
                         this.options.componentMethod, this.options.componentValue);
        };

        TheSuper.prototype.tapUITextView = function (val1, val2) {
            this.options = {
                componentType: "UITextView",
                componentName: val1,
                componentMethod: "entertext",
                componentValue: val2
            };
        };

    I would like to execute something like this (very simple):

        theSuper.executeMTAction();
        theSuper.tapUITextView("a", "b");

    Unfortunately I am unable to overwrite this.options in the parent, and the .tapUITextView method throws an error saying it cannot find executeMTAction. All I want to do, like I said, is to update the parameters in the parent, then have executeMTAction run each time I make any method call. That's it. Any thoughts? I understand this is basic, but I'm coming from a long procedural career, and JavaScript seems to have this weird confluence of OO/procedural that I'm having a bit of difficulty with. Thanks for any input!

  • Always get the correct datetime?

    - by Jesper Mansa
    I was wondering if there is a way/site/link with the correct time - not the server's datetime or the client's datetime. I'm using a datetime to count down on my gaming site until it is a user's turn to make a play. Users come from all over the world, so using the client's time would not match between, say, the US and Europe. Normally I would then use the server's time, but somehow it skips 1-2 hours sometimes? I would like to make sure that everybody gets a timestamp from the same source and that the source is always correct! Hoping for help, and thanks in advance.

  • cleartool question

    - by chuanose
    Let's say I have a directory at \testfolder, and the latest version is currently at /main/10. I know that the operation resulting in testfolder@@/main/6 was to remove a file named test.txt. What's a sequence of cleartool operations that can be done in a script that will take "testfolder@@/main/6" and "test.txt" as input, and will cat out the contents of test.txt as of that time? One way I can think of is to get the time of the /main/6 operation, create a view with its config spec -time set to that time, and then cat test.txt in that directory. But I'm wondering if I can do this in an easier way that doesn't involve manipulating config specs, perhaps through "cleartool find" and extended path names.

  • Best way to do one-to-many "JOIN" in CouchDB

    - by mit
    There are CouchDB documents that are list elements:

        { "type" : "el", "id" : "1", "content" : "first" }
        { "type" : "el", "id" : "2", "content" : "second" }
        { "type" : "el", "id" : "3", "content" : "third" }

    There is one document that defines the list:

        { "type" : "list", "elements" : ["2","1"], "id" : "abc123" }

    As you can see, the third element was deleted; it is no longer part of the list, so it must not be part of the result. Now I want a view that returns the content elements, including the right order. The result could be:

        { "content" : ["second", "first"] }

    In this case the order of the elements is already as it should be. Another possible result:

        { "content" : [{"content" : "first", "order" : 2}, {"content" : "second", "order" : 1}] }

    I started writing the map function:

        map = function (doc) {
            if (doc.type === 'el') {
                emit(doc.id, {"content" : doc.content}); // emit the id and the content
                exit;
            }
            if (doc.type === 'list') {
                for (var i = 0, l = doc.elements.length; i < l; ++i) {
                    emit(doc.elements[i], { "order" : i }); // emit the id and the order
                }
            }
        }

    This is as far as I can get. Can you correct my mistakes and write a reduce function? Remember that the third document must not be part of the result. Of course you can write a different map function as well, but the structure of the documents (one defining "list" document and an "el" document for each entry) cannot be changed.
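
    A hedged sketch of one way to finish this (it assumes the view is queried with group=true; merging per-key fragments in the reduce is a common CouchDB join idiom, though it only suits small per-key value sets). Elements that were removed from the list never receive an order field, and the client can filter those rows out:

        // Map (sketch): key both kinds of document by element id.
        function (doc) {
            if (doc.type === 'el') {
                emit(doc.id, { content: doc.content });
            }
            if (doc.type === 'list') {
                for (var i = 0, l = doc.elements.length; i < l; ++i) {
                    emit(doc.elements[i], { order: i });
                }
            }
        }

        // Reduce (sketch): merge the {content} and {order} halves per key.
        // Rows lacking 'order' belong to deleted elements; drop them client-side.
        function (keys, values, rereduce) {
            var merged = {};
            values.forEach(function (v) {
                for (var k in v) merged[k] = v[k];
            });
            return merged;
        }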

  • How do I parse data received in a memorystream?

    - by Kerberos42
    I'm new to using sockets. I have a very basic client that sends a request and waits for a response. The response is one stream, but has two parts. The first part is prefixed with ANS and is a set of key/value pairs of the form KEY:Value, with each pair on a separate line. The second part of the response is prefixed by RCT, and is pre-formatted text that needs to be sent directly to a printer. So what would be the best way to extract both parts of the response, and in the first part, get each KEY:Value pair? I might not even need them all, but I have to look at each one to see what the values are, then decide what to do with it. I'm currently writing the response out to a textbox just to understand what it's doing, but now I need to actually do something with the data. Here's a data sample, as it is received:

        ANS Result: Data Received
        RCPRES:Q[81]
        TML:123
        OPP:
        MRR:000000999999
        <several dozen more KEY:Value pairs>
        RCTNov 05 2013 04:03 pm
        Trans# 123456
        <pre-formatted text>
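
    A minimal sketch of one way to split this, assuming the stream has already been decoded into a string and that the receipt section always begins at a line starting with RCT (ResponseParser and the delimiter handling are illustrative assumptions, not a fixed protocol):

        using System;
        using System.Collections.Generic;

        // Sketch: split the response at the "RCT" marker, then parse the
        // ANS half into a dictionary of KEY:Value pairs.
        static class ResponseParser
        {
            public static void Parse(string response,
                                     out Dictionary<string, string> fields,
                                     out string receipt)
            {
                int rct = response.IndexOf("\nRCT", StringComparison.Ordinal);
                string ansPart = rct >= 0 ? response.Substring(0, rct) : response;
                receipt = rct >= 0 ? response.Substring(rct + 4) : "";

                fields = new Dictionary<string, string>();
                foreach (string line in ansPart.Split('\n'))
                {
                    int colon = line.IndexOf(':');
                    if (colon > 0)
                        fields[line.Substring(0, colon).Trim()] =
                            line.Substring(colon + 1).Trim();
                }
            }
        }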

  • Makefile rule depending on a change in the number of files, rather than a change in the content of files

    - by goathens
    I'm using a makefile to automate some document generation. I have several documents in a directory, and one of my makefile rules will generate an index page of those files. The list of files itself is loaded on the fly using list := $(shell ls documents/*.txt), so I don't have to bother manually editing the makefile every time I add a document. Naturally, I want the index-generation rule to trigger when the number/titles of files in the documents directory change, but I don't know how to set up the prerequisites to work this way. I could use .PHONY or something similar to force the index generation to run every time, but I'd rather not waste the cycles. I tried piping ls to a file, list.txt, and using that as a prerequisite for my index-generation rule, but that would require either editing list.txt manually (trying to avoid it) or auto-generating it in the makefile (this changes the modification time, so I can't use list.txt as the prerequisite, because it would trigger the rule every time).
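
    A hedged sketch of a common pattern: rebuild the listing on every run, but only overwrite list.txt when its content actually changed, so its timestamp (and therefore the index rule) moves only when the set of documents does. Recipe lines must be tab-indented in a real makefile:

        # Always run this recipe, but touch list.txt only on real changes.
        list.txt: FORCE
                @ls documents/*.txt > list.tmp; \
                if cmp -s list.tmp list.txt; then rm -f list.tmp; else mv list.tmp list.txt; fi

        index.html: list.txt
                # ... regenerate the index from the files named in list.txt ...

        FORCE: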
