Search Results

Search found 71854 results on 2875 pages for 'build time'.

Page 169/2875 | < Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >

  • Measuring execution time of a call to system() in C++

    - by jm1234567890
    I have found some code on measuring execution time here: http://www.dreamincode.net/forums/index.php?showtopic=24685 However, it does not seem to work for calls to system(). I imagine this is because the execution jumps out of the current process. clock_t begin=clock(); system(something); clock_t end=clock(); cout<<"Execution time: "<<diffclock(end,begin)<<" s."<<endl; Then: double diffclock(clock_t clock1,clock_t clock2) { double diffticks=clock1-clock2; double diffms=(diffticks)/(CLOCKS_PER_SEC); return diffms; } However, this always returns 0 seconds... Is there another method that will work? Also, this is on Linux. Thanks!
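
    A minimal sketch of an alternative approach (my own suggestion, not from the linked thread): clock() measures CPU time consumed by the calling process only, so the time spent inside the child process that system() launches never shows up. Measuring wall-clock time instead, e.g. with C++11 std::chrono, does capture it:

        #include <chrono>
        #include <cstdlib>
        #include <iostream>

        int main() {
            auto begin = std::chrono::steady_clock::now();
            std::system("sleep 2");                      // the external command being timed
            auto end = std::chrono::steady_clock::now();
            double seconds = std::chrono::duration<double>(end - begin).count();
            std::cout << "Execution time: " << seconds << " s" << std::endl;
            return 0;
        }

    On a pre-C++11 toolchain, gettimeofday() or clock_gettime(CLOCK_MONOTONIC, ...) can serve the same purpose on Linux.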

    Read the article

  • Domino 9 / Dojo 1.8 - Date Time Picker without default value

    - by Julian Buss
    I want a Date Time Picker control WITHOUT a default value. That doesn't seem to be possible anymore :-( To reproduce, create a blank XPage and place a Date Time Picker control. Open the XPage in the browser and you will see that it defaults to today. I didn't find any way to set the default to an empty value. I tried setting all properties/data/default to 0, null, empty string and so on - no luck. I tried the data-dojo-props attribute with value:'', but this only sets the default to 1970-1-1, not to blank. Any ideas?

    Read the article

  • Creating a sub site in SharePoint takes a very long time

    - by denni
    Hi, I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time (more than 15 minutes, sometimes failing due to timeouts) to create a sub site (even with the built-in site templates), while on the development server it only takes 1 to 2 minutes. Both servers have the same configuration, with an 8-core CPU and 8 GB of RAM, and both use separate database servers with the same configuration. The content database size is around 100 GB, and there are more than a hundred subsites. What could be the reason the production server takes so much time? Is there any configuration or something else I need to take care of? Thanks a lot, all help is appreciated.

    Read the article

  • Browsed Time Problem.

    - by aamir Fayyaz
    I want to display how long a user has been browsing, but when the page is refreshed the counter starts again from 0:0:0. How can I handle this? <?php $total_mints=($live_match['match_name']) * (60); ?> <script language="javascript"> display_c(<?=$total_mints?>,'ct'); </script> <script type="text/javascript"> function display_c(start,div){ window.start = parseFloat(start); var end = 0 // change this to stop the counter at a higher value var refresh=1000; // Refresh rate in milli seconds if(window.start >= end ){ mytime=setTimeout("display_ct('"+div+"')",refresh) } else {alert("Time Over ");} </script>

    Read the article

  • how to deal with international time?

    - by alex
    I built a new website, but the host is in the USA and I am not in the USA. I need to get the time on the website page to compare with a local variable, but because of the time difference there is an 8 hour offset. How do I solve this problem? My code: SimpleDateFormat formatter = new SimpleDateFormat("HH:mm:ss"); java.util.Date currentTime = new java.util.Date(); String dateString = formatter.format(currentTime); How should I revise this code?

    Read the article

  • Time Zone conversion from string date

    - by Kishore
    Hi, I have a string date 16-MAY-2010 23:04:44 which I need to convert to the GMT time zone; the required output is 17-May-2010 12:03:03. I used date formatters to convert, but the result I am getting is not in the format I require. I am sending the code; please let me know if I am doing it correctly. Here is the code: NSString *timeStamp = [format stringFromDate:[NSDate date]]; NSString *output = [timeConv dateStringFromString:timeStamp]; (NSString *)dateStringFromString:(NSString *)sourceString { NSDateFormatter *dateFormatter = [[[NSDateFormatter alloc] init] autorelease]; [dateFormatter setFormatterBehavior:NSDateFormatterBehavior10_4]; [dateFormatter setDateFormat:@"dd-MMMM-yyyy HH:MM:ss"]; [dateFormatter setTimeZone:gmt]; NSTimeZone *gmt = [NSTimeZone timeZoneWithAbbreviation:@"GMT"]; NSDate *date = [dateFormatter dateFromString:sourceString]; return [dateFormatter stringFromDate:date]; } The output I am getting is the same old time without conversion, so please let me know the correct solution.

    Read the article

  • AudioOutputUnitStart takes time

    - by tokentoken
    Hello, I'm making an iPhone game application using Core Audio and Extended Audio File Services. It works OK, but when I first call AudioOutputUnitStart, it takes about 1-2 seconds. After the second call, no problem. For a game application, 1-2 seconds is very noticeable. (I tested this on the iPhone simulator and an iPhone 3GS.) Also, if I leave the game idle for about 10 seconds, the next call to AudioOutputUnitStart takes time again. Maybe I should call AudioOutputUnitStart at the beginning of the application to avoid the start-up delay?
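
    A hedged sketch of the warm-start idea from the last sentence; outputUnit is a placeholder for an output AudioUnit that is assumed to have been created and initialized (AudioUnitInitialize) elsewhere:

        #include <AudioUnit/AudioUnit.h>

        // Sketch only: pay the start-up cost during loading rather than in gameplay.
        void warmUpAudio(AudioUnit outputUnit) {
            OSStatus err = AudioOutputUnitStart(outputUnit);
            if (err != noErr) {
                // handle/log the error
            }
            // Keep the unit running and return silence from the render callback until
            // real game audio is needed; later sounds then start without the delay.
        }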

    Read the article

  • iPhone OpenGL scrolling background jumps when texture is drawn for first time

    - by Magnum39
    I have been fighting a problem for a while now and would appreciate any help anybody could give. I have a sprite that moves within a landscape. The sprite remains in the center of the screen and the background moves to simulate that the sprite is moving within the landscape. I have split the landscape into sections so that I only draw the sections of the landscape that I need (that are on screen). The problem: as a new texture section appears on the screen (is drawn for the first time), the movement jumps, almost as if a frame is missed. I have done some timing experiments and I do not think a frame is missed. My processing is well below the 30fps that I have the animation set to. It only happens the first time the texture section is drawn. Is there something extra that is done the first time a texture is drawn? Here is the code:

        - (void) render {
            // Sets up an array of values to use as the sprite vertices.
            const GLfloat sVerts[] = {
                -1.6f, -1.6f,
                 1.6f, -1.6f,
                -1.6f,  1.6f,
                 1.6f,  1.6f,
            };
            static const GLfloat sTexCoords[] = {
                0.0, 1.0,
                1.0, 1.0,
                0.0, 0.0,
                1.0, 0.0
            };
            glDisableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            // Setup opengl to draw the object in correct orientation, size, position, etc
            glLoadIdentity();
            // Enable use of the texture
            glEnable(GL_TEXTURE_2D);
            glVertexPointer(2, GL_FLOAT, 0, sVerts);
            glTexCoordPointer(2, GL_FLOAT, 0, sTexCoords);
            // draw the texture
            // set the position of the first tile
            float xOffset = -4.8;
            float yOffset = 4.8;
            int i;
            int y;
            int currentTexture = textureA;
            for(i=0; i<2; i++) {
                for(y=0; y<2; y++) {
                    // test for the texture tile on the screen, if not on screen then do not draw
                    float localX = xOffset+(3.21*y);
                    float localY = yOffset-(3.21*i);
                    float xDiff = monkeyX - localX;
                    float yDiff = monkeyY - localY;
                    if(((xDiff < 3.2) && (xDiff > -3.2)) && ((yDiff < 2.7) && (yDiff > -2.7))) {
                        // bind the texture and set the vertex data pointers
                        glBindTexture(GL_TEXTURE_2D, spriteTexture[currentTexture]);
                        // move to draw position for the texture
                        glLoadIdentity();
                        glTranslatef((localX+self.positionX), (localY+self.positionY), 0.0);
                        // draw the texture
                        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                    }
                    currentTexture++;
                }
            }
        }
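
    One hedged guess, not something confirmed by the snippet above: GL drivers commonly defer the actual upload of texture data to the GPU until the texture is first used in a draw call, and that one-off upload can easily cost a visible hitch. A small warm-up pass at load time, reusing the spriteTexture array from the code above (numTextures is an assumed count), would force the uploads before the game loop starts:

        // Sketch only: assumes a current GL context and an existing array of texture names.
        static void warmUpTextures(const GLuint* spriteTexture, int numTextures) {
            static const GLfloat warmVerts[]  = { 0.0f, 0.0f, 0.01f, 0.0f, 0.0f, 0.01f, 0.01f, 0.01f };
            static const GLfloat warmCoords[] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f };
            glEnable(GL_TEXTURE_2D);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, warmVerts);
            glTexCoordPointer(2, GL_FLOAT, 0, warmCoords);
            for (int t = 0; t < numTextures; t++) {
                glBindTexture(GL_TEXTURE_2D, spriteTexture[t]);   // bind each texture once...
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);            // ...and draw a tiny quad to force the upload now
            }
        }

    Calling something like warmUpTextures(spriteTexture, 4) once after loading, before the first animated frame, should move that cost out of gameplay.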

    Read the article

  • Sending some bytes at a time

    - by user1417815
    I'm trying to figure out a way to send some amount of text from the string each time until it reaches the end of the string. Example: const char* the_string = "hello world, i'm happy to meet you all. Let be friends or maybe more, but nothing less" Output: hello world Output: , i'm happy to meet you all. Output: Let be friends or maybe more Output: , but nothing less stop: no more bytes to send. The problem: I have searched Google but didn't understand the examples. I spent 4 days trying to find a good way to send, say, 5 bytes at a time, and in case there are fewer left, send what remains until the end of the string. Please help me out, guys; I will accept a C or C++ way, as long as it works and is well explained.
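
    A minimal C++ sketch of the loop being described; send_chunk() is a hypothetical stand-in for whatever actually transmits the bytes (socket, serial port, etc.), and the chunk size is 5 as in the question:

        #include <algorithm>
        #include <cstddef>
        #include <cstring>
        #include <iostream>

        // Hypothetical placeholder for the real "send" operation.
        void send_chunk(const char* data, std::size_t len) {
            std::cout << "Output: ";
            std::cout.write(data, len);
            std::cout << '\n';
        }

        int main() {
            const char* the_string = "hello world, i'm happy to meet you all. "
                                     "Let be friends or maybe more, but nothing less";
            const std::size_t chunk = 5;                         // bytes per send
            const std::size_t total = std::strlen(the_string);
            for (std::size_t sent = 0; sent < total; sent += chunk) {
                std::size_t len = std::min(chunk, total - sent); // the last chunk may be shorter
                send_chunk(the_string + sent, len);
            }
            std::cout << "stop: no more bytes to send.\n";
            return 0;
        }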

    Read the article

  • Part-time Programming Job

    - by Bluechip Solutions
    I am a student at Middlesex University, London, studying Information Technology. I really love software development and I have taught myself how to write HTML + CSS, JavaScript (I use jQuery and AngularJS) and Java (I learnt this in school). I have developed a few apps (a desktop app in Java and a mobile app with AngularJS and PhoneGap). I am looking at applying for a part-time programming job to develop myself. Are there part-time jobs available for someone like me, and is my skill set enough to get me a job? I understand this topic may not be ideal here, but this is the only place I know that can provide me answers. Thank you!

    Read the article

  • Casting String to Time makes 01:00:00

    - by kawtousse
    Hi everyone, when I do the following: String start = request.getParameter("startp"); SimpleDateFormat sdf = new SimpleDateFormat("hh:mm:ss"); long ms=0; try { ms = sdf.parse(start).getTime(); } catch (ParseException e1) { e1.printStackTrace(); } Time ts = new Time(ms); it is inserted with the value 01:00:00, which is not the correct one (entered by the user). I don't understand the error here. Please help. Thanks

    Read the article

  • Build-time dependency resolving coming to Entity Framework. Now, how about those BI tools too?

    - by jamiet
    Three months ago I wrote a blog post entitled Some thoughts on Visual Studio database references and how they should be used for SQL Server BI where I shared some thoughts on a feature available to database developers in Visual Studio 2010 that I would love to see added to SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS). In there I said:

    Over the past few weeks I have been making heavy use of the Database tools in Visual Studio 2010 and one of the features that has most impressed me has been database references. Database references allow you to have stored procedures in your database project that refer to objects (tables, views, stored procedures etc…) that exist in other database projects, and hence when you build your database project it is able to resolve those references. It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) & Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects – why shouldn’t we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do?

    In that blog post I shared links to three Connect submissions where I requested this feature be added to SSIS, SSAS & SSRS. In addition I also submitted a request that the feature be extended to .Net projects so that any reference to a database object in a .Net assembly can be resolved at build time. That Connect submission is at [Entity FX] Use database references to constrain the EDM and overnight it received this comment from Microsoft: "We have been working on this feature for a while and and will be available soon".

    This is really good news - it improves the Microsoft developer ecosystem by ensuring invalid references to database objects get caught at build time (ideally as part of a continuous integration build) rather than run time. [Hopefully it might nip this code-first nonsense in the bud too (Ooo...way to incite flame comments :) ) ]. If you want to see this feature in action then check out a video from TechEd Europe last month entitled SQL Server Developer Tools Code-named "Juneau" where it is demo'd by Lance Delano and Tim Laverty.

    The point of this blog post though is not just to draw attention to this forthcoming feature for .Net developers, it is to ask you to petition Microsoft to get this feature added to SSIS/SSAS/SSRS too. After all, we already know (from the video above) that the feature is coming to the code-named Juneau development environment, plus we also know that Juneau will be the development environment for SSIS/SSAS/SSRS as well - is it really much of a stretch to expect the BI tools to have access to this great feature too? I don't think so, and if you agree with me then I urge you to vote and add a comment on the Connect submissions that are requesting this feature. They are at:

    [SSAS] Declare Object Dependancies
    [SSRS] Declare Object Dependancies
    [SSIS] Declare Object Dependancies (Update: Apparently someone at Microsoft has deemed it necessary to set this to private and I am not able to change it back even though I submitted it. You can still vote on the other two though.)

    Let's close that SQL Developer Gap!

    @Jamiet

    Read the article

  • How meaningful is the Big-O time complexity of an algorithm?

    - by james creasy
    Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2). Time complexity classifications are made as the input size goes to infinity, yet ironically an infinite input size never occurs in actual computation. Put another way, the classification of an algorithm is based on a situation that the algorithm will never be in: n = infinity. Also, consider that a polynomial time algorithm with a huge exponent can be just as useless as an exponential time algorithm with a tiny base (e.g., 1.00000001^n) can be useful. Given this, how much can I rely on Big-O time complexity to advise the choice of an algorithm?
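
    A rough back-of-the-envelope illustration of that last point (my own numbers, not from the question): at n = 10^9 the "exponential" 1.00000001^n comes to only about 2.2 x 10^4, while the "polynomial" n^2 is 10^18.

        #include <cmath>
        #include <cstdio>

        int main() {
            double n = 1e9;
            std::printf("n^2          = %.3g\n", n * n);                              // ~1e+18
            std::printf("1.00000001^n = %.3g\n", std::exp(n * std::log(1.00000001))); // ~2.2e+04
            return 0;
        }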

    Read the article

  • When is the best time to do self-learning in relation to software management?

    - by shankbond
    It all started from here. I have been following Software Estimation: Demystifying the Black Art (Best Practices (Microsoft)). The third chapter says that in software management: You cannot give software developers too much time; if you do, it is likely that the extra time will be filled by other tasks (in other words, the developers will eat that time :)) - Parkinson's Law. You also cannot squeeze time out of their schedule, because if you do, it is likely that they will produce a poor quality product with poor design, which will hurt you in the long run - there will be panic and total chaos in the project, lots of rework, etc. My question is related to the first point. If you don't give enough time, how will the typical software engineer learn his/her skills? The market always comes up with new technologies, and you need to learn them. Even with existing, familiar technologies there are always best practices and dos and don'ts.

    Read the article

  • How can I generate signed distance fields (2D) in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on different methods which are supposed to be viable in real-time environments, such as methods for Chamfer distance transforms and Voronoi diagram-approximation based transforms (as suggested in this presentation by the Pixeljunk Shooter dev guy), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, largely bloated with math and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time: There's something else:
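
    For what it's worth, here is a minimal CPU-side sketch of the Chamfer (3-4) two-pass transform mentioned above, only to show the shape of the algorithm; the function name, weights and layout are illustrative assumptions, not taken from the question or the papers:

        #include <algorithm>
        #include <vector>

        // Approximate signed distance field from a binary mask (true = inside the shape),
        // using the classic two-pass Chamfer 3-4 transform. Result is negative inside,
        // positive outside, roughly in pixel units.
        std::vector<float> signedDistanceField(const std::vector<bool>& inside, int w, int h) {
            const float BIG = 1e9f, ORTHO = 3.0f, DIAG = 4.0f;
            auto transform = [&](std::vector<float>& d) {
                auto relax = [&](int x, int y, int nx, int ny, float cost) {
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) return;
                    d[y * w + x] = std::min(d[y * w + x], d[ny * w + nx] + cost);
                };
                for (int y = 0; y < h; ++y)              // forward pass: top-left to bottom-right
                    for (int x = 0; x < w; ++x) {
                        relax(x, y, x - 1, y, ORTHO);     relax(x, y, x, y - 1, ORTHO);
                        relax(x, y, x - 1, y - 1, DIAG);  relax(x, y, x + 1, y - 1, DIAG);
                    }
                for (int y = h - 1; y >= 0; --y)         // backward pass: bottom-right to top-left
                    for (int x = w - 1; x >= 0; --x) {
                        relax(x, y, x + 1, y, ORTHO);     relax(x, y, x, y + 1, ORTHO);
                        relax(x, y, x + 1, y + 1, DIAG);  relax(x, y, x - 1, y + 1, DIAG);
                    }
            };
            std::vector<float> dOut(w * h), dIn(w * h);
            for (int i = 0; i < w * h; ++i) {            // seed: 0 on one side, "infinity" on the other
                dOut[i] = inside[i] ? 0.0f : BIG;
                dIn[i]  = inside[i] ? BIG : 0.0f;
            }
            transform(dOut);                             // distance to the shape (for outside pixels)
            transform(dIn);                              // distance to the background (for inside pixels)
            std::vector<float> sdf(w * h);
            for (int i = 0; i < w * h; ++i)
                sdf[i] = (dOut[i] - dIn[i]) / ORTHO;     // divide by 3 to get approximate pixel units
            return sdf;
        }

    A GPU-friendly relative of this idea is the jump flooding algorithm, which computes an approximate distance/Voronoi field in O(log n) full-screen passes.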

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on different methods which are supposed to be viable in real-time environments, such as methods for Chamfer distance transforms and Voronoi diagram-approximation based transforms (as suggested in this presentation by the Pixeljunk Shooter dev guy), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, largely bloated with math and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like: /about/history /about/team /contact/email-us /contact I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this: { pagePathLevel1: /about, avgTimeOnPage: 28 }, { pagePathLevel1: /contact, avgTimeOnPage: 10 } This looks roughly like what I want, but I'm not sure how to interpret it: Is the value of avgTimeOnPage the average time spent by any user on all pages that match that path? Or is it the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.

    Read the article

  • How much time do you spend actually developing vs. infrastructure activities?

    - by Can't Tell
    When I'm working I feel like most of the time I'm not doing actual work. For example, after making a change to the code in order to test it, I have to first build the project and start the server (say JBoss). Upon testing, I find that there is another small issue. So I bring down the server, make the changes, build again and start up the server again. The building and bringing the server up/down is not very useful work. Also, the IDE (let's say Eclipse) does things such as updating Maven indexes and building the workspace which take some more time to get things done. Have you come across this kind of situation? Do you have tips on how to overcome/bypass this? Any features of the IDE/build tools that can be helpful? Any architecture/application design/technology that attempts to overcome this?

    Read the article

  • Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue - See update #2

    - by Phill Pafford
    Salesforce can send up to 100 requests inside 1 SOAP message. While sending this type of Bulk Outbound message request, my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the Outbound message log (monitoring) I see all the messages in a pending state with the Delivery Failure Reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error? I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:

        set_time_limit(0); // in the script
        max_execution_time = 360 ; Maximum execution time of each script, in seconds
        max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
        memory_limit = 32M ; Maximum amount of memory a script may consume

    I used the high settings just for testing. Any thoughts as to why this is failing the ACK delivery back to Salesforce? Here is some of the code. This is how I accept and send the ACK file for the incoming SOAP request:

        $data = 'php://input';
        $content = file_get_contents($data);
        if($content) {
            respond('true');
        } else {
            respond('false');
        }

    The respond function:

        function respond($tf) {
            $ACK = <<<ACK
        <?xml version = "1.0" encoding = "utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Body>
            <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
              <Ack>$tf</Ack>
            </notifications>
          </soapenv:Body>
        </soapenv:Envelope>
        ACK;
            print trim($ACK);
        }

    These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in 1 SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds. Could it be Apache? PHP? I have also tested just accepting the 100 request SOAP response and sending the ACK: the queue clears out, so I know it's on my side of things. I show no errors in the Apache log, the script runs fine. I did find some info on the Salesforce site but still no luck. Here is the link. Also I'm using the PHP Toolkit 11 (from Salesforce). Other forum with good SF help. Thanks for any insight into this, --Phill

    UPDATE: If I receive the incoming message and print the response, should this happen first regardless of whether I do anything else after? Or does it wait for my process to finish and then print the response?

    UPDATE #2: Okay, I think I have the problem: PHP uses the single thread processing approach and will not send back the ACK file until the thread has completed its processing. Is there a way to make this a multi-threaded process? Thread #1 - accept the incoming SOAP request and send back the ACK. Thread #2 - process the SOAP request. I know I could break it up into something like a DB table or flat file, but is there a way to accomplish this without doing that? I'm going to try to close the socket after the ACK submission and continue the processing, fingers crossed it will work.

    Read the article

  • Design Time Attribute For CSS Class in ASP.net Custom Server Control

    - by Jon P
    Hopefully some custom control designers/builders can help. I'm attempting to build my first custom control, which is essentially a client detail collection form. There are to be a series of elements on this form that require various styles applied to them. Ideally I'd like the VS 2005/2008 properties interface to be able to apply the CssClass as it does at the control level, i.e. with a dropdown list of available CSS classes. Take, for example, the class to be applied to the legend tag: /// <summary>Css Class for Legend</summary> [Category("Appearance")] [Browsable(true)] [DefaultValue("")] //I am at a loss as to what goes in [Editor] [Editor(System.Web.UI.CssStyleCollection), typeof(System.Drawing.Design.UITypeEditor))] public string LegendCSSClass { get { return _LegendCSSClass; } set { _LegendCSSClass = value; } } I have tried a couple of options, as you can see from above, without much luck. Hopefully there is something simple I am missing. I'd also be happy for references pertaining to the [Editor] attribute.

    Read the article
