Search Results

Search found 4159 results on 167 pages for 'deferred execution'.

Page 10/167 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • ... i just avoid GUID

    - by Tomaz.tsql
    Our partner was explaining to me that they are using GUID as the primary key on all the tables. My immediate reaction was - why? And a couple of basic doubts came to mind: - since I can read a uniqueidentifier, it does not tell me absolutely anything - if I use my relational table, I sure will use other columns to get the information out - SQL is terrible when setting up a clustered index on GUID columns (and hence performance problems) - why not use INT? It will save you space on disk, the optimizer will be able...(read more)

    Read the article

  • Webcast: 12.2.4 Advanced Planning Command Center Enhancements

    - by ChristineS-Oracle
    Webcast: 12.2.4 Advanced Planning Command Center Enhancements Date: June 12, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30) This advisor webcast helps Functional Users and IT Analysts understand the new features introduced in Advanced Planning Command Center (APCC) as part of the 12.2.4 release. These include custom hierarchies, custom measures, and additional measures like projected on hand, etc. Other new features include new reports like Build Plan and Order Details, as well as new integration capabilities between APCC and DRP and support for Trade Planning in APCC. Topics will include: New Feature Introduction Feature Overview and Setup Steps Implementation Tips & Best Practices Details & Registration: Doc ID 1670447.1

    Read the article

  • ARC write-up on the OTM SIG

    - by John Murphy
    ARC write-up on the recent OTM SIG event. The Oracle Transportation Management Special Interest Group (OTM SIG) hosted its 6th annual user conference in Philadelphia, Pennsylvania, August 13-15, 2012. This independently run conference drew almost 400 attendees, predominantly Oracle Transportation Management (OTM) users. It featured four concurrent tracks that included both functionally and technically focused presentations. The tracks included a number of informative presentations by OTM users from various industries. These discussed the users' implementations, current usage, and future plans for OTM within their organizations. ARC Advisory Group found ConAgra's and Mutual Materials' presentations on OTM adoption and Kraft's presentation on the company's use of Fusion Transportation Intelligence particularly informative. Complete ARC write-up

    Read the article

  • SQL Sentry Plan Explorer : Version 1.1!

    - by AaronBertrand
    Last week, Microsoft offered up an early Christmas present: SQL Server 2005 SP4. This week, it's SQL Sentry's turn to play Santa Claus: several new features and fixes have been packaged up into SQL Sentry Plan Explorer 1.1 (build 6.0.67.0). So, what's new? Several wish list items have been fulfilled (hey, it is Christmas, after all). You can see the full change list here; but I'll talk briefly about a few of my favorites: Parallel distribution: the Plan Tree tab for a parallel operator now shows...(read more)

    Read the article

  • Webcast: Introduction To Causal Factors

    - by ChristineS-Oracle
    Webcast: Introduction To Causal Factors Date: June 11, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30) This one-hour advisor webcast will provide an introduction to causal factors for Demand Management and AFDM. Pre-seeded causal factors will be discussed, as well as when they are not appropriate. Scenarios of when to add causal factors will be covered, along with best-practice methods for adding and using them. Topics will include: Causal factors in DM and AFDM Pre-seeded causal factors When to modify causal factor settings Best practice when working with causal factors Details & Registration: Doc ID 1664606.1

    Read the article

  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

    I had an hour spare this afternoon so I wanted to have another play with Reactive Extensions in .Net and StreamInsight.  I also didn't want to simply use a console window as a way of gathering events so I decided to use a windows form instead. The task I set myself was this: whenever I click on my form I want to subscribe to the event and output its location to the console window and also the timestamp of the event.  In addition to this I want to know, for every mouse click I do, how many mouse clicks have happened in the last 5 seconds.

    The second point here is really interesting.  I have often found this when working with people on problems.  It is how you ask the question that determines how you tackle the problem.  I will show 2 ways of possibly answering the second question depending on how the question was interpreted. As a side effect of this example I will show how time in StreamInsight can stand still.  This is an important concept and we can see it in the output later.

    Now to the code.  I will break it all down in this blogpost but you can download the solution and see it all together. I created a Console application and then instantiated a windows form.

        frm = new Form();
        Thread g = new Thread(CallUI);
        g.SetApartmentState(ApartmentState.STA);
        g.Start();

    CallUI looks like this:

        static void CallUI()
        {
            System.Windows.Forms.Application.Run(frm);
            frm.Activate();
            frm.BringToFront();
        }

    Now what we need to do is create an observable from the MouseClick event on the form.  For this we use the Reactive Extensions.

        var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();

    As mentioned earlier I have two objectives in this example, and to solve the first I am going to again use the Reactive Extensions.  Let's subscribe to the MouseClick event and output the location and timestamp to the console.

        lblevt.Subscribe(evt =>
        {
            Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location, evt.Timestamp);
        });

    That should take care of objective #1, but what about the second objective?  For that we need some temporal windowing and this means StreamInsight.  First we need to turn our Observable collection of MouseClick events into a PointStream.

        Server s = Server.Create("Default");
        Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks");
        var input = lblevt.ToPointStream(
            a,
            evt => PointEvent.CreateInsert(
                evt.Timestamp,
                new {
                    loc = evt.Value.EventArgs.Location.ToString(),
                    ts = evt.Timestamp.ToLocalTime().ToString()
                }),
            AdvanceTimeSettings.IncreasingStartTime);

    Now that we have created our PointStream we need to do something with it, and this is where we get to our second objective.  It is pretty clear that we want some kind of windowing, but what? Here is one way of doing it.  It might not be what you wanted but again it is how the second objective is interpreted.

        var q = from i in input.TumblingWindow(TimeSpan.FromSeconds(5), HoppingWindowOutputPolicy.ClipToWindowEnd)
                select new { CountOfClicks = i.Count() };

    The above code creates tumbling windows of 5 seconds and counts the number of events in the windows.  If there are no events in the window then no result is output.  Likewise until an event (MouseClick) is issued we do not see anything in the output (that is not strictly true, because it is the CTI strapped to our MouseClick events that flushes the events through the StreamInsight engine, not the events themselves).  This approach is centred around the windows and not the events.
    Until the windows complete and a CTI is issued, no events are pushed through. An alternate way of answering our second question is below.

        var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5)).SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                select new { CountOfClicks = i.Count() };

    In this code we extend the duration of each MouseClick to five seconds.  We then create Snapshot Windows over those events.  Snapshot windows are discussed in detail here.  With this solution we are centred around the events.  It is the events that are driving the output.  Let's have a look at the output from this solution as it may be a little confusing. First though, let me show how we get the output from StreamInsight into the Console window.

        foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
        {
            Console.WriteLine(x.Payload.CountOfClicks);
        }

    Ok so now to the output.  The listing below shows the output from our routine and the summary underneath helps to explain it.  One thing that will help as well: for our PointStream we set the issuing of CTIs to IncreasingStartTime.  What this means is that the CTI is placed right at the start of the event, so it will not flush the event with which it was issued but will flush those prior to it.  Each click is shown with the counts its CTI flushed to the console; every click is then stretched into a five-second event, and the counts belong to the snapshot windows formed between those event boundaries.

        Clicked 22:40:16  ->  (nothing flushed yet)
        Clicked 22:40:18  ->  1
        Clicked 22:40:20  ->  2
        Clicked 22:40:22  ->  3, 2
        Clicked 22:40:24  ->  3, 2
        Clicked 22:40:32  ->  3, 2, 1

        Snapshot window counts across seconds 16 to 29:  1, 2, 3, 2, 3, 2, 3, 2, 1

    What we can see here in the output is that the counts include all the end edges that have occurred between the mouse clicks.  If we look specifically at the mouse click at 22:40:32, we see that 3 events are returned to us: the End Edge count at 22:40:25, the End Edge count at 22:40:27, and the End Edge count at 22:40:29. Another thing we notice is that until we actually issue a CTI at 22:40:32, those last 3 snapshot window counts will never be reported. Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable() pattern.  You can download this solution from here and play.  You will need the Reactive Framework from here and StreamInsight 1.1.

    Read the article

  • New Procurement Report for Transportation Sourcing

    - by John Murphy
    Welcome to our fourth annual transportation procurement benchmark report. American Shipper, in partnership with the Council of Supply Chain Management Professionals (CSCMP) and the Retail Industry Leaders Association (RILA), surveyed roughly 275 transportation buyers and sellers on procurement practices, processes, technologies and results. Some key findings:
      • Manual, spreadsheet-based procurement processes remain the most prevalent among transportation buyers, with 42 percent of the total
      • Another 25 percent of respondents use a hybrid platform, which presumably means these buyers are using spreadsheets for at least one mode and/or geography
      • Only 23 percent of buyers are using a completely systems-based approach of some kind
      • Shippers were in a holding pattern with regards to investment in procurement systems the past year
      • Roughly three-quarters of survey respondents report that transportation spend has increased in 2012, although the pace has declined slightly from last year's increases
      • Nearly every survey respondent purchases multiple modes of transportation
      • The number of respondents with plans to address technology to support the procurement process has increased in 2012. About one quarter of respondents who do not have a system report they have a budget for this investment in the next two years.

    Read the article

  • Webcast: Flow Manufacturing Work Order-less Completion

    - by ChristineS-Oracle
    Webcast: Flow Manufacturing Work Order-less Completion Date: August 27, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30) This advisor webcast is intended for technical and functional users who want to understand Flow Manufacturing Work Order-less Completion and its transaction types. The presentation will include the required setups and provide details of the business process. Topics will include: Overview of Flow Manufacturing and Integration with Work In Process Overview of Work Order-less Completion Basic Setup Steps Transactions performed in the Work Order-less Completion form Details & Registration: Doc ID 1906749.1

    Read the article

  • Create MSDB Folders Through Code

    You can create package folders through SSMS, but you may also wish to do this as part of a deployment process or installation. In that case you will want a programmatic method for managing folders, so how can this be done? The short answer is: go and look at the table msdb.dbo.sysdtspackagefolders90. This is where folder information is stored, using a simple parent and child hierarchy format. To add a new folder directly we just insert into the table:

        INSERT INTO dbo.sysdtspackagefolders90 (
             folderid
            ,parentfolderid
            ,foldername)
        VALUES (
             NEWID()                 -- New GUID for our new folder
            ,<<Parent Folder GUID>>  -- Lookup the parent folder GUID if a child of another folder, or use the root GUID 00000000-0000-0000-0000-000000000000
            ,<<Folder Name>>)        -- New folder name

    There are also some stored procedures:

        sp_dts_addfolder
        sp_dts_deletefolder
        sp_dts_getfolder
        sp_dts_listfolders
        sp_dts_renamefolder

    To add a new folder to the root we could call the sp_dts_addfolder stored procedure:

        EXEC msdb.dbo.sp_dts_addfolder
             @parentfolderid = '00000000-0000-0000-0000-000000000000' -- Root GUID
            ,@name = 'New Folder Name'

    The stored procedures wrap very simple SQL statements, but provide a level of security, as they check the role membership of the user and do not require permissions to perform direct table modifications.
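    Since the point of the article is doing this programmatically (for example from a deployment or installation step), here is a minimal C# sketch of calling the same msdb.dbo.sp_dts_addfolder stored procedure through ADO.NET. The connection string and folder name are placeholders, and Guid.Empty stands in for the all-zeros root GUID; the same pattern works for the other sp_dts_* procedures listed above.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class AddSsisFolder
        {
            static void Main()
            {
                // Placeholder connection string -- point it at the msdb database on your server.
                using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=msdb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand("dbo.sp_dts_addfolder", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;

                    // Guid.Empty (all zeros) is the root of the package store;
                    // pass an existing folder's GUID instead to create a child folder.
                    cmd.Parameters.Add("@parentfolderid", SqlDbType.UniqueIdentifier).Value = Guid.Empty;
                    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 128).Value = "New Folder Name";

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }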

    Read the article

  • Enterprise Trade Compliance: Changing Trade Operations around the World

    - by John Murphy
    We live in a world of incredible bounty and speed where any product can be delivered anywhere on earth. However, our world is also filled with challenges for business – where volatility, uncertainty, risk, and chaos are our daily companions. To prosper amid the realities of this new world, organizations cannot rely on old strategies; they need new business models. Key trends within the global economy are mandating that companies fully integrate global trade management best practices within broader supply chain management strategies, rather than simply leaving it as a discrete event at the end of the order or procurement cycle. To explain, many companies face a complicated and changing compliance environment. This is directly linked to the speed and configuration of the supply chain, particularly with the explosion of new markets, shorter service cycles and ship times, accelerating rates of globalization and outsourcing, and increasing product complexity and regulation. Read More...

    Read the article

  • TAKE Solutions Implements Oracle Mobile Supply Chain Applications for Leading Housewares Manufacturer

    - by John Murphy
    TAKE Solutions Ltd. [BSE: 532890 | NSE: TAKE], a leader in the Supply Chain Management and Life Sciences domains, today announced the successful implementation of Oracle Mobile Supply Chain Applications (MSCA®) for a leading manufacturer of household goods. Leveraging TAKE’s more than 15 years of expertise with the Oracle® E-business Suite products, the customer has achieved real-time inventory visibility into manufacturing, put-away and customer shipments. TAKE also implemented location control and cycle counting to provide additional visibility and inventory accuracy. http://www.virtual-strategy.com/2012/06/05/take-solutions-implements-oracle-mobile-supply-chain-applications-leading-housewares-manu

    Read the article

  • Oracle Transportation User Conference Agenda Released

    - by John Murphy
    The Oracle Transportation Management (OTM) User Conference agenda is now available.   The event brings together users, implementers and prospective customers of OTM.   The event is held annually in Philadelphia with this year's event taking place August 12 - 15.   Follow one of the links to see the complete agenda and to register to attend.  http://otmconference.com/agenda.aspx

    Read the article

  • Connect Digest : 2011-03-12

    - by AaronBertrand
    Background Last year, I came to a very tough decision that I would cease publicizing Connect items in an attempt to drive up votes and get important issues fixed. This was almost entirely due to a couple of MVPs criticizing me for raising awareness of certain Connect items instead of letting them be found "naturally." I wasn't sure what world they were living in, where droves of everyday end users just happened to stumble upon Connect items without any prompting. I suppose it could be said that the...(read more)

    Read the article

  • Process Manufacturing (OPM) Actual Costing Analyzer Diagnostic Script

    - by ChristineS-Oracle
    The OPM Actual Costing Analyzer is a script which you can use proactively at any time to review setups and pieces of data which are known to affect the performance or the accuracy of the OPM Actual Cost process or Lot Costing. Each topic reviewed by this report has been specifically selected because it points to the solution used to resolve at least two Service Requests during a recent 3-month period. You can download this script from Doc ID 1629384.1, OPM Actual Costing Analyzer Diagnostic Script.

    Read the article

  • Troubleshooting .NET "Fatal Execution Engine Error"

    - by JYelton
    Summary: I periodically get a .NET Fatal Execution Engine Error on an application which I cannot seem to debug. The dialog that comes up only offers to close the program or send information about the error to Microsoft. I've tried looking at the more detailed information but I don't know how to make use of it.

    Error: The error is visible in Event Viewer under Applications and is as follows: .NET Runtime version 2.0.50727.3607 - Fatal Execution Engine Error (7A09795E) (80131506) The computer running it is Windows XP Professional SP3 (Intel Core2Quad Q6600 2.4GHz w/ 2.0 GB of RAM). Other .NET-based projects that lack multi-threaded downloading (see below) seem to run just fine.

    Application: The application is written in C#/.NET 3.5 using VS2008, and installed via a setup project. The app is multi-threaded and downloads data from multiple web servers using System.Net.HttpWebRequest and its methods. I've determined that the .NET error has something to do with either threading or HttpWebRequest but I haven't been able to get any closer as this particular error seems impossible to debug. I've tried handling errors on many levels, including the following in Program.cs:

        // handle UI thread exceptions
        Application.ThreadException += new ThreadExceptionEventHandler(Application_ThreadException);
        // handle non-UI thread exceptions
        AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        // force all windows forms errors to go through our handler
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

    More Notes and What I've Tried... Installed Visual Studio 2008 on the target machine and tried running in debug mode, but the error still occurs, with no hint as to where in the source code it occurred. When running the program from its installed version (Release) the error occurs more frequently, usually within minutes of launching the application. When running the program in debug mode inside of VS2008, it can run for hours or days before generating the error. Reinstalled .NET 3.5 and made sure all updates are applied. Broke random cubicle objects in frustration. Rewrote parts of the code that deal with threading and downloading in attempts to catch and log exceptions, though logging seemed to aggravate the problem (and never provided any data).

    Question: What steps can I take to troubleshoot or debug this kind of error? Memory dumps and the like seem to be the next step, but I'm not experienced at interpreting them. Perhaps there's something more I can do in the code to try and catch errors... It would be nice if the "Fatal Execution Engine Error" was more informative, but internet searches have only told me that it's a common error for a lot of .NET-related items.
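    As an aside, here is a minimal sketch of what the two handlers referenced above might look like, wired up so each one writes the exception to a log file (the method names match the snippet; the file name and logging details are assumptions). One caveat worth stating: a genuine execution engine error usually tears the process down before any managed handler runs, so handlers like these mostly help rule out ordinary unhandled exceptions and capture whatever happened just before the crash.

        using System;
        using System.IO;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Same wiring as in the question.
                Application.ThreadException += Application_ThreadException;
                AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form { Text = "Crash logging sketch" });
            }

            // Exceptions thrown on the UI (message loop) thread.
            static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)
            {
                Log(e.Exception);
            }

            // Exceptions thrown on any other thread, e.g. worker download threads.
            static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                Log(e.ExceptionObject as Exception);
            }

            static void Log(Exception ex)
            {
                try
                {
                    File.AppendAllText("crash.log",
                        DateTime.Now + "  " + (ex != null ? ex.ToString() : "(no managed exception object)") + Environment.NewLine);
                }
                catch
                {
                    // Never let the crash handler itself throw.
                }
            }
        }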

    Read the article

  • Sandboxed Javascript Execution in an Internet Explorer Extension (BHO)

    - by TelegramSam
    Firefox has the Sandbox and evalInSandbox(). Chrome has sandboxed execution in their content scripts (they call it isolated execution). I'm looking for the same thing in an IE browser extension. I can load a javascript file, then call evalScript(), but the code executes in the same environment as the javascript that exists on the page. I need a way to run my library (which includes and is based on jQuery) in a sandboxed/isolated environment, but still allow it to modify the DOM as if it were running on the page. Jint looks promising, but cannot currently evaluate jQuery. (It can parse it.) How can I do this?
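    The question mentions Jint, so purely as an illustration of what isolated evaluation looks like from .NET, here is a minimal sketch using the present-day Jint API (Engine, SetValue, Execute). That API postdates the question, and this on its own only isolates the script in its own global environment; it does not give the script live access to the IE DOM, which is the part the question is really after.

        using System;
        using Jint;

        class SandboxSketch
        {
            static void Main()
            {
                // The engine has its own global object, so nothing evaluated here
                // can see or modify the hosting page's scripts.
                var engine = new Engine();

                // Expose only the host functions you explicitly choose to expose.
                engine.SetValue("log", new Action<object>(msg => Console.WriteLine(msg)));

                engine.Execute("var x = 2 + 3; log('isolated result: ' + x);");
            }
        }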

    Read the article

  • Spring Webflow: cannot get flow execution url at action phase of portlet

    - by tabdulin
    The following exception is thrown:

        Caused by: java.lang.IllegalStateException: A flow execution action URL can only be obtained in a RenderRequest using a RenderResponse
            at org.springframework.webflow.context.portlet.PortletExternalContext.getFlowExecutionUrl(PortletExternalContext.java:206)
            at org.springframework.webflow.engine.impl.RequestControlContextImpl.getFlowExecutionUrl(RequestControlContextImpl.java:178)
            at org.springframework.webflow.mvc.view.AbstractMvcView.render(AbstractMvcView.java:172)
            at org.springframework.webflow.engine.ViewState.render(ViewState.java:282)
            at org.springframework.webflow.engine.ViewState.refresh(ViewState.java:241)
            at org.springframework.webflow.engine.ViewState.resume(ViewState.java:219)
            at org.springframework.webflow.engine.Flow.resume(Flow.java:545)
            at org.springframework.webflow.engine.impl.FlowExecutionImpl.resume(FlowExecutionImpl.java:259)
            ... 62 more

    It seems to me like resuming execution of the flow in the action phase tries to do the render phase's work. Any ideas?

    Read the article

  • SWFUpload is it possible to upload multiple files to a single php script execution

    - by user176333
    Hello, I'm trying to implement SWFUpload into existing PHP upload functionality. My current backend script, however, expects 2 files to be uploaded in a single PHP script execution (i.e. it expects the $_FILES parameter to contain 2 entries). So I'm queuing 2 files with SWFUpload and starting the upload. However, it appears SWFUpload calls the PHP backend script once for each queued file. I'd rather modify SWFUpload to send the files in a single backend script execution instead of having to adjust the backend script. Is anyone familiar with this? I've searched various resources (like the SWFUpload docs and forum), but have not found similar topics. Thanks in advance

    Read the article

  • Line of code doesn't follow sequential execution

    - by ryudice
    Hi, I'm having a problem with code that doesn't follow sequential execution although I'm not using threading. My code calls one function, and when I'm debugging inside the function, it returns to the line of code following the function call although the function hasn't finished executing. I have no idea why this would happen, any ideas? Thanks in advance.

        workflow.SaveControlTiempo(solEntity, traId, Usuario.GetUsrId()); // this is my function

        RadAjaxManager.GetCurrent(Page).RadAlert("Solicitud Transicionada con \u00c9xito"); // code execution continues here even if the function hasn't finished, and since the function hasn't finished I get an exception

        var javascripFunction = "CloseWindow('Solicitud <b>{0}</b><br />Transicionada con \u00c9xito.<li> <b>Etapa Destino: </b>{1}<li><b>Usuario: </b>{2}');";
        javascripFunction = string.Format(javascripFunction, solEntity.SOL_CODIGO, solEntity.WKF_ETP_ETAPAS.ETP_DES, DNNUtil.GetInstance().GetUserName(solEntity.USR_ID));

    Read the article

  • Why might SQL execute more quickly on SQL Server 2000 when NOT using a stored procedure?

    - by Kofi Sarfo
    I could see nothing wrong with the execution plan. Besides, as I understand it, SQL Server 2000 extended many of the performance benefits of stored procedures to all SQL statements, by matching new T-SQL statements against those of existing execution plans (retaining execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans). It's a fairly straightforward SELECT statement with sensible table joins, no transactions included or linked servers being referenced within the query, and WITH (NOLOCK) table hints applied. The stored procedure was created by dbo and the user has all the necessary permissions. So my question is this: what are the likely reasons for a query to take only a few seconds to run, but then take several minutes when identical T-SQL is run via a stored procedure?

    Read the article

  • How to repeat a particular execution multiple times

    - by Joshua
    The following snippet generates create / drop SQL for a particular database whenever there is a modification to the JPA entity classes. How do I perform something equivalent to a 'for' operation wherein the following code can be used to generate SQL for all supported databases (e.g. H2, MySQL, Postgres)? Currently I have to modify db.groupId, db.artifactId and db.driver.version every time to generate the SQL files.

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <version>${hibernate3-maven-plugin.version}</version>
          <executions>
            <execution>
              <id>create schema</id>
              <phase>process-test-resources</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
              <configuration>
                <componentProperties>
                  <persistenceunit>${app.module}</persistenceunit>
                  <drop>false</drop>
                  <create>true</create>
                  <outputfilename>${app.sql}-create.sql</outputfilename>
                </componentProperties>
              </configuration>
            </execution>
            <execution>
              <id>drop schema</id>
              <phase>process-test-resources</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
              <configuration>
                <componentProperties>
                  <persistenceunit>${app.module}</persistenceunit>
                  <drop>true</drop>
                  <create>false</create>
                  <outputfilename>${app.sql}-drop.sql</outputfilename>
                </componentProperties>
              </configuration>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>org.hibernate</groupId>
              <artifactId>hibernate-core</artifactId>
              <version>${hibernate-core.version}</version>
            </dependency>
            <dependency>
              <groupId>org.slf4j</groupId>
              <artifactId>slf4j-api</artifactId>
              <version>${slf4j-api.version}</version>
            </dependency>
            <dependency>
              <groupId>org.slf4j</groupId>
              <artifactId>slf4j-nop</artifactId>
              <version>${slf4j-nop.version}</version>
            </dependency>
            <dependency>
              <groupId>${db.groupId}</groupId>
              <artifactId>${db.artifactId}</artifactId>
              <version>${db.driver.version}</version>
            </dependency>
          </dependencies>
          <configuration>
            <components>
              <component>
                <name>hbm2cfgxml</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2dao</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
                <outputDirectory>src/main/sql</outputDirectory>
              </component>
              <component>
                <name>hbm2doc</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2hbmxml</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2java</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2template</name>
                <implementation>annotationconfiguration</implementation>
              </component>
            </components>
          </configuration>
        </plugin>

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Hi, let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as it was recommended here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using.

    I'm having trouble dealing with that M times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 times, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average will be lower and lower each time; it makes no sense to me. Enough theory, here's my code:

        #include <stdio.h>
        #include <time.h>

        #define MEASUREMENTS 1
        #define OPERATIONS   1

        typedef struct timespec TimeClock;

        TimeClock diffTimeClock(TimeClock start, TimeClock end) {
            TimeClock aux;

            if((end.tv_nsec - start.tv_nsec) < 0) {
                aux.tv_sec = end.tv_sec - start.tv_sec - 1;
                aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec;
            } else {
                aux.tv_sec = end.tv_sec - start.tv_sec;
                aux.tv_nsec = end.tv_nsec - start.tv_nsec;
            }

            return aux;
        }

        int main(void) {
            TimeClock sTime, eTime, dTime;
            int i, j;

            for(i = 0; i < MEASUREMENTS; i++) {
                printf(" » MEASURE %02d\n", i+1);

                clock_gettime(CLOCK_REALTIME, &sTime);
                for(j = 0; j < OPERATIONS; j++) {
                    myOperation();
                }
                clock_gettime(CLOCK_REALTIME, &eTime);

                dTime = diffTimeClock(sTime, eTime);

                printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
                printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
            }

            return 0;
        }

    Notes: The above diffTimeClock function is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, with OPERATIONS = 1 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 27456580
        - NSEC (OP): 27456580

    For OPERATIONS = 100 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 218929736
        - NSEC (OP): 2189297

    For OPERATIONS = 1000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 862834890
        - NSEC (OP): 862834

    For OPERATIONS = 10000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 574133641
        - NSEC (OP): 57413

    Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever.
    I've already talked about this with a friend that's on this project with me and he also can't understand the differences. I don't understand why the value is getting lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average of course, not the exact same time), no matter how many times I execute it. You could tell me that that actually depends on the operation itself, the data being read, and that some data could already be in the cache and bla bla, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, as it took a couple of minutes because I couldn't stand looking at the screen waiting and went to eat something.

    Read the article

  • Show javascript execution progress

    - by Midhat
    I have some javascript functions that take about 1 to 3 seconds (some loops or mooML templating code). During this time, the browser is just frozen. I tried showing a "loading" animation (gif image) before starting the operation and hiding it afterwards, but it just doesn't work. The browser freezes before it can render the image and hides it immediately when the function ends. Is there anything I can do to tell the browser to update the screen before going into javascript execution, something like Application.DoEvents or background worker threads? So any comments/suggestions about how to show javascript execution progress are welcome. My primary target browser is IE6, but it should also work on all the latest browsers.

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >