Search Results

Search found 7097 results on 284 pages for 'calls'.

  • Two questions on ensuring EndInvoke() gets called on a list of IAsyncResult objects

    - by RobV
    This question is about the .Net IAsyncResult design pattern and the necessity of calling EndInvoke, as covered in this question. Background: I have some code where I'm firing off potentially many asynchronous calls to a particular function and then waiting for all these calls to finish before using EndInvoke() to get back all the results. Question 1: I don't know whether any of the calls has encountered an exception until I call EndInvoke(), and in the event that an exception occurs in one of the calls the entire method should fail, with the exception wrapped into an API-specific exception and thrown upwards. So my first question is: what's the best way to ensure that the remainder of the async calls get properly terminated? Is a finally block which calls EndInvoke() on the remaining unterminated calls (and ignores any further exceptions) the best way to do this? Question 2: When I first fire off all my async calls I then call WaitHandle.WaitAll() on the array of WaitHandle instances that I've got from my IAsyncResult instances. The method which fires all these async calls has a timeout to adhere to, so I provide this to the WaitAll() method. Then I test whether all the calls have completed; if not, the timeout must have been reached, so the method should also fail and throw another API-specific exception. So my second question is: what should I do in this case? I need to call EndInvoke() to terminate all these async calls before I throw the error, but at the same time I don't want the code to get stuck, since EndInvoke() is blocking. In theory, if the WaitAll() call times out then all the async calls should themselves have timed out and thrown exceptions (thus completing the call), since they are also governed by a timeout, but this timeout is potentially different from the main timeout.
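
    As a sketch of the finally-block idea from Question 1 (purely illustrative: the worker delegate, the 30-second batch timeout and the use of TimeoutException as a stand-in for the API-specific exception are assumptions, and delegate BeginInvoke/EndInvoke applies to the classic .NET Framework):

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Threading;

      class EndInvokeCleanupSketch
      {
          // Stand-in for the real long-running target method.
          static int SlowSquare(int x) { Thread.Sleep(100); return x * x; }

          static void Main()
          {
              Func<int, int> worker = SlowSquare;
              var pending = new List<IAsyncResult>();
              var ended = new HashSet<IAsyncResult>();
              try
              {
                  for (int i = 0; i < 5; i++)
                      pending.Add(worker.BeginInvoke(i, null, null));

                  // Overall timeout for the whole batch (assumed value).
                  var handles = pending.Select(ar => ar.AsyncWaitHandle).ToArray();
                  if (!WaitHandle.WaitAll(handles, TimeSpan.FromSeconds(30)))
                      throw new TimeoutException("batch timed out"); // wrap into the API-specific exception here

                  foreach (var r in pending)
                  {
                      int result = worker.EndInvoke(r); // rethrows any exception from that call
                      ended.Add(r);
                      Console.WriteLine(result);
                  }
              }
              finally
              {
                  // Make sure every outstanding call still gets its EndInvoke exactly once;
                  // further exceptions are swallowed so the first failure is the one reported.
                  // Note: these EndInvoke calls block until the underlying call completes.
                  foreach (var r in pending.Where(ar => !ended.Contains(ar)))
                  {
                      try { worker.EndInvoke(r); } catch { }
                  }
              }
          }
      }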

    Read the article

  • Update table without using cursor and on date

    - by Muhammad Kashif Nadeem
    Please copy and run the following script: DECLARE @Customers TABLE (CustomerId INT) DECLARE @Orders TABLE ( OrderId INT, CustomerId INT, OrderDate DATETIME ) DECLARE @Calls TABLE (CallId INT, CallTime DATETIME, CallToId INT, OrderId INT) ----------------------------------------------------------------- INSERT INTO @Customers SELECT 1 INSERT INTO @Customers SELECT 2 ----------------------------------------------------------------- INSERT INTO @Orders SELECT 10, 1, DATEADD(d, -20, GETDATE()) INSERT INTO @Orders SELECT 11, 1, DATEADD(d, -10, GETDATE()) ----------------------------------------------------------------- INSERT INTO @Calls SELECT 101, DATEADD(d, -19, GETDATE()), 1, NULL INSERT INTO @Calls SELECT 102, DATEADD(d, -17, GETDATE()), 1, NULL INSERT INTO @Calls SELECT 103, DATEADD(d, -9, GETDATE()), 1, NULL INSERT INTO @Calls SELECT 104, DATEADD(d, -6, GETDATE()), 1, NULL INSERT INTO @Calls SELECT 105, DATEADD(d, -5, GETDATE()), 1, NULL ----------------------------------------------------------------- I want to update the @Calls table and need the following results. I am using the following query: UPDATE @Calls SET OrderId = ( CASE WHEN (s.CallTime > e.OrderDate) THEN e.OrderId END ) FROM @Calls s INNER JOIN @Orders e ON s.CallToId = e.CustomerId and the result of my query is not what I need. Requirement: As you can see, there are two orders, one on 2010-12-12 and one on 2010-12-22. I want to update the @Calls table with the relevant OrderId with respect to CallTime. In short, if subsequent Orders are added and there are further calls, then we assume that a new call is associated with the most recent Order. Note: This is sample data, so it is not the case that I always have two Orders; there might be 10+ Orders and 100+ calls. Note 2: I could not find a good title for this question. Please change it if you can think of a better one. Thanks.
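
    One possible set-based shape for this, sketched rather than verified against the exact expected output: for each call, pick the customer's most recent order placed on or before the call time (OUTER APPLY leaves OrderId NULL for calls that precede every order).

      UPDATE c
      SET    c.OrderId = o.OrderId
      FROM   @Calls AS c
      OUTER APPLY (SELECT TOP 1 e.OrderId
                   FROM   @Orders AS e
                   WHERE  e.CustomerId = c.CallToId
                     AND  e.OrderDate <= c.CallTime
                   ORDER BY e.OrderDate DESC) AS o;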

    Read the article

  • Should the main method consist only of object creations and method calls?

    - by crucified soul
    A friend of mine told me that the best practice is that the class containing the main method should be named Main and should contain only the main method. Also, the main method should only parse inputs, create other objects and call other methods; the Main class and the main method shouldn't do anything else. Basically, what he is saying is that the class containing the main method should look like: public class Main { public static void main(String[] args) { //parse inputs //create other objects //call methods } } Is this the best practice?

    Read the article

  • How to serialize a function call depending on which object instance calls it: serialize calls from the same instance, but not from different instances

    - by LondonDreams
    I have a function which fetches and updates a record from the db, and I am trying to make sure that if the function is called by the same object instance (from the same or a different thread) the calls are synchronized, whereas calls from different object instances need not be synchronized. I have tried using a lock per client, that is, instead of synchronizing the method directly, using explicit locking through lock objects held in a Map. The function looks like: getAndUpdateMyHitCount(myObjId){ //go to db and get unique record by myObjId //fetch value, increment, save update } This function may get called in the same thread by the same or different object instances. But since fetching and matching from the Map is slow, is there a more optimized way to do this? I found something similar in this Question but don't feel that it is optimized.
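
    A rough sketch of the lock-per-caller idea with a concurrent map (names are invented, whether the map lookup is really the bottleneck is worth measuring, and entries are never evicted in this sketch):

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;

      public class HitCounter {

          // One lock object per calling instance, mirroring the "lock per client" Map idea.
          // ConcurrentHashMap lookups are cheap relative to a DB round trip.
          private final ConcurrentMap<Object, Object> locks = new ConcurrentHashMap<>();

          public void getAndUpdateMyHitCount(Object caller, long myObjId) {
              Object lock = locks.computeIfAbsent(caller, k -> new Object());
              synchronized (lock) {
                  // go to db and get the unique record by myObjId,
                  // fetch the value, increment, save update (DB access omitted)
              }
          }

          // If it is acceptable to lock on the caller directly,
          // synchronized (caller) { ... } removes the map lookup entirely.
      }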

    Read the article

  • Why can't I get 100% code coverage on a method that calls a constructor of a generic type?

    - by Martin Watts
    Today I came across a weird issue in a Visual Studio 2008 Code Coverage Analysis. Consider the following method: private IController GetController<T>(IContext context) where T : IController, new() { IController controller = new T(); controller.ListeningContext = context; controller.Plugin = this; return controller; } This method is called in a unit test as follows (MenuController has an empty constructor): controller = plugin.GetController<MenuController>(null); After calling this method from a unit test, the generated code coverage report shows coverage of only 85%. Looking up the code shows that, apparently, the call to the constructor of the generic type is considered only partly covered. WHY? Google didn't help, and MSDN didn't help at all, of course. Does anybody know?

    Read the article

  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share; maybe it can help the next person with this problem. The App: On start, the app would do a number of calls to different methods on a WCF service to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything until all the service calls had finished. First I broke out the longer running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently. The Problem: However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browser's networking features in regards to the number of simultaneous connections etc. However, that just didn’t seem to be the issue here; you can clearly see in Fiddler that there are numerous calls, but they’re just not returning. I thought the problem might be in the WCF service, but the calls were really not that complicated and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight” etc. This, however, pretty much got me nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever. The fog is clearing: It wasn’t until I came across the term “WCF blocking calls” and started incorporating that in my searches that I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum, “Long-running WCF call blocking subsequent calls”, which discussed the exact problem I was facing and, best of all, one of the guys there had the solution! The short answer is in the forum post, and the guy answering has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas” which covers it very well. “So come on, what’s the solution?!” I hear you ask, unless you’ve already gone to the links and looked it up ;) The Solution: Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF: basically, if you’re doing multiple calls to a single WCF web-service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones.
So why is Asp.Net session state affecting us? We’re working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when doing service calls, hence to the WCF service the call looks like it might as well be coming from a normal Asp.Net page. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack. The Client HTTP Stack to the rescue: By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the Asp.Net - WCF session state issue. The above code specifies that all calls to addresses starting with “http://” should go through the client stack; this can actually be set more granularly, and you can specify it to be used only for certain domains etc. Summary: The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
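
    For the more granular form mentioned above, a small sketch (the service host address is a placeholder, MainPage is the page generated by the project template, and the snippet assumes using System.Net; and using System.Net.Browser; in App.xaml.cs):

      private void Application_Startup(object sender, StartupEventArgs e)
      {
          // Only requests to this prefix use Silverlight's client HTTP stack;
          // everything else keeps using the browser stack.
          WebRequest.RegisterPrefix("http://services.example.com/",
                                    WebRequestCreator.ClientHttp);

          this.RootVisual = new MainPage();
      }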

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results. The sample We begin by consuming exactly the same web service which is sitting on a remote server. This time I have created a .net 3.5 application which will consume the web service using the basichttp binding. To show you the configuration for the consumption of this web service please refer to the below diagram. You can see like before we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application which will asynchronously make the web service calls and handle the responses on a call back method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.   Sample 1 – WCF with Default Timeouts In this test I set about recreating the same scenario as previous where we would run the test but this time using WCF as the messaging component. For the first test I would use the default configuration settings which WCF had setup when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client-side trace is as follows:   The server-side trace is as follows: Some observations on the results are as follows: The timeouts happened quicker than in the previous tests because some calls were timing out before they attempted to connect to the server The first few calls that timed out did actually connect to the server and did execute successfully on the server   Test 2 – Increase Open Connection Timeout & Send Timeout In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The client side trace for this test was   The server-side trace for this test was: Some observations on this test are: This test proved if the timeouts are high enough everything will just go through   Test 3 – Increase just the Send Timeout In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The below is the client side trace The below is the server side trace Some observations on this test are: In this test from both the client and server perspective everything ran through fine The open connection timeout did not seem to have any effect   Test 4 – Increase Just the Open Connection Timeout In this test I wanted to validate the change to the open connection setting by increasing just this on its own. 
The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" The Test: We simulated 21 calls to the web service. Test Results: The client side trace was The server side trace was Some observations on this test are: In this test you can see that the increase to the open connection timeout (which relates to opening the channel) was not the thing which stopped the calls timing out; it is the send of the data which is timing out. On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client. You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options. Test 5 – Smaller Increase in Send Timeout: In this test I wanted to make a smaller increase to the send timeout than previously, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30" The Test: We simulated 21 calls to the web service. Test Results: The client side trace was The server side trace was Some observations on this test are: You can see that most of the calls got through fine. On the client you can see that call 20 timed out but still hit the server and executed fine. Summary: At this point, between the two articles we have quite a lot of scenarios showing the different ways the timeout settings have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks: ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection. WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds have passed, when the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this. WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from the point your call is made from user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user this may appear to give better latency in getting an error response compared to WSE or ASMX.
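
    For reference, the same four timeouts can also be set on the binding in code rather than in config; a sketch only, with the proxy name and address as placeholders and the values mirroring Test 2:

      // requires using System; and using System.ServiceModel; inside your client setup code
      var binding = new BasicHttpBinding
      {
          OpenTimeout    = TimeSpan.FromMinutes(10),  // openTimeout="00:10:00"
          SendTimeout    = TimeSpan.FromMinutes(10),  // sendTimeout="00:10:00"
          ReceiveTimeout = TimeSpan.FromMinutes(10),  // receiveTimeout="00:10:00"
          CloseTimeout   = TimeSpan.FromMinutes(1)    // closeTimeout="00:01:00"
      };
      var client = new SomeServiceClient(binding,
          new EndpointAddress("http://remote-server/Service.svc"));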

    Read the article

  • How can I strip Python logging calls without commenting them out?

    - by cdleary
    Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks). I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so: [Leave ERROR and above] my_module.SomeClass.method_with_lots_of_warn_calls [Leave WARN and above] my_module.SomeOtherClass.method_with_lots_of_info_calls [Leave INFO and above] my_module.SomeWeirdClass.method_with_lots_of_debug_calls Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this? Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
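
    Not the bytecode-rewriting tool being asked about, but for comparison, the usual lightweight alternatives are a cheap isEnabledFor guard, or an if __debug__: block that python -O strips at compile time (do_work is a placeholder stub):

      import logging

      log = logging.getLogger(__name__)

      def do_work(item):
          pass  # stands in for the real inner-loop work

      def hot_loop(items):
          for item in items:
              # Guard: the debug call (and its argument formatting) is skipped
              # entirely unless DEBUG is actually enabled.
              if log.isEnabledFor(logging.DEBUG):
                  log.debug("processing %r", item)

              # Removed completely when running under "python -O",
              # because the compiler drops "if __debug__:" blocks.
              if __debug__:
                  log.debug("extra diagnostic for %r", item)

              do_work(item)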

    Read the article

  • Using parameterized function calls in SELECT statements. SQL Server

    - by geekzlla
    I have taken over some code from a previous developer and have come across this SQL statement that calls several SQL functions. As you can see, the function calls in the select statement pass a parameter to the function. How does the SQL statement know what value to replace the variable with? For the below sample, how does the query engine know what to replace nDeptID with when it calls, fn_SelDeptName_DeptID(nDeptID) nDeptID IS a column in table Note. SELECT STATEMENT: SELECT nCustomerID AS [Customer ID], nJobID AS [Job ID], dbo.fn_SelDeptName_DeptID(nDeptID) AS Department, nJobTaskID AS JobTaskID, dbo.fn_SelDeptTaskDesc_OpenTask(nJobID, nJobTaskID) AS Task, nStandardNoteID AS StandardNoteID, dbo.fn_SelNoteTypeDesc(nNoteID) AS [Note Type], dbo.fn_SelGPAStandardNote(nStandardNoteID) AS [Standard Note], nEntryDate AS [Entry Date], nUserName as [Added By], nType AS Type, nNote AS Note FROM Note WHERE nJobID = 844261 ORDER BY nJobID, Task, [Entry Date] ====================== Function fn_SelDeptName_DeptID: ALTER FUNCTION [dbo].[fn_SelDeptName_DeptID] (@iDeptID int) RETURNS varchar(25) -- Used by DataCollection for Job Tracking -- if the Deptartment isnt found return an empty string BEGIN -- Return the Department name for the given DeptID. DECLARE @strDeptName varchar(25) IF @iDeptID = 0 SET @strDeptName = '' ELSE BEGIN SET @strDeptName = (SELECT dName FROM Department WHERE dDeptID = @iDeptID) IF (@strDeptName IS NULL) SET @strDeptName = '' END RETURN @strDeptName END ========================== Thanks in advance.
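
    The short answer is that nDeptID names a column of the Note table, so dbo.fn_SelDeptName_DeptID(nDeptID) is evaluated once per returned row, with that row's nDeptID value, just like any other expression in the SELECT list. A tiny self-contained illustration (table and function invented):

      CREATE FUNCTION dbo.fn_Double (@i int) RETURNS int
      AS BEGIN RETURN @i * 2 END
      GO
      DECLARE @t TABLE (n int);
      INSERT INTO @t VALUES (1), (2), (3);

      -- The scalar function is called once per row, receiving each row's n.
      SELECT n, dbo.fn_Double(n) AS doubled   -- returns 2, 4, 6
      FROM @t;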

    Read the article

  • In sync query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time does each part of the application take. It yields that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting for the total run time to be around 35 minutes, as I have removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), but each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but on the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the Profiler (Application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the bottleneck of the speed was in SQL server, using most of the CPU time.

    Read the article

  • Is there a cheaper alternative to Skype for VoIP to PSTN calls to Vietnam?

    - by Nick Bolton
    My dad uses Skype to make calls to Vietnam PSTN, but finds that the rates are a little on the pricey side. It's probably not relevant, but he's living in Thailand right now. Is there an application similar to Skype, which is cheaper for calling Vietnam? The answer may well be no, since maybe the international telecom peers in Vietnam are just generally expensive... Who knows? While I'm asking, maybe it's worth someone mentioning if there's a cheaper alternative to Skype in general? I'm thinking that maybe not as Skype's pretty cheap anyway, but it's worth mentioning.

    Read the article

  • Multiple calls to different page methods in same web page are not running in parallel (JQuery/Ajax/A

    - by Tony_Henrich
    I have several page methods defined in the code behind of an aspx page. I have several JS calls (see example below), one after the other, in the ready() method of JQuery to call these page methods. I noticed the javascript calls run asynchronously but the .NET page methods do not run in parallel. Page method 1 finishes first before page method 2 runs. Is there a way to get all the page methods to run all at the same time? My workaround is to put each method in its own aspx page or use iframes but I am looking for better solutions. $.ajax({ type: "POST", url: (page/methodname), data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { .... } } });

    Read the article

  • Ajax problem not displaying data using multiple javascript calls...

    - by Ronedog
    I'm writing an app that uses ajax to retrieve data from a mysql db using php. Because of the nature of the app, the user clicks an href link that has an "onclick" event used to call the javascript/ajax. I'm retrieving the data from mysql, then calling a separate php function which creates a small html table with the necessary data in it. The new table gets passed back to the responseText and is displayed inside a div tag. The tables only have around 10-20 rows of data in them. This functionality is working fine and displays the data in html form exactly as it needs to be on the page. The problem is this: the HREF "onclick" event needs to run multiple scripts one right after the other. The first script updates the "existing" data, and inside the "update_existing" function is a call to refresh a section of the page with the updated HTML from the responseText. Then, when that is done, a "display_html" function is called which also updates a different section of the page with its newly created HTML table. The event is attached to an "Update" link; the onclick string gets built dynamically using php with parameters supplied, but for this example I simply took the parameters out so it didn't get confusing. The update_existing() function actually calls the display_html() function, which updates a section of the page as needed. I need to update a different section of the page on the same click of the mouse right after the update, which is why I'm calling display_html() again, right after it. The problem is that only the last call is being updated on my screen. In other words, the 2nd function call, display_html(), executes and displays the refreshed data just fine, but the previous call to update_existing() runs and updates the database properly yet doesn't display on the screen unless I press the browser's "refresh" button, which of course displays the new data exactly how I want it to, but I don't want the users to have to press the "refresh" button. I tried adding multiple display_html() calls one right after the other, separating all of them with semicolons, and learned that only the very last function call actually refreshed the div element on the html page with the table information; although all the previous display_html() calls worked, they couldn't be seen on the page without a refresh of the browser. Is this a problem with javascript, or the ajax call, or is this a limitation in the DOM that only allows one element to be updated at a time? The ajax call is asynchronous, but I've tried both; only async works, period. This is the same in both Firefox and Internet Explorer. Any ideas what's going on and how to get around it so I can run these multiple scripts?

    Read the article

  • Erlang: How to view output of io:format/2 calls in processes spawned on remote nodes.

    - by jkndrkn
    Hello, I am working on a decentralized Erlang application. I am currently working on a single PC and creating multiple nodes by initializing erl with the -sname flag. When I spawn a process using spawn/4 on its home node, I can see output generated by calls io:format/2 within that process in its home erl instance. When I spawn a process remotely by using spawn/4 in combination with register_name, output of io:format/2 is sometimes redirected back to the erl instance where the remote spawn/4 call was made, and sometimes remains completely invisible. Similarly, when I use rpc:call/4, output of io:format/2 calls is redirected back to the erl instance where the `rpc:call/4' call is made. How do you get a process to emit debugging output back to its parent erl instance?

    Read the article

  • Queue ExternalInterface calls to Flash Object in UpdatePanel - Needs Improvement?

    - by Laramie
    A Flash (actually Flex) object is created on an ASP.Net page within an Update Panel using a modified version of the embedCallAC_FL_RunContent.js script so it can be written in dynamically. It is re-created with this script with each partial postback to that panel. There are also other Update Panels on the page. With some postbacks (partial and full), External Interface calls such as $get('FlashObj').ExternalInterfaceFunc('arg1', 0, true); are prepared server-side and added to the page using ScriptManager.RegisterStartupScript. They're embedded in a function and stuffed into Sys.Application's load event, for example Sys.Application.add_load(funcContainingExternalInterfaceCalls). The problem is that because the Flash object's state state may change with each partial postback, the Flash (Flex) object and/or External Interface may not be ready or even exist yet in the DOM when the JavaScript - Flash External Interface call is made. It results in an "Object doesn't support this property or method" exception. I have a working strategy to make the ExternalInterface calls immediately if Flash is ready or else queue them until such time that Flash announces its readiness. //Called when the Flash object is initialized and can accept ExternalInterfaceCalls var flashReady = false; //Called by Flash when object is fully initialized function setFlashReady() { flashReady = true; //Make any queued ExternalInterface calls, then dequeue while (extIntQueue.length > 0) (extIntQueue.shift())(); } var extIntQueue = []; function callExternalInterface(flashObjName, funcName, args) { //reference to the wrapped ExternalInterface Call var wrapped = extWrap(flashObjName, funcName, args); //only procede with ExternalInterface call if the global flashReady variable has been set if (flashReady) { wrapped(); } else { //queue the function so when flashReady() is called next, the function is called and the aruments are passed. extIntQueue.push(wrapped); } } //bundle ExtInt call and hold variables in a closure function extWrap(flashObjName, funcName, args) { //put vars in closure return function() { var funcCall = '$get("' + flashObjName + '").' + funcName; eval(funcCall).apply(this, args); } } I set the flashReady var to dirty whenever I update the Update Panel that contains the Flash (Flex) object. ScriptManager.RegisterClientScriptBlock(parentContainer, parentContainer.GetType(), "flashReady", "flashReady = false;", true); I'm pleased that I got it to work, but it feels like a hack. I am still on the learning curve with respect to concepts like closures why "eval()" is apparently evil, so I'm wondering if I'm violating some best practice or if this code should be improved, if so how? Thanks.

    Read the article

  • What can be done to speed up synchronous WCF calls?

    - by Dimitri C.
    My performance measurements of synchronous WCF calls from within a Silverlight application showed I can make 7 calls/s on a localhost connection, which is very slow. Can this be speeded up, or is this normal? This is my test code: const UInt32 nrCalls = 100; ICalculator calculator = new CalculatorClient(); // took over from the MSDN calculator example for (double i = 0; i < nrCalls; ++i) { var call = calculator.BeginSubtract(i + 1, 1, null, null); call.AsyncWaitHandle.WaitOne(); double result = calculator.EndSubtract(call); } Remarks: CPU load is almost 0%. Apparently, the WCF module is waiting for something. I tested this both on Firefox 3.6 and Internet Explorer 7. I'm using Silverlight v3.0

    Read the article

  • Matlab postpones disp calls when doing demanding calculations. Why is that?

    - by Reed Richards
    I am implementing an algorithm in Matlab. Among other things it calculates shortest paths etc., so it's quite demanding for my old computer. I've put disp calls throughout the program to see what's happening all the time. However, when starting on a particularly heavy for loop, the disp seems not to be called until the loop is over, even though it comes before the loop. Why is that? I thought that Matlab was really linear, or am I just choking it with too many calculations so the disp calls get the lowest priority?

    Read the article

  • Python: re-initialize a function's default value for subsequent calls to the function.

    - by Peter Stewart
    I have a function that calls itself to increment and decrement a stack. I need to call it a number of times, and I'd like it to work the same way in subsequent calls, but, as expected, it doesn't re-initialize the default value. I've read that this is a newbie trap and I've seen suggested solutions, but I haven't been able to make any solution work. It would be nice to be able to "fun.reset". def a(x, stack = [None]): print x,' ', stack if x > 5: temp = stack.pop() if x <=5: stack.append(1) if stack == []: return a(x + 1) print a(0) print a(2) #second call print a(3) #third call I expected this to work, but it doesn't: print a(0, [None]) print a(2, [None]) #second call print a(3, [None]) #third call Can I reset the function to its initial state? Any help would be appreciated.
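
    A sketch of the usual remedy for the shared mutable default (default arguments are evaluated once, at definition time, so use a None sentinel and build a fresh list on every top-level call). Shown with Python 3 print syntax; it assumes the flattened snippet reads "if stack == []: return" followed by "a(x + 1)" on its own line, and the recursive call now passes the stack explicitly since it is no longer carried by the default:

      def a(x, stack=None):
          if stack is None:          # fresh list for every top-level call
              stack = [None]
          print(x, ' ', stack)
          if x > 5:
              stack.pop()
          if x <= 5:
              stack.append(1)
          if stack == []:
              return
          a(x + 1, stack)            # hand the same stack down the recursion

      a(0)
      a(2)   # second call now starts from a clean default
      a(3)   # third call too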

    Read the article

  • How can I find the places where my program makes system calls?

    - by Lucky Man
    From the strace manual: -i Print the instruction pointer at the time of the system call. I straced my program: strace -i prog As a result I got a lot of system calls. One of them: [000da49c] open("./rabbit.o", O_RDONLY) = 3 But the disassembled instruction at this address of prog doesn't make any syscall (in the hte editor): da49c ! mov r7, ip What is wrong? How can I find the places where my program makes system calls? P.S. The architecture of my device doesn't support the GDB command "catch syscall".

    Read the article

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with xcode's debuggers or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime? Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: Maybe this is better asked by saying, how do I use the xcode debug tools to do a stack trace?

    Read the article
