Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.

Page 287/1086

  • Function parameters evaluation order: is it undefined behaviour if we pass a reference?

    - by bolov
    This is undefined behaviour: void feedMeValue(int x, int a) { cout << x << " " << a << endl; } int main() { int a = 2; int &ra = a; feedMeValue(ra = 3, a); return 0; } because depending on which parameter gets evaluated first we could call (3, 2) or (3, 3). However this: void feedMeReference(int x, int const &ref) { cout << x << " " << ref << endl; } int main() { int a = 2; int &ra = a; feedMeReference(ra = 3, a); return 0; } will always output 3 3, since the second parameter is a reference and all parameters have been evaluated before the function call; so whether the second parameter is evaluated before or after ra = 3, the function receives a reference to a, which will have a value of 2 or 3 at the time of the evaluation but will always have the value 3 at the time of the function call. Is the second example UB? It is important to know because the compiler is free to do anything if it detects undefined behaviour, even if I know it would always yield the same results. *Note: I think that feedMeReference(a = 3, a) is the exact same situation as feedMeReference(ra = 3, a). However, it seems not everybody agrees, in addition to there being 2 completely different answers.
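
    A minimal sketch of one way to sidestep the ordering question entirely: sequence the assignment into its own statement before the call, so a full-expression boundary separates the write to a from the reads (names follow the snippet above).

        int a = 2;
        int &ra = a;
        int first = (ra = 3);   // full-expression boundary: a is 3 before the call
        feedMeValue(first, a);  // deterministically prints "3 3"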

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-) I've written ETLs in other languages that consist of individual operations that look something like: // in class CountOperation IEnumerable<Row> Execute(IEnumerable<Row> rows) { var count = 0; foreach (var row in rows) { row["record number"] = count++; yield return row; } } Then you string a number of these operations together, and call The Dispatcher, which is responsible for calling Operations and pushing data between them. I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs: (defun count-records (rows) (loop for count from 0 for row in rows do (signal 'have-value :value `(:count ,count ,@row)))) The trouble is if I want to string together several operations, and run them. My first attempt at writing a dispatcher for these looks something like: (let ((next-op ...)) ;; pick an op from the set of all ops (loop (handler-bind ((have-value (...))) ;; records output from operation (setq next-op ...) ;; pick a new next-op (call next-op))) But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store to save the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use. Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?
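
    A minimal sketch of the explicit-queue alternative mentioned at the end, assuming each row is a plist: every operation becomes a closure that pulls one row at a time from the operation upstream of it, so the state lives in the closure itself and no conditions or restarts are needed (pipeline below is an assumed name for the composed chain).

        (defun count-records (upstream)
          (let ((count -1))
            (lambda ()
              (let ((row (funcall upstream)))
                (when row
                  (incf count)
                  (list* :count count row))))))

        ;; the dispatcher just drains the last operation in the chain
        (loop for row = (funcall pipeline) while row collect row)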

    Read the article

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors. Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than could ever be allowed by a system call, and allowed programmers to micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained. Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right Preference Pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again? More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that if an optimization's gains cannot be measured by timing the whole application, it's not worth optimizing, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
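
    A sketch of the single-core idea in userspace on Linux, assuming x86 and glibc: pinning the benchmarking thread to one core with sched_setaffinity means the TSC being read is always the same one, which sidesteps cross-core skew (though not frequency scaling).

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdint.h>

        static inline uint64_t rdtsc(void)
        {
            uint32_t lo, hi;
            __asm__ volatile ("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
        }

        int pin_to_cpu(int cpu)  /* returns 0 on success, -1 on failure */
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            return sched_setaffinity(0, sizeof set, &set);  /* pid 0 = this thread */
        }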

    Read the article

  • There was no endpoint listening at net.pipe://localhost/...

    - by virsum
    I have two WCF services hosted in a single Windows Service on a Windows Server 2003 machine. If the Windows service needs to access either of the WCF services (like when a timed event occurs), it uses one of the five named pipe endpoints exposed (different service contracts). The service also exposes HTTP MetadataExchange endpoints for each of the two services, and net.tcp endpoints for consumers external to the server. Usually things work great, but every once in a while I get an error message that looks something like this: System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at net.pipe://localhost/IPDailyProcessing that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. --- System.IO.PipeException: The pipe endpoint 'net.pipe://localhost/IPDailyProcessing' could not be found on your local machine. --- End of inner exception stack trace --- Server stack trace: at System.ServiceModel.Channels.PipeConnectionInitiator.GetPipeName(Uri uri) at System.ServiceModel.Channels.NamedPipeConnectionPoolRegistry.NamedPipeConnectionPool.GetPoolKey(EndpointAddress address, Uri via) at System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout) at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.CallOpenOnce.System.ServiceModel.Channels.ServiceChannel.ICallOnce.Call(ServiceChannel channel, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade) at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) It doesn't happen reliably, which is maddening because I can't repeat it when I want to. In my windows service I also have some timed events and some file listeners, but these are fairly infrequent events. Does anyone have any ideas why I might be encountering an issue? Any help would be greatly appreciated.

    Read the article

  • Closures in Ruby

    - by Isaac Cambron
    I'm kind of new to Ruby and some of the closure logic has me confused. Consider this code: array = [] for i in (1..5) array << lambda {i} end array.map{|f| f.call} => [5, 5, 5, 5, 5] This makes sense to me because i is bound outside the loop, so the same variable is captured by each trip through the loop. It also makes sense to me that using an each block can fix this: array = [] (1..5).each{|i| array << lambda {i}} array.map{|f| f.call} => [1, 2, 3, 4, 5] ...because i is now being declared separately for each time through. But now I get lost: why can't I also fix it by introducing an intermediate variable? array = [] for i in 1..5 j = i array << lambda {j} end array.map{|f| f.call} => [5, 5, 5, 5, 5] Because j is new each time through the loop, I'd think a different variable would be captured on each pass. For example, this is definitely how C# works, and how, I think, Lisp behaves with a let. But in Ruby, not so much. It almost looks like = is aliasing the variable instead of copying the reference, but that's just speculation on my part. What's really happening?
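
    A hedged note on what seems to be happening: Ruby's for loop does not introduce a new scope per iteration, so the j assigned inside it is one variable shared by every lambda, whereas a block-local assignment is fresh on each invocation. A minimal sketch:

        array = []
        (1..5).each do |i|
          j = i                  # j is local to each invocation of this block
          array << lambda { j }
        end
        array.map { |f| f.call } # => [1, 2, 3, 4, 5]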

    Read the article

  • Forcing Kernel::method_name to be called in Ruby

    - by Peter
    I want to add a foo method to Ruby's Kernel module, so I can write foo(obj) anywhere and have it do something to obj. Sometimes I want a class to override foo, so I do this: module Kernel private # important; this is what Ruby does for commands like 'puts', etc. def foo x if x.respond_to? :foo x.foo # use overwritten method. else # do something to x. end end end This is good, and works. But what if I want to use the default Kernel::foo in some other object that overrides foo? Since I've got an instance method foo, I've lost the original binding to Kernel::foo. class Bar def foo # override behaviour of Kernel::foo for Bar objects. foo(3) # calls Bar::foo, not the desired call of Kernel::foo. Kernel::foo(3) # can't call Kernel::foo because it's private. # question: how do I call Kernel::foo on 3? end end Is there any clean way to get around this? I'd rather not have two different names, and I definitely don't want to make Kernel::foo public.
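
    One hedged workaround sketch: fetch Kernel's implementation as an UnboundMethod and bind it explicitly, which bypasses both the override and the private visibility.

        class Bar
          def foo
            # grab Kernel's foo, rebind it to self, and call it with 3
            Kernel.instance_method(:foo).bind(self).call(3)
          end
        end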

    Read the article

  • where did the _syscallN macros go in <linux/unistd.h>?

    - by Evan Teran
    It used to be the case that if you needed to make a system call directly in Linux without the use of an existing library, you could just include <linux/unistd.h> and it would define a macro similar to this: #define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \ type name(type1 arg1,type2 arg2,type3 arg3) \ { \ long __res; \ __asm__ volatile ("int $0x80" \ : "=a" (__res) \ : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \ "d" ((long)(arg3))); \ if (__res>=0) \ return (type) __res; \ errno=-__res; \ return -1; \ } Then you could just put somewhere in your code: _syscall3(ssize_t, write, int, fd, const void *, buf, size_t, count); which would define a write function for you that properly performed the system call. It seems that this system has been superseded by something (I am guessing the "[vsyscall]" page that every process gets) more robust. So what is the proper way (please be specific) for a program to perform a system call directly on newer Linux kernels? I realize that I should be using libc and let it do the work for me. But let's assume that I have a decent reason for wanting to know how to do this :-).
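
    For what it's worth, a sketch of the libc-provided replacement: the syscall(2) wrapper takes the system call number plus its arguments directly, so the old macro expansion collapses to a one-liner.

        #define _GNU_SOURCE
        #include <unistd.h>
        #include <sys/syscall.h>

        ssize_t my_write(int fd, const void *buf, size_t count)
        {
            return syscall(SYS_write, fd, buf, count);  /* raw write(2) */
        }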

    Read the article

  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in Postgresql. I've defined a function called _tmp_myfunction() which takes in an id and returns a table (I also define a table object type called _tmp_mytable) -- create object type to be returned CREATE TYPE _tmp_mytable AS ( id integer, cost double precision ); -- create function which returns query CREATE OR REPLACE FUNCTION _tmp_myfunction( id integer ) RETURNS SETOF _tmp_mytable AS $$ BEGIN RETURN QUERY SELECT id, cost FROM sales WHERE id = sales.id; END; $$ LANGUAGE plpgsql; This works fine when I use one id and call it using the following approach: SELECT * FROM _tmp_myfunction(402); What I would like to be able to do is to call it, but to use a column of values instead of just one value. However, if I use the following approach I end up with all values of the table in one column, separated by commas: -- call function using all values in a column SELECT _tmp_myfunction(t.id) FROM transactions as t; I understand that I can get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402); but I don't know how to construct my query in such a way that I can separate out the results.
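
    A hedged sketch of two ways to expand the composite result back into separate columns (the LATERAL form requires PostgreSQL 9.3 or later):

        -- decompose the row value explicitly
        SELECT (_tmp_myfunction(t.id)).* FROM transactions AS t;

        -- or use a lateral join
        SELECT f.* FROM transactions AS t, LATERAL _tmp_myfunction(t.id) AS f;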

    Read the article

  • Server returns 500 error only when called by Java client using urlConnection/httpUrlConnection

    - by user455889
    Hi - I'm having a very strange problem. I'm trying to call a servlet (JSP) with an HTTP GET and a few parameters (http://mydomain.com/method?param1=test&param2=123). If I call it from the browser or via WGET in a bash session, it works fine. However, when I make the exact same call in a Java client using URLConnection or HttpURLConnection, the server returns a 500 error. I've tried everything I have found online including: urlConn.setRequestProperty("Accept-Language", "en-us,en;q=0.5"); Nothing I've tried, however, has worked. Unfortunately, I don't have access to the server I'm calling so I can't see the logs. Here's the latest code: private String testURLConnection() { String ret = ""; String url = "http://localhost:8080/TestService/test"; String query = "param1=value1&param2=value2"; try { URLConnection connection = new URL(url + "?" + query).openConnection(); connection.setRequestProperty("Accept-Charset", "UTF-8"); connection.setRequestProperty("Accept-Language", "en-us,en;q=0.5"); BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line; StringBuilder content = new StringBuilder(); while ((line = bufferedReader.readLine()) != null) { content.append(line + "\n"); } bufferedReader.close(); ret = content.toString(); log.debug("testURLConnection return = " + ret); } catch (Exception ex) { log.error("Exception: " + ex); log.error("stack trace: " + getStackTrace(ex)); } return ret; } Any help would be greatly appreciated!
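
    One hedged guess worth trying: some servers reject requests that carry the default Java/1.x User-Agent header, and sending a browser-like value is the usual fix (the UA string below is illustrative).

        // inside testURLConnection(), alongside the other request properties
        connection.setRequestProperty("User-Agent",
                "Mozilla/5.0 (compatible; TestClient/1.0)");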

    Read the article

  • Rate Limit Calls To Api Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the last.fm API, using a cfc bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5 minute period. The cfc bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, "lastFmApi.cfc", is initiated in my application and persisted for the lifespan of the application. // Application.cfc example <cffunction name="onApplicationStart"> <cfset var apiKey = '[your api key here]' /> <cfset var apiSecret = '[your api secret here]' /> <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) /> </cffunction> Now if I want to call the API through a handler/controller, for example my artist handler, I can do this: <cffunction name="artistPage" cache="5 mins"> <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) /> </cffunction> I am a bit confused about caching. I am caching each call to the API in this handler for 5 mins, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API? Wondering how best to tackle this. Thanks

    Read the article

  • Mapping Vectors

    - by Dan Snyder
    Is there a good way to map vectors? Here's an example of what I mean: vec0 = [0,0,0,0,0,0,0,0,0,0,0] vec1 = [1,4,2,7,3,2] vec2 = [0,0,0,0,0,0,0,0,0] vec3 = [7,2,7,9,9,6,1,0,4] vec4 = [0,0,0,0,0,0] mainvec = [0,0,0,0,0,0,0,0,0,0,0,1,4,2,7,3,2,0,0,0,0,0,0,0,0,0,7,2,7,9,9,6,1,0,4,0,0,0,0,0,0] Let's say mainvec doesn't exist (I'm just showing it to you so you can see the general data structure I have in mind). Now say I want mainvec(12), which would be 4. Is there a good way to map the call of these vectors without just stitching them together into a mainvec? I realize I could make a bunch of if statements that test the index of mainvec and then offset each call depending on where the call is within one of the vectors, so for instance: mainvec(12) = vec1(1), which I could do by: mainvec(index): if (index >= 11) return vec1(index - 11); I wonder if there's a concise way of doing this without if statements. Any ideas?
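
    A hedged sketch of the if-less approach: keep the vectors together with a table of their cumulative start offsets and binary-search it, so the lookup stays one expression no matter how many vectors there are.

        #include <vector>
        #include <algorithm>

        // starts[i] holds the index in the virtual mainvec where vecs[i] begins
        int lookup(const std::vector<std::vector<int>>& vecs,
                   const std::vector<int>& starts, int index)
        {
            // find the last segment whose start is <= index
            auto it = std::upper_bound(starts.begin(), starts.end(), index);
            std::size_t which = (it - starts.begin()) - 1;
            return vecs[which][index - starts[which]];
        }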

    Read the article

  • Python modules not updating after restarting the main module.

    - by Ian
    I've recently come back to a project having had to stop for about 6 months, and after reinstalling my operating system and coming back to it I'm having all kinds of crazy things happen. I made sure to install the same version (2.6) of Python that I was using before. It started by giving me a strange Tkinter error that I hadn't had trouble with before; the program is relatively simple, and the 2 or 3 bugs that were left when I quit I had documented, and they weren't related to the interface. Things got even weirder when the same error would pop up even after I had removed the offending section of code. In fact, the traceback pointed to a line that didn't even exist in the module it was referencing, e.g. line 262 when the module was only 200 lines long. After starting a completely new file for the main module and copy/pasting, it finally recognized that the offending code was gone and I stopped getting the error, only to find that any updates to the code I made in another module didn't show up when I restarted the program through the shell. (I didn't forget to save.) After fiddling with this, of course, the old interface error came back, only in a different section of code that had been working previously. In fact, if I revert back to the files I had six months ago, the program works fine. As soon as I change anything in the main module, however, the interface bug comes back. Here's the original error: Exception in Tkinter callback Traceback (most recent call last): File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__ return self.func(*args) File "C:\PyStuff\interface.py", line 202, in dispOne __main__.top.destroy() File "C:\Python26\lib\lib-tk\Tkinter.py", line 1938, in destroy self.tk.call('destroy', self._w) TclError: can't invoke "destroy" command: application has been destroyed I'm guessing something else is going on here other than my own poor programming. Anyone have any ideas?
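
    A hedged diagnostic sketch: tracebacks pointing at lines that no longer exist usually mean Python is importing a stale copy of the module (an orphaned .pyc whose .py is gone, or an old duplicate earlier on sys.path). Checking which file is actually imported narrows it down:

        import interface            # the module named in the traceback
        print interface.__file__    # shows which copy (and whether a .pyc) is loaded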

    Read the article

  • Problem executing trackPageview with Google Analytics.

    - by dmrnj
    I'm trying to capture the clicks of certain download links and track them in Google Analytics. Here's my code: var links = document.getElementsByTagName("a"); for (var i = 0; i < links.length; i++) { var linkpath = links[i].pathname; if( linkpath.match(/\.(pdf|xls|ppt|doc|zip|txt)$/) || links[i].href.indexOf("mode=pdf") >=0 ){ //this matches our search addClickTracker(links[i]); } } function addClickTracker(obj){ if (obj.addEventListener) { obj.addEventListener('click', track , true); } else if (obj.attachEvent) { obj.attachEvent("on" + 'click', track); } } function track(e){ var linkhref = (e.srcElement) ? e.srcElement.pathname : this.pathname; pageTracker._trackPageview(linkhref); } Everything up until the pageTracker._trackPageview() call works. In my debugging, linkhref is being passed fine as a string. No abnormal characters, nothing. The issue is that, watching my HTTP requests, the browser never makes a second call to the tracking gif (as it does if you call this function in an "onclick" property). Calling the tracker from my JS console also works as expected. It's only in my listener. Could it be that my listener is not deferring the default action (loading the new page) before it has a chance to contact Google's servers? I've seen other tracking scripts that do a similar thing without any deferral.
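
    A hedged sketch of the usual fix for exactly this symptom: cancel the default navigation, give the beacon request a moment to fire, then navigate manually.

        function track(e) {
          var link = e.srcElement ? e.srcElement : this;
          pageTracker._trackPageview(link.pathname);
          // hold the navigation briefly so the tracking gif request can complete
          if (e.preventDefault) { e.preventDefault(); } else { e.returnValue = false; }
          setTimeout(function () { window.location.href = link.href; }, 100);
        }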

    Read the article

  • Multiple generic parameters on a html helper extension method

    - by WestDiscGolf
    What I'm trying to do is create an extension method for the HtmlHelper to create a specific output and associated details, like TextBoxFor. What I want to do is specify the property from the model class as per TextBoxFor, then an associated controller action and other parameters. So far the signature of the method looks like: public static MvcHtmlString Create<TModel, TProperty, TController>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, Expression<Action<TController>> action, object htmlAttributes) where TController : Controller where TModel : class The issue occurs when I go to call it. In my view, if I call it as per TextBoxFor without specifying the model type, I am able to specify the lambda expression for the property it's for, but when I go to specify the action I am unable to. However, when I specify the controller type Html.Create<HomeController>( ... ) I am unable to specify the model property that the control is to be created for. I want to be able to call it like <%= Html.Create(x => x.Title, controller => controller.action, null) %> I've been hitting my head against this issue for a few hours over the past day; can anyone point me in the right direction?
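
    A hedged sketch of the common workaround: C# infers either all generic arguments of a method or none, so splitting the call in two lets the first step infer TModel and TProperty from the lambda while the second names TController explicitly (all type and member names below are hypothetical).

        public static class CreateExtensions
        {
            // step 1: TModel and TProperty are inferred from the expression
            public static CreateBuilder<TModel, TProperty> Create<TModel, TProperty>(
                this HtmlHelper<TModel> html,
                Expression<Func<TModel, TProperty>> expression,
                object htmlAttributes) where TModel : class
            {
                return new CreateBuilder<TModel, TProperty>(html, expression, htmlAttributes);
            }
        }

        public class CreateBuilder<TModel, TProperty> where TModel : class
        {
            private readonly HtmlHelper<TModel> html;
            private readonly Expression<Func<TModel, TProperty>> expression;
            private readonly object htmlAttributes;

            public CreateBuilder(HtmlHelper<TModel> html,
                                 Expression<Func<TModel, TProperty>> expression,
                                 object htmlAttributes)
            {
                this.html = html;
                this.expression = expression;
                this.htmlAttributes = htmlAttributes;
            }

            // step 2: TController is named explicitly, nothing left to infer
            public MvcHtmlString For<TController>(Expression<Action<TController>> action)
                where TController : Controller
            {
                // build and return the markup here
                return MvcHtmlString.Empty;
            }
        }

    The view call would then read <%= Html.Create(x => x.Title, null).For<HomeController>(c => c.Index()) %>.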

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
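
    A hedged sketch of the metatable route in the Lua 5.1 C API: expose each GameObject as a userdata whose __index and __newindex metamethods read and write the C-side fields directly (the GameObject struct and its fields are placeholders).

        #include <string.h>
        #include <lua.h>
        #include <lauxlib.h>

        typedef struct GameObject { double x, y; } GameObject;

        static int obj_index(lua_State *L) {
            GameObject *o = (GameObject *)lua_touserdata(L, 1);
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) { lua_pushnumber(L, o->x); return 1; }
            if (strcmp(key, "y") == 0) { lua_pushnumber(L, o->y); return 1; }
            lua_pushnil(L);  /* unknown field */
            return 1;
        }

        static int obj_newindex(lua_State *L) {
            GameObject *o = (GameObject *)lua_touserdata(L, 1);
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) o->x = luaL_checknumber(L, 3);
            else if (strcmp(key, "y") == 0) o->y = luaL_checknumber(L, 3);
            return 0;
        }

        /* at spawn time: allocate the object with lua_newuserdata (so the
           lua_touserdata calls above yield the struct directly), set a
           metatable whose __index/__newindex are the functions above, and
           hand the userdata to the script as its game object */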

    Read the article

  • JAVA Procedure Error

    - by Sam....
    java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Procedure 'STP_Insert_tblReceipt' expects parameter '@CPVFlag', which was not supplied. I'm getting this error when trying to call the procedure. Everything looks correct: the count of question marks matches the number of parameters provided. cs = conn.prepareCall("{call STP_Insert_tblReceipt(?,?,?, ?,?,?, ?,?,?, ?,?,?, ?,?,?, ?,?,?, ?,?,?, ?,?,?, ?,?,?)}"); // cs = conn.prepareCall("{call STP_Receipt_Form_Insertion_Trial(?,?,?, ?,?,?, ?,?,?, ?,?,?, ?)}"); cs.setLong(1, Long.parseLong(txtMobileNo.getText())); cs.setString(2, String.valueOf(cboDistributor.getSelectedItem())); cs.setLong(3, Long.parseLong(txtBoxNo.getText())); cs.setInt(4, Integer.parseInt(txtFileNo.getText())); cs.setString(5, pickUp_date); cs.setString(6, rec_date); cs.setString(7, String.valueOf(cmbCtrlNo.getSelectedItem())); cs.setString(8, UserName); cs.setString(9, rec_date); cs.setString(10, RegionLocation); cs.setString(11, txtRemark.getText().trim()); cs.setString(12, txtSimNo.getText().trim()); cs.setInt(13, 2); cs.setString(14, String.valueOf(cmbAryanRegion.getSelectedItem())); cs.setString(15, String.valueOf(cboPickUpType.getSelectedItem())); cs.setString(16, String.valueOf(txtCafNo.getText())); cs.setString(17, distributorId); //cs.setString(18, circleName); cs.setString(18, cboCircle.getSelectedItem().toString()); cs.registerOutParameter(19, java.sql.Types.INTEGER); cs.setString(20, auditorName); cs.setString(21, retailerName); cs.setString(22, retailerCode); cs.setInt(23, mappedFlag); //cs.setString(24, distCode); cs.setString(24, cboDistCode.getSelectedItem().toString()); //cs.setString(25, zoneName); cs.setString(25, cboZone.getSelectedItem().toString()); cs.setString(26, comment); cs.setInt(27, 1); // this is for CPVFlag After this, cs.execute();

    Read the article

  • Xcode: Display Login View in applicationDidBecomeActive

    - by Patrick
    In my app I would like to show a login screen, which will be displayed when the app starts and when the app becomes active. For reference, I am using storyboards, ARC, and it is a tabbed bar application. First off, I have this method which returns the topViewController: - (UIViewController *)topViewController:(UIViewController *)rootViewController { if (rootViewController.presentedViewController == nil) { return rootViewController; } if ([rootViewController.presentedViewController isMemberOfClass:[UINavigationController class]]) { UINavigationController *navigationController = (UINavigationController *)rootViewController.presentedViewController; UIViewController *lastViewController = [[navigationController viewControllers] lastObject]; return [self topViewController:lastViewController]; } UIViewController *presentedViewController = (UIViewController *)rootViewController.presentedViewController; return [self topViewController:presentedViewController]; } And I call this method here: - (void)applicationDidBecomeActive:(UIApplication *)application { if ( ... ) { // if the user needs to login PasswordViewController *passwordView = [[PasswordViewController alloc] init]; UIViewController *myView = [self topViewController:self.window.rootViewController]; [myView presentModalViewController:passwordView animated:NO]; } } To an extent this does work: I can call a method in viewDidAppear which shows an alert view to allow the user to log in. However, this is undesirable and I would like to have a login text box and other UI elements. If I do not call my login method, nothing happens and the screen stays black, even though I have put a label and other elements on the view. Does anyone know a way to resolve this? My passcode view is embedded in a Navigation Controller, but is detached from the main storyboard.
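
    A hedged sketch of the likely fix: a controller laid out in a storyboard has to be instantiated from that storyboard; plain alloc/init builds it without its view hierarchy, which matches the black screen described. The storyboard and identifier names below are assumptions:

        UIStoryboard *sb = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil];
        PasswordViewController *passwordView =
            [sb instantiateViewControllerWithIdentifier:@"PasswordView"]; // ID set in the storyboard
        UIViewController *myView = [self topViewController:self.window.rootViewController];
        [myView presentModalViewController:passwordView animated:NO];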

    Read the article

  • How can my Facebook application post a message to a wall?

    - by Thomas Dekiere
    I already found out how to post something to a wall with the Graph API on behalf of the Facebook user. But now I want to post something in the name of my application. Here is how I'm trying to do this: protected void btn_submit_Click(object sender, EventArgs e) { Dictionary<string, string> data = new Dictionary<string, string>(); data.Add("message", "Testing"); // I'll add more data later here (picture, link, ...) data.Add("access_token", FbGraphApi.getAppToken()); FbGraphApi.postOnWall(ConfigSettings.getFbPageId(), data); } FbGraphApi.getAppToken() // ... private static string graphUrl = "https://graph.facebook.com"; //... public static string getAppToken() { MyWebRequest req = new MyWebRequest(graphUrl + "/" + "oauth/access_token?type=client_cred&client_id=" + ConfigSettings.getAppID() + "&client_secret=" + ConfigSettings.getAppSecret(), "GET"); return req.GetResponse().Split('=')[1]; } FbGraphApi.postOnWall() public static void postOnWall(string id, Dictionary<string,string> args) { call(id, "feed", args); } FbGraphApi.call() private static void call(string id, string method, Dictionary<string,string> args ) { string data = ""; foreach (KeyValuePair<string, string> arg in args) { data += arg.Key + "=" + arg.Value + "&"; } MyWebRequest req = new MyWebRequest(graphUrl +"/" + id + "/" + method, "POST", data.Substring(0, data.Length - 1)); req.GetResponse(); // here I get: "The remote server returned an error: (403) Forbidden." } Does anyone see where this is going wrong? I'm really stuck on this. Thanks!

    Read the article

  • Best way to catch a WCF exception in Silverlight?

    - by wahrhaft
    I have a Silverlight 2 application that is consuming a WCF service. As such, it uses asynchronous callbacks for all the calls to the methods of the service. If the service is not running, or it crashes, or the network goes down, etc., before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception. Because it is an asynchronous call, I can't wrap my begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point. Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either. I can't put the try/catch in my callback functions, because the exception occurs before they would get called. There is an Application_UnhandledException function in my App.xaml.cs, which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve this function for the truly unexpected errors (aka bugs) and not end up with code in this function for every circumstance I'd like to deal with in a specific way. Am I missing an obvious solution? Or am I stuck using Application_UnhandledException? [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is the fact that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem, it just seems odd.
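
    A hedged sketch of the Error-property pattern referenced in the edit, with the proxy-generated type names assumed: each async Completed handler receives the fault in its event args instead of letting it surface as an unhandled exception.

        // GetDataCompletedEventArgs is the name the proxy generator would emit
        void client_GetDataCompleted(object sender, GetDataCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                // service down, network fault, etc.: handle per call site
                return;
            }
            var result = e.Result;  // safe to read only when Error is null
        }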

    Read the article

  • RhinoMocks: how to test if method was called when using PartialMock

    - by epitka
    I have a class that is something like this: public class MyClass { public virtual string MethodA(Command cmd) { //some code here } public void MethodB(SomeType obj) { // do some work MethodA(command); } } I mocked MyClass as a PartialMock (mocks.PartialMock<MyClass>) and I set up an expectation for MethodA: var cmd = new Command(); //set the cmd to expected state Expect.Call(myClass.MethodA(cmd)).Repeat.Once(); The problem is that MethodB calls the actual implementation of MethodA instead of the mocked one. I must be doing something wrong (not very experienced with RhinoMocks). How do I force it to mock MethodA? Here is the actual code: var cmd = new SetBaseProductCoInsuranceCommand(); cmd.BaseProduct = planBaseProduct; var insuredType = mocks.DynamicMock<InsuredType>(); Expect.Call(insuredType.Code).Return(InsuredTypeCode.AllInsureds); cmd.Values.Add(new SetBaseProductCoInsuranceCommand.CoInsuranceValues() { CoInsurancePercent = 0, InsuredType = insuredType, PolicySupplierType = ppProvider }); Expect.Call(() => service.SetCoInsurancePercentages(cmd)).Repeat.Once(); mocks.ReplayAll(); //act service.DefaultCoInsurancesFor(planBaseProduct); //assert service.AssertWasCalled(x => x.SetCoInsurancePercentages(cmd), x => x.Repeat.Once());
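
    A hedged guess at the cause: a partial mock only intercepts a call when the expectation's arguments match, and the command built inside the method under test is a different instance from the test's cmd. IgnoreArguments makes the expectation apply regardless of the argument values:

        Expect.Call(() => service.SetCoInsurancePercentages(cmd))
              .IgnoreArguments()
              .Repeat.Once();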

    Read the article

  • What VC++ compiler/linker does when building a C++ project with Managed Extension

    - by ???
    The initial problem is that I tried to rebuild a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (.exe file). When calling the COM interface function, there's an RPC call failure: COMException(0x800706BE): The remote procedure call failed. According to the COM HRESULT design, if the FACILITY code is 7, it's actually a WIN32 error, and the win32 error code is 0x6BE, which is the above-mentioned "remote procedure call failed". All I did was replace the COM server .exe file; the original file works well. When I checked into the project, I found it's a C++ project with Managed Extensions. When checking the DLL with Reflector, it shows there are 2 additional .NET assembly references. Then I checked the project settings and found nothing about the extra 2 assembly references. I turned on the show-includes option of the compiler and the verbose-library option of the linker, and tried to analyze whether the assemblies are indirectly referenced via a .h file. I've collected all the .h files and grepped all of them for '#using' and '#import' and the assembly file names themselves. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib library files, only one of the .lib files is a side product of another managed-extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For the managed-extensions-enabled C++ project, I checked the output DLL assembly, and it did NOT reference the 2 assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and link process model of a VC++/CLI project. Where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.

    Read the article

  • Trouble with building up a string in Clojure

    - by Aki Iskandar
    Hi gang - [this may seem like my problem is with Compojure, but it isn't - it's with Clojure] I've been pulling my hair out on this seemingly simple issue - but am getting nowhere. I am playing with Compojure (a light web framework for Clojure) and I would just like to generate a web page showing my list of todos that are in a PostgreSQL database. The code snippets are below (I left out the database connection, query, etc., but that part isn't needed because the specific issue is that the resulting HTML shows nothing between the <body> and </body> tags). As a test, I tried hard-coding the string in the call to main-layout, like this: (html (main-layout "Aki's Todos" "Haircut<br>Study Clojure<br>Answer a question on Stackoverflow")) - and it works fine. So the real issue is that I do not believe I know how to build up a string in Clojure. Not the idiomatic way, and not by calling out to Java's StringBuilder either - as I have attempted to do in the code below. A virtual beer, and a big upvote to whoever can solve it! Many thanks! ============================================================= ;The master template (a very simple POC for now, but can expand on it later) (defn main-layout "This is one of the html layouts for the pages assets - just like a master page" [title body] (html [:html [:head [:title title] (include-js "todos.js") (include-css "todos.css")] [:body body]])) (defn show-all-todos "This function will generate the todos HTML table and call the layout function" [] (let [rs (select-all-todos) sbHTML (new StringBuilder)] (for [rec rs] (.append sbHTML (str rec "<br><br>"))) (html (main-layout "Aki's Todos" (.toString sbHTML))))) ============================================================= Again, the result is a web page but with nothing between the body tags. If I replace the code in the for loop with println statements and direct the code to the REPL - forgetting about the web page stuff (i.e. the call to main-layout) - the resultset gets printed, BUT the issue is with building up the string. Thanks again. ~Aki
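
    A hedged reading of the bug: Clojure's for is lazy, so unless something realizes the resulting sequence, the (.append sbHTML ...) side effects never run and the StringBuilder stays empty; printing at the REPL works because the REPL forces the sequence. A sketch of the functional alternative (doseq is the eager, side-effecting counterpart if the StringBuilder is kept):

        (defn show-all-todos
          "Builds the todos page without mutation; the lazy map is safe here
           because apply str consumes the whole sequence."
          []
          (let [rs (select-all-todos)
                body (apply str (map #(str % "<br><br>") rs))]
            (html (main-layout "Aki's Todos" body))))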

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)... public void outer() { try { middle(); } catch (OLE e) { updateEntities(); outer(); } } @Transactional public void middle() { try { inner(); } catch (OLE e) { updateEntities(); middle(); } } @Transactional public void inner() { //Do DB operation } inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way, I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() throws an OLE even when the stack is outer()->middle()->inner(). Now, middle() properly handles the OLE and the retry succeeds, but when it comes time to close the transaction, it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force a new transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way I can structure this differently? EDIT: To clarify what I'm asking, let me explain my main question. Is it possible to catch and handle OLE if you are inside of an @Transactional method? FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.
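
    A hedged sketch of one option, with the performance caveat already raised above, and assuming the middle()-to-inner() call crosses a Spring proxy boundary (self-invocation inside one bean would bypass the annotation): giving the retried unit its own transaction keeps a rolled-back attempt from poisoning the caller's.

        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void inner() {
            // the DB operation runs in its own transaction; an OLE here rolls
            // back only this inner transaction, not the one begun by middle()
        }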

    Read the article

  • Performance of C# method polymorphism with generics

    - by zildjohn01
    I noticed in C#, unlike C++, you can combine virtual and generic methods. For example: using System.Diagnostics; class Base { public virtual void Concrete() {Debug.WriteLine("base concrete");} public virtual void Generic<T>() {Debug.WriteLine("base generic");} } class Derived : Base { public override void Concrete() {Debug.WriteLine("derived concrete");} public override void Generic<T>() {Debug.WriteLine("derived generic");} } class App { static void Main() { Base x = new Derived(); x.Concrete(); x.Generic<PerformanceCounter>(); } } Given that any number of versions of Generic<T> could be instantiated, it doesn't look like the standard vtbl approach could be used to resolve method calls, and in fact it's not. Here's the generated code: x.Concrete(); mov ecx,dword ptr [ebp-8] mov eax,dword ptr [ecx] call dword ptr [eax+38h] x.Generic<PerformanceCounter>(); push 989A38h mov ecx,dword ptr [ebp-8] mov edx,989914h call 76A874F1 mov dword ptr [ebp-4],eax mov ecx,dword ptr [ebp-8] call dword ptr [ebp-4] The extra code appears to be looking up a dynamic vtbl according to the generic parameters, and then calling into it. Has anyone written about the specifics of this implementation? How well does it perform compared to the non-generic case?

    Read the article

  • C# / IronPython Interop with shared C# Class Library

    - by Adam Haile
    I'm trying to use IronPython as an intermediary between a C# GUI and some C# libraries, so that it can be scripted post compile time. I have a class library DLL that is used by both the GUI and the Python and is something along the lines of this: namespace MyLib { public class MyClass { public string Name { get; set; } public MyClass(string name) { this.Name = name; } } } The IronPython code is as follows: import clr clr.AddReferenceToFile(r"MyLib.dll") from MyLib import MyClass ReturnObject = MyClass("Test") Then, in C# I would call it as follows: ScriptEngine engine = Python.CreateEngine(); ScriptScope scope = engine.CreateScope(); ScriptSource source = engine.CreateScriptSourceFromFile("Script.py"); source.Execute(scope); MyClass mc = scope.GetVariable<MyClass>("ReturnObject"); When I call this last bit of code, source.Execute(scope) returns successfully, but when I try the GetVariable call, it throws the following exception: Microsoft.Scripting.ArgumentTypeException: expected MyClass, got MyClass So, you can see that the class names are exactly the same, but for some reason it thinks they are different. The DLL is in a different directory than the .py file (I just didn't bother to write out all the path setup stuff); could it be that there is an issue with the interpreter for IronPython seeing these objects as different because it's somehow seeing them as being in a different context or scope?
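
    A hedged sketch of one way to rule out the duplicate-load explanation: have the host load its own assembly into the IronPython runtime, so the script's MyClass is guaranteed to be the same .NET type the GUI was compiled against (the script then needs only the import, not clr.AddReferenceToFile).

        ScriptEngine engine = Python.CreateEngine();
        // make the host's already-loaded MyLib visible to the script, instead
        // of letting the script load a second copy by file path
        engine.Runtime.LoadAssembly(typeof(MyClass).Assembly);

        ScriptScope scope = engine.CreateScope();
        ScriptSource source = engine.CreateScriptSourceFromFile("Script.py");
        source.Execute(scope);
        MyClass mc = scope.GetVariable<MyClass>("ReturnObject");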

    Read the article
