Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class: public class AOPLogger{ public void logBefore(){ System.out.println("Logging Before!"); } } My Spring configuration (beans.xml): <aop:config> <aop:aspect id="aopLogger" ref="test.aop.AOPLogger"> <aop:before method="logBefore" pointcut="execution(* test.rest.RestService(..))"/> </aop:aspect> </aop:config> <bean id="aopLogger" class="test.aop.AOPLogger"/> I always get an NPE in RestService when a call is made to a method, getServletRequest(), which has: return messageContext.getHttpServletRequest(); If I remove the aop configuration or comment it out from my beans.xml, everything works fine. All of my actual Rest services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based on the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!
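
    Two things look off in the configuration as posted (whether they explain the NPE is another matter): the aspect's ref attribute points at a class name rather than the bean id, and the pointcut has no method-name pattern. A minimal corrected sketch, keeping the names from the question and assuming the "+" form is wanted so subclasses of RestService are matched as well:

        <aop:config>
            <aop:aspect id="loggingAspect" ref="aopLogger">
                <!-- any method on test.rest.RestService or one of its subclasses -->
                <aop:before method="logBefore"
                            pointcut="execution(* test.rest.RestService+.*(..))"/>
            </aop:aspect>
        </aop:config>
        <bean id="aopLogger" class="test.aop.AOPLogger"/>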

    Read the article

  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy-loading. Let's say I have a class that looks like this: public class User { public int Foo {get; private set;} public IList<User> Friends {get; set;} public void SetFirstFriendsFoo() { // This line works in a unit test but does nothing during a live run with // a lazy-loaded Friends list Friends[0].Foo = 1; } } The SetFirstFriendsFoo call works perfectly inside a unit test (as it should, since objects of the same type can access each other's private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason for this is that at run-time, the Friends[0] object is no longer of type User, but of a proxy class that inherits from User, since the Friends list was lazy-loaded. My question is this: shouldn't this generate a run-time exception? You get compile-time exceptions if you try to access another class's private properties, but when you run into a situation like this it looks like the app just ignores you and continues along its way.

    Read the article

  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello, I want to create a simple form with a name and an email and save these data in an XML file. So far I have found that using Ajax with jQuery is quite easy. So I used the usual code: //dataString has the values taken from the form var dataString = 'name='+ name + '&email=' + email; $.ajax({ type: "POST", url: "users.xml", data: dataString, dataType: "xml", success: function() { .... } }); If I understood correctly, in the url I should add the name of the XML file that will be created. When the user clicks a button I call the function with the Ajax request, and then I should call a function somewhere to generate the XML. I am also using two beans. One is for setting the elements of the user and the other is for saving the data in the XML. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all of these together in order to save the data in the XML. Does anyone know what I should do? Thanks a lot!
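
    For the missing server-side piece, one rough sketch is a small JSP (or servlet) that receives the posted fields and lets XStream write the file; the $.ajax url would then point at this page rather than at users.xml itself. Everything below is an assumption about names: User stands in for the poster's form-backing bean, and appending to an existing users.xml (rather than overwriting it) would need extra read/merge logic that is left out.

        <%-- save-user.jsp: a sketch only --%>
        <%@ page import="java.io.FileWriter, com.thoughtworks.xstream.XStream" %> <%-- plus the User bean's package --%>
        <%
            User user = new User();                    // the form-backing bean from the question
            user.setName(request.getParameter("name"));
            user.setEmail(request.getParameter("email"));

            XStream xstream = new XStream();
            xstream.alias("user", User.class);         // write <user> instead of the full class name
            FileWriter writer = new FileWriter(application.getRealPath("/users.xml"));
            writer.write(xstream.toXML(user));
            writer.close();
        %>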

    Read the article

  • MonoRail ActiveRecord - The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE

    - by Justin
    Hey all, I'm new to MonoRail and ActiveRecord and have inherited an application that I need to modify. I have a Category table (Id, Name) and I'm adding a ParentId FK which references the Id in the same table so that you can have parent/child category relationships. I tried adding this to my Category model class: [Property] public int ParentId { get; set; } I also tried doing it this way: [BelongsTo("ParentId")] public Category Parent { get; set; } When I call a method to get all parent categories (they have a null ParentId), it works fine: public static Category[] GetParentCategories() { var criteria = DetachedCriteria.For<Core.Models.Category>(); return (FindAllByProperty("ParentId", null)); } However, when I call a method to get all child categories within a specific category, it errors out: public static Category[] GetChildCategories(int parentId) { var criteria = DetachedCriteria.For<Core.Models.Category>(); return (FindAllByProperty("ParentId", parentId)); } The error is: "The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE constraint \"FK_Category_ParentId\". The conflict occurred in database \"UCampus\", table \"dbo.Category\", column 'Id'.\r\nThe statement has been terminated." I'm hard-coding in the parentId parameter as 1 and I'm 100% sure it exists as an id in the Category table so I don't know why it'd give this error. Also, I'm doing a select, not an update, so what is going on here?? Thanks for any input on this, Justin

    Read the article

  • SHGetFolderPath returns path with question marks in it

    - by Colen
    Hi, Our application calls SHGetFolderPath when it runs, to get the My Documents folder. This normally works great. However, for three users - ???????, Jörg and Jörgen (see if you can spot the pattern!) - the call returns some very strange results. For example, for ???????, the call returns: c:\Users\???????\Documents I assume there's some sort of character encoding shenanigan going on here, possibly related to Unicode, but I don't have any experience with that sort of thing. How can I get a useful path to the folder (and other related folders) out of Windows, without grovelling through registry keys for the information? In an email to me, ??????? ("Dmitry") told me his "My Documents" folder was actually located here: C:\Users\43D6~1\Documents So I know there's a way to get a "normal" version of the path out of Windows, I just don't know what it is. Background: Our application is not unicode-aware, and uses standard "char *" strings. How can we get the "normal" path? I'm not opposed to calling the "unicode" version of the function, then converting it to "normal" text, if that's possible. Converting the application entirely to use unicode is not an option here (we don't have the time). Thanks.
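
    A minimal sketch of the "call the Unicode version, then convert" approach mentioned at the end might look like the following. It leans on the 8.3 short form (which is what the C:\Users\43D6~1\Documents path in the question appears to be), so it assumes short-name generation is enabled on the volume; link against shell32.

        #include <windows.h>
        #include <shlobj.h>
        #include <stdio.h>

        int main(void)
        {
            WCHAR widePath[MAX_PATH];
            WCHAR shortPath[MAX_PATH];
            char  ansiPath[MAX_PATH];

            /* Unicode path to My Documents. */
            if (SHGetFolderPathW(NULL, CSIDL_PERSONAL, NULL, SHGFP_TYPE_CURRENT, widePath) != S_OK)
                return 1;

            /* 8.3 short form, which normally contains no characters outside the ANSI range. */
            if (GetShortPathNameW(widePath, shortPath, MAX_PATH) == 0)
                return 1;

            /* Down-convert to a plain char* string for the non-Unicode code base. */
            WideCharToMultiByte(CP_ACP, 0, shortPath, -1, ansiPath, sizeof(ansiPath), NULL, NULL);
            printf("%s\n", ansiPath);
            return 0;
        }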

    Read the article

  • In languages which create a new scope each time in a loop block, a new local copy of the local loop

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to Javascript), a new scope is created for each iteration of a loop block, and the loop variable is actually created as a fresh local variable every single time and recorded in this new scope? For example, in Ruby: p RUBY_VERSION $foo = [] (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end the printout is: [MacBook01:~] $ ruby scope.rb "1.8.6" 1 2 3 4 5 [MacBook01:~] $ So, it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the function is executed at a later time, the "i" is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with p RUBY_VERSION $foo = [] i = 0 (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end This time, the i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when the i is looked up in the scope chain, it always refers to the i that was in the outside scope, and gives 5 every time: [MacBook01:~] $ ruby scope2.rb "1.8.6" 5 5 5 5 5 [MacBook01:~] $ I heard that in Ruby 1.9, i will be treated as a local defined for the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i each time through the loop seems heavy, as it seems it wouldn't have mattered if we were not invoking the functions at a later time. So when the functions don't need to be invoked at a later time, could the interpreter (or the C / Java compiler) try to optimize it so that there is no local copy of i each time?

    Read the article

  • twisted deferred/callbacks and asynchronous execution

    - by NetSkay
    Hey guys, quick question about Twisted and Python... I'm trying to figure out how I can make my code more asynchronous using Twisted, and I've come to sort of a dead end. If a function of mine returns a deferred object and I add a list of callbacks, the first callback will be called after the deferred function provides some result through deferred_obj.callback; then, in the chain of callbacks, the first callback will do something with the data and call the second callback, and so on. However, chained callbacks will not be considered asynchronous because they're chained, and the event loop will keep firing each one of them concurrently until there are no more, right? However, if I have a deferred object and I attach deferred_obj.callback as its callback, as in d.addCallback(deferred_obj.callback), then this will be considered asynchronous, because deferred_obj is waiting for the data, and the method that will pass the data is waiting on data as well; once I call d.callback, the 'd' object processes the data and then calls deferred_obj.callback, and since this object is deferred, unlike the case of chained callbacks, it will execute asynchronously... correct? Meaning chained callbacks are NOT asynchronous while chained deferreds are, correct? Thank you. PS: assuming all of my code is non-blocking
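
    Not a ruling on the terminology above, just a minimal Twisted sketch of the two shapes being described, assuming (as stated) that nothing here blocks:

        from twisted.internet import defer

        def double(x):
            return x * 2

        def report(x):
            print("got %s" % x)

        # Shape 1: plain chained callbacks. Once d fires, double and report run
        # back-to-back; the chain itself is just sequential function calls.
        d = defer.Deferred()
        d.addCallback(double)
        d.addCallback(report)
        d.callback(21)                      # prints "got 42"

        # Shape 2: handing one deferred's .callback to another. other_d stays
        # pending until d2 fires, so its callbacks run only when data arrives.
        other_d = defer.Deferred()
        other_d.addCallback(report)

        d2 = defer.Deferred()
        d2.addCallback(double)
        d2.addCallback(other_d.callback)    # chains the two deferreds together
        d2.callback(10)                     # other_d now fires: prints "got 20"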

    Read the article

  • Weird behavior of std::vector

    - by Nima
    I have a class like this: class OBJ{...}; class A { public: vector<OBJ> v; A(int SZ){v.clear(); v.reserve(SZ);} }; A *a = new A(123); OBJ something; a->v.push_back(something); This is a simplified version of my code. The problem is that in debug mode it works perfectly. But in release mode it crashes at the "push_back" line (with all optimization flags OFF). I debugged it in release mode and the problem is in the constructor of A: the size of the vector is something really big with dummy values, and when I clear it, it doesn't change... Do you know why? Thanks,

    Read the article

  • SOLVED: Lisp: macro calling a function works in interpreter, fails in compiler (SBCL + CMUCL)

    - by ttsiodras
    As suggested in a macro-related question I recently posted to SO, I coded a macro called "fast" via a call to a function (here is the standalone code in pastebin): (defun main () (progn (format t "~A~%" (+ 1 2 (* 3 4) (+ 5 (- 8 6)))) (format t "~A~%" (fast (+ 1 2 (* 3 4) (+ 5 (- 8 6))))))) This works in the REPL, under both SBCL and CMUCL: $ sbcl This is SBCL 1.0.52, an implementation of ANSI Common Lisp. ... * (load "bug.cl") 22 22 $ Unfortunately, however, the code no longer compiles: $ sbcl This is SBCL 1.0.52, an implementation of ANSI Common Lisp. ... * (compile-file "bug.cl") ... ; during macroexpansion of (FAST (+ 1 2 ...)). Use *BREAK-ON-SIGNALS* to ; intercept: ; ; The function COMMON-LISP-USER::CLONE is undefined. So it seems that by having my macro "fast" call functions ("clone","operation-p") at compile-time, I trigger issues in Lisp compilers (verified in both CMUCL and SBCL). Any ideas on what I am doing wrong and/or how to fix this?
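
    Since this one is marked SOLVED, the usual remedy is worth spelling out: helper functions that a macro calls during macroexpansion must be known to the compiler at compile time, which in Common Lisp generally means wrapping their definitions in eval-when. A sketch of the shape only; the bodies below are placeholders, not the real definitions from the pastebin:

        (eval-when (:compile-toplevel :load-toplevel :execute)
          ;; Placeholder bodies -- the real CLONE and OPERATION-P live in the
          ;; question's pastebin; the point is the EVAL-WHEN wrapper.
          (defun operation-p (form)
            (and (consp form) (symbolp (car form))))
          (defun clone (form)
            (copy-tree form)))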

    Read the article

  • Question about creating device-compatible bitmaps in C#

    - by MusiGenesis
    I am storing bitmap-like data in a two-dimensional int array. To convert this array into a GDI-compatible bitmap (for use with BitBlt), I am using this function: public IntPtr GetGDIBitmap(int[,] data) { int w = data.GetLength(0); int h = data.GetLength(1); IntPtr ret = IntPtr.Zero; using (Bitmap bmp = new Bitmap(w, h)) { for (int x = 0; x < w; x++) { for (int y = 0; y < h; y++) { Color color = Color.FromArgb(data[x, y]); bmp.SetPixel(x, y, color); } } ret = bmp.GetHbitmap(); } return ret; } This works as expected, but the call to bmp.GetHbitmap() has to allocate memory for the returned bitmap. I'd like to modify this method in two (probably related) ways: I'd like to remove the intermediate Bitmap from the above code entirely, and go directly from my int[,] array to the device-compatible bitmap (i.e. the IntPtr). I presume this would involve calling CreateCompatibleBitmap, but I don't know how to go from that call to actually manipulating the pixel values. This should logically follow from the answer to the first, but I'd also like my method to re-use existing GDI bitmap handles (instead of creating a new bitmap each time). How can I do this? NOTE: I don't really use Bitmap.SetPixel(), as its performance could best be described as "glacial". The code is just for illustration.
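
    As a side note on the SetPixel remark: the usual cheap alternative for filling a Bitmap from raw ARGB values is LockBits plus Marshal.Copy. The sketch below only replaces the pixel-copy step; it still goes through a managed Bitmap, so it does not by itself answer points 1 and 2 (going straight to a device-compatible bitmap, or reusing existing handles), which would need CreateDIBSection-style P/Invoke that is not shown here.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        public static class BitmapHelper
        {
            // Copy an int[x, y] array of ARGB values into a Bitmap without SetPixel.
            public static Bitmap FromArgbArray(int[,] data)
            {
                int w = data.GetLength(0);
                int h = data.GetLength(1);
                Bitmap bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb);

                BitmapData bd = bmp.LockBits(new Rectangle(0, 0, w, h),
                                             ImageLockMode.WriteOnly,
                                             PixelFormat.Format32bppArgb);
                try
                {
                    int[] row = new int[w];
                    for (int y = 0; y < h; y++)
                    {
                        for (int x = 0; x < w; x++)
                            row[x] = data[x, y];

                        // Stride is in bytes and may be larger than w * 4, so copy per scan line.
                        IntPtr dest = new IntPtr(bd.Scan0.ToInt64() + (long)y * bd.Stride);
                        Marshal.Copy(row, 0, dest, w);
                    }
                }
                finally
                {
                    bmp.UnlockBits(bd);
                }
                return bmp;   // the caller can still call GetHbitmap() on this as before
            }
        }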

    Read the article

  • [OpenGL] I'm having an issue using GLshort to represent vertices and normals.

    - by Xylopia
    As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. Eventually, I searched around and found the following advice on Stack Overflow: Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array How do you represent a normal or texture coordinate using GLshorts? Advice on speeding up OpenGL ES 1.1 on the iPhone Simple experiments show that switching from "FLOAT" to "SHORT" for vertex and normal isn't tough, but what troubles me is that when you scale the vertices back to their original size (with glScalef), the normals are multiplied by the reciprocal of the scale. Then how do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but I could only go for "float vertex w/ byte normal" or "short vertex w/ float normal" so far. Your help would be truly appreciated.

    Read the article

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c: #include <sys/sendfile.h> #include <fcntl.h> #include <stdio.h> int main(int argc, char **argv) { int result; int in_file; int out_file; in_file = open(argv[1], O_RDONLY); out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); result = sendfile(out_file, in_file, NULL, 1); if (result == -1) perror("sendfile"); close(in_file); close(out_file); return 0; } And when I'm running the following commands: $ gcc sendfile_test.c $ ./a.out infile The output is sendfile: Bad file descriptor Which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?
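
    One detail worth ruling out before blaming sendfile() itself: the run shown passes only one argument, so argv[2] is NULL, open() fails, out_file is -1, and sendfile() then reports EBADF ("Bad file descriptor") regardless of what the kernel would do with two regular files. A sketch with the missing argument and error checks added (this doesn't settle whether file-to-file sendfile() works on 2.6.32):

        #include <sys/types.h>
        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int in_file, out_file;
            ssize_t result;

            if (argc < 3) {
                fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
                return 1;
            }
            in_file = open(argv[1], O_RDONLY);
            if (in_file == -1) { perror("open infile"); return 1; }

            out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out_file == -1) { perror("open outfile"); return 1; }

            result = sendfile(out_file, in_file, NULL, 1);
            if (result == -1)
                perror("sendfile");

            close(in_file);
            close(out_file);
            return 0;
        }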

    Read the article

  • Return and Save XML Object From Sharepoint List Web Service

    - by HurnsMobile
    I am trying to populate a variable with an XML response from an ajax call on page load, so that on keyup I can filter through that list without making repeated get requests (think very rudimentary autocomplete). The trouble that I am having seems to be potentially related to variable scoping, but I am fairly new to js/jQuery so I am not quite certain. The following code doesn't do anything on keyup; adding alerts to it tells me that leadResults() is executing on keyup and that the variable holds an XML response object, but it appears to be empty. The strange bit is that if I move the leadResults() call into the getResults() function, the UL is populated with the results correctly. I'm beating my head against the wall on this one, please help! var resultsXml; $(document).ready( function() { var leadLookupCaml = "<Query> \ <Where> \ <Eq> \ <FieldRef Name=\"Lead_x0020_Status\"/> \ <Value Type=\"Text\">Active</Value> \ </Eq> \ </Where> \ </Query>" $().SPServices({ operation: "GetListItems", webURL: "http://sharepoint/departments/sales", listName: "Leads", CAMLQuery: leadLookupCaml, CAMLRowLimit: 0, completefunc: getResults }); }) $("#lead_search").keyup( function(e) { leadResults(); }) function getResults(xData, status) { resultsXml = xData; } function leadResults() { xData = resultsXml; $("#lead_results li").remove(); $(xData.responseXML).find("z\\:row").each(function() { var selectHtml = "<li>" + "<a href=\"http://sharepoint/departments/sales/Lists/Lead%20Groups/DispForm.aspx?ID=" + $(this).attr("ows_ID") + ">" + $(this).attr("ows_Title")+" : " + $(this).attr("ows_Phone") + "</a>\ </li>"; $("#lead_results").append(selectHtml); }); }

    Read the article

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in gzip library. I've looked through almost every other Stack question about it, and none of them seem to work. My problem is that when I try to decompress, I get an IOError. This is what I'm getting: Traceback (most recent call last): File "mymodule.py", line 61, in return gz.read() File "/usr/lib/python2.7/gzip.py", line 245, in read self._read(readsize) File "/usr/lib/python2.7/gzip.py", line 287, in _read self._read_gzip_header() File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header raise IOError, 'Not a gzipped file' IOError: Not a gzipped file This is my code to send it over SMB; it might not make sense why I do things, but it's normally in a while loop and memory efficient, I just simplified it. buffer = cStringIO.StringIO(output) #output is from a subprocess call small_buffer = cStringIO.StringIO() small_string = buffer.read() #need a string to write to buffer gzip_obj = gzip.GzipFile(fileobj=small_buffer,compresslevel=6, mode='wb') gzip_obj.write(small_string) compressed_str = small_buffer.getvalue() blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB) remainder = '|'*(8 - (len(compressed_str) % 8)) compressed_str += remainder encrypted = blowfish.encrypt(compressed_str) #i send it over smb, then retrieve it later Then this is the code that retrieves it: #buffer is a cStringIO object filled with data from smb retrieval decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB) value = buffer.getvalue() decrypted = decrypter.decrypt(value) buff = cStringIO.StringIO(decrypted) buff.seek(0) gz = gzip.GzipFile(fileobj=buff) return gz.read() Here's the problem line: return gz.read()
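
    Not necessarily the whole story, but one thing stands out in the compression half: GzipFile buffers its output, so small_buffer.getvalue() is called before the gzip stream (including its trailer) has actually been written, which by itself produces "Not a gzipped file" on the way back. A round-trip sketch with that fixed, keeping the question's PyCrypto Blowfish/ECB and '|' padding scheme (the rstrip('|') is as fragile as the original padding, since a gzip stream can legitimately end in that byte, so treat it as illustrative only):

        import cStringIO
        import gzip
        from Crypto.Cipher import Blowfish

        def pack(data, key='abcd'):
            buf = cStringIO.StringIO()
            gz = gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=6)
            gz.write(data)
            gz.close()                                    # flush the gzip trailer into buf
            compressed = buf.getvalue()                   # only now is this a complete gzip stream
            compressed += '|' * (8 - (len(compressed) % 8))   # pad to the 8-byte block size
            return Blowfish.new(key, Blowfish.MODE_ECB).encrypt(compressed)

        def unpack(blob, key='abcd'):
            compressed = Blowfish.new(key, Blowfish.MODE_ECB).decrypt(blob)
            compressed = compressed.rstrip('|')           # drop the padding again
            gz = gzip.GzipFile(fileobj=cStringIO.StringIO(compressed), mode='rb')
            return gz.read()

        print unpack(pack('hello world'))                 # -> 'hello world'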

    Read the article

  • How to TDD Asynchronous Events?

    - by Padu Merloti
    The fundamental question is how do I create a unit test that needs to call a method, wait for an event to happen on the tested class and then call another method (the one that we actually want to test)? Here's the scenario if you have time to read further: I'm developing an application that has to control a piece of hardware. In order to avoid dependency from hardware availability, when I create my object I specify that we are running in test mode. When that happens, the class that is being tested creates the appropriate driver hierarchy (in this case a thin mock layer of hardware drivers). Imagine that the class in question is an Elevator and I want to test the method that gives me the floor number that the elevator is. Here is how my fictitious test looks like right now: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += TestElevatorArrived; elevator.GoToFloor(5); //Here's where I'm getting lost... I could block //until TestElevatorArrived gives me a signal, but //I'm not sure it's the best way int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } Edit: Thanks for all the answers. This is how I ended up implementing it: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += (s, e) => { Monitor.Pulse(this); }; lock (this) { elevator.GoToFloor(5); if (!Monitor.Wait(this, Timeout)) Assert.Fail("Elevator did not reach destination in time"); int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } }

    Read the article

  • My code works in Debug mode, but not in Release mode.

    - by Nima
    Hi, I have code in Visual Studio 2008, in C++, that works with files just through fopen and fclose. Everything works perfectly in Debug mode, and I have tested with several datasets. But it doesn't work in Release mode; it crashes all the time. I have turned off all the optimizations, there is no dependency on anything (in the linker), and I have also set these: Optimization: Disabled (/Od); Keep Unreferenced Data; Do Not Remove Redundant COMDATs; Optimize for Windows98: No. I still keep wondering how it can fail under these circumstances. What else should I turn off to make it work as it does in debug mode? I think if it worked in release mode but not in debug mode, it might be a coding fault, but the other way around looks weird, doesn't it? I appreciate any help. --Nima

    Read the article

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this? When the call is made for the first time, my Linux based Apache server returns a response back from the CGI Perl script that it is running. However, subsequent runs of the script seem to be using the same response as for the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called in those subsequent times, only in the first time. This is what I am doing. I am using the following code from within a commercial application (don't wish to mention this application, probably not relevant to my problem): With CreateObject("MSXML2.XMLHTTP") .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False .send nsrresponse =.responseText End With Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off the caching on a response object before making the URL? I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read. I am also trying to force not using the cache using http header settings and html document header meta data: Snippet of server-side Perl CGI script that returns the response back to the calling client, set expiry to 0. print $httpGetCGIRequest->header( -type => 'text/html', -expires => '+0s', ); Http header settings in response sent back to client: <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head> <body> response message generated from server </body> </html> The above http header and html document head settings haven't worked, hence my question.
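
    A couple of low-tech client-side options, offered as a sketch rather than anything from the linked docs: make every GET unique with a throwaway query-string parameter, send no-cache request headers, and/or switch to MSXML2.ServerXMLHTTP, which goes through WinHTTP and does not use the WinINet cache that MSXML2.XMLHTTP uses.

        Dim http, url
        ' Cache-busting parameter: each call gets a unique URL, so a cached response can't be reused.
        url = "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1&nocache=" & CStr(Timer * 1000)

        Set http = CreateObject("MSXML2.ServerXMLHTTP")   ' bypasses the WinINet cache
        http.open "GET", url, False
        http.setRequestHeader "Cache-Control", "no-cache"
        http.setRequestHeader "Pragma", "no-cache"
        http.send
        nsrresponse = http.responseText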

    Read the article

  • Objective-C function dispatch collisions; Or, how to achieve "namespaces"?

    - by fbrereto
    I have an application for Mac OS X that supports plugins that are intended to be loaded at the same time. Some of these plugins are built on top of a Cocoa framework that may receive updates in one plugin but not another. Given Objective-C's current method for function dispatching, any call from any plugin to a given Objective-C routine will go to the same routine every time. That means plugin A can find itself inside plugin B with a trivial Objective-C call! Obviously what we're looking for is for each plugin to interact with its own version of the framework upon which it was built. I have been reading some on Objective-C and this particular need, but haven't found a definitive solution for it yet. Update: My use of the word "framework" above is misleading: the framework is a statically-linked library, built into the plugin(s) that need it. The way Objective-C handles dispatching, however, even these statically linked pieces of disparate code will co-mingle in the Objective-C dispatcher, leading to unintended consequences. Update 2: I'm still a bit fuzzy on the answer provided here, as it doesn't seem to propose a solution as much as an unproven hypothesis.

    Read the article

  • VBA: How do I get the total width from all controls in an MS-Access form?

    - by Stefan Åstrand
    Hi, This is probably very basic stuff, but please bear in mind I am completely new to these things. I am working on a procedure for my Access datasheet forms that will: Adjust the width of each column to fit content Sum the total width of all columns and subtract it from the size of the window's width Adjust the width of one of the columns to fit the remaining space This is the code that adjusts the width of each column to fit content (which works fine): Dim Ctrl As Control Dim Path As String Dim ClmWidth As Integer 'Adjust column width to fit content For Each Ctrl In Me.Controls If TypeOf Ctrl Is TextBox Then Path = Ctrl.Name Me(Path).ColumnWidth = -2 End If Next Ctrl How should I write the code so I get the total width of all columns? Thanks a lot! Stefan Solution This is the code that makes an Access datasheet go from this: To this: Sub AdjustColumnWidth(frm As Form, clm As String) On Error GoTo HandleError Dim intWindowWidth As Integer ' Window width property Dim ctrl As Control ' Control Dim intCtrlWidth As Integer ' Control width property Dim intCtrlSum As Integer ' Control width property sum Dim intCtrlAdj As Integer ' Control width property remaining after substracted intCtrSum 'Adjust column width to standard width For Each ctrl In frm.Controls If TypeOf ctrl Is TextBox Or TypeOf ctrl Is CheckBox Or TypeOf ctrl Is ComboBox Then Path = ctrl.Name frm(Path).ColumnWidth = 1500 End If Next ctrl 'Get total column width For Each ctrl In frm.Controls If TypeOf ctrl Is TextBox Or TypeOf ctrl Is CheckBox Or TypeOf ctrl Is ComboBox Then Path = ctrl.Name intCtrlWidth = frm(Path).ColumnWidth If Path <> clm Then intCtrlSum = intCtrlSum + intCtrlWidth End If End If Next ctrl 'Adjust column to fit window intWindowWidth = frm.WindowWidth - 270 intCtrlAdj = intWindowWidth - intCtrlSum frm.Width = intWindowWidth frm(clm).ColumnWidth = intCtrlAdj Debug.Print "Totalt (Ctrl): " & intCtrlSum Debug.Print "Totalt (Window): " & intWindowWidth Debug.Print "Totalt (Remaining): " & intCtrlAdj Debug.Print "clm : " & clm HandleError: GeneralErrorHandler Err.Number, Err.Description Exit Sub End Sub Code to call procedure: Private Sub Form_Load() Call AdjustColumnWidth(Me, "txtDescription") End Sub

    Read the article

  • Question about effective logging in C#

    - by MartyIX
    I've written a simple class for debugging and I call the method Debugger.WriteLine(...) in my code like this: Debugger.WriteLine("[Draw]", "InProgress", "[x,y] = " + x.ToString("0.00") + ", " + y.ToString("0.00") + "; pos = " + lastPosX.ToString() + "x" + lastPosY.ToString() + " -> " + posX.ToString() + "x" + posY.ToString() + "; SS = " + squareSize.ToString() + "; MST = " + startTime.ToString("0.000") + "; Time = " + time.ToString() + phase.ToString(".0000") + "; progress = " + progress.ToString("0.000") + "; step = " + step.ToString() + "; TimeMovementEnd = " + UI.MovementEndTime.ToString()); The body of the procedure Debugger.WriteLine is compiled only in Debug mode (directives #if, #endif). What worries me is that I often need ToString() in the Debugger.WriteLine call, which is costly because it keeps creating new strings (for a changing number, for example). How can I solve this problem? A few points/questions about debugging/tracing: I don't want to wrap every Debugger.WriteLine in an IF statement or use preprocessor directives in order to leave out debugging methods, because it would inevitably lead to less readable code and it requires too much typing. I don't want to use any framework for tracing/debugging. I want to try to program it myself. Are Trace methods (http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.aspx) left out when compiling in release mode? If so, is it possible to make my methods behave similarly? http://msdn.microsoft.com/en-us/library/fht0f5be.aspx output = String.Format("You are now {0} years old.", years); Which seems nice. Is it a solution for my problem with ToString()?
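
    One pattern that addresses both concerns at once (no #if or IF at the call site, and no ToString()/formatting work in Release) is a format-string overload marked with [Conditional("DEBUG")]: when DEBUG is not defined, the compiler drops the call and the evaluation of its arguments entirely, and in Debug builds the formatting cost is paid once, inside the method. The overload shape below is an assumption, not the poster's actual API:

        using System;
        using System.Diagnostics;

        public static class Debugger
        {
            // Calls to this method, including evaluation of their arguments, are
            // omitted by the compiler when the DEBUG symbol is not defined.
            [Conditional("DEBUG")]
            public static void WriteLine(string category, string state,
                                         string format, params object[] args)
            {
                Console.WriteLine("{0} {1}: {2}", category, state,
                                  string.Format(format, args));
            }
        }

        // Usage sketch: no concatenation or ToString() at the call site.
        // Debugger.WriteLine("[Draw]", "InProgress",
        //     "[x,y] = {0:0.00}, {1:0.00}; progress = {2:0.000}", x, y, progress);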

    Read the article

  • wsimport: generating a client with cookies

    - by dierre
    I'm generating a client for a SOAP 1.2 service using wsimport from the jaxws-maven-plugin in maven with the following execution: <groupId>org.jvnet.jax-ws-commons</groupId> <artifactId>jaxws-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <goals> <goal>wsimport</goal> </goals> <configuration> <sourceDestDir>${project.basedir}/src/main/java</sourceDestDir> <wsdlUrls> <wsdlUrl>${webservice.url}</wsdlUrl> </wsdlUrls> <extension>true</extension> </configuration> </execution> The first time the client calls the proxy, the load balancer generates a cookie and sends it back. The client should send it back so the load balancer knows which server is dedicated to a specific client (the idea is that the first time, the client gets a server and the cookie identifies that server; the load balancer then sends the client to the same server for every call). Now, is there a way to tell the plugin to enable cookie handling automatically?
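
    As far as I know this is not something the jaxws-maven-plugin bakes into the generated code; cookie handling is normally switched on per proxy at runtime through the standard JAX-WS session-maintenance property. A sketch (MyService and MyPortType stand in for whatever wsimport generated from the WSDL):

        import java.util.Map;
        import javax.xml.ws.BindingProvider;

        public class ClientFactory {
            public static MyPortType createPort() {
                MyService service = new MyService();
                MyPortType port = service.getMyPort();

                // Ask the JAX-WS runtime to store and replay cookies (such as the
                // load balancer's affinity cookie) on every call through this proxy.
                Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
                ctx.put(BindingProvider.SESSION_MAINTAIN_PROPERTY, Boolean.TRUE);
                return port;
            }
        }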

    Read the article

  • Setting processor affinity on CSC.exe launched by CoreCompile MSBuild Task

    - by Hardy
    I am wondering if there is a simple way to ensure that when a C# project is compiled, the CSC.exe that is launched inherits the parent's processor affinity settings, or perhaps a way whereby I can supply this. I have been trying to accomplish this by launching a bat file from the VS.NET cmd prompt, like start /affinity 01 custombuild.cmd, and inside my custombuild.cmd I have: @echo off msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1 :END The command-line call to Csc.exe this generates looks like the following: C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ... ignoring the rest for brevity. What I'd like to see is CSC.exe inheriting the processor affinity, or a simple way to override how the csc.exe call is generated so I can make it into start /affinity 01 C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ... ignoring the rest for brevity. I also noticed that the CoreCompile target is defined in Microsoft.CSharp.targets; should I consider overriding the MSBuildToolsPath variable so I can sneak in my own version? This feels rather hacky. Any help would be much appreciated.

    Read the article

  • Pass Result of ASIHTTPRequest "requestFinished" Back to Originating Method

    - by Intelekshual
    I have a method (getAllTeams:) that initiates an HTTP request using the ASIHTTPRequest library. NSURL *httpURL = [[[NSURL alloc] initWithString:@"/api/teams" relativeToURL:webServiceURL] autorelease]; ASIHTTPRequest *request = [[[ASIHTTPRequest alloc] initWithURL:httpURL] autorelease]; [request setDelegate:self]; [request startAsynchronous]; What I'd like to be able to do is call [WebService getAllTeams] and have it return the results in an NSArray. At the moment, getAllTeams doesn't return anything because the HTTP response is evaluated in the requestFinished: method. Ideally I'd want to be able to call [WebService getAllTeams], wait for the response, and dump it into an NSArray. I don't want to create properties because this is a disposable class (meaning it doesn't store any values, just retrieves values), and multiple methods are going to be using the same requestFinished (all of them returning an array). I've read up a bit on delegates and NSNotifications, but I'm not sure if either of them is the best approach. I found this snippet about implementing callbacks by passing a selector as a parameter, but it didn't pan out (since requestFinished fires independently). Any suggestions? I'd appreciate even just being pointed in the right direction. NSArray *teams = [[WebService alloc] getAllTeams]; (currently doesn't work, because getAllTeams doesn't return anything, but requestFinished does. I want to get the result of requestFinished and pass it back to getAllTeams:)

    Read the article

  • comparing strings from two different sources in javascript

    - by andy-score
    This is a bit of a specific request unfortunately. I have a CMS that allows a user to input text into a tinymce editor for various posts they have made. The editor is loaded via ajax to allow multiple posts to be edited from one page. I want to be able to check if there were edits made to the main text if cancel is clicked. Currently I get the value of the text from the database during the ajax call, json_encode it, then store it in a javascript variable during the callback, to be checked against later. When cancel is clicked the current value of the hidden textarea (used by tinymce to store the data for submission) is grabbed using jquery.val() and checked against the stored value from the previous ajax call like this: if(stored_value!=textarea.val()) { return true } It currently always returns true, even if no changes have been made. The issue seems to be that the textarea.val() uses html entities, whereas the ajax jsoned version doesn't. the response from ajax in firebug looks like this: <p>some text<\/p>\r\n<p>some more text<\/p> the textarea source code looks like this: &lt;p&gt;some text&lt;/p&gt; &lt;p&gt;some more text&lt;/p&gt; these are obviously different, but how can I get them to be treated as the same when evaluated? Is there a function that compares the final output of a string or a way to convert one string to the other using javascript? I tried using html entities in the ajax page, but this returned the string with html entities intact when alerted, I assume because json_encoding it turned them into characters. Any help would be greatly appreciated.
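
    One way to make the two forms comparable is to run both through the same DOM round trip before comparing, letting the browser decode the entities; a textarea element is used here (rather than a div) so that the tags themselves survive as text. A sketch, where textarea is assumed to be the jQuery-wrapped hidden textarea from the question and the \r\n-vs-\n difference is normalised separately:

        // Decode HTML entities without stripping tags (textarea keeps markup as text).
        function decodeEntities(str) {
            var el = document.createElement('textarea');
            el.innerHTML = str;
            return el.value;
        }

        // Compare the stored copy from the ajax call with the live editor value.
        function hasChanged(storedValue, textarea) {
            var a = decodeEntities(storedValue).replace(/\r\n/g, '\n');
            var b = decodeEntities(textarea.val()).replace(/\r\n/g, '\n');
            return a !== b;
        }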

    Read the article

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an Enum but has evolved into being its own class with its own table. public class Result { public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] }; public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] }; public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] }; } Each of these predefined values has a row in the database at their predefined Guid Id. There is then a failed result that has an instance per failure: public class FailedResult : Result { public FailedResult(string description) : base(StatusType.Failed) { . . . } } I then have an entity that has a Result: public class Task { public Result Result { get; set; } } When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save that to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is when I am setting up the session, I call a method to load the static entities: protected override void OnSessionOpened(ISession session) { LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running); } private static void LockStaticResults(ISession session, params Result[] results) { foreach (var result in results) { session.Load(result, result.Id); } } The problem with the session.Load method call is that it appears to be going to the database (something I don't want it to do). How could I make this so that it does not hit the database, but trusts that my static (immutable) Result instances are both up to date and part of the session?
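
    For the narrower goal of "attach this already-correct detached instance to the session without a SELECT", the usual NHibernate call is ISession.Lock with LockMode.None instead of Load; whether that is safe here depends on the Result mapping really being immutable and the predefined Ids matching the database rows, which is assumed in this sketch:

        private static void LockStaticResults(ISession session, params Result[] results)
        {
            foreach (var result in results)
            {
                // Reassociate the detached, known-good instance with this session
                // without issuing a SELECT and without scheduling any UPDATE.
                session.Lock(result, LockMode.None);
            }
        }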

    Read the article
