Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • MonoRail ActiveRecord - The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE

    - by Justin
    Hey all, I'm new to MonoRail and ActiveRecord and have inherited an application that I need to modify. I have a Category table (Id, Name) and I'm adding a ParentId FK which references the Id in the same table so that you can have parent/child category relationships. I tried adding this to my Category model class: [Property] public int ParentId { get; set; } I also tried doing it this way: [BelongsTo("ParentId")] public Category Parent { get; set; } When I call a method to get all parent categories (they have a null ParentId), it works fine: public static Category[] GetParentCategories() { var criteria = DetachedCriteria.For<Core.Models.Category>(); return (FindAllByProperty("ParentId", null)); } However, when I call a method to get all child categories within a specific category, it errors out: public static Category[] GetChildCategories(int parentId) { var criteria = DetachedCriteria.For<Core.Models.Category>(); return (FindAllByProperty("ParentId", parentId)); } The error is: "The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE constraint \"FK_Category_ParentId\". The conflict occurred in database \"UCampus\", table \"dbo.Category\", column 'Id'.\r\nThe statement has been terminated." I'm hard-coding in the parentId parameter as 1 and I'm 100% sure it exists as an id in the Category table so I don't know why it'd give this error. Also, I'm doing a select, not an update, so what is going on here?? Thanks for any input on this, Justin
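
    A possible direction, not taken from the original post: in Castle ActiveRecord, mapping the ParentId column twice (once as a plain [Property] and once through [BelongsTo]) can leave NHibernate auto-flushing a conflicting UPDATE for that column just before the query runs. Below is a minimal sketch of a self-referencing mapping that keeps a single mapping for the column; the class shape and the criterion-based query are assumptions, not the poster's actual code.

        using Castle.ActiveRecord;
        using NHibernate.Criterion; // NHibernate.Expression in older NHibernate versions

        [ActiveRecord("Category")]
        public class Category : ActiveRecordBase<Category>
        {
            [PrimaryKey]
            public int Id { get; set; }

            [Property]
            public string Name { get; set; }

            // The ParentId column is mapped only here, through the association.
            [BelongsTo("ParentId")]
            public Category Parent { get; set; }

            public static Category[] GetChildCategories(int parentId)
            {
                // Query against the association's identifier instead of a raw ParentId property.
                return FindAll(Restrictions.Eq("Parent.Id", parentId));
            }
        }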

    Read the article

  • Visual Studio: How to attach a debugger dynamically to a specific process

    - by Jeff Cyr
    I am building an internal dev tool to manage different processes commonly used in our development environment. The tool shows the list of monitored processes, indicates their running state, and allows starting or stopping each process. I'd like to add the ability to attach a debugger to a monitored process from my tool instead of going to 'Debug - Attach to Process' in Visual Studio and finding the process. My goal is to have something like Debugger.Launch() that would show a list of the available Visual Studio instances. I can't use Debugger.Launch() because it launches the debugger on the process that makes the call. I would need something like Debugger.Launch(processId). Does anyone know how to achieve this functionality? A solution could be to implement a command in each monitored process that calls Debugger.Launch() when the command is received from the monitoring tool, but I would prefer something that does not require modifying the code of the monitored processes. Side question: when using Debugger.Launch(), instances of Visual Studio that already have a debugger attached are not listed. Visual Studio is not limited to one attached debugger; you can attach to multiple processes when using 'Debug - Attach to Process'. Does anyone know how to bypass this limitation when using Debugger.Launch(), or an alternative?
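
    One approach that is sometimes used for this (not mentioned in the post) is the Visual Studio automation model: obtain a running DTE object and ask its debugger to attach to the target PID. A rough sketch, assuming a reference to the EnvDTE interop assembly and a single running Visual Studio instance; with several instances you would have to pick the right DTE object out of the Running Object Table yourself.

        using System;
        using System.Runtime.InteropServices;
        using EnvDTE; // Visual Studio automation interop assembly

        static class DebuggerAttacher
        {
            // Ask a running Visual Studio instance to attach its debugger to processId.
            public static void Attach(int processId)
            {
                // A version-specific ProgID such as "VisualStudio.DTE.9.0" also works here.
                var dte = (DTE)Marshal.GetActiveObject("VisualStudio.DTE");

                foreach (Process process in dte.Debugger.LocalProcesses)
                {
                    if (process.ProcessID == processId)
                    {
                        process.Attach(); // same effect as Debug - Attach to Process
                        return;
                    }
                }

                throw new ArgumentException("Process " + processId + " was not found in the debugger's process list.");
            }
        }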

    Read the article

  • undefined references

    - by Brandon
    Hello, I'm trying to compile some fortran code and I'm running into some confusing linking errors. I have some code that I compile and place into a static library: >gfortran -c -I../../inc -o bdout.o bdout.F >ar rv libgeo.a bdout.o I then try to compile against that library with some simple test code and get the following: >gfortran -o mytest -L -lgeo mytest.F /tmp/cc4uvcsj.o: In function `MAIN__': mytest.F:(.text+0xb0): undefined reference to `ncwrite1_' collect2: ld returned 1 exit status It's not in the object naming because everything looks fine: >nm -u libgeo.a bdout.o: U _gfortran_exit_i4 U _gfortran_st_write U _gfortran_st_write_done U _gfortran_transfer_character U _gfortran_transfer_integer U ncobjcl_ U ncobjwrp_ U ncopencr_ U ncopenshcr_ U ncopenwr_ U ncwrite1_ U ncwrite2_ U ncwrite3_ U ncwrite4_ U ncwritev_ I can check the original object file too: >nm -u bdout.o U _gfortran_exit_i4 U _gfortran_st_write U _gfortran_st_write_done U _gfortran_transfer_character U _gfortran_transfer_integer U ncobjcl_ U ncobjwrp_ U ncopencr_ U ncopenshcr_ U ncopenwr_ U ncwrite1_ U ncwrite2_ U ncwrite3_ U ncwrite4_ U ncwritev_ The test code simply contains a single call to a function defined in bdout.o: program hello print *,"Hello World!" call ncwrite1( istat, f, ix2, ix3, ix4, ix5, ih ) end program hello I can't figure out what the problem is. Does anyone have any suggestions? Maybe even just a way to track the problem down? Cheers.

    Read the article

  • jQuery - Callback failing if there is no options parameter

    - by user249950
    Hi, I'm attempting to build a simple plugin like this: (function($) { $.fn.gpSlideOut = function(options, callback) { // default options - these are used when no others are specified $.fn.gpSlideOut.defaults = { fadeToColour: "#ffffb3", fadeToColourSpeed: 500, slideUpSpeed: 400 }; // build main options before element iteration var o = $.extend({}, $.fn.gpSlideOut.defaults, options); this.each(function() { $(this) .animate({backgroundColor: o.fadeToColour},o.fadeToColourSpeed) .slideUp(o.SlideUpSpeed, function(){ if (typeof callback == 'function') { // make sure the callback is a function callback.call(this); // brings the scope to the callback } }); }); return this; }; // invoke the function we just created passing it the jQuery object })(jQuery); The confusion I'm having is that normally with jQuery plugins you can call something like this: $(this_moveable_item).gpSlideOut(function() { // Do stuff }); without the options parameter, but it misses the callback if I do it like that, so I always have to have var options = {} $(this_moveable_item).gpSlideOut(options, function() { // Do stuff }); even if I only want to use the defaults. Is there any way to make sure the callback function is called whether or not the options parameter is there? Cheers.

    Read the article

  • Can you help with this assembly language code?

    - by Mugen
    Hi, I've been looking through a piece of code from a PC game that I'm trying to "improve". (OK, so maybe I suck at the game, but I still want to play it.) Could you please look into the following code: fld dword ptr[ebp+00007B1C] fsub dword ptr[esp+64] fst dword ptr[ebp+00007B1C] call 004A2E48 This code is called every second for the level countdown timer. I need to stay on a particular level for a few minutes. If I can modify the above code so that the value stored at the address [ebp+00007B1C] is 0, then the game level will always time out and it will save me playing those crazy "survival" minigames. I'll explain what I understand from this code. Don't worry, you don't have to go deep into this. In the first line we get the timer value; for example, if 97 seconds are remaining, it is here that this value is loaded. In the second line a value (1 second) is subtracted from 97. In the third line 96 is moved back to memory. And finally we have the function call that will do other processing based on the time remaining. Now all I need to do is patch this piece of code somehow so that the value that is stored is 0 (in the third step). Can you please help me out with this?

    Read the article

  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello, I want to create a simple form with a name and an email and save this data in an XML file. So far I have found that using Ajax with jQuery is quite easy, so I used the usual code: //dataString has the values taken from the form var dataString = 'name='+ name + '&email=' + email; $.ajax({ type: "POST", url: "users.xml", data: dataString, dataType: "xml", success: function() { .... } }); If I understood correctly, in the url I should put the name of the XML file that will be created. When the user clicks a button I call the function with the Ajax request, and then I should call somewhere a function for generating the XML. I am also using two beans: one is for setting the elements of the user and the other is for saving the data in the XML. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all these together in order to save the data in the XML. Does anyone know what I should do? Thanks a lot!

    Read the article

  • PySide Qt script doesn't launch from Spyder but works from shell

    - by Maxim Zaslavsky
    I have a weird bug in my project that uses PySide for its Qt GUI, and in response I'm trying to test with simpler code that sets up the environment. Here is the code I am testing with: http://stackoverflow.com/a/6906552/130164 When I launch that from my shell (python test.py), it works perfectly. However, when I run that script in Spyder, I get the following error: Traceback (most recent call last): File "/home/test/Desktop/test/test.py", line 31, in <module> app = QtGui.QApplication(sys.argv) RuntimeError: A QApplication instance already exists. If it helps, I also get the following warning: /usr/lib/pymodules/python2.6/matplotlib/__init__.py:835: UserWarning: This call to matplotlib.use() has no effect because the the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. Why does that code work when launched from my shell but not from Spyder? Update: Mata answered that the problem happens because Spyder uses Qt, which makes sense. For now, I've set up execution in Spyder using the "Execute in an external system terminal" option, which doesn't cause errors but doesn't allow debugging, either. Does Spyder have any built-in workarounds to this?

    Read the article

  • In languages which create a new scope each time in a loop block, is a new local copy of the loop variable created each time?

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the variable defined for the loop is actually made into a new local variable every single time and recorded in this new scope? For example, in Ruby: p RUBY_VERSION $foo = [] (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end the printout is: [MacBook01:~] $ ruby scope.rb "1.8.6" 1 2 3 4 5 [MacBook01:~] $ So, it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the function is executed at a later time, the "i" is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with p RUBY_VERSION $foo = [] i = 0 (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end This time, the i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when the i is looked up in the scope chain, it always refers to the i that was in the outside scope, and gives 5 every time: [MacBook01:~] $ ruby scope2.rb "1.8.6" 5 5 5 5 5 [MacBook01:~] $ I heard that in Ruby 1.9, i will be treated as a local variable defined for the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i each time through the loop seems heavy, as it wouldn't have mattered if we were not invoking the functions at a later time. So when the functions don't need to be invoked at a later time, could the interpreter and the compiler for C / Java optimize it so that there is no local copy of i each time?

    Read the article

  • How to TDD Asynchronous Events?

    - by Padu Merloti
    The fundamental question is: how do I create a unit test that needs to call a method, wait for an event to happen on the tested class, and then call another method (the one that we actually want to test)? Here's the scenario if you have time to read further: I'm developing an application that has to control a piece of hardware. In order to avoid a dependency on hardware availability, when I create my object I specify that we are running in test mode. When that happens, the class that is being tested creates the appropriate driver hierarchy (in this case a thin mock layer of hardware drivers). Imagine that the class in question is an Elevator and I want to test the method that gives me the floor number the elevator is on. Here is what my fictitious test looks like right now: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += TestElevatorArrived; elevator.GoToFloor(5); //Here's where I'm getting lost... I could block //until TestElevatorArrived gives me a signal, but //I'm not sure it's the best way int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } Edit: Thanks for all the answers. This is how I ended up implementing it: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += (s, e) => { Monitor.Pulse(this); }; lock (this) { elevator.GoToFloor(5); if (!Monitor.Wait(this, Timeout)) Assert.Fail("Elevator did not reach destination in time"); int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } }
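
    A variant of the pattern above (not from the original post) that waits on a dedicated event handle instead of locking on the test class itself; Timeout is assumed to be the same constant used in the edit.

        [TestMethod]
        public void TestGetCurrentFloorWithEventHandle()
        {
            var elevator = new Elevator(Elevator.Environment.Offline);
            using (var arrived = new System.Threading.ManualResetEvent(false))
            {
                elevator.ElevatorArrivedOnFloor += (s, e) => arrived.Set();

                elevator.GoToFloor(5);

                // Block the test until the event fires or the timeout elapses.
                if (!arrived.WaitOne(Timeout))
                    Assert.Fail("Elevator did not reach destination in time");

                Assert.AreEqual(5, elevator.GetCurrentFloor());
            }
        }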

    Read the article

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in gzip library. I've looked through almost every other Stack Overflow question about it, and none of the answers seem to work. My problem is that when I try to decompress, I get an IOError. This is what I'm getting: Traceback (most recent call last): File "mymodule.py", line 61, in return gz.read() File "/usr/lib/python2.7/gzip.py", line 245, in read self._read(readsize) File "/usr/lib/python2.7/gzip.py", line 287, in _read self._read_gzip_header() File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header raise IOError, 'Not a gzipped file' IOError: Not a gzipped file This is my code to send it over SMB; it might not make sense why I do things, but it's normally in a while loop and memory efficient, I just simplified it. buffer = cStringIO.StringIO(output) #output is from a subprocess call small_buffer = cStringIO.StringIO() small_string = buffer.read() #need a string to write to buffer gzip_obj = gzip.GzipFile(fileobj=small_buffer,compresslevel=6, mode='wb') gzip_obj.write(small_string) compressed_str = small_buffer.getvalue() blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB) remainder = '|'*(8 - (len(compressed_str) % 8)) compressed_str += remainder encrypted = blowfish.encrypt(compressed_str) #I send it over SMB, then retrieve it later Then this is the code that retrieves it: #buffer is a cStringIO object filled with data from smb retrieval decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB) value = buffer.getvalue() decrypted = decrypter.decrypt(value) buff = cStringIO.StringIO(decrypted) buff.seek(0) gz = gzip.GzipFile(fileobj=buff) return gz.read() Here's the problem: return gz.read()

    Read the article

  • Efficiency of checking for a null varbinary(max) column?

    - by Moe Sisko
    Using SQL Server 2008. Example table : CREATE table dbo.blobtest (id int primary key not null, name nvarchar(200) not null, data varbinary(max) null) Example query : select id, name, cast((case when data is null then 0 else 1 end) as bit) as DataExists from dbo.blobtest Now, the query needs to return a "DataExists" column, that returns 0 if the blob is null, else 1. This all works fine, but I'm wondering how efficient it is. i.e. does SQL server need to read in the whole blob to its memory, or is there some optimization so that it just does enough reads to figure out if the blob is null or not ? (FWIW, the sp_tableoption "large value types out of row" option is set to OFF for this example).

    Read the article

  • Qt Socket blocking functions required to run in QThread where created. Any way past this?

    - by Alexander Kondratskiy
    The title is very cryptic, so here goes! I am writing a client that behaves in a very synchronous manner. Due to the design of the protocol and the server, everything has to happen sequentially (send request, wait for reply, service reply etc.), so I am using blocking sockets. Here is where Qt comes in. In my application I have a GUI thread, a command processing thread and a scripting engine thread. I create the QTcpSocket in the command processing thread, as part of my Client class. The Client class has various methods that boil down to writing to the socket, reading back a specific number of bytes, and returning a result. The problem comes when I try to directly call Client methods from the scripting engine thread. The Qt sockets randomly time out and when using a debug build of Qt, I get these warnings: QSocketNotifier: socket notifiers cannot be enabled from another thread QSocketNotifier: socket notifiers cannot be disabled from another thread Anytime I call these methods from the command processing thread (where Client was created), I do not get these problems. To simply phrase the situation: Calling blocking functions of QAbstractSocket, like waitForReadyRead(), from a thread other than the one where the socket was created (dynamically allocated), causes random behaviour and debug asserts/warnings. Anyone else experienced this? Ways around it? Thanks in advance.

    Read the article

  • My code works in Debug mode, but not in Release mode.

    - by Nima
    Hi, I have code in Visual Studio 2008 in C++ that works with files just using fopen and fclose. Everything works perfectly in Debug mode, and I have tested it with several datasets. But it doesn't work in Release mode; it crashes all the time. I have turned off all the optimizations, there is no dependency on anything (in the linker), and I have also set these: Optimization: Disabled (/Od); Keep Unreferenced Data; Do Not Remove Redundant; Optimize for Windows98: No. I still keep wondering why it doesn't work under these circumstances. What else should I turn off to make it work as it does in Debug mode? I think if it worked in Release mode but not in Debug mode it might be a coding fault, but the other way around looks weird, doesn't it? I appreciate any help. --Nima

    Read the article

  • SHGetFolderPath returns path with question marks in it

    - by Colen
    Hi, Our application calls ShGetFolderPath when it runs, to get the My Documents folder. This normally works great. However, for three users - ???????, Jörg and Jörgen (see if you can spot the pattern!) - the call returns some very strange results. For example, for ???????, the call returns: c:\Users\???????\Documents I assume there's some sort of character encoding shenanigan going on here, possibly related to Unicode, but I don't have any experience with that sort of thing. How can I get a useful path to the folder (and other related folders) out of windows, without grovelling through registry keys for the information? In an email to me, ??????? ("Dmitry"), told me his "my documents" folder was actually located here: C:\Users\43D6~1\Documents So I know there's a way to get a "normal" version of the path out of Windows, I just don't know what it is. Background: Our application is not unicode-aware, and uses standard "char *" strings. How can we get the "normal" path? I'm not opposed to calling the "unicode" version of the function, then converting it to "normal" text, if that's possible. Converting the application entirely to use unicode is not an option here (we don't have the time). Thanks.

    Read the article

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c: #include <sys/sendfile.h> #include <fcntl.h> #include <stdio.h> int main(int argc, char **argv) { int result; int in_file; int out_file; in_file = open(argv[1], O_RDONLY); out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); result = sendfile(out_file, in_file, NULL, 1); if (result == -1) perror("sendfile"); close(in_file); close(out_file); return 0; } And when I'm running the following commands: $ gcc sendfile_test.c $ ./a.out infile The output is sendfile: Bad file descriptor Which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?

    Read the article

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this? When the call is made for the first time, my Linux-based Apache server returns a response from the CGI Perl script that it is running. However, subsequent runs of the script seem to be using the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called on those subsequent runs, only the first time. This is what I am doing. I am using the following code from within a commercial application (I don't wish to mention this application; it is probably not relevant to my problem): With CreateObject("MSXML2.XMLHTTP") .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False .send nsrresponse =.responseText End With Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off caching on a response object before making the URL request? I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read. I am also trying to force the cache not to be used, via HTTP header settings and HTML document header meta data. Snippet of the server-side Perl CGI script that returns the response back to the calling client, setting the expiry to 0: print $httpGetCGIRequest->header( -type => 'text/html', -expires => '+0s', ); HTTP header settings in the response sent back to the client: <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head> <body> response message generated from server </body> </html> The above HTTP header and HTML document head settings haven't worked, hence my question.

    Read the article

  • VBA: How do I get the total width of all controls in an MS-Access form?

    - by Stefan Åstrand
    Hi, This is probably very basic stuff, but please bear in mind I am completely new to these things. I am working on a procedure for my Access datasheet forms that will: Adjust the width of each column to fit content Sum the total width of all columns and subtract it from the size of the window's width Adjust the width of one of the columns to fit the remaining space This is the code that adjusts the width of each column to fit content (which works fine): Dim Ctrl As Control Dim Path As String Dim ClmWidth As Integer 'Adjust column width to fit content For Each Ctrl In Me.Controls If TypeOf Ctrl Is TextBox Then Path = Ctrl.Name Me(Path).ColumnWidth = -2 End If Next Ctrl How should I write the code so I get the total width of all columns? Thanks a lot! Stefan Solution This is the code that makes an Access datasheet go from this: To this: Sub AdjustColumnWidth(frm As Form, clm As String) On Error GoTo HandleError Dim intWindowWidth As Integer ' Window width property Dim ctrl As Control ' Control Dim intCtrlWidth As Integer ' Control width property Dim intCtrlSum As Integer ' Control width property sum Dim intCtrlAdj As Integer ' Control width property remaining after substracted intCtrSum 'Adjust column width to standard width For Each ctrl In frm.Controls If TypeOf ctrl Is TextBox Or TypeOf ctrl Is CheckBox Or TypeOf ctrl Is ComboBox Then Path = ctrl.Name frm(Path).ColumnWidth = 1500 End If Next ctrl 'Get total column width For Each ctrl In frm.Controls If TypeOf ctrl Is TextBox Or TypeOf ctrl Is CheckBox Or TypeOf ctrl Is ComboBox Then Path = ctrl.Name intCtrlWidth = frm(Path).ColumnWidth If Path <> clm Then intCtrlSum = intCtrlSum + intCtrlWidth End If End If Next ctrl 'Adjust column to fit window intWindowWidth = frm.WindowWidth - 270 intCtrlAdj = intWindowWidth - intCtrlSum frm.Width = intWindowWidth frm(clm).ColumnWidth = intCtrlAdj Debug.Print "Totalt (Ctrl): " & intCtrlSum Debug.Print "Totalt (Window): " & intWindowWidth Debug.Print "Totalt (Remaining): " & intCtrlAdj Debug.Print "clm : " & clm HandleError: GeneralErrorHandler Err.Number, Err.Description Exit Sub End Sub Code to call procedure: Private Sub Form_Load() Call AdjustColumnWidth(Me, "txtDescription") End Sub

    Read the article

  • Weird behavior of std::vector

    - by Nima
    I have a class like this: class OBJ{...}; class A { public: vector<OBJ> v; A(int SZ){v.clear(); v.reserve(SZ);} }; A *a = new A(123); OBJ something; a->v.push_back(something); This is a simplified version of my code. The problem is that in debug mode it works perfectly, but in release mode it crashes at the "push_back" line (with all optimization flags OFF). I debugged it in release mode and the problem is in the constructor of A: the size of the vector is something really big with dummy values, and when I clear it, it doesn't change... Do you know why? Thanks,

    Read the article

  • Question about effective logging in C#

    - by MartyIX
    I've written a simple class for debugging and I call the method Debugger.WriteLine(...) in my code like this: Debugger.WriteLine("[Draw]", "InProgress", "[x,y] = " + x.ToString("0.00") + ", " + y.ToString("0.00") + "; pos = " + lastPosX.ToString() + "x" + lastPosY.ToString() + " -> " + posX.ToString() + "x" + posY.ToString() + "; SS = " + squareSize.ToString() + "; MST = " + startTime.ToString("0.000") + "; Time = " + time.ToString() + phase.ToString(".0000") + "; progress = " + progress.ToString("0.000") + "; step = " + step.ToString() + "; TimeMovementEnd = " + UI.MovementEndTime.ToString()); The body of the procedure Debugger.WriteLine is compiled only in Debug mode (#if / #endif directives). What worries me is that I often need ToString() in the Debugger.WriteLine call, which is costly because it keeps creating new strings (for a changing number, for example). How do I solve this problem? A few points/questions about debugging/tracing: I don't want to wrap every Debugger.WriteLine in an IF statement or use preprocessor directives at every call site to leave out the debugging methods, because it would inevitably lead to unreadable code and require too much typing. I don't want to use any framework for tracing/debugging; I want to try to program it myself. Are the Trace methods (http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.aspx) left out when compiling in release mode? If so, is it possible to make my methods behave similarly? http://msdn.microsoft.com/en-us/library/fht0f5be.aspx output = String.Format("You are now {0} years old.", years); This seems nice. Is it a solution for my problem with ToString()?
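
    For reference, the Trace methods are compiled under the TRACE symbol, which the default project templates define in Release builds as well, so they are not removed there; Debug.* uses DEBUG and is removed. The same mechanism works for a custom class. Below is a sketch (not from the original question) combining [Conditional("DEBUG")] with deferred String.Format, so that in a Release build the whole call, including evaluation of its arguments, disappears and no strings are built; the class name Debugger2 is made up to avoid clashing with System.Diagnostics.Debugger.

        using System;
        using System.Diagnostics;

        public static class Debugger2
        {
            // When DEBUG is not defined, the compiler removes every call to this method,
            // including the evaluation of its arguments, so nothing is formatted or allocated.
            [Conditional("DEBUG")]
            public static void WriteLine(string category, string state, string format, params object[] args)
            {
                Console.WriteLine("{0} {1}: {2}", category, state, String.Format(format, args));
            }
        }

        // Usage - formatting is deferred into the method and skipped entirely in Release:
        // Debugger2.WriteLine("[Draw]", "InProgress",
        //     "[x,y] = {0:0.00}, {1:0.00}; pos = {2}x{3} -> {4}x{5}; progress = {6:0.000}",
        //     x, y, lastPosX, lastPosY, posX, posY, progress);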

    Read the article

  • Return and Save XML Object From Sharepoint List Web Service

    - by HurnsMobile
    I am trying to populate a variable with an XML response from an ajax call on page load so that on keyup I can filter through that list without making repeated get requests (think very rudimentary autocomplete). The trouble that I am having seems to be potentially related to variable scoping but I am fairly new to js/jQuery so I am not quite certain. The following code doesn't do anything on key up and adding alerts to it tells me that it is executing leadResults() on keyup and that the variable is returning an XML response object but it appears to be empty. The strange bit is that if I move the leadResults() call into the getResults() function the UL is populated with the results correctly. Im beating my head against the wall on this one, please help! var resultsXml; $(document).ready( function() { var leadLookupCaml = "<Query> \ <Where> \ <Eq> \ <FieldRef Name=\"Lead_x0020_Status\"/> \ <Value Type=\"Text\">Active</Value> \ </Eq> \ </Where> \ </Query>" $().SPServices({ operation: "GetListItems", webURL: "http://sharepoint/departments/sales", listName: "Leads", CAMLQuery: leadLookupCaml, CAMLRowLimit: 0, completefunc: getResults }); }) $("#lead_search").keyup( function(e) { leadResults(); }) function getResults(xData, status) { resultsXml = xData; } function leadResults() { xData = resultsXml; $("#lead_results li").remove(); $(xData.responseXML).find("z\\:row").each(function() { var selectHtml = "<li>" + "<a href=\"http://sharepoint/departments/sales/Lists/Lead%20Groups/DispForm.aspx?ID=" + $(this).attr("ows_ID") + ">" + $(this).attr("ows_Title")+" : " + $(this).attr("ows_Phone") + "</a>\ </li>"; $("#lead_results").append(selectHtml); }); }

    Read the article

  • SOLVED: Lisp: macro calling a function works in interpreter, fails in compiler (SBCL + CMUCL)

    - by ttsiodras
    As suggested in a macro-related question I recently posted to SO, I coded a macro called "fast" via a call to a function (here is the standalone code in pastebin): (defun main () (progn (format t "~A~%" (+ 1 2 (* 3 4) (+ 5 (- 8 6)))) (format t "~A~%" (fast (+ 1 2 (* 3 4) (+ 5 (- 8 6))))))) This works in the REPL, under both SBCL and CMUCL: $ sbcl This is SBCL 1.0.52, an implementation of ANSI Common Lisp. ... * (load "bug.cl") 22 22 $ Unfortunately, however, the code no longer compiles: $ sbcl This is SBCL 1.0.52, an implementation of ANSI Common Lisp. ... * (compile-file "bug.cl") ... ; during macroexpansion of (FAST (+ 1 2 ...)). Use *BREAK-ON-SIGNALS* to ; intercept: ; ; The function COMMON-LISP-USER::CLONE is undefined. So it seems that by having my macro "fast" call functions ("clone","operation-p") at compile-time, I trigger issues in Lisp compilers (verified in both CMUCL and SBCL). Any ideas on what I am doing wrong and/or how to fix this?

    Read the article

  • dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls. protected override void OnLoad(EventArgs e) { base.OnLoad(e); RenderDynamicControls() } private void RenderDynamicControls(){ //1. call service layer to retrieve form definition //2. create and add controls to page container } I have a new requirement in which if a user clicks on a given button (this button is created at design time) the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code: protected void MyButton_Click(object sender, EventArgs e) { RenderDynamicControlsALittleDifferently() } private void RenderDynamicControlsALittleDifferently() (){ //1. clear all controls from the page container added in RenderDynamicControls() //2. call service layer to retrieve form definition //3. create and add controls to page container } My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page-lifecycle works in ASP.NET: Namely, that OnLoad must fire on every Postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that make that transition easier would be much appreciated. Thanks
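
    Rebuilding the control tree on every postback is indeed how the ASP.NET page lifecycle works, but the double build can be confined to the single request where the layout actually changes by keeping a flag in ViewState and always rebuilding from that flag in OnLoad. A sketch of that pattern (not from the original post; UseAlternateLayout, BuildDynamicControls and placeHolder are illustrative names):

        private bool UseAlternateLayout
        {
            get { return (bool)(ViewState["UseAlternateLayout"] ?? false); }
            set { ViewState["UseAlternateLayout"] = value; }
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            // Rebuild the same tree as the previous request so ViewState and events line up.
            BuildDynamicControls(UseAlternateLayout);
        }

        protected void MyButton_Click(object sender, EventArgs e)
        {
            UseAlternateLayout = true;
            placeHolder.Controls.Clear();             // drop the tree built in OnLoad...
            BuildDynamicControls(UseAlternateLayout); // ...and build the new variant once
        }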

    Read the article

  • [OpenGL] I'm having an issue using GLshort to represent Vertex and Normal.

    - by Xylopia
    As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. I've searched around and found the following advice on Stack Overflow: Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array; How do you represent a normal or texture coordinate using GLshorts?; Advice on speeding up OpenGL ES 1.1 on the iPhone. Simple experiments show that switching from "FLOAT" to "SHORT" for vertex and normal isn't tough, but what troubles me is that when you scale vertices back to their original size (with glScalef), normals are multiplied by the reciprocal of the scale. So how do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but so far I could only get "float vertex w/ byte normal" or "short vertex w/ float normal" to work. Your help would be truly appreciated.

    Read the article

  • Objective-C function dispatch collisions; Or, how to achieve "namespaces"?

    - by fbrereto
    I have an application for Mac OS X that supports plugins that are intended to be loaded at the same time. Some of these plugins are built on top of a Cocoa framework that may receive updates in one plugin but not another. Given Objective-C's current method for function dispatching, any call from any plugin to a given Objective-C routine will go to the same routine every time. That means plugin A can find itself inside plugin B with a trivial Objective-C call! Obviously what we're looking for is for each plugin to interact with its own version of the framework upon which it was built. I have been reading some on Objective-C and this particular need, but haven't found a definitive solution for it yet. Update: My use of the word "framework" above is misleading: the framework is a statically-linked library, built into the plugin(s) that need it. The way Objective-C handles dispatching, however, even these statically linked pieces of disparate code will co-mingle in the Objective-C dispatcher, leading to unintended consequences. Update 2: I'm still a bit fuzzy on the answer provided here, as it doesn't seem to propose a solution as much as an unproven hypothesis.

    Read the article

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an Enum but has evolved into being its own class with its own table. public class Result { public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] }; public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] }; public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] }; } Each of these predefined values has a row in the database at their predefined Guid Id. There is then a failed result that has an instance per failure: public class FailedResult : Result { public FailedResult(string description) : base(StatusType.Failed) { . . . } } I then have an entity that has a Result: public class Task { public Result Result { get; set; } } When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save that to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is when I am setting up the session, I call a method to load the static entities: protected override void OnSessionOpened(ISession session) { LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running); } private static void LockStaticResults(ISession session, params Result[] results) { foreach (var result in results) { session.Load(result, result.Id); } } The problem with the session.Load method call is it appears to be fetching to the database (something I don't want to do). How could I make this so it does not fetch the database, but trusts that my static (immutable) Result instances are both up to date and a part of the session?
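
    One thing worth trying (an assumption on my part, not something stated in the question): ISession.Lock with LockMode.None reassociates a detached instance with the session without issuing a SELECT, unlike Load, so the predefined instances can be attached purely from memory.

        private static void AttachStaticResults(ISession session, params Result[] results)
        {
            foreach (var result in results)
            {
                // Reassociates the detached instance with the session without hitting the database.
                session.Lock(result, LockMode.None);
            }
        }

    Mapping Result as immutable (mutable="false") would additionally tell NHibernate that it never needs to update those predefined rows.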

    Read the article
