Search Results

Search found 21960 results on 879 pages for 'program termination'.

  • What off-the-shelf licensing system will meet my needs?

    - by Anders Pedersen
    I'm looking for an off-the-shelf license system for desktop software. After some research on the net -- and of course here on StackOverflow -- I haven't found one that suits our needs. I have a couple of must-have features and some would-be-nice features:

    Must have:
    - Encrypted unlock key
    - Possibility to automate the unlock key generation on my website
    - User info in the key, so that I can show name and company in an about box and perhaps in reports

    Nice to have:
    - License managing tools
    - Online activation
    - Easy upgrade path to a version with a concurrent license model and a subscription model

    I have looked at Manco, but I find them difficult to work with and the documentation is minimal. Further, I couldn't get the name into the key. Also, the automatic generation of a key on my website has to be done with an application web service, but I would rather program against a DLL. Next I looked at Xheo. It is easier to use and the documentation is better, but the price is substantially higher, and you can only get the user name into the license file, which you then have to provide together with the unlock key. Could anyone share their experiences on what you are using and how it is working for you?

  • Turing Model Vs Von Neuman model

    - by Santhosh
    First some background (based on my understanding)... The Von Neumann architecture describes the stored-program computer, where instructions and data are stored in memory and the machine works by changing its internal state: an instruction operates on some data and modifies it. So inherently, there is state maintained in the system. The Turing machine architecture works by manipulating symbols on a tape: a tape with an infinite number of slots exists, and at any one point in time the Turing machine is at a particular slot. Based on the symbol read at that slot, the machine changes the symbol and moves to a different slot. All of this is deterministic. My questions are:

    - Is there any relation between these two models (was the Von Neumann model based on, or inspired by, the Turing model)?
    - Can we say that the Turing model is a superset of the Von Neumann model?
    - Does functional programming fit into the Turing model? If so, how? (I assume FP does not lend itself nicely to the Von Neumann model.)
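
    To make the tape model concrete, here is a toy deterministic Turing machine step loop in Python; the rule table and symbols are invented purely for illustration:

        # A tiny deterministic Turing machine: (state, symbol) -> (write, move, next state).
        # This table turns a run of 1s into 0s and halts -- purely illustrative.
        RULES = {
            ("scan", "1"): ("0", +1, "scan"),
            ("scan", "_"): ("_", 0, "halt"),
        }

        def run(tape, state="scan", head=0):
            tape = dict(enumerate(tape))          # sparse stand-in for an infinite tape
            while state != "halt":
                symbol = tape.get(head, "_")      # blank slots read as "_"
                write, move, state = RULES[(state, symbol)]
                tape[head] = write
                head += move
            return "".join(tape[i] for i in sorted(tape))

        print(run("111"))   # -> 000_

    The only mutable state is the tape contents, the head position, and the control state -- which is exactly the contrast the question draws against the Von Neumann machine's mutable stored-program memory.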

  • Matlab crashes on library initialize when called from Java

    - by David Sauter
    Hello everyone. The setup I have is a Java application that calls native C code with JNI, which in turn starts up the MATLAB runtime and calls functions on it (I know there are other solutions for calling MATLAB methods from Java). The problem is that the MATLAB engine crashes at some point during initialization, and I don't know what's causing it exactly. The crash causes my JVM to terminate; I assume it's some kind of memory corruption. The C++ code calling MATLAB functions that is actually crashing is:

        JNIEXPORT void JNICALL some_jni_vodoo_initializeLibrary(JNIEnv* env, jclass thisClass)
        {
            try
            {
                if (!mclInitializeApplication(NULL, 0))
                {
                    THROW_EXCEPTION(env, "Could not initialize the application properly.");
                    return;
                }
                if (!<library>Initialize())
                {
                    THROW_EXCEPTION(env, "Could not initialize the library.");
                    return;
                }
            }
            ...

    The function <library>Initialize() crashes here. The Java error log reads:

        Stack Trace:
        [0] jmi.dll:0x793f4175(0x7934cdca, 1, 0x7937e67c "à;.y`[email protected] in C:\BUILD_ARE..", 0x792d6a32)
        [1] jvm.dll:0x792df9a5(0xc0000005, 0x79356791, 0x4961b400 "Ð\8y", 0x6d8b29de)
        [2] jvm.dll:0x792e0431(0x8b515008, 0x70f0e8ce, 0x8b5ffffa, 0xc25d5ec6)
        ------------------------------------------------------------------------
        Fatal Java Exception detected at Fri Apr 30 11:08:08 2010
        ------------------------------------------------------------------------
        Configuration:
        MATLAB Version: 7.8.0.347 (R2009a)
        MATLAB License: unknown
        Operating System: Microsoft Windows Vista
        Window System: Version 6.0 (Build 6002: Service Pack 2)
        Processor ID: x86 Family 6 Model 10 Stepping 5, GenuineIntel
        Virtual Machine: Java is not enabled
        Default Encoding: windows-1252
        Java is not enabled

    I really have no idea what could be wrong. Is there not enough memory for the JVM? I guess the problem is somehow related to Java, since calling the JNI functions from a simple test C++ program works fine... Thanks

  • Debugging Key-Value-Observing overflow.

    - by Paperflyer
    I wrote an audio player. Recently I started refactoring some of the communication flow to make it fully MVC-compliant. Now it crashes, which in itself is not surprising. However, it crashes after a few seconds inside the Cocoa key-value-observing routines, with a HUGE stack trace of recursive calls to NSKeyValueNotifyObserver. Obviously, it is recursively observing a value and thus overflowing the NSArray that holds pending notifications. According to the stack trace, the program loops from observeValueForKeyPath to setMyValue and back. Here is the according code:

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
        {
            if ([keyPath isEqual:@"myValue"] && object == myModel && [self myValue] != [myModel myValue]) {
                [self setMyValue:[myModel myValue]];
            }
        }

    and

        - (void)setMyValue:(float)value
        {
            myValue = value;
            [myModel setMyValue:value];
        }

    myModel changes myValue every 0.05 seconds, and if I log the calls to these two functions, they get called only every 0.05 seconds, just as they should be, so this is working properly. The stack trace looks like this:

        -[MyDocument observeValueForKeyPath:ofObject:change:context:]
        NSKeyValueNotifyObserver
        NSKeyValueDidChange
        -[NSObject(NSKeyValueObserverNotification) didChangeValueForKey:]
        -[MyDocument setMyValue:]
        _NSSetFloatValueAndNotify
        ...repeated some ~8k times until crash

    Do you have any idea why I could still be spamming the KVO queue?
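
    Not from the original thread, but for context: the usual way to break an observe/set feedback loop like the one described is a reentrancy guard. A language-neutral sketch in Python, with hypothetical names:

        class Controller:
            """Mirrors a model value while ignoring notifications it caused itself."""
            def __init__(self, model):
                self.model = model
                self.my_value = 0.0
                self._updating = False            # reentrancy guard

            def observe(self, new_value):         # called on model-change notifications
                if not self._updating and new_value != self.my_value:
                    self.my_value = new_value

            def set_my_value(self, value):
                self.my_value = value
                self._updating = True
                try:
                    self.model.set_my_value(value)   # may synchronously notify observe()
                finally:
                    self._updating = False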

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running an XNA app, using the Reach profile, in VMware Fusion; the host OS is Mac OS X and the VM is Windows XP SP3 (my dual-boot OS). Running on a MacBook Pro with an NVidia 320M graphics card.

    - When I am booted into XP natively, my code works. The code is drawing cubes that are set up using vertex buffers.
    - When another friend runs this same code on Windows 7, it also works for him just fine.
    - When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program and this part displays fine. I get no crashing or errors; the geometry just doesn't appear. I tried Debug and Release.

    This is very basic operation, so I'm thinking VMware isn't the problem, but my code...

    My init code:

        var vertexArray = verts.ToArray();
        var indexArray = indices.ToArray();
        indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly);
        indexBuffer.SetData(indexArray);
        vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly);
        vertexBuffer.SetData(vertexArray);

    My Draw code:

        // problem isn't here, tried no cull
        GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true };

        // Update View and Projection
        TileEffect.View = ((Game1)Game).Camera.View;
        TileEffect.Projection = ((Game1)Game).Camera.Projection;
        TileEffect.CurrentTechnique.Passes[0].Apply();

        GraphicsDevice.SetVertexBuffer(vertexBuffer);
        GraphicsDevice.Indices = indexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3);

    For LoadContent:

        TileEffect = new BasicEffect(GraphicsDevice)
        {
            World = Matrix.Identity,
            View = ((Game1)Game).Camera.View,
            Projection = ((Game1)Game).Camera.Projection,
            VertexColorEnabled = true
        };

  • Checking for corrupt war file after mvn install

    - by Tauren
    Is there a Maven command that will verify that a WAR file is valid and not corrupt? Or is there some other program or technique to validate zip files? I'm on Ubuntu 9.10, so a Linux solution is preferred. On occasion, I end up with a corrupt WAR file after doing mvn clean and mvn install on my project. If I extract the WAR file to my hard drive, an error occurs and the file doesn't get extracted. I believe this happens when my system is in a low-memory condition, because it tends to happen only when lots of memory is in use. After rebooting, doing mvn install always gives a valid WAR file. Because this happens infrequently, I don't typically test the file by uncompressing it. I transfer the 50MB WAR file to my server and then restart Jetty with it as the root webapp. But when the file is corrupt, I get a java.util.zip.ZipException: invalid block type error. So I'm looking for a quick way to validate the file as soon as mvn install completes. Is there a Maven command to do this? Any other ideas?
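
    As a stopgap while waiting for a Maven-native answer, the check can be scripted, since a WAR is an ordinary zip archive; a sketch in Python (the path is a placeholder):

        import sys
        import zipfile

        # CRC-checks every entry in the archive.
        path = sys.argv[1]   # e.g. target/mywebapp.war
        with zipfile.ZipFile(path) as war:
            bad = war.testzip()          # first corrupt member, or None
            print("corrupt entry: %s" % bad if bad else "archive OK")

    The command-line equivalent is unzip -t target/mywebapp.war, which runs the same CRC test.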

  • Displaying an inverted pyramid of asterisks

    - by onyebuoke
    I need help with my program. I am required to create an inverted pyramid of stars whose number of rows depends on the number the user keys in, but the way I've done it, it does not give an inverted pyramid; it gives a regular pyramid.

        #include <stdio.h>
        #include <conio.h>

        void printchars(int no_star, char space);
        int getNo_of_rows(void);

        int main(void)
        {
            int numrows, rownum;
            rownum = 0;
            numrows = getNo_of_rows();
            for (rownum = 0; rownum <= numrows; rownum++)
            {
                printchars(numrows - rownum, ' ');
                printchars((2 * rownum - 1), '*');
                printf("\n");
            }
            _getche();
            return 0;
        }

        void printchars(int no_star, char space)
        {
            int cnt;
            for (cnt = 0; cnt < no_star; cnt++)
            {
                printf("%c", space);
            }
        }

        int getNo_of_rows(void)
        {
            int no_star;
            printf("\n Please enter the number of stars you want to print\n");
            scanf("%d", &no_star);
            while (no_star < 1)
            {
                printf("\n number incorrect, please enter correct number");
                scanf("%d", &no_star);
            }
            return no_star;
        }
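
    For contrast, an inverted pyramid just runs the same row logic in the opposite direction; a sketch of the loop in Python (illustrative only, not a patch to the C program above):

        numrows = 5
        for rownum in range(numrows, 0, -1):    # count rows down instead of up
            print(" " * (numrows - rownum) + "*" * (2 * rownum - 1))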

  • jsf and huge web site

    - by darko petreski
    Hi All, I need to build a very large web site. The site should contain a public part with a search engine, user registration forms, a cart, and all the other usual stuff. The private part should contain an administrative web application with a lot of data grids, complex edit forms, security checks, services, and so on. I also need very good support for HTML mailing and PDF exports. Web services are also required. I am considering using JSF. Can you recommend books or other resources where I can find all this info for building what I need? Is Seam a good option for this? I have looked at their site and the "Seam in production" page; the sites listed there are far from professional and enterprise. Is there any book that explains how to build a complete web site with JSF technology -- login, security, sending mails, PDF conversion, time-dependent background processes, and everything else that big sites need? I do not have time to read 20 books. Every PHP book explains all this stuff, but I have not seen a single Java book that, at least from a high-level view, explains all of this. Regards

  • Creating files with French characters and encoding

    - by Kevin
    Hi, I am creating a file like so:

        FileStream temp = File.Create( this.FileName );

    Then putting data in the file like so:

        this.Writer = new StreamWriter( this.Stream );
        this.Writer.WriteLine( strMessage );

    That code is encapsulated in a class hierarchy, but that is the meat and potatoes of it. My problem is this: MSDN says that the default encoding for creating a file this way is UTF-8. And when I write a French character such as é, TextPad interprets the file as a UTF-8 file, but Notepad++ says it's "ANSI as UTF8" -- or maybe it's an ANSI file but is being read as UTF-8. When I create a file the same way without the French character, both TextPad and Notepad++ read the file as an ANSI file, even though according to MSDN it should still be a UTF-8 file. Which program should be trusted, Notepad++ or TextPad? Notepad++ seems to be more consistent, but is still the opposite of what MSDN says it should be. My problem is that we create files that get sent off to another company, and depending on whether there are French characters, the encoding seems to keep changing. Or is there a better way to determine the encoding of a file? I've read about byte order marks and preambles, but as far as I understand, neither is guaranteed to be there. We initially thought that all the files we were building were ANSI. Also, please note that both ANSI and UTF-8 should handle the French characters appropriately, as the characters are part of both character sets.
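
    A detail that may explain the inconsistency: without a byte order mark, a UTF-8 file that happens to contain only ASCII is byte-for-byte identical to an ANSI file, so editors can only guess. A sketch of making the encoding explicit, in Python (the file name is a placeholder):

        import codecs

        # Writing with an explicit UTF-8 BOM removes the guesswork for editors.
        with open("out.txt", "w", encoding="utf-8-sig") as f:
            f.write("résumé\n")

        # Reading back: a BOM, if present, identifies the file unambiguously.
        with open("out.txt", "rb") as f:
            head = f.read(3)
        print("UTF-8 BOM present" if head == codecs.BOM_UTF8 else "no BOM; must guess")

    The C# counterpart, if I'm not mistaken, is constructing the writer as new StreamWriter(stream, new UTF8Encoding(true)), which emits the preamble.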

  • How important is managing memory in Objective-C?

    - by Alex Mcp
    Background: I'm (jumping on the bandwagon and) starting to learn about iPhone/iPad development and Objective-C. I have a solid background in web development, and most of my programming is done in JavaScript (no libraries), Ruby, and PHP.

    Question: I'm learning about allocating and releasing memory in Objective-C, and I see it as quite a tricky task to layer on top of actually getting the farking thing to run. I'm trying to get a sense of the applications that are out there and what happens with a poorly memory-managed program.

    A) Are apps usually released with no memory leaks? Is this a feasible goal, or do people more realistically just excise the worst offenders, and that's OK?

    B) If I make an NSString for the title of a view, say, and forget to release it, does this really only become a problem if I recreate that string repeatedly? I imagine what I'm doing is creating an overhead of the memory needed to store that string, so it's probably quite piddling (a few bytes?). However, if I have a rapidly looping cycle in a game that 'leaks' an int every cycle or something, that would overflow the app quite quickly. Are these assumptions correct?

    Sorry if this isn't up the community-wiki alley; I'm just trying to get a handle on how to think about memory and how careful I'll need to be. Any anecdotes or App Store-submitted app experiences would be awesome to hear as well.

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo{
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        }Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos){
            printf("*******Insertando\n");
            int i;
            if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
            if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for(i=0 ; i<strlen(auxBin) ; i++) insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition for debug purposes.) This yields me the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and fixed a potential buffer overflow (he'd forgotten to copy the NUL terminator into insert->numBin). I can't think of a single reason why that'd happen, nor have I any idea what else I should provide as further info. (Compiling on latest stable MinGW under fully-patched Windows 7; my friend's using MinGW under Windows XP. On my machine, at least, it only happens when GDB's not attached.) Any ideas? Suggestions? Possible exorcism techniques? (The current hack is copying the sig pointer to a temp variable and restoring it after malloc. It breaks anyway: turns out the 2nd malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)

  • What is the best Binary Decision Diagram library for Java?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure to represent boolean functions. I'd like to use this data structure in a Java program. My search for Java-based BDD libraries resulted in the following packages:

    - Java Decision Diagram Libraries
    - JavaBDD
    - JDD
    - JBDD
    - bddbddb

    If you know of any other BDD libraries available for Java programs, please let me know so that I can add them to the list above. If you have used any of these libraries, please tell me about your experience with the library. In particular, I'd like you to compare the available libraries along the following dimensions:

    - Quality. Is the library mature and reasonably bug-free?
    - Performance. How do you evaluate the performance of the library?
    - Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented?
    - Ease of use. Was the API well designed? Could you install and use the library quickly and easily?

    Please mention the version of the library that you are evaluating.
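
    As background to the comparison request: the data structure itself is small -- a directed acyclic graph of decision nodes, each testing one variable and branching to a low (0) and a high (1) child. A toy sketch in Python, illustrative only; real libraries add full reduction and an apply/ITE algorithm:

        class BDD:
            """Toy BDD: a node is the tuple (var, low, high); leaves are False/True."""
            def __init__(self):
                self.unique = {}     # hash-consing table: one object per distinct node

            def node(self, var, low, high):
                if low == high:      # reduction: drop a test whose branches agree
                    return low
                key = (var, low, high)
                return self.unique.setdefault(key, key)   # share identical nodes

        bdd = BDD()
        # Build x1 AND x2: test x1; on 0 the function is False, on 1 test x2.
        x2 = bdd.node(2, False, True)
        f = bdd.node(1, False, x2)
        print(f)    # (1, False, (2, False, True))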

  • RSA Decrypting a string in C# which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear this up for me; I have been surfing on this a while now. I used openssl from the console to create a root certificate (privatekey.pem, publickey.pem, mycert.pem, mycertprivatekey.pfx); see the end of this text for how. The problem is still to get a string encrypted on the PHP side to be decrypted on the C# side with RSACryptoServiceProvider. Any ideas?

    PHP side. I used publickey.pem to read the key into PHP:

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and privatekey.pem to check if it works:

        openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));

    coming to the conclusion that encryption/decryption works fine on the PHP side with these openssl root certificate files.

    C# side. In the same manner, I read the keys into a .NET C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try
        {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        }
        catch (Exception e)
        {
        }
        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    coming to the conclusion that encryption/decryption works fine on the C# side with these openssl root certificate files.

    Key generation on Unix:

    1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
    2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
    3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"
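
    A neutral third implementation can help pin down which side disagrees; a sketch in Python with the third-party cryptography package (PHP's openssl_public_encrypt and C#'s Decrypt(..., false) both default to PKCS#1 v1.5 padding, so that is what is used here; paths are placeholders):

        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        # Load the same PEM files the PHP side uses.
        with open("publickey.pem", "rb") as f:
            pub = serialization.load_pem_public_key(f.read())
        with open("privatekey.pem", "rb") as f:
            priv = serialization.load_pem_private_key(f.read(), password=None)

        ct = pub.encrypt(b"123", padding.PKCS1v15())
        print(priv.decrypt(ct, padding.PKCS1v15()))   # b'123' if keys and padding line up

    If this round-trips but the C# side still fails on PHP-produced ciphertext, the mismatch is more likely in how the bytes are transported (e.g. base64) than in the keys themselves.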

  • Setting System.Drawing.Color through .NET COM Interop

    - by Maxim
    I am trying to use the Aspose.Words library through COM Interop. There is one critical problem: I cannot set the color. It is supposed to work by assigning to DocumentBuilder.Font.Color, but when I try to do it I get OLE error 0x80131509. My problem is pretty much like this one: http://bit.ly/cuvWfc

    Update. Code sample:

        from win32com.client import Dispatch

        Doc = Dispatch("Aspose.Words.Document")
        Builder = Dispatch("Aspose.Words.DocumentBuilder")
        Builder.Document = Doc

        print Builder.Font.Size
        print Builder.Font.Color

    Result:

        12.0
        Traceback (most recent call last):
          File "aaa.py", line 6, in <module>
            print Builder.Font.Color
          File "D:\Python26\lib\site-packages\win32com\client\dynamic.py", line 501, in __getattr__
            ret = self._oleobj_.Invoke(retEntry.dispid,0,invoke_type,1)
        pywintypes.com_error: (-2146233079, 'OLE error 0x80131509', None, None)

    Using something like Font.Color = 0xff0000 fails with the same error message, while this code works OK:

        using Aspose.Words;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Document doc = new Document();
                    DocumentBuilder builder = new DocumentBuilder(doc);
                    builder.Font.Color = System.Drawing.Color.Blue;
                    builder.Write("aaa");
                    doc.Save("c:\\1.doc");
                }
            }
        }

    So it looks like a COM Interop problem.

  • Building error in Eclipse with build.xml

    - by Zachary
    I am working on a Java project with Eclipse. This project requires a second project (not mine), named sams, in its build path. sams is provided with a build.xml file and should generate some code using Apache CXF when built. When I use Apache Ant in Eclipse and run the cxf.generated target from its build file, I get the following error:

        Buildfile: C:\Docs\ZacRocha\Desktop\sams\build.xml

        cxf.generated:
             [echo] Generating code using Apache CXF wsdl2java...
             [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare
             [java] SEVERE: Could not resolve target object
             [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare
             [java] SEVERE: Could not resolve target object
             [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d
             [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivesoftware.wsdl} (The system cannot find the file specified)
             [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare
             [java] SEVERE: Could not resolve target object
             [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare
             [java] SEVERE: Could not resolve target object
             [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d
             [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivehardware.wsdl} (The system cannot find the file specified)

        BUILD SUCCESSFUL
        Total time: 4 seconds

    I am used to programming in Eclipse and know very little about building with Apache Ant. Can someone tell me where exactly the problem may be? Thanks in advance!

  • WinForms Application Form "Shakes" When Audio Playing

    - by ikurtz
    I have a C# game program that I'm developing. It uses sound samples and Winsock. When I test-run the game, most of the audio works fine, but from time to time, if multiple samples are played sequentially, the application form shakes a little bit and then goes back to its old position. How do I go about debugging this, or presenting it to you folks in a manageable manner? I'm sure no one is going to want the whole app code, in fear of virus attacks. Please guide me.

    EDIT: I have not been able to pin down any code section that produces this result. It just does, and I cannot explain it.

    EDIT: No, the x/y position is not changing. The window shakes around a few pixels and then goes back to the position where it was before the shake.

        if (audio)
        {
            Stream stream;
            SoundPlayer player;

            stream = Properties.Resources.ResourceManager.GetStream("_home");
            player = new System.Media.SoundPlayer(stream);
            player.PlaySync();
            player.Dispose();

            string ShipID = fireResult.DestroyedShipType.ToString();
            stream = Properties.Resources.ResourceManager.GetStream("_" + ShipID);
            player = new System.Media.SoundPlayer(stream);
            player.PlaySync();
            player.Dispose();

            stream = Properties.Resources.ResourceManager.GetStream("_destroyed");
            player = new System.Media.SoundPlayer(stream);
            player.PlaySync();
            player.Dispose();
        }

    Can you see anything in the above code that would produce this shake?

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:

    - Split the message into small data chunks (around 1500 bytes)
    - Iterate over the data chunk list and, for each chunk, send it using sendto()

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for many reasons, and I suppose it has to do with the rate I am sending the data chunks at. What would be the right approach here?

    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    - Something else?

    I really need your advice here. Thanks very much.
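
    On the third idea: the receive buffer can indeed be enlarged with setsockopt(), and on loopback the typical drop point is exactly that buffer filling faster than recvfrom() drains it. A sketch of the receiving side in Python (the size and port are arbitrary examples; the OS may cap the value):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # Ask the kernel for a ~4 MB receive buffer so bursts of datagrams
        # queue up instead of being dropped before recvfrom() runs.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        sock.bind(("127.0.0.1", 9999))
        print("effective size:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

        while True:
            chunk, addr = sock.recvfrom(2048)   # one fragment per call
            # ... reassemble chunks here ...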

  • According to MSDN ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): "If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete."

    I have some applications that violate the above recommendation, and I would like to know the severity of the problem. The program uses a named pipe that has been created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call:

        ReadFile(handle, &buf, n, &n_read, NULL);

    That is, it passes NULL as the lpOverlapped parameter. That call should not work correctly in some circumstances, according to the documentation. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all the data in the right place at the right time. I was only testing named pipes, though.

    Would anybody know when ReadFile() can incorrectly return and report successful completion even though the data is not yet in the buffer? What would have to happen in order to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS? Or a particular way of reading (like registering the handle with an I/O completion port)? Or a particular synchronization of the reading and writing processes/threads? When would that fail? It works for me :/ Please help!

    With regards, Martin
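
    For reference, a compliant overlapped read looks roughly like the sketch below, here via Python's ctypes on Windows (error handling trimmed; the handle is assumed to come from CreateFile with FILE_FLAG_OVERLAPPED):

        import ctypes
        import ctypes.wintypes as wt

        class OVERLAPPED(ctypes.Structure):
            _fields_ = [("Internal", ctypes.c_void_p), ("InternalHigh", ctypes.c_void_p),
                        ("Offset", wt.DWORD), ("OffsetHigh", wt.DWORD), ("hEvent", wt.HANDLE)]

        k32 = ctypes.windll.kernel32
        k32.CreateEventW.restype = wt.HANDLE

        def overlapped_read(handle, size=4096):
            """handle: opened with FILE_FLAG_OVERLAPPED (e.g. a named pipe)."""
            buf = ctypes.create_string_buffer(size)
            read = wt.DWORD(0)
            ovl = OVERLAPPED()
            ovl.hEvent = k32.CreateEventW(None, True, False, None)  # manual-reset event
            ok = k32.ReadFile(handle, buf, size, ctypes.byref(read), ctypes.byref(ovl))
            if not ok:   # GetLastError() == 997 (ERROR_IO_PENDING): completes later
                k32.GetOverlappedResult(handle, ctypes.byref(ovl), ctypes.byref(read), True)
            k32.CloseHandle(ovl.hEvent)
            return buf.raw[:read.value]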

  • Replacing XML reserved characters in SQL Server 2005

    - by Barn
    I'm working on a system that takes relational data from a SQL Server DB and uses SSIS to produce an XML extract using SQL Server 2005's FOR XML PATH command and a schema. The problem lies with replacing the XML reserved characters. FOR XML PATH is only replacing <, >, and &, not ' and ", so I need a way of replacing these myself. I've tried pre-processing the fields in the database to replace XML reserved characters with their entitised equivalents (e.g. & becomes &amp;), but once these fields are used to construct XML using FOR XML, the leading & is replaced with &amp;, so I end up with &amp;amp; where I should have &amp;. What I've tried so far is altering the element's contents after the XML has been constructed, using XQuery inside SQL Server like so:

        DECLARE @data VARCHAR(MAX)
        SET @data = CONVERT(VARCHAR(MAX), [my xml column].query('data(/root/node_i_want)'))
        SELECT @data = [function to replace quotes etc](@data)
        SET [my xml column].modify('replace value of (/root/node_i_want)[1] with sql:variable("@data")')

    but I get the same problem. Essentially: am I doing something wrong in the above, or is there a way to tell FOR XML to entitise other characters, or something like that? Basically anything short of having to write a program to change the XML after it has been assembled, in large batches, and saved to files!
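
    The double-escaping effect described above is easy to reproduce outside SQL Server; a sketch with Python's standard library:

        from xml.sax.saxutils import escape

        raw = "Fish & Chips"
        once = escape(raw)       # 'Fish &amp; Chips'      -- what FOR XML does
        twice = escape(once)     # 'Fish &amp;amp; Chips'  -- pre-entitised input escaped again
        print(once, "|", twice)

        # Quotes only need escaping inside attribute values, which is why
        # element text from FOR XML leaves ' and " alone:
        print(escape('He said "hi"', {'"': "&quot;", "'": "&apos;"}))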

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process given its PID. The command-line utility lsof does something similar. Here's the source of the Python script:

        import ctypes
        from ctypes import util
        import sys

        PROC_PIDVNODEPATHINFO = 9
        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))
        print(proc.proc_pidinfo)

        class vnode_info(ctypes.Structure):
            _fields_ = [('data', ctypes.c_ubyte * 152)]

        class vnode_info_path(ctypes.Structure):
            _fields_ = [('vip_vi', vnode_info), ('vip_path', ctypes.c_char * 1024)]

        class proc_vnodepathinfo(ctypes.Structure):
            _fields_ = [('pvi_cdir', vnode_info_path), ('pvi_rdir', vnode_info_path)]

        inst = proc_vnodepathinfo()
        pid = int(sys.argv[1])
        ret = proc.proc_pidinfo(
            pid, PROC_PIDVNODEPATHINFO, 0, ctypes.byref(inst), ctypes.sizeof(inst)
        )
        print(ret, inst.pvi_cdir.vip_path)

    However, even though this script behaves as expected on Python 2.6, it does not work in Python 2.5:

        host:dir user$ sudo /usr/bin/python2.6 script.py 2698
        <_FuncPtr object at 0x100419ae0>
        (2352, '/')
        host:dir user$ sudo /usr/bin/python2.5 script.py 2698
        <_FuncPtr object at 0x19fdc0>
        (0, '')

    (PID 2698 is "Activity Monitor.app".) Note the different return values. Since this program is strongly based on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness -- or even come up with a solution for 2.5 -- but here's some stuff:

        host:dir user$ otool -L /usr/bin/python2.6
        /usr/bin/python2.6:
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ otool -L /usr/bin/python2.5
        /usr/bin/python2.5 (architecture i386):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        /usr/bin/python2.5 (architecture ppc7400):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ uname -a
        Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks to anyone that has a clue about what's going on here :)
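
    One difference that can bite here: without declared argtypes, ctypes passes every argument through its default int conversion, and proc_pidinfo's third parameter is a uint64_t -- and per the otool output above, the Apple python2.5 runs as i386/ppc, while python2.6 is likely 64-bit. A hedged sketch of pinning the signature down:

        import ctypes
        from ctypes import util

        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))

        # Signature from <libproc.h>:
        #   int proc_pidinfo(int pid, int flavor, uint64_t arg, void *buffer, int buffersize);
        proc.proc_pidinfo.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_uint64,
                                      ctypes.c_void_p, ctypes.c_int]
        proc.proc_pidinfo.restype = ctypes.c_int   # bytes copied, or 0 on failure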

  • Error Ant Build/deploy to websphere 7.0

    - by adisembiring
    Hi, I'm trying to build/deploy a WAR to WebSphere Process Server 7.0, and I run in a Windows environment. I use http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference, and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as the sample code to deploy. This is my build.properties:

        #build properties
        mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear
        #WAS6 install directory
        was_home=C:/IBM/WID7_WTE/runtimes/bi_v7
        #server name (see cell/node/server; e.g. "server1")
        was_server=server1
        #user + password; for use when security is enabled
        was_user=admin
        was_password=admin
        #stops scripts on problem
        was_failonerror=true
        #virtual host
        was_virtualhost=default_host
        #Absolute path to EAR file
        #was_ear=fooEAR.ear
        #Name of the enterprise application
        #was_appname=fooEAR

    This is my console output while trying to build with ws_ant.bat:

        [wsDefaultBindings] mywebapp.war
        [wsDefaultBindings] <virtual-host> --> default_host
        [wsDefaultBindings] ------------------------
        [wsDefaultBindings] Saving EAR File to directory
        [wsDefaultBindings] Saved EAR File to directory Successfully
        test_wsStartServer:
        WAS_wsStartServer:
        depCheck:
        depCheck:
        [startServer] ADMU0116I: Tool information is being logged in file
        [startServer]            C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log
        [startServer] ADMU0128I: Starting tool with the qwps profile
        [startServer] ADMU3100I: Reading configuration for server: server1
        [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of
        [startServer]            the server server1 is already running b) some other process is
        [startServer]            using port 8880
        [startServer] ADMU3027E: An instance of the server may already be running: server1
        [startServer] ADMU0111E: Program exiting with error:
        [startServer]            com.ibm.websphere.management.exception.AdminException: ADMU3027E: An
        [startServer]            instance of the server may already be running: server1
        [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option.
        [startServer] ADMU0211I: Error details may be seen in the file:
        [startServer]            C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log

        BUILD FAILED
        D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line:
        D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1

  • fork() within a fork()

    - by codingfreak
    Hi, is there any way to differentiate the child processes created by different fork() calls within a program?

        global variable i;

        SIGCHLD handler function()
        {
            i--;
        }

        handle()
        {
            fork()      --> FORK2
        }

        main()
        {
            while(1)
            {
                if(i<5)
                {
                    i++;
                    if( (fpid=fork())==0)      --> FORK1
                        handle()
                    else (fpid>0)
                        .....
                }
            }
        }

    Is there any way I can differentiate between child processes created by FORK1 and FORK2? I am trying to decrement the value of the global variable i in the SIGCHLD handler function, and it should be decremented only for the processes created by FORK1. I tried to use an array and save the process IDs [this code is placed in the fpid>0 part] of the child processes created by FORK1 and then decrement the value of i only if the process ID of the dead child is within the array. But this didn't work out, as sometimes child processes die so fast that the array isn't updated in time and everything gets messed up. So is there any better solution for this problem?
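
    The race described at the end has a standard cure: block SIGCHLD while forking and recording the PID, then unblock, so the handler cannot run before the bookkeeping is done. A sketch in Python 3 (the bookkeeping is illustrative):

        import os, signal

        tracked = set()   # PIDs created by "FORK1"
        i = 0

        def on_sigchld(signum, frame):
            global i
            while True:
                try:
                    pid, _ = os.waitpid(-1, os.WNOHANG)
                except ChildProcessError:
                    return
                if pid == 0:
                    return
                if pid in tracked:        # only FORK1 children decrement i
                    tracked.discard(pid)
                    i -= 1

        signal.signal(signal.SIGCHLD, on_sigchld)

        # Block SIGCHLD so the child cannot be reaped before its PID is recorded.
        signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGCHLD})
        pid = os.fork()
        if pid == 0:
            os._exit(0)                   # child: real work would go here
        tracked.add(pid)
        i += 1
        signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGCHLD})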

  • Using Git with shared (symbolic link) directories

    - by chmike
    I'm working on a big Qt application with multiple widgets which are quite complex. One of these widgets is a webcam stream viewer. The application is organized so that each program module (i.e. widget) is stored in its own directory with a .pri file. All of these are stored in one main directory grouping all the widget directories. Next to this main project directory I also have application directories, say one for each widget. In each of these directories I have a symbolic link (alias on Windows) to the module directory in the main project folder. Each application then has the necessary code to build a standalone application showing only its widget. So, for instance, I have a webcam viewer application, another to control some devices, etc. This source code organization works well and allows me to develop and test the widgets in independent applications while sharing the code with the main application. Currently only the main project directory is under version control, using Subversion. Now I would like to start using Git, and would like to know if this shared-directory model would work with it, or if there is a better way to do it.

  • why is this C++ code not doing its job

    - by hamza
    I want to create a program that writes all the primes to a file (I know it's a popular algorithm, but I'm trying to make it by myself), but it is still showing all the numbers instead of just the primes, and I don't know why. Someone please tell me why:

        #include <iostream>
        #include <stdlib.h>
        #include <stdio.h>

        void afficher_sur_un_ficher (FILE* ficher , int nb_bit );

        int main()
        {
            FILE* p_fich ;
            char tab[4096] , mask ;
            int nb_bit = 0 , x ;
            for (int i = 0 ; i < 4096 ; i++ )
            {
                tab[i] = 0xff ;
            }
            for (int i = 0 ; i < 4096 ; i++ )
            {
                mask = 0x01 ;
                for (int j = 0 ; j < 8 ; j++)
                {
                    if ((tab[i] & mask) != 0 )
                    {
                        x = nb_bit ;
                        while (( x > 1 )&&(x < 16384))
                        {
                            tab[i] = tab[i] ^ mask ;
                            x = x * 2 ;
                        }
                        afficher_sur_un_ficher (p_fich , nb_bit ) ;
                    }
                    mask = mask<<1 ;
                    nb_bit++ ;
                }
            }
            system ("PAUSE");
            return 0 ;
        }

        void afficher_sur_un_ficher (FILE* ficher , int nb_bit )
        {
            ficher = fopen("res.txt","a+");
            fprintf (ficher ,"%d \t" , nb_bit);
            int static z ;
            z = z+1 ;
            if ( z%10 == 0)
                fprintf (ficher , "\n");
            fclose(ficher);
        }
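
    For reference, the classic sieve clears multiples of each surviving number (n*n, n*n+n, ...), rather than repeatedly doubling the way the while loop above does; a compact sketch of the intended algorithm in Python:

        LIMIT = 4096 * 8                   # same range as the 4096-byte bit array above
        is_prime = [True] * LIMIT
        is_prime[0] = is_prime[1] = False

        for n in range(2, int(LIMIT ** 0.5) + 1):
            if is_prime[n]:
                for multiple in range(n * n, LIMIT, n):   # clear n*n, n*n+n, ...
                    is_prime[multiple] = False

        primes = [n for n, p in enumerate(is_prime) if p]
        print(primes[:10])    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]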

  • why does text from socket server erase previously written text?

    - by mix
    This is strange enough that I'm not sure how to search for an answer. I have a program in Python that communicates via TCP/IP sockets with a telnet-based server. If I telnet in manually and type commands like this:

        SET MDI G0 X0 Y0

    the server will spit back a line like this:

        SET MDI ACK

    Pretty standard stuff. Here's the weird part. If, in my code, I precede my printing of each of these lines with some text, the returned line erases what I'm trying to print before it. So for example, if I write the code so it should print this:

        SENT: SET MDI G0 X0 Y0
        READ: SET MDI ACK

    what I get instead is:

        SENT: SET MDI G0 X0 Y0
        SET MDI ACK

    Now, if I make the "READ: " prefix a bit longer, I can get a better idea of what's happening. Let's say I change "READ: " to "12345678901234567890: ", so that it should read:

        12345678901234567890: SET MDI ACK

    What I get instead is:

        SET MDI ACK234567890:

    So it seems like whatever text I'm getting back from the server is somehow deleting what I'm trying to precede it with. I tried saving all of my lines in a list and then printing them out at the end, but it does exactly the same thing. Any ideas on what's going on, or even on how to debug this? Is there a way to get Python to show me any hidden chars in a string, for example? Thanks!
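
    On the final question: yes -- printing the repr of a string shows control characters escaped, which would immediately expose a trailing carriage return pulling the cursor back to column 1, a likely cause of this symptom with telnet-style servers that end lines in \r\n. A sketch (the payload is hypothetical):

        line = "SET MDI ACK\r"   # as it might arrive from the server
        print(line)              # the \r sends the cursor back to the line start
        print(repr(line))        # -> 'SET MDI ACK\r'  -- hidden character now visible
        print(line.strip())      # strip() removes the trailing \r before printing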
