
  • Logback: What would cause DEBUG - WARN to append to file but NOT ERROR?

    - by Zombies
    I am running a Java program from Eclipse, and I am using the logback console plugin. I can see the ERROR level statements being appended to the console (as well as all of the others). But for some reason my file, which is correctly receiving new DEBUG-WARN statements, is NOT receiving the ERROR level ones. Here is my logback.xml:

        <consolePlugin />
        <appender name="FILE" class="ch.qos.logback.core.FileAppender">
          <file>logs/main.log</file>
          <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</Pattern>
          </layout>
        </appender>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%msg%n</Pattern>
          </layout>
        </appender>
        <logger name="WebsiteChecker">
          <appender-ref ref="FILE" />
        </logger>
        <root>
          <level value="debug" />
          <!--<appender-ref ref="STDOUT" />-->
          <appender-ref ref="FILE" />
        </root>
        </configuration>
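
    An editorial aside: the snippet as posted closes </configuration> without ever opening it, so the real file may differ from what is shown. For comparison, a minimal configuration that writes every level from DEBUG up to a single file would look roughly like this (a sketch, not the asker's actual setup):

        <configuration>
          <appender name="FILE" class="ch.qos.logback.core.FileAppender">
            <file>logs/main.log</file>
            <layout class="ch.qos.logback.classic.PatternLayout">
              <Pattern>%date %level [%thread] %logger{10} %msg%n</Pattern>
            </layout>
          </appender>
          <root level="debug">
            <appender-ref ref="FILE" />
          </root>
        </configuration>

    Diffing a known-good file like this against the real one helps rule out a stray filter or level element that only affects ERROR.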

  • Querying datetime.datetime on App Engine acts different than on the dev server, help!

    - by Alon Carmel
    Hey, I'm having some trouble with stuff that works locally but doesn't work in the App Engine Python environment. Basically, I want to get a program from an EPG between a range of dates and times. I know I cannot do two 'WHERE <' comparisons, so I saw a suggestion to save the dates as a list of datetime.datetime, which I did:

        [datetime.datetime(2010, 5, 10, 14, 25), datetime.datetime(2010, 5, 10, 15, 0)]

    This is OK, but when I try to compare to it:

        progranon = get_object(Programs2Channel,
                               'channel_id =', channelobj.key(),
                               'endstartdate >', programstart_minex,
                               'endstartdate <', programstart_minex)

    this for some reason works locally but fails to retrieve the data on App Engine. (I'm using the Google App Engine Django patch, which uses get_object to retrieve data in transactions.) Please help. Here are more details:

        # this is the list:
        [datetime.datetime(2010, 5, 13, 10, 45), datetime.datetime(2010, 5, 13, 11, 30)]

        # this is the query:
        programstart = "" + year + "-" + month + "-" + day + " " + hour + ":" + minute
        programstart_minex = datetime.strptime(programstart, "%Y-%m-%d %H:%M")
        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid AND endstartdate > :programstartx AND endstartdate < :programstartx',
            channelid=channelobj.key(),
            programstartx=programstart_minex).get()
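
    An editorial note, not from the asker: a plausible explanation is that the production Datastore requires all inequality filters on a multi-valued property to be satisfied by a single list element, so endstartdate > x AND endstartdate < x can never match there, while the development server of that era would let different list elements satisfy different filters. A common workaround is to keep two scalar properties, apply one bound in the query and check the other in code; a sketch with hypothetical property names startdate and enddate:

        # sketch: assumes Programs2Channel stores scalar startdate/enddate
        q = Programs2Channel.gql(
            'WHERE channel_id = :1 AND startdate <= :2 ORDER BY startdate DESC',
            channelobj.key(), programstart_minex)
        candidate = q.get()
        if candidate is not None and candidate.enddate >= programstart_minex:
            progranon = candidate  # the programme running at the requested time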

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and then uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help either. Please note that I used to be able to use the Entity Data Model just fine; the trouble started around the time I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure. Now, I would think that uninstalling and reinstalling Visual Studio 2013 and then installing the NuGet package would solve all problems. What am I missing here? The errors mentioned above are:

        Error 1  The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  14  30  DataLayer
        Error 2  The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  16  52  DataLayer
        Error 3  The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  23  49  DataLayer
        Error 4  The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  28  16  DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish. John
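
    An editorial aside: DbContext, DbSet, DbModelBuilder and System.Data.Entity.Infrastructure all live in EntityFramework.dll, the assembly installed by the EntityFramework NuGet package, so errors like these usually mean the project's reference to that assembly is missing or broken rather than anything about the model itself. Checking that packages.config still carries an entry along these lines (the version shown is illustrative) and that the reference resolves is a good first step:

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="EntityFramework" version="6.1.3" targetFramework="net45" />
        </packages>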

  • C# 4.0 'dynamic' doesn't set ref/out arguments

    - by Buu Nguyen
    I'm experimenting with DynamicObject. One of the things I'm trying to do is set the values of ref/out arguments, as shown in the code below. However, I am not able to have the values of i and j in Main() set properly (even though they are set correctly in TryInvokeMember()). Does anyone know how to call a DynamicObject object with ref/out arguments and be able to retrieve the values set inside the method?

        class Program
        {
            static void Main(string[] args)
            {
                dynamic proxy = new Proxy(new Target());
                int i = 10;
                int j = 20;
                proxy.Wrap(ref i, ref j);
                Console.WriteLine(i + ":" + j); // prints "10:20" while I expect "20:10"
            }
        }

        class Proxy : DynamicObject
        {
            private readonly Target target;

            public Proxy(Target target)
            {
                this.target = target;
            }

            public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
            {
                int i = (int) args[0];
                int j = (int) args[1];
                target.Swap(ref i, ref j);
                args[0] = i;
                args[1] = j;
                result = null;
                return true;
            }
        }

        class Target
        {
            public void Swap(ref int i, ref int j)
            {
                int tmp = i;
                i = j;
                j = tmp;
            }
        }

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping the data straight and efficient is getting difficult using just MATLAB.

    Is there a technique for sub-setting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m); is there a way to select only the points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in MATLAB and I'm unaware of any GIS plugins for MATLAB.

    I forgot to mention: this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help, or if calculating every distance to a random point is what a spatial database is going to do anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30 m range in each. I would do the same for both arrays, X and Y, and then have a third cross-link table that would select the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
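
    An editorial sketch of the usual MATLAB idiom for exactly this query: with the tree centers in column vectors x and y (variable names here are illustrative), logical indexing picks out everything within 30 m of the current position without any sorting:

        % x, y   : N-by-1 coordinates of detected trees (metres)
        % cx, cy : current robot position
        r = 30;                                    % search radius in metres
        near = (x - cx).^2 + (y - cy).^2 <= r^2;   % compare squared distances, no sqrt needed
        xNear = x(near);
        yNear = y(near);

    This is O(N) per query and typically fast enough for a few thousand points; if the map grows much larger, a uniform grid of buckets or a k-d tree gives the same query in roughly logarithmic time, which is essentially what a spatial database does internally.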

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version; I don't even #include the header files. I have this working fine. For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct
        {
            int foo;
            int bar;
        };

        MyStruct blah = {}; // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic. I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22: if I don't define memset(), the linker complains that it's undefined; if I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck. Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
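
    An editorial sketch of one avenue, in case it hasn't been tried in this exact form: MSVC's #pragma function is the inverse of #pragma intrinsic and forces the named function to be an out-of-line call in the translation unit where it appears, which also makes defining it legal there. Putting the definition in one dedicated file, as sketched below, lets every other translation unit keep the intrinsic form (this is an assumption about the setup, not a tested fix for it):

        #include <cstddef>

        // This TU provides a real out-of-line memset; the pragma disables
        // the intrinsic form here so the definition is accepted.
        #pragma function(memset)

        extern "C" void *memset(void *dest, int value, size_t count)
        {
            unsigned char *p = static_cast<unsigned char *>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dest;
        }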

  • AI navigation around a 2D map - avoiding obstacles

    - by Curt Walker
    Hey there, I know my question seems pretty vague, but I can't think of a better way to put it, so I'll start off by explaining what I'm trying to do. I'm currently working on a project whereby I've been given a map and I'm coding a 'Critter' that should be able to navigate its way around the map; the critter has various other functions, but they are not relevant to the current question. The whole program and solution is being written in C#. I can control the speed of the critter and retrieve its current location on the map by returning its current X and Y position, and I can also set its direction when it collides with the terrain that blocks it. The only problem I have is that I can't think of a way to intelligently navigate around the map. So far I've been basing movement on what direction the critter is facing when it collides with the terrain, and this is in no way a good way of moving around the map! I'm not a games programmer, and this is for a software assignment, so I have no clue about AI techniques. All I am after is a push in the right direction on how I could go about getting this critter to find its way around any map given to me. Here's a link to an image of what the maps and critters look like, to give you an idea of what I'm talking about: Map and Critter image. I'm in no way looking for anyone to give me a full solution, just a push in the general direction on map navigation. Thanks in advance!
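
    An editorial push in that general direction: the standard approach is to overlay a coarse grid on the map, mark the cells that contain blocking terrain, and run a graph search such as breadth-first search (upgrading to A* later is straightforward) from the critter's cell toward a goal cell, then steer along the returned cells. A minimal sketch, with blocked[x, y] as a hypothetical grid built from the map:

        using System;
        using System.Collections.Generic;
        using System.Drawing;   // Point

        static class PathFinder
        {
            // Breadth-first search over a grid; blocked[x, y] marks impassable cells.
            // Returns the path from start to goal as a list of cells, or null.
            public static List<Point> FindPath(bool[,] blocked, Point start, Point goal)
            {
                int w = blocked.GetLength(0), h = blocked.GetLength(1);
                var cameFrom = new Dictionary<Point, Point> { [start] = start };
                var queue = new Queue<Point>();
                queue.Enqueue(start);

                int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
                while (queue.Count > 0)
                {
                    Point c = queue.Dequeue();
                    if (c == goal)
                    {
                        var path = new List<Point>();
                        for (Point p = goal; p != start; p = cameFrom[p])
                            path.Add(p);
                        path.Add(start);
                        path.Reverse();
                        return path;
                    }
                    for (int i = 0; i < 4; i++)
                    {
                        var next = new Point(c.X + dx[i], c.Y + dy[i]);
                        if (next.X < 0 || next.Y < 0 || next.X >= w || next.Y >= h) continue;
                        if (blocked[next.X, next.Y] || cameFrom.ContainsKey(next)) continue;
                        cameFrom[next] = c;
                        queue.Enqueue(next);
                    }
                }
                return null;   // no route exists
            }
        }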

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signature of the methods varies between the client and the service: I get some unwanted parameters in the method. Note: I have used IsRequired = true for the DataMember.

    Service:

        [OperationContract]
        int GetInt();

    Client:

        proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me make the schema consistent for both WCF and non-WCF clients? Do we have any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService
        {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService
            {
                [DataMember(IsRequired = true)]
                int resultIntVal = 1;

                int firstVal = 1;

                public int GetInt()
                {
                    return firstVal;
                }

                public int SecondTestInt()
                {
                    return resultIntVal;
                }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());

                    Console.ReadLine();
                }
            }
        }

    Please help. Thanks, Lijo
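
    An editorial guess at the cause: the extra out bool has the shape of the XmlSerializer "Specified" pattern, which wsdl.exe generates when the WSDL marks the response element as optional (minOccurs="0"); the IsRequired = true in the sample sits on a private field that is not part of any data contract, so it has no effect on the operation's schema. One avenue worth testing (an assumption, not a verified fix) is switching the contract to XmlSerializer-style schema generation:

        [ServiceContract(Namespace = "http://Lijo.Samples")]
        [XmlSerializerFormat]   // assumption: may yield a schema that ASMX proxies map to a plain int
        public interface IElementaryService
        {
            [OperationContract]
            int GetInt();

            [OperationContract]
            int SecondTestInt();
        }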

  • Can't cast a class with multiple inheritance

    - by Jay S.
    I am trying to refactor some code while leaving existing functionality intact. I'm having trouble casting a pointer to an object into a base interface and then getting the derived class out later. The program uses a factory object to create instances of these objects in certain cases. Here are some examples of the classes I'm working with:

        // This is the one I'm working with now that is causing all the trouble.
        // Some, but not all, methods in NewAbstract and OldAbstract overlap, so I
        // used virtual inheritance.
        class MyObject : virtual public NewAbstract, virtual public OldAbstract { ... };

        // This is what it looked like before
        class MyObject : public OldAbstract { ... };

        // This is an example of most other classes that use the base interface
        class NormalObject : public ISerializable { ... };

        // The two abstract classes. They inherit from the same object.
        class NewAbstract : public ISerializable { ... };
        class OldAbstract : public ISerializable { ... };

        // A factory object used to create instances of ISerializable objects.
        template<class T>
        class Factory {
        public:
            ...
            virtual ISerializable* createObject() const {
                return static_cast<ISerializable*>(new T()); // current factory code
            }
            ...
        };

    This question has good information on what the different types of casting do, but it's not helping me figure out this situation. Using static_cast and regular casting gives me error C2594: 'static_cast': ambiguous conversions from 'MyObject *' to 'ISerializable *'. Using dynamic_cast causes createObject() to return NULL. The NormalObject-style classes and the old version of MyObject work with the existing static_cast in the factory. Is there a way to make this cast work? It seems like it should be possible.
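
    An editorial observation on the error itself: NewAbstract and OldAbstract each inherit ISerializable non-virtually, so MyObject ends up with two distinct ISerializable subobjects, and marking MyObject's own bases virtual does not merge them; that ambiguity is also why dynamic_cast returns NULL here. The usual fix is virtual inheritance at the point where ISerializable enters the hierarchy, sketched below:

        // ISerializable becomes a virtual base, so both abstract classes share
        // a single ISerializable subobject inside MyObject.
        class NewAbstract : virtual public ISerializable { /* ... */ };
        class OldAbstract : virtual public ISerializable { /* ... */ };

        class MyObject : public NewAbstract, public OldAbstract { /* ... */ };

        // The upcast in the factory is then unambiguous again:
        // ISerializable* p = static_cast<ISerializable*>(new MyObject());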

  • Access violations in strange places when using Windows file dialogs

    - by Robert Oschler
    A long time ago I found out that I was getting access violations in my code due to the use of the Delphi Open File and/or Save File dialogs, which encapsulate the Windows dialogs. I asked some questions on a few forums, and I was told that it may have been due to the way some programs add hooks to the shell system that result in DLLs getting injected into every process, some of which can cause havoc with a program. For the record, the programming environment I use is Delphi 6 Professional running on Windows XP 32-bit. At the time I got around it by not using Delphi's dialog components and instead calling straight into comdlg32.dll. This solved the problem wonderfully.

    Today I was working with memory-mapped files for the first time, and sure enough, access violations started cropping up in weird parts of the code. I tried my comdlg32.dll direct calls, and this time it didn't help. To isolate the problem, as a test I created a list box with the exact same files I was using during testing. These are the exact same test files I was selecting from an Open File dialog and then launching my memory-mapped file with. I set things up so that by clicking on a file in the list box, I would use that file in my memory-mapped file test instead of calling into a comdlg32.dll dialog function to select a test file. Again, the access violations vanished. To show you how dramatic a fix it was: I went from experiencing an access violation within 1 to 3 trials to none at all. Unfortunately, it's going to bite me later on, of course, when I do need to use file dialogs.

    Has anyone else dealt with this issue too and found the real culprit? Did any of you find a solution I could use to fix this problem instead of dancing around it like I am now? Thanks in advance.

  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
    Greetings. Currently, I'm attempting to develop an application that takes a byte array that is streamed to us from a Linux C-language program across a TcpClient (stream) and reassembles it back into an image/JPG. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate. I believe we successfully get all of the data sent, storing it as a string (which lets us append the stream until it is complete) and then converting it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it back into an image using System.Drawing.Image.FromStream(), we get errors.

    Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C#/.NET, but the off-site developer did his work in C on Linux... perhaps there's some difference in how each operating system converts images into byte arrays? Anyway, here's the code for converting our received byte array (from the TcpClient stream) into an image. This code works when we send an image from a test machine we built that runs on XP, but not from the Linux box...

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);

        // Convert byte[] to Image
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);
        // <-- DIES here, throws a {System.ArgumentException: Parameter is not valid.} error

    Any advice, suggestions, theories, or HELP would be GREATLY appreciated! Please let me know??? Best wishes all! Thanks in advance! Greg
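
    An editorial note on the likely culprit: JPEG data is binary, and round-tripping it through a string with ASCIIEncoding is lossy (every byte above 0x7F gets replaced), which produces exactly this ArgumentException on any platform. A sketch of accumulating the raw bytes without ever touching a string, assuming the sender closes the connection when the image is complete (otherwise a length prefix is needed):

        using System.Drawing;
        using System.IO;
        using System.Net.Sockets;

        // Read the whole stream as raw bytes and decode it; no string conversion.
        static Image ReadImage(TcpClient client)
        {
            using (NetworkStream ns = client.GetStream())
            using (MemoryStream ms = new MemoryStream())
            {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = ns.Read(buffer, 0, buffer.Length)) > 0)
                    ms.Write(buffer, 0, n);   // copy exactly the bytes received
                ms.Position = 0;              // rewind before decoding
                return Image.FromStream(ms);
            }
        }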

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either in EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile-time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It would be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible.

    Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine; undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is at global scope. Please correct me if I'm wrong about this.
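
    An editorial sketch of one alternative, assuming the storage layer can be adjusted to take the union (or read .intval from it) rather than a bare unsigned int: C99 designated initializers, which GCC-based compilers accept, let the defaults array be an array of the existing IntFloat union, so each entry is written in its natural notation and the bit pattern is still fixed at compile time:

        typedef union {
            unsigned int intval;   /* widened to unsigned here for the table */
            float floatval;
        } IntFloat;

        /* Each element initializes whichever member matches its natural
           notation; the compiler lays down the 32-bit pattern at build time. */
        static const IntFloat DEFAULTS[] = {
            { .intval   = 1 },        /* some default integer value, 1   */
            { .floatval = 0.005f },   /* some default float value, 0.005 */
        };

        /* Reading entry i back as a storable word: DEFAULTS[i].intval */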

  • commons-exec: hanging when I call executor.execute(commandLine);

    - by Stefan Kendall
    I have no idea why this is hanging. I'm trying to capture output from a process run through commons-exec, and it continues to hang. I've provided an example program to demonstrate the behavior below.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.PipedInputStream;
        import java.io.PipedOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.ExecuteException;
        import org.apache.commons.exec.PumpStreamHandler;

        public class test {
            public static void main(String[] args) {
                String command = "java";

                PipedOutputStream output = new PipedOutputStream();
                PumpStreamHandler psh = new PumpStreamHandler(output);

                CommandLine cl = CommandLine.parse(command);
                DefaultExecutor exec = new DefaultExecutor();

                DataInputStream is = null;
                try {
                    is = new DataInputStream(new PipedInputStream(output));
                    exec.setStreamHandler(psh);
                    exec.execute(cl);
                } catch (ExecuteException ex) {
                } catch (IOException ex) {
                }

                System.out.println("huh?");
            }
        }
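
    An editorial diagnosis, for what it's worth: execute() blocks until the child process exits, and nothing here reads from the PipedInputStream on another thread, so once the pipe's internal buffer fills, the pump thread blocks and everything deadlocks. If the output is of modest size, the simplest variant captures it into memory; a sketch:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.ExecuteException;
        import org.apache.commons.exec.PumpStreamHandler;

        public class CaptureOutput {
            public static void main(String[] args) throws IOException {
                // Buffer stdout and stderr in memory instead of a pipe nobody drains.
                ByteArrayOutputStream captured = new ByteArrayOutputStream();

                DefaultExecutor exec = new DefaultExecutor();
                exec.setStreamHandler(new PumpStreamHandler(captured));

                try {
                    // "java" with no arguments prints usage text and exits non-zero
                    exec.execute(CommandLine.parse("java"));
                } catch (ExecuteException ex) {
                    // non-zero exit; the captured output is still available
                }
                System.out.println(captured.toString());
            }
        }

    For output too large to buffer, the same idea works with a FileOutputStream, or the process can be run asynchronously (commons-exec's DefaultExecuteResultHandler) while a dedicated thread drains the pipe.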

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painfully slow. Is there a more efficient approach? The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();
                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();
                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue;
                            var word2 = inputWords[n].ToLower();
                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            var distance = LevenshteinAlgorithm.Compute(word1, word2);
                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }
                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }
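
    A quick editorial sketch of one way to cut the quadratic comparison count: canonicalize each address first and group by the canonical form, so the expensive Levenshtein check (if still needed at all) only runs within small buckets. The normalization rule below, dropping digits, is an illustrative heuristic aimed at the UserName1/UserName2 pattern described, not a general solution:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class BucketDedup
        {
            // Heuristic canonical form: lower-case with digits removed,
            // so UserName1@... and UserName2@... collapse to the same key.
            static string Canonical(string email) =>
                new string(email.ToLowerInvariant()
                                .Where(c => !char.IsDigit(c))
                                .ToArray());

            static void Main()
            {
                var groups = File.ReadAllLines(@"C:\Input.txt")
                                 .GroupBy(Canonical)
                                 .Where(g => g.Count() > 1);   // suspicious clusters

                foreach (var g in groups)
                    Console.WriteLine(string.Join(", ", g));
            }
        }

    This single pass is O(n) in the list size, versus roughly 200 million Levenshtein calls for 20,000 addresses compared pairwise.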

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" );
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format( flacParam, SourceFile );

        ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" );
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) );

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start( piFlac );

        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout );
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        while ( count != 0 )
        {
            lamep.StandardInput.BaseStream.Write( buffer, 0, count );
            count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        }

    Here I set the command-line parameters to tell lame.exe to write its output to stdout, and I make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary. But DataReceivedEventArgs.Data is of type string, and I have to convert it to byte[] before putting it in the cache; I think this is ugly, and when I tried this approach the result was incorrect. Is there any way to read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason I don't let lame write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk would cause severe fragmentation. Thanks a lot!
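
    An editorial answer sketch: yes, skip BeginOutputReadLine/OutputDataReceived entirely (they are line- and string-oriented) and read lamep.StandardOutput.BaseStream directly, which yields the raw bytes. The sink stream below is an illustrative stand-in for whatever the cache is:

        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        // psiLame configured as in the question (RedirectStandardInput/Output = true).
        static void Encode(ProcessStartInfo psiLame, Stream flacOutput, Stream mp3Sink)
        {
            using (Process lame = Process.Start(psiLame))
            {
                Stream lameIn  = lame.StandardInput.BaseStream;
                Stream lameOut = lame.StandardOutput.BaseStream;

                // Drain lame's stdout on a worker thread so its pipe never fills.
                Thread pump = new Thread(() =>
                {
                    byte[] buf = new byte[32 * 1024];
                    int n;
                    while ((n = lameOut.Read(buf, 0, buf.Length)) > 0)
                        mp3Sink.Write(buf, 0, n);   // raw binary, no string conversion
                });
                pump.Start();

                byte[] buffer = new byte[32 * 1024];
                int count;
                while ((count = flacOutput.Read(buffer, 0, buffer.Length)) > 0)
                    lameIn.Write(buffer, 0, count);
                lameIn.Close();                     // signal end-of-input to lame

                pump.Join();
                lame.WaitForExit();
            }
        }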

  • lambda traits inconsistency across C++0x compilers

    - by Sumant
    I observed some inconsistency between two compilers (g++ 4.5, VS2010 RC) in the way they match lambdas with partial specializations of class templates. I was trying to implement something like boost::function_types for lambdas, to extract type traits. Check this for more details. In g++ 4.5, the type of the operator() of a lambda appears to be like that of a free-standing function (R (*)(...)), whereas in VS2010 RC, it appears to be like that of a member function (R (C::*)(...)). So the question is: are compiler writers free to interpret this any way they want? If not, which compiler is correct? See the details below.

        template <typename T>
        struct function_traits
            : function_traits<decltype(&T::operator())>
        {
            // This generic template is instantiated on both compilers as expected.
        };

        template <typename R, typename C>
        struct function_traits<R (C::*)() const>
        {
            // inherits from this one on VS2010 RC
            typedef R result_type;
        };

        template <typename R>
        struct function_traits<R (*)()>
        {
            // inherits from this one on g++ 4.5
            typedef R result_type;
        };

        int main(void)
        {
            auto lambda = []{};
            function_traits<decltype(lambda)>::result_type *r; // void *
        }

    This program compiles on both g++ 4.5 and VS2010, but the function_traits specializations that are instantiated are different, as noted in the code.

  • The rules to connect to a web service through SSL and Certificates

    - by blgnklc
    There is a web service running on Tomcat on a server. It is built on a Java servlet and listens for callers on an SSL-enabled HTTP port, so its web service address looks like:

        https://172.29.12.12/axis/services/XYZClient?wsdl

    On the other hand, I want to connect to the web service above from a Windows application built on the .NET Framework. When I try to connect to the web service from my computer, I get some specific errors. Firstly, I get a proxy authentication error; then I added some new lines to my code:

        Dim cr As System.Net.NetworkCredential = New System.Net.NetworkCredential("xname", "xsurname", "xdomainname")
        Dim myProxy As New WebProxy("http://mar.xxxyyy.com", True)
        myProxy.Credentials = cr

    Secondly, after these modifications, it says "bad request", and I could not get past this error. Moreover, I tried to connect to the web service on the same computer: I copied my executable program to the computer where the web service runs. The error then was:

        The underlying connection was closed: Could not establish trust relationship for SSL/TLS secure channel

    PS: When I try to connect to the web service using Internet Explorer, I first see some warnings about accepting an unknown certificate; I click "take me to the web service" and get there without trouble. I want to know what the basic elements are for connecting to a web service; could you please tell me the requirements I have to use in my Windows project? regards bk
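
    An editorial note on the trust error specifically: the server is presenting a certificate the client machine does not trust, which is the same condition Internet Explorer warns about, so the clean fix is installing the server's certificate (or its issuing CA) into the client machine's trusted store. For testing only, .NET also allows overriding certificate validation in code; a sketch:

        Imports System.Net
        Imports System.Net.Security
        Imports System.Security.Cryptography.X509Certificates

        Module TrustHelper
            ' TEST ONLY: accepts every server certificate, disabling TLS trust checks.
            Private Function AcceptAnyCertificate(ByVal sender As Object, _
                                                  ByVal cert As X509Certificate, _
                                                  ByVal chain As X509Chain, _
                                                  ByVal errors As SslPolicyErrors) As Boolean
                Return True
            End Function

            Public Sub Install()
                ServicePointManager.ServerCertificateValidationCallback = _
                    AddressOf AcceptAnyCertificate
            End Sub
        End Module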

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping...   // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
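
    An editorial sketch of the two standard remedies, assuming Python 2.6 as in the traceback: either ask the OS to restart interrupted system calls when installing the handler (signal.siginterrupt), or catch EINTR at the call site and retry:

        import errno
        import signal
        import socket

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True

        signal.signal(signal.SIGINT, handle_sigint)
        # Restart system calls such as recv() instead of failing with EINTR.
        signal.siginterrupt(signal.SIGINT, False)

        def recv_retrying(sock, size):
            # Alternative at the call site: retry when a signal interrupts recv().
            while True:
                try:
                    return sock.recv(size)
                except socket.error as e:
                    if e.args[0] != errno.EINTR:
                        raise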

  • Cannot run Python script on Windows with output redirected??

    - by Wai Yip Tung
    This is running on Windows 7 (64-bit), Python 2.6 with Win32 Extensions for Python. I have a simple script that just prints "hello world". I can launch it with python hello.py, and in that case I can redirect the output to a file. But if I run it by just typing hello.py on the command line and redirect the output, I get an exception:

        C:\> python hello.py
        hello world

        C:\> python hello.py >output
        C:\> type output
        hello world

        C:\> hello.py
        hello world

        C:\> hello.py >output
        close failed in file object destructor:
        Error in sys.excepthook:

        Original exception was:

    I think I first got this error after upgrading to Windows 7; I remember it working on XP. I have seen people talking about this bug (python-Bugs-1012692 | Can't pipe input to a python program), but that was a long time ago, and it does not mention any solution. Has anyone experienced this? Can anyone help?

  • After interruption, delayed audio route change notifications when recording

    - by Frank Shearar
    My iPhone application requires that I know when a user has/has not plugged in her headphones. That's easy: AudioSessionAddPropertyListener with a callback listening to kAudioSessionProperty_AudioRouteChange. I write logs with NSLog as things happen. User plugs the headphones in? Get a notification, and a line in the gdb console. User unplugs the headphones? Ditto.

    At the same time, I'm sensing the noise level of the environment by starting a recording audio queue. This, too, works great: I can get the mic noise level and listen for audio route changes just fine. What I find is that after an interruption, once I've reactivated the audio session and restored the audio category to kAudioSessionCategory_RecordAudio, the audio route notifications go a bit haywire. When I plug in the headphones, I see no notification. When I unplug the headphones, I see BOTH the "plugged in" notification AND the "unplugged" notification, in rapid succession. It's as if the "plugged in" notification is delayed and, when the "unplugged" notification arrives, the queue of pending notifications is flushed.

    What am I doing wrong? How do I correctly restore the audio session to get timely notifications?

    EDIT: iPhone OS 3.1.2, running on an iPhone 3G. I'm running a program compiled with the 3.0 SDK (from within Xcode 3.1.2).

  • What's the correct way to do a "catch all" error check on an fstream output operation?

    - by Truncheon
    What's the correct way to check for a general error when sending data to an fstream?

    UPDATE: My main concern regards some things I've been hearing about a delay between output and any data being physically written to the hard disk. My assumption was that the statement save_file_obj << save_str would only send data to some kind of buffer, and that the following check, if (save_file_obj.bad()), would be of no use in determining whether there was an OS or hardware problem. I just want to know the definitive "catch all" way to send a string to a file and check that it was written to the disk, before carrying out any following actions such as closing the program. I have the following code...

        int Saver::output()
        {
            save_file_handle.open(file_name.c_str());
            if (save_file_handle.is_open())
            {
                save_file_handle << save_str.c_str();
                if (save_file_handle.bad())
                {
                    x_message("Error - failed to save file");
                    return 0;
                }
                save_file_handle.close();
                if (save_file_handle.bad())
                {
                    x_message("Error - failed to save file");
                    return 0;
                }
                return 1;
            }
            else
            {
                x_message("Error - couldn't open save file");
                return 0;
            }
        }
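
    An editorial sketch of the usual "catch all" idiom: bad() only reports a subset of failures, so test the whole stream state with fail() (or operator!) after an explicit flush(), which is the point where buffered data is handed to the OS. Note that even a clean flush() only means the OS accepted the bytes; a guarantee that they reached the platter requires an OS-level call (FlushFileBuffers on Windows, fsync on POSIX) on the underlying handle, which the standard library does not expose:

        #include <fstream>
        #include <string>

        // Returns true only if every buffered byte was accepted by the OS.
        bool save_to_file(const std::string& file_name, const std::string& save_str)
        {
            std::ofstream out(file_name.c_str());
            if (!out.is_open())
                return false;       // could not open

            out << save_str;
            out.flush();            // push the buffer to the OS now

            if (out.fail())         // covers both badbit and failbit
                return false;

            out.close();            // close can also fail, so check once more
            return !out.fail();
        }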

  • Is there a way to programmatically tell if a particular block of memory was not freed by FastMM?

    - by Wodzu
    I am trying to detect if a block of memory was not freed. Of course, the manager tells me that via a dialog box or log file, but what if I would like to store the results in a database? For example, I would like to have, in a database table, the names of the routines which allocated given blocks. After reading the documentation of FastMM, I know that since version 4.98 we have the possibility of being notified by the manager about memory allocations, frees and reallocations as they occur. For example, the OnDebugFreeMemFinish event passes us a PFullDebugBlockHeader, which contains useful information. There is one thing PFullDebugBlockHeader is missing: the information of whether the given block was freed by the application. Unless OnDebugFreeMemFinish is called only for non-freed blocks? That is what I do not know and would like to find out. The problem is that even hooking into the OnDebugFreeMemFinish event, I was unable to find out whether the block was freed or not. Here is an example:

        program MemLeakTest;

        {$APPTYPE CONSOLE}

        uses
          FastMM4, ExceptionLog, SysUtils;

        procedure MemFreeEvent(APHeaderFreedBlock: PFullDebugBlockHeader; AResult: Integer);
        begin
          // This is executed at the end, but how should I know that this block
          // should have been freed by the application? Unless this is executed
          // ONLY for not-freed blocks.
        end;

        procedure Leak;
        var
          MyObject: TObject;
        begin
          MyObject := TObject.Create;
        end;

        begin
          OnDebugFreeMemFinish := MemFreeEvent;
          Leak;
        end.

    What I am missing is a callback like:

        procedure OnMemoryLeak(APointer: PFullDebugBlockHeader);

    After browsing the source of FastMM, I saw that there is a procedure:

        procedure LogMemoryLeakOrAllocatedBlock(APointer: PFullDebugBlockHeader; IsALeak: Boolean);

    which could be overridden, but maybe there is an easier way?

  • Cannot access Class methods from previous windows form - C#

    - by George
    I am writing an app (still) where I need to test some devices every minute for 30 minutes. It made sense to use a timer set to kick off every 60 seconds and do what's required in the event handler. However, I need the app to wait for the 30 minutes until the timer has finished, since the code that follows alters the state of the devices I am trying to monitor, and I obviously don't want to use any form of loop to do this. I thought of using another Windows Form, since I also display the progress, which would simply kick off the timer and wait until it's complete. The problem I am having with this is that I use a device class and can't seem to get access to the methods in the device class from the 2nd (3rd, actually; see below) Windows Form. I have an initial Windows Form where I get input from the user, then call the 2nd Windows Form, where it works out which tests need to be done and which device classes need to be used, and then I want to call the 3rd Windows Form to handle the timer. I will have up to 6-7 device classes, so I wanted to instantiate them only when actually required, from the 2nd form. Should I have put this logic into the 1st Windows Form (Program class??)? Would I not still have the problem of not being able to access device class methods from there too? Anyway, perhaps someone knows of a better way to do the checks every minute without the rest of the code executing (and changing the status of the devices), or how I should be accessing the methods in the app? Well, that's the problem: I can't get that part of it to work correctly. Here is the definition for the calling form, including the device class:

        namespace NdtStart
        {
            public partial class fclsNDTCalib : Form
            {
                NDTClass NDT = new NDTClass();

                public fclsNDTCalib()
                ...
                (new fclsNDTTicker(NDT)).ShowDialog();

    Here is the class definition for the called form:

        namespace NdtStart
        {
            public partial class fclsNDTTicker : Form
            {
                public fclsNDTTicker()

    I tried lots but couldn't get the arguments to work.
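
    An editorial sketch of the usual fix for the call site shown above, (new fclsNDTTicker(NDT)).ShowDialog(): give the called form a constructor overload that accepts the shared instance and keeps it in a field, so the ticker form can reach the device methods (CheckDevices below is a hypothetical name for whatever NDTClass actually exposes):

        public partial class fclsNDTTicker : Form
        {
            private readonly NDTClass ndt;   // shared device instance from the caller

            public fclsNDTTicker(NDTClass ndt)
            {
                InitializeComponent();
                this.ndt = ndt;
            }

            private void timer1_Tick(object sender, EventArgs e)
            {
                ndt.CheckDevices();   // hypothetical device call
            }
        }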

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison, and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like:

        import std.*;

        load( "some_script.hy" );
        eval( "foo = 123;" );
        println( foo );

    So, in my lexer I've implemented the function:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            YY_BUFFER_STATE prev, next;

            /*
             * Save current buffer.
             */
            prev = YY_CURRENT_BUFFER;

            /*
             * yy_scan_string will call yy_switch_to_buffer.
             */
            next = yy_scan_string( str );

            /*
             * Do actual parsing (yyparse calls yylex).
             */
            yyparse();

            /*
             * Restore previous buffer.
             */
            yy_switch_to_buffer(prev);
        }

    But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV:

        Program received signal SIGSEGV, Segmentation fault.
        0xb7f2b801 in yylex () at src/lexer.cpp:1658
        1658      if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW )

    As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... any hints, or at least any example of how to implement these kinds of functions? PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).
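
    An editorial guess at the crash, based only on the snippet: the buffer created by yy_scan_string is never released, and if prev was NULL (no buffer existed before the first eval), the final yy_switch_to_buffer(prev) leaves the scanner pointing at a dead buffer, which matches a fault on YY_CURRENT_BUFFER_LVALUE inside yylex. A sketch of the commonly recommended shape:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            YY_BUFFER_STATE prev, next;

            prev = YY_CURRENT_BUFFER;        /* may be NULL on first use */
            next = yy_scan_string( str );
            yyparse();

            yy_delete_buffer( next );        /* free the string buffer */
            if ( prev != NULL )
                yy_switch_to_buffer( prev ); /* only restore a real buffer */
        }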

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit Longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit Longs in VB6 are signed; just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author:

        mov eax, [esp+4]
        mov cl, [esp+8]
        shl eax, cl
        ret 8

    I spat those bytes into a file and tried unassembling them in a Windows command prompt using the old debug utility, but I figured out it doesn't work with the newer instruction set, because it didn't like EAX when I tried assembling something, though it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for the instruction ADD EAX, CL, or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
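
    An editorial answer sketch, hand-assembled from the standard IA-32 encodings (worth verifying with a disassembler before injecting): ADD needs both operands the same size, so the register form would be ADD EAX, ECX (01 C8, or equivalently 03 C1), not ADD EAX, CL. Matching the SHL stub's convention of reading both arguments off the stack:

        ; add two 32-bit values passed on the stack, same stdcall shape as the SHL stub
        mov eax, [esp+4]   ; 8B 44 24 04  load the first Long
        add eax, [esp+8]   ; 03 44 24 08  add the second Long; the carry bit is simply dropped
        ret 8              ; C2 08 00     pop both arguments, result stays in EAX

    Concatenated, that gives the hex string "8B44240403442408C20800".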
