Search Results

Search found 21960 results on 879 pages for 'program termination'.

Page 756/879

  • Increment non-unique field during SQL insert

    - by phill
    I'm not sure how to word this because I am a little confused at the moment, so bear with me while I attempt to explain. I have a table with the following fields: OrderLineID, OrderID, OrderLine, and a few other unimportant ones. OrderLineID is the primary key and is always unique (which isn't a problem), OrderID is a foreign key that isn't unique (also not a problem), and OrderLine is a value that is not unique in the table, but should be unique among rows that share the same OrderID. If that didn't make sense, perhaps a picture:

        OrderLineID  OrderID  OrderLine
        1            1        1
        2            1        2
        3            1        3
        4            2        1
        5            2        2

    For each OrderID the OrderLine values are unique. I am trying to create an insert statement that gets the max OrderLine value for a specific OrderID so I can increment it, but it's not working so well and I could use a little help. What I have right now is below; I build the SQL statement in a program and replace <OrderID #> with an actual value. I am pretty sure the problem is with the nested select statement and incrementing the result, but I can't find any examples that do this, since my Google skills are apparently weak...

        INSERT INTO tblOrderLine (OrderID, OrderLine)
        VALUES (<OrderID #>,
                (SELECT MAX(OrderLine) FROM tblOrderLine
                 WHERE OrderID = <same OrderID #>) + 1)

    Any help would be nice.
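
    A minimal sketch of one likely fix, assuming a SQL dialect that supports INSERT ... SELECT (several engines reject a subquery inside a VALUES clause, and MAX returns NULL for an order's first line, so COALESCE is needed):

        -- <OrderID #> is the placeholder from the question
        INSERT INTO tblOrderLine (OrderID, OrderLine)
        SELECT <OrderID #>, COALESCE(MAX(OrderLine), 0) + 1
        FROM tblOrderLine
        WHERE OrderID = <OrderID #>;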

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile-time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
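
    A minimal sketch of one possible direction, assuming the compiler accepts C99 designated initializers at file scope (GCC supports them, though it is worth confirming on the 3.4-based PIC32 port): change the array's element type to the union itself and let the compiler do the punning at compile time. The punned read is implementation-defined, which the notes above allow.

        typedef union {
            unsigned int intval;
            float floatval;
        } IntFloat;

        /* Each initializer stores that member's bit pattern directly,
           so the float's word representation ends up in flash. */
        const IntFloat DEFAULTS[] = {
            { .intval   = 1      },  /* some default integer value, 1 */
            { .floatval = 0.005f },  /* some default float value, 0.005 */
        };

        /* reading a float default back as a word: DEFAULTS[1].intval is the
           same value the Python snippet would have produced */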

    Read the article

  • C++ Problems with #import of .NET out-of-proc server.

    - by jm
    In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like:

        z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
        z:\server.tlh(111) : error C2501: '_TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id'
        z:\server.tli(74) : error C2433: '_TypePtr' : 'inline' not permitted on data declarations
        z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : fatal error C1004: unexpected end of file found

    The TLH looks like:

        ...
        _bstr_t GetToString ( );
        VARIANT_BOOL Equals ( const _variant_t & obj );
        long GetHashCode ( );
        _TypePtr GetType ( );
        long Open ( );
        ...

    I am not really interested in having the base .NET object methods like GetType(), Equals(), etc., but GetType() seems to be causing problems. Some Google research indicates I could #import MSCORLIB.TLB (or put it in the path), but I can't get that to compile either. Any tips?
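
    A minimal sketch of the commonly cited workaround, assuming mscorlib.tlb is reachable by the compiler (the replacement names are assumptions; any non-colliding names work): import mscorlib first so _Type/_TypePtr get defined, and rename the members that collide.

        // ReportEvent is the classic mscorlib/windows.h collision;
        // GetType collides with the member the errors above point at.
        #import <mscorlib.tlb> rename("ReportEvent", "CLRReportEvent")
        #import "server.tlb" rename("GetType", "GetCLRType")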

    Read the article

  • Safe to pass objects to C functions when working in JNI Invocation API?

    - by bubbadoughball
    I am coding up something using the JNI Invocation API. A C program starts up a JVM and makes calls into it. The JNIEnv pointer is global to the C file. I have numerous C functions which need to perform the same operation on a given class of jobject, so I wrote helper functions which take a jobject and process it, returning the needed data (a C data type, for example an int status value). Is it safe to write C helper functions and pass jobjects to them as arguments? For instance (a simple example, designed to illustrate the question):

        int getStatusValue(jobject jStatus)
        {
            return (*jenv)->CallIntMethod(jenv, jStatus, statusMethod);
        }

        int function1()
        {
            int status;

            jobject aObj = (*jenv)->NewObject(jenv, aDefinedClass, aDefinedCtor);
            jobject j = (*jenv)->CallObjectMethod(jenv, aObj, aDefinedObjGetMethod);

            status = getStatusValue(j);

            (*jenv)->DeleteLocalRef(jenv, aObj);
            (*jenv)->DeleteLocalRef(jenv, j);

            return status;
        }

    Thanks.
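
    A minimal sketch of the usual rule of thumb, assuming the single global jenv from the question: local references stay valid for the duration of the native frame that created them, so passing them down into helpers like getStatusValue() is safe. Only a reference cached past the current call needs promoting:

        /* hypothetical helper: keep a jobject beyond the current native frame */
        jobject keepReference(jobject local)
        {
            jobject global = (*jenv)->NewGlobalRef(jenv, local);
            (*jenv)->DeleteLocalRef(jenv, local);  /* the local ref is no longer needed */
            return global;                         /* release later with DeleteGlobalRef */
        }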

    Read the article

  • Threads in Java

    - by owca
    I've created a simple program to test threads in Java. I'd like it to print numbers infinitely, like 123123123123123. I don't know why, but currently it stops after one cycle, printing only 213. Does anyone know why?

        public class Main {
            int number;

            public Main(int number) {
            }

            public static void main(String[] args) {
                new Infinite(2).start();
                new Infinite(1).start();
                new Infinite(3).start();
            }
        }

        class Infinite extends Thread {
            static int which = 1;
            static int order = 1;
            int id;
            int number;
            Object console = new Object();

            public Infinite(int number) {
                id = which;
                which++;
                this.number = number;
            }

            @Override
            public void run() {
                while (1 == 1) {
                    synchronized (console) {
                        if (order == id) {
                            System.out.print(number);
                            order++;
                            if (order >= which) {
                                order = 1;
                            }
                            try {
                                console.notifyAll();
                                console.wait();
                            } catch (Exception e) {}
                        } else {
                            try {
                                console.notifyAll();
                                console.wait();
                            } catch (Exception e) {}
                        }
                    }
                    try { Thread.sleep(0); } catch (Exception e) {}
                }
            }
        }
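
    A minimal sketch of the likely culprit, assuming the intent was one shared lock: console is an instance field, so each Infinite thread synchronizes, waits, and notifies on its own private object, and after the first cycle every thread is parked in wait() on a monitor nobody else ever touches. Making the monitor static gives all three threads the same lock:

        // shared by every Infinite instance, so notifyAll() actually wakes the others
        static final Object console = new Object();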

    Read the article

  • Using game of life or other virtual environment for artificial (intelligence) life simulation? [closed]

    - by Berlin Brown
    One of my interests in AI focuses not so much on data but more on biologic computing. This includes neural networks, mapping the brain, cellular automata, virtual life and environments. Described below is an exciting project that involves developing a virtual environment for bots to evolve in. "Polyworld is a cross-platform (Linux, Mac OS X) program written by Larry Yaeger to evolve Artificial Intelligence through natural selection and evolutionary algorithms." http://en.wikipedia.org/wiki/Polyworld Polyworld is a promising project for studying virtual life, but it still is far from creating an "intelligent autonomous" agent. Here is my question: in theory, what parameters would you use to create an AI environment? Possibly a brain environment? Possibly multiple self-contained life organisms that have their own "brain" or life structures? I would like to create a spin on the Game of Life simulation. Say you have a 64x64 Game of Life grid, but instead of one grid, you might have N grids. The N grids are your "life force": if all of the Game of Life entities die in a particular grid, then that entire grid dies, and a group of grids makes up a life form. I don't have an immediate goal. First, I want to simulate an environment, visualize what is going on in it with OpenGL, and see if there are any interesting properties to the environment. I then want to add "scarce resources" and see if the AI environment can manage resources adequately.

    Read the article

  • Input string was not in correct format

    - by Luke
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace measurementConverter
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // read in the file
                    StreamReader convert = new StreamReader("../../convert.txt");

                    // define variables
                    string line = convert.ReadLine();
                    int conversion;
                    int numberIn;
                    float conversionFactor;

                    Console.WriteLine("Enter the conversion in the form (amount,from,to)");
                    String inputMeasurement = Console.ReadLine();
                    string[] inputMeasurementArray = inputMeasurement.Split(',');

                    while (line != null)
                    {
                        string[] fileMeasurementArray = line.Split(',');
                        if (fileMeasurementArray[0] == inputMeasurementArray[1])
                        {
                            if (fileMeasurementArray[1] == inputMeasurementArray[2])
                            {
                                Console.WriteLine("{0}", fileMeasurementArray[2]);
                            }
                        }
                        line = convert.ReadLine();

                        // convert to int
                        numberIn = Convert.ToInt32(inputMeasurementArray[0]);
                        conversionFactor = Convert.ToInt32(fileMeasurementArray[2]);
                        conversion = (numberIn * conversionFactor);
                    }
                    Console.ReadKey();
                }
            }
        }

    Hello, I am trying to get the calculation going. On the line conversionFactor = Convert.ToInt32(fileMeasurementArray[2]); I am getting an error saying "Input string was not in correct format". Please help! The text file consists of the following:

        ounce,gram,28.0
        pound,ounce,16.0
        pound,kilogram,0.454
        pint,litre,0.568
        inch,centimetre,2.5
        mile,inch,63360.0
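
    A minimal sketch of the likely fix, assuming the third column is meant to be fractional: values such as 28.0 and 0.454 are not integers, so Convert.ToInt32 throws exactly this FormatException. Parsing the factor as a float with an invariant culture matches the file:

        using System.Globalization;

        // replacing the failing lines inside the loop
        float conversionFactor = float.Parse(fileMeasurementArray[2], CultureInfo.InvariantCulture);
        float conversion = numberIn * conversionFactor;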

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }

                    flag1 = false;
                    flag2 = false;
                    val = 0;

                    Parallel.Invoke(A1, A2);

                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is why. I suppose that the CPU reorders the flag1 = true; assignment and the if (flag2) statement, but the variables flag1 and flag2 are marked as volatile fields...
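
    A minimal sketch of the standard explanation, assuming the CLR memory model: volatile gives acquire/release semantics, but a volatile write followed by a volatile read of a different location may still be reordered (store-load reordering), which lets both threads read false. A full fence between the two forbids it:

        static void A1()
        {
            flag2 = true;
            Thread.MemoryBarrier();  // full fence: the write above cannot pass the read below
            if (flag1) val = 1;
        }

        static void A2()
        {
            flag1 = true;
            Thread.MemoryBarrier();
            if (flag2) val = 2;
        }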

    Read the article

  • JOGL: cannot run java on compiled class file

    - by John Goche
    I am running Ubuntu Linux 11.10. I have followed the instructions on the site http://jogamp.org/jogl/www/ and downloaded everything with git to $HOME/jogamp and built everything with ant (although I would have preferred the files installed somewhere under /usr/local). I am trying to run a simple JOGL application found on the Wikipedia site: http://en.wikipedia.org/wiki/Java_OpenGL In order to compile the file I had to add:

        export CLASSPATH=.:$HOME/jogamp/jogl/build/jogl/classes

    When I run java I get the following:

        $ java JOGLQuad
        Exception in thread "main" java.lang.NoClassDefFoundError: javax/media/nativewindow/WindowClosingProtocol
                at java.lang.ClassLoader.defineClass1(Native Method)
                at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
                at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
                at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
                at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
                at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
                at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
                at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        Caused by: java.lang.ClassNotFoundException: javax.media.nativewindow.WindowClosingProtocol
                at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
                at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
                ... 12 more
        Could not find the main class: JOGLQuad. Program will exit.

    How do I fix this, and where can I find more JOGL code samples that I can compile and run? Thanks, John Goche
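
    A minimal sketch of one likely fix (every path below is an assumption based on the jogamp build-tree layout): the missing class lives in the nativewindow module, which has to be on the classpath alongside jogl and gluegen:

        export CLASSPATH=.:$HOME/jogamp/jogl/build/jogl/classes:$HOME/jogamp/jogl/build/nativewindow/classes:$HOME/jogamp/gluegen/build/classes
        java JOGLQuad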

    Read the article

  • How to use Data aware controls "correctly"?

    - by lyborko
    Hi, I would like to ask experienced users: do you prefer to use data-aware controls to add, insert, delete and edit data in a DB, or do you favor doing it manually? I developed some DB applications in which, for the sake of a "user friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After..., and BeforeEdit, BeforeInsert, Before...). After that it was quite nasty work to debug the application. Aware of this risk, in a later application I tried to avoid this problem, so I paid increased attention to writing code that was readable and comprehensible. Everything seemed all right from the beginning, but as I needed to handle some preprocessing stuff before sending and loading data etc., I ran into the same problems again, slowly and inevitably. Sometimes I could not use data-aware controls anyway, and what seemed to be a "cool" feature of the DAControl at the beginning turned into an obstacle in the end. I "had to" write a special routine for non-data-aware controls in order to make them behave as data-aware ones. Then I asked myself: why on earth should I use data-aware controls? Is it better to found the application architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I do not know... It has happened to me several times, as if jinxed: paradise at the beginning, hell at the end... I do not know if I'm using the wrong method to write a DB program, or if there is some standard common practice for how to proceed. Or is this a common problem for everybody? Thanks for your advice and your experiences.

    Read the article

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then I add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help either. Please note that I used to use the Entity Data Model just fine, but it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. I would think that uninstalling and reinstalling Visual Studio 2013 and then installing the NuGet package would solve all problems. What am I missing here? The errors mentioned above are:

        Error 1  The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  14  30  DataLayer
        Error 2  The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  16  52  DataLayer
        Error 3  The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  23  49  DataLayer
        Error 4  The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  28  16  DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish. John

    Read the article

  • Getting input and output from a jar file run from java class?

    - by Jack L.
    Hi, I have a jar file that runs this code:

        public class InputOutput {

            /**
             * @param args
             * @throws IOException
             */
            public static void main(String[] args) throws IOException {
                boolean cont = true;
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                while (cont) {
                    System.out.print("Input something: ");
                    String temp = in.readLine();
                    if (temp.equals("end")) {
                        cont = false;
                        System.out.println("Terminated.");
                    } else
                        System.out.println(temp);
                }
            }
        }

    I want to write another Java class that executes this jar file and can send input to it and read its output. Is that possible? The current code I have is below, but it is not working:

        public class JarTest {

            /**
             * Test input and output of jar files
             * @author Jack
             */
            public static void main(String[] args) {
                try {
                    Process io = Runtime.getRuntime().exec("java -jar InputOutput.jar");
                    BufferedReader in = new BufferedReader(new InputStreamReader(io.getInputStream()));
                    OutputStreamWriter out = new OutputStreamWriter(io.getOutputStream());

                    boolean cont = true;
                    BufferedReader consolein = new BufferedReader(new InputStreamReader(System.in));
                    while (cont) {
                        String temp = consolein.readLine();
                        out.write(temp);
                        System.out.println(in.readLine());
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Thanks for your help
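
    A minimal sketch of the likely fix: the parent writes to the child's stdin without a line terminator or a flush, so the child's readLine() never returns and both sides deadlock waiting on each other. Something like:

        out.write(temp + "\n");  // the child reads whole lines
        out.flush();             // push it through the pipe now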

    Read the article

  • commons-exec: hanging when I call executor.execute(commandLine);

    - by Stefan Kendall
    I have no idea why this is hanging. I'm trying to capture output from a process run through commons-exec, and I continue to hang. I've provided an example program to demonstrate this behavior below.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.PipedInputStream;
        import java.io.PipedOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.ExecuteException;
        import org.apache.commons.exec.PumpStreamHandler;

        public class test {
            public static void main(String[] args) {
                String command = "java";

                PipedOutputStream output = new PipedOutputStream();
                PumpStreamHandler psh = new PumpStreamHandler(output);
                CommandLine cl = CommandLine.parse(command);
                DefaultExecutor exec = new DefaultExecutor();
                DataInputStream is = null;

                try {
                    is = new DataInputStream(new PipedInputStream(output));
                    exec.setStreamHandler(psh);
                    exec.execute(cl);
                } catch (ExecuteException ex) {
                } catch (IOException ex) {
                }

                System.out.println("huh?");
            }
        }
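
    A minimal sketch of one common explanation, assuming nothing ever reads from the PipedInputStream: execute() blocks until the process exits, the pump thread fills the pipe's small fixed buffer, and everything deadlocks. Collecting the output in memory avoids the pipe entirely:

        import java.io.ByteArrayOutputStream;

        import org.apache.commons.exec.CommandLine;
        import org.apache.commons.exec.DefaultExecutor;
        import org.apache.commons.exec.PumpStreamHandler;

        public class TestNoPipe {
            public static void main(String[] args) throws Exception {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                DefaultExecutor exec = new DefaultExecutor();
                exec.setStreamHandler(new PumpStreamHandler(out));
                exec.setExitValue(1);                     // bare "java" exits with 1
                exec.execute(CommandLine.parse("java"));  // blocks until exit
                System.out.println(out.toString());
            }
        }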

    Read the article

  • Rapid calls to fread crashes the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2) {
            params.leftChannelData = new unsigned char[params.dataSize/2];
            params.rightChannelData = new unsigned char[params.dataSize/2];

            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample/8);

            for (int i = 0; i < params.dataSize; i += stride)
            {
                std::cout << "i = " << i << " ";
                if (isLeft) {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                } else {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        } else {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
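
    A minimal sketch of the likely fix: fread takes a FILE* and advances its position automatically, so "file + i" is pointer arithmetic on the FILE structure itself and quickly walks into invalid memory. Reading sequentially (and actually toggling the flag) is probably what was meant:

        fread(params.leftChannelData + offset, sizeof(BYTE), stride, file);  /* no "+ i" */
        /* ... */
        isLeft = !isLeft;  /* the original "isLeft != isLeft;" compares and discards the result */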

    Read the article

  • hibernate c3p0 broken pipe

    - by raven_arkadon
    Hi, I'm using Hibernate 3 with c3p0 for a program which constantly extracts data from some source and writes it to a database. Now the problem is that the database might become unavailable for some reason (in the simplest case: I simply shut it down). If anything is about to be written to the database, there should not be any exception; the query should wait for all eternity until the database becomes available again. If I'm not mistaken, this is one of the things the connection pool could do for me: if there is a problem with the DB, just retry the connection, in the worst case forever. But instead I get a broken pipe exception, sometimes followed by connection refused, and then the exception is passed to my own code, which shouldn't happen. Even if I catch the exception, how could I cleanly reinitialize Hibernate again? (So far, without c3p0, I simply built the session factory again, but I wouldn't be surprised if that could leak connections. Or is it OK to do so?) The database is Virtuoso open source edition. My hibernate.cfg.xml c3p0 config:

        <property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
        <property name="hibernate.c3p0.breakAfterAcquireFailure">false</property>
        <property name="hibernate.c3p0.acquireRetryAttempts">-1</property>
        <property name="hibernate.c3p0.acquireRetryDelay">30000</property>
        <property name="hibernate.c3p0.automaticTestTable">my_test_table</property>
        <property name="hibernate.c3p0.initialPoolSize">3</property>
        <property name="hibernate.c3p0.minPoolSize">3</property>
        <property name="hibernate.c3p0.maxPoolSize">10</property>

    By the way: the test table is created and I get tons of debug output, so it seems it actually reads the config.

    Read the article

  • Querying datetime.datetime on appengine acts different then dev server help!

    - by Alon Carmel
    Hey, I'm having some trouble with stuff that works locally but doesn't work in the App Engine Python environment. Basically, I want to get a program from an EPG between ranges of date and time. I know I cannot do two "WHERE <" comparisons, so I saw a suggestion to save the dates as a list of datetime.datetime values, which I did:

        [datetime.datetime(2010, 5, 10, 14, 25), datetime.datetime(2010, 5, 10, 15, 0)]

    This is OK, but when I try to compare to it:

        progranon = get_object(Programs2Channel,
                               'channel_id =', channelobj.key(),
                               'endstartdate >', programstart_minex,
                               'endstartdate <', programstart_minex)

    this for some reason works locally but fails to retrieve the data on App Engine. (I'm using the Google App Engine Django patch, which uses get_object to retrieve data in transactions.) Please help. Here are more details. This is the list:

        [datetime.datetime(2010, 5, 13, 10, 45), datetime.datetime(2010, 5, 13, 11, 30)]

    And this is the query:

        programstart = ""+year+"-"+month+"-"+day+" "+hour+":"+minute
        programstart_minex = datetime.strptime(programstart, "%Y-%m-%d %H:%M")
        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid AND endstartdate > :programstartx AND endstartdate < :programstartx',
            channelid=channelobj.key(),
            programstartx=programstart_minex).get()
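
    A minimal sketch of one suspicion, assuming the filter was meant to be a range: the query binds the same value to both inequalities, asking for endstartdate > X AND endstartdate < X, which can never match; the dev server may simply be more forgiving. Binding a distinct upper bound (programend_max is a hypothetical name) would read:

        progranon = Programs2Channel.gql(
            'WHERE channel_id = :channelid '
            'AND endstartdate > :start AND endstartdate < :end',
            channelid=channelobj.key(),
            start=programstart_minex,
            end=programend_max).get()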

    Read the article

  • Associate File Extension with Application

    - by fneep
    I've written a program that edits a specific file type, and I want to give the user the option to set my application as the default editor for this file type (since I don't want an installer) on startup. I've tried to write a reusable method that associates a file type for me (preferably on any OS, although I'm running Vista) by adding a key to HKEY_CLASSES_ROOT, and am using it with my application, but it doesn't seem to work.

        public static void SetAssociation(string Extension, string KeyName, string OpenWith, string FileDescription)
        {
            RegistryKey BaseKey;
            RegistryKey OpenMethod;
            RegistryKey Shell;
            RegistryKey CurrentUser;

            BaseKey = Registry.ClassesRoot.CreateSubKey(Extension);
            BaseKey.SetValue("", KeyName);

            OpenMethod = Registry.ClassesRoot.CreateSubKey(KeyName);
            OpenMethod.SetValue("", FileDescription);
            OpenMethod.CreateSubKey("DefaultIcon").SetValue("", "\"" + OpenWith + "\",0");
            Shell = OpenMethod.CreateSubKey("Shell");
            Shell.CreateSubKey("edit").CreateSubKey("command").SetValue("", "\"" + OpenWith + "\"" + " \"%1\"");
            Shell.CreateSubKey("open").CreateSubKey("command").SetValue("", "\"" + OpenWith + "\"" + " \"%1\"");
            BaseKey.Close();
            OpenMethod.Close();
            Shell.Close();

            CurrentUser = Registry.CurrentUser.CreateSubKey(@"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.ucs");
            CurrentUser = CurrentUser.OpenSubKey("UserChoice", RegistryKeyPermissionCheck.ReadWriteSubTree, System.Security.AccessControl.RegistryRights.FullControl);
            CurrentUser.SetValue("Progid", KeyName, RegistryValueKind.String);
            CurrentUser.Close();
        }
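
    A minimal sketch of the likely problem, assuming the per-user FileExts key was the intent: Registry.CurrentUser already points at HKEY_CURRENT_USER, so prefixing the path with the hive name creates a bogus "HKEY_CURRENT_USER" subtree instead. A relative path, plus a shell notification so Explorer picks up the change, would look like:

        // inside the class, alongside SetAssociation (hypothetical placement)
        [DllImport("shell32.dll")]
        static extern void SHChangeNotify(int wEventId, int uFlags, IntPtr dwItem1, IntPtr dwItem2);

        // replacing the CurrentUser block: the path is relative to HKEY_CURRENT_USER
        using (RegistryKey choice = Registry.CurrentUser.CreateSubKey(
                @"Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.ucs\UserChoice"))
        {
            choice.SetValue("Progid", KeyName, RegistryValueKind.String);
        }
        SHChangeNotify(0x08000000 /* SHCNE_ASSOCCHANGED */, 0x0000, IntPtr.Zero, IntPtr.Zero);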

    Read the article

  • Memory Leak Looping cfmodule inside cffunction

    - by orangepips
    Hoping someone else can confirm or tell me what I'm doing wrong. I am able to consistently reproduce an OOM by running the file oom.cfm (shown below). Using jconsole I am able to see the request consume memory and never release it until complete. The issue appears to be calling <cfmodule> inside of <cffunction>: if I comment out the <cfmodule> call, things are garbage collected while the request is running.

    ColdFusion version: 9,0,1,274733

    JVM arguments:

        java.home=C:/Program Files/Java/jdk1.6.0_18
        java.args=-server -Xms768m -Xmx768m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/ -Djava.security.policy={application.home}/servers/41ep8/cfusion.ear/cfusion.war/WEB-INF/cfusion/lib/coldfusion.policy -Djava.security.auth.policy={application.home}/servers/41ep8/cfusion.ear/cfusion.war/WEB-INF/cfusion/lib/neo_jaas.policy -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=56033

    Test case oom.cfm (this calls template.cfm below):

        <cffunction name="fun" output="false" access="public" returntype="any" hint="">
            <cfset var local = structNew()/>
            <!--- comment out cfmodule and no OOM --->
            <cfmodule template="template.cfm">
        </cffunction>

        <cfset size = 1000 * 200>
        <cfloop from="1" to="#size#" index="idx">
            <cfset fun()>
            <cfif NOT idx mod 1000>
                <cflog file="se-err" text="#idx# of #size#">
            </cfif>
        </cfloop>

    template.cfm:

        <!--- I am empty! --->

    Read the article

  • How to pass multiple different records (not class due to delphi limitations) to a function?

    - by mingo
    Hi to all. I have a number of records I cannot convert to classes due to a Delphi limitation (all of them use class operators to implement comparisons). But I have to store them in a class without knowing which record type I'm using. Something like this:

        type
          R1 = record
            x: Mytype;
            class operator Equal(a, b: R1): Boolean;
          end;

        type
          R2 = record
            y: Mytype;
            class operator Equal(a, b: R2): Boolean;
          end;

        type
          Rn = record
            z: Mytype;
            class operator Equal(a, b: Rn): Boolean;
          end;

        type
          TC = class
            x: TObject;
            y: Mytype;
            function payload(n: TObject);
          end;

        function TC.payload(n: TObject);
        begin
          x := n;
        end;

        // program:
        var
          c: TC;
          x: R1;
          y: R2;
        ...
        c := TC.Create();
        n := TObject(x);
        c.payload(n);

    Now, Delphi does not accept a typecast from record to TObject, and I cannot make them classes due to the Delphi limitation. Does anyone know a way to pass different records to a function and recognize their type when needed, as we do with classes?

        if x is TMyClass then TMyClass(x) ...

    Read the article

  • Is there a language with native pass-by-reference/pass-by-name semantics, which could be used in mod

    - by Bubba88
    Hi! This is a reopened question. I am looking for a language, and a supporting platform for it, where the language has pass-by-reference or pass-by-name semantics by default. I know the history a little: there were Algol and Fortran, and there still is C++, which could make it possible; but basically what I'm looking for is something more modern, where the mentioned value-passing methodology is preferred and the default (implicitly assumed). I ask this question because, to my mind, some of the advantages of pass-by-ref/name seem kind of obvious. For example, it could be used in a standalone agent, where copying of values is not necessary (to some extent) and performance wouldn't be downgraded much in that case. So I could employ it in, e.g., a rich client app or some game-style or standalone service-kind application. The main advantage to me is the clear separation between the identity of a symbol and its current value. I mean, when there is no redundant copying, you know that you're working with the exact symbol/path you have queried/received, and intrinsic boxing of values will not interfere with the actual logic of the program. I know that there is the C# ref keyword, but it's something not so intrinsic, though acceptable. Equally, I realize that pass-by-reference semantics could be simulated in virtually any language (Java as an instant example) and so on; not sure about pass-by-name :) What would you recommend: create something like a DSL for such needs wherever it is appropriate, or use some languages that I already know? Maybe there is something that I'm missing? Thank you!

    Read the article

  • OpenGL Tearing Problem

    - by kaykun
    Hi, I'm using Win32 and OpenGL, and I have a window set up with the projection at a glOrtho of the window's coordinates. I have double buffering enabled; I tested it with glGet as well. My program always seems to tear any primitives that I try to draw if they are being constantly translated. Here is my OpenGL initialization function:

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glViewport(0, 0, 640, 480);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 640, 0, 480, 0, 100);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glDrawBuffer(GL_BACK);
        glLoadIdentity();

    And this is my rendering function, where gMouseX and gMouseY are the coordinates of the mouse:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glTranslatef(gMouseX, gMouseY, 0.0f);
        glColor3f(0.5f, 0.5f, 0.5f);
        glBegin(GL_TRIANGLES);
            glVertex2f(0.0f, 128.0f);
            glVertex2f(128.0f, 0.0f);
            glVertex2f(0.0f, 0.0f);
        glEnd();
        SwapBuffers(hDC);

    The same tearing problem occurs regardless of how often the rendering function runs. Is there something I'm doing wrong or missing here? Thanks for any help.
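
    A minimal sketch of two likely fixes, assuming a standard WGL context: the render function never resets the modelview matrix, so the translations accumulate every frame, and actual swap tearing only goes away once buffer swaps are synced to the vertical blank:

        /* once, after context creation (WGL_EXT_swap_control) */
        typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int);
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT) wglSwapIntervalEXT(1);  /* vsync on */

        /* each frame: reset before translating so offsets don't accumulate */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(gMouseX, gMouseY, 0.0f);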

    Read the article

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo("flac.exe");
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format(flacParam, SourceFile);

        ProcessStartInfo piLame = new ProcessStartInfo("lame.exe");
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format(lameParam, QualitySetting, ExtractTag(SourceFile));

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start(piFlac);
        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler(this.ReadStdout);
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        while (count != 0)
        {
            lamep.StandardInput.BaseStream.Write(buffer, 0, count);
            count = flacp.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        }

    Here I set the command line parameters to tell lame.exe to write its output to stdout, and make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary data. But DataReceivedEventArgs.Data is of type string, so I have to convert it to byte[] before putting it in the cache; I think this is ugly, and when I tried it the result was incorrect. Is there any way to read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk would cause severe fragmentation. Thanks a lot!
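
    A minimal sketch of the usual alternative, assuming raw bytes are the goal: drop BeginOutputReadLine() and the OutputDataReceived handler, and read lame's StandardOutput.BaseStream directly, which never goes through the string-typed event ("cache" stands in for whatever sink the program uses):

        lamep.Start();  // no BeginOutputReadLine()
        Stream raw = lamep.StandardOutput.BaseStream;
        int n;
        while ((n = raw.Read(buffer, 0, buffer.Length)) > 0)
        {
            cache.Write(buffer, 0, n);  // hypothetical sink
        }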

    Read the article

  • strange chi-square result using scikit_learn with feature matrix

    - by user963386
    I am using scikit-learn to calculate the basic chi-square statistic (sklearn.feature_selection.chi2(X, y)):

        def chi_square(feat, target):
            """ """
            from sklearn.feature_selection import chi2
            ch, pval = chi2(feat, target)
            return ch, pval

        chisq, p = chi_square(feat_mat, target_sc)
        print(chisq)
        print("**********************")
        print(p)

    I have 1500 samples, 45 features, 4 classes. The input is a 1500x45 feature matrix and a target array with 1500 components. The feature matrix is not sparse. When I run the program and print the array "chisq" with 45 components, I can see that component 13 has a negative value and p = 1. How is that possible? What does it mean, or what big mistake am I making? I am attaching the printouts of chisq and p:

        [  9.17099260e-01   3.77439701e+00   5.35004211e+01   2.17843312e+03
           4.27047184e+04   2.23204883e+01   6.49985540e-01   2.02132664e-01
           1.57324454e-03   2.16322638e-01   1.85592258e+00   5.70455805e+00
           1.34911126e-02  -1.71834753e+01   1.05112366e+00   3.07383691e-01
           5.55694752e-02   7.52801686e-01   9.74807972e-01   9.30619466e-02
           4.52669897e-02   1.08348058e-01   9.88146259e-03   2.26292358e-01
           5.08579194e-02   4.46232554e-02   1.22740419e-02   6.84545170e-02
           6.71339545e-03   1.33252061e-02   1.69296016e-02   3.81318236e-02
           4.74945604e-02   1.59313146e-01   9.73037448e-03   9.95771327e-03
           6.93777954e-02   3.87738690e-02   1.53693158e-01   9.24603716e-04
           1.22473138e-01   2.73347277e-01   1.69060817e-02   1.10868365e-02
           8.62029628e+00]
        **********************
        [  8.21299526e-01   2.86878266e-01   1.43400668e-11   0.00000000e+00
           0.00000000e+00   5.59436980e-05   8.84899894e-01   9.77244281e-01
           9.99983411e-01   9.74912223e-01   6.02841813e-01   1.26903019e-01
           9.99584918e-01   1.00000000e+00   7.88884155e-01   9.58633878e-01
           9.96573548e-01   8.60719653e-01   8.07347364e-01   9.92656816e-01
           9.97473024e-01   9.90817144e-01   9.99739526e-01   9.73237195e-01
           9.96995722e-01   9.97526259e-01   9.99639669e-01   9.95333185e-01
           9.99853998e-01   9.99592531e-01   9.99417113e-01   9.98042114e-01
           9.97286030e-01   9.83873717e-01   9.99745466e-01   9.99736512e-01
           9.95239765e-01   9.97992843e-01   9.84693908e-01   9.99992525e-01
           9.89010468e-01   9.64960636e-01   9.99418323e-01   9.99690553e-01
           3.47893682e-02]
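
    A minimal sketch of the usual suspicion, assuming some features can go negative: chi2 is defined for non-negative features (counts or frequencies, such as booleans or tf-idf), and feeding it negative values can produce a meaningless negative statistic with a clamped p-value of 1. Rescaling every feature into [0, 1] first is one way to check:

        from sklearn.feature_selection import chi2
        from sklearn.preprocessing import MinMaxScaler

        feat_nonneg = MinMaxScaler().fit_transform(feat_mat)  # all features >= 0
        ch, pval = chi2(feat_nonneg, target_sc)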

    Read the article

  • How to include all objects of an archive in a shared object?

    - by Didier Trosset
    When compiling our project, we create several archives (static libraries), say liby.a and libz.a, each containing an object file defining a function, y_function() and z_function() respectively. These archives are then joined into a shared object, say libyz.so, which is one of our main distributable targets:

        g++ -fPIC -c -o y.o y.cpp
        ar cr liby.a y.o
        g++ -fPIC -c -o z.o z.cpp
        ar cr libz.a z.o
        g++ -shared -L. -ly -lz -o libyz.so

    When using this shared object in an example program, say x.c, the link fails because of undefined references to the functions y_function() and z_function():

        g++ x.o -L. -lyz -o xyz

    It works, however, when I link the final executable directly with the archives (static libraries):

        g++ x.o -L. -ly -lz -o xyz

    My guess is that the object files contained in the archives are not linked into the shared library because they are not used in it. How do I force inclusion? Edit: Inclusion can be forced using the --whole-archive ld option, but it results in compilation errors:

        g++ -shared '-Wl,--whole-archive' -L. -ly -lz -o libyz.so
        /usr/lib/libc_nonshared.a(elf-init.oS): In function `__libc_csu_init':
        (.text+0x1d): undefined reference to `__init_array_end'
        /usr/bin/ld: /usr/lib/libc_nonshared.a(elf-init.oS): relocation R_X86_64_PC32 against undefined hidden symbol `__init_array_end' can not be used when making a shared object
        /usr/bin/ld: final link failed: Bad value

    Any idea where this comes from?
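
    A minimal sketch of the usual pattern, assuming GNU ld: --whole-archive has to be switched off again after the archives, otherwise it also swallows every member of the system archives the driver appends (libc_nonshared.a here), which is exactly where the __init_array_end error comes from:

        g++ -shared -o libyz.so -Wl,--whole-archive liby.a libz.a -Wl,--no-whole-archive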

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using precompiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article
