Search Results

Search found 3162 results on 127 pages for 'compiled'.

  • Flex Spark DropDownList selectedItem didn't update after dataProvider changed

    - by Candy Chiu
    I have two dataProvider's for one DropDownList. The following code can be compiled and run. <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" creationComplete="flipLast()" minWidth="955" minHeight="600"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; public function flipLast():void { if( last ) { list.dataProvider = dp1; list.selectedItem = "Flex"; } else { list.dataProvider = dp2; list.selectedItem = "Catalyst"; } last = !last; } public var last:Boolean = true; public var dp1:ArrayCollection = new ArrayCollection( [ "Flex", "Air" ] ); public var dp2:ArrayCollection = new ArrayCollection( [ "Catalyst", "FlashBuilder" ] ); ]]> </fx:Script> <s:VGroup> <s:DropDownList id="list" requireSelection="true" /> <s:Label id="listSelectedItem" text="{list.selectedItem}" /> <s:Label id="listSelectedIndex" text="{list.selectedIndex}" /> <s:Button label="Flip" click="flipLast()" /> </s:VGroup> </s:Application> Scenario 1: dataProvider updated, but selectedIndex is the same. At startup: [ listSelectedItem=Flex, listSelectedIndex=1 ]. Click Flip: dataProvider is updated, but still [ listSelectedItem=Flex, listSelectedIndex=1 ]. Scenario 2: dataProvider updated, selectedIndex is also updated. At startup: [ listSelectedItem=Flex, listSelectedIndex=1 ]. Select Air from list: [ listSelectedItem=Air, listSelectedIndex=2 ]. Click Flip: dataProvider is updated, but still [ listSelectedItem=Catalyst, listSelectedIndex=1 ]. Seems to me that selectedItem is driven by selectedIndex. selectedItem updates only when selectedIndex updates. Shouldn't selectedItem be updated when dataProvider is updated? Is binding to selectedItem flawed?

  • Fixing up an entity framework query

    - by vdh_ant
    My table structure is as follows: Person 1-M PesonAddress Person 1-M PesonPhone Person 1-M PesonEmail Person 1-M Contract Contract M-M Program Contract M-1 Organization At the end of this query I need a populated object graph where each person has their: PesonAddress's PesonPhone's PesonEmail's PesonPhone's Contract's - and this has its respective Program's Now I had the following query and I thought that it was working great, but it has a couple of problems: from people in ctx.People.Include("PersonAddress") .Include("PersonLandline") .Include("PersonMobile") .Include("PersonEmail") .Include("Contract") .Include("Contract.Program") where people.Contract.Any( contract => (param.OrganizationId == contract.OrganizationId) && contract.Program.Any( contractProgram => (param.ProgramId == contractProgram.ProgramId))) select people; The problem is that it filters the person to the criteria but not the Contracts or the Contract's Programs. It brings back all Contracts that each person has not just the ones that have an OrganizationId of x and the same goes for each of those Contract's Programs respectively. What I want is only the people that have at least one contract with an OrgId of x with and where that contract has a Program with the Id of y... and for the object graph that is returned to have only the contracts that match and programs within that contract that match. I kinda understand why its not working, but I don't know how to change it so it is working... This is my attempt thus far: from people in ctx.People.Include("PersonAddress") .Include("PersonLandline") .Include("PersonMobile") .Include("PersonEmail") .Include("Contract") .Include("Contract.Program") let currentContracts = from contract in people.Contract where (param.OrganizationId == contract.OrganizationId) select contract let currentContractPrograms = from contractProgram in currentContracts let temp = from x in contractProgram.Program where (param.ProgramId == contractProgram.ProgramId) select x where temp.Any() select temp where currentContracts.Any() && currentContractPrograms.Any() select new Person { PersonId = people.PersonId, FirstName = people.FirstName, ..., ...., MiddleName = people.MiddleName, Surname = people.Surname, ..., ...., Gender = people.Gender, DateOfBirth = people.DateOfBirth, ..., ...., Contract = currentContracts, ... }; //This doesn't work But this has several problems (where the Person type is an EF object): I am left to do the mapping by myself, which in this case there is quite a lot to map When ever I try to map a list to a property (i.e. Scholarship = currentScholarships) it says I can't because IEnumerable is trying to be cast to EntityCollection Include doesn't work Hence how do I get this to work. Keeping in mind that I am trying to do this as a compiled query so I think that means anonymous types are out.

  • Lua and Objective C not running script.

    - by beta
    I am trying to create an objective c interface that encapsulates the functionality of storing and running a lua script (compiled or not.) My code for the script interface is as follows: #import <Cocoa/Cocoa.h> #import "Types.h" #import "lua.h" #include "lualib.h" #include "lauxlib.h" @interface Script : NSObject<NSCoding> { @public s32 size; s8* data; BOOL done; } @property s32 size; @property s8* data; @property BOOL done; - (id) initWithScript: (u8*)data andSize:(s32)size; - (id) initFromFile: (const char*)file; - (void) runWithState: (lua_State*)state; - (void) encodeWithCoder: (NSCoder*)coder; - (id) initWithCoder: (NSCoder*)coder; @end #import "Script.h" @implementation Script @synthesize size; @synthesize data; @synthesize done; - (id) initWithScript: (s8*)d andSize:(s32)s { self = [super init]; self->size = s; self->data = d; return self; } - (id) initFromFile:(const char *)file { FILE* p; p = fopen(file, "rb"); if(p == NULL) return [super init]; fseek(p, 0, SEEK_END); s32 fs = ftell(p); rewind(p); u8* buffer = (u8*)malloc(fs); fread(buffer, 1, fs, p); fclose(p); return [self initWithScript:buffer andSize:size]; } - (void) runWithState: (lua_State*)state { if(luaL_loadbuffer(state, [self data], [self size], "Script") != 0) { NSLog(@"Error loading lua chunk."); return; } lua_pcall(state, 0, LUA_MULTRET, 0); } - (void) encodeWithCoder: (NSCoder*)coder { [coder encodeInt: size forKey: @"Script.size"]; [coder encodeBytes:data length:size forKey:@"Script.data"]; } - (id) initWithCoder: (NSCoder*)coder { self = [super init]; NSUInteger actualSize; size = [coder decodeIntForKey: @"Script.size"]; data = [[coder decodeBytesForKey:@"Script.data" returnedLength:&actualSize] retain]; return self; } @end Here is the main method: #import "Script.h" int main(int argc, char* argv[]) { Script* script = [[Script alloc] initFromFile:"./test.lua"]; lua_State* state = luaL_newstate(); luaL_openlibs(state); luaL_dostring(state, "print(_VERSION)"); [script runWithState:state]; luaL_dostring(state, "print(_VERSION)"); lua_close(state); } And the lua script is just: print("O Hai World!") Loading the file is correct, but I think it messes up at pcall. Any Help is greatly appreciated. Heading
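
    A minimal sketch of an error-checked load-and-run using the plain Lua C API (the function and variable names are placeholders): both luaL_loadbuffer and lua_pcall push their error message onto the Lua stack, so printing it usually pinpoints the failure. One guess from reading the snippet above: initFromFile: passes the ivar size (still zero for a freshly allocated object) to initWithScript:andSize: instead of the local fs, which would hand luaL_loadbuffer an empty chunk.

        #include <stdio.h>
        #include "lua.h"
        #include "lualib.h"
        #include "lauxlib.h"

        /* Sketch: run a chunk held in (data, len) and report any error. */
        static void run_chunk(lua_State *L, const char *data, size_t len)
        {
            if (luaL_loadbuffer(L, data, len, "Script") != 0 ||
                lua_pcall(L, 0, LUA_MULTRET, 0) != 0)
            {
                fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
                lua_pop(L, 1);  /* pop the error message */
            }
        }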

  • Problem with combination boost::exception and boost::variant

    - by Rick
    Hello all, I have strange problem with two-level variant struct when boost::exception is included. I have following code snippet: #include <boost/variant.hpp> #include <boost/exception/all.hpp> typedef boost::variant< int > StoredValue; typedef boost::variant< StoredValue > ExpressionItem; inline std::ostream& operator << ( std::ostream & os, const StoredValue& stvalue ) { return os;} inline std::ostream& operator << ( std::ostream & os, const ExpressionItem& stvalue ) { return os; } When I try to compile it, I have following error: boost/exception/detail/is_output_streamable.hpp(45): error C2593: 'operator <<' is ambiguous test.cpp(11): could be 'std::ostream &operator <<(std::ostream &,const ExpressionItem &)' [found using argument-dependent lookup] test.cpp(8): or 'std::ostream &operator <<(std::ostream &,const StoredValue &)' [found using argument-dependent lookup] 1> while trying to match the argument list '(std::basic_ostream<_Elem,_Traits>, const boost::error_info<Tag,T>)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> and 1> [ 1> Tag=boost::tag_original_exception_type, 1> T=const type_info * 1> ] Code snippet is simplified as much as possible, in the real code are structures much more complicated and each variant has five sub-types. When i remove #include and try following test snippet, program is compiled correctly: void TestVariant() { ExpressionItem test; std::stringstream str; str << test; } Could someone please advise me how to define operators << in order to function even when using boost::Exception ? Thanks and regards Rick

  • AccessControlException: access denied - caller function failed to load properties file

    - by Michael Mao
    Hi all: I am having a jar archive environment which is gonna call my class in a folder like this: java -jar "emarket.jar" ../tournament 100 My compiled class is deployed into the ../tournament folder, this command runs well. After I changed my code to load a properties file, it gets the following exception message: Exception in thread "main" java.security.AccessControlException: access denied (java.io.FilePermission agent.properties read) at java.security.AccessControlContext.checkPermission(Unknown Source) at java.security.AccessController.checkPermission(Unknown Source) at java.lang.SecurityManager.checkPermission(Unknown Source) at java.lang.SecurityManager.checkRead(Unknown Source) at java.io.FileInputStream.<init>(Unknown Source) at java.io.FileInputStream.<init>(Unknown Source) at Agent10479475.getPropertiesFromConfigFile(Agent10479475.java:110) at Agent10479475.<init>(Agent10479475.java:100) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) at java.lang.Class.newInstance0(Unknown Source) at java.lang.Class.newInstance(Unknown Source) at emarket.client.EmarketSandbox.instantiateClientObjects(EmarketSandbox.java:92) at emarket.client.EmarketSandbox.<init>(EmarketSandbox.java:27) at emarket.client.EmarketSandbox.main(EmarketSandbox.java:166) I am wondering why this security checking will fail. I issue the getPropertitiesFromConfigFile() function inside my class's default constructor, like this: public class Agent10479475 extends AbstractClientAgent { //default constructor public Agent10479475() { //set all properties to their default values in constructor FT_THRESHOLD = 400; FT_THRESHOLD_MARGIN = 50; printOut("Now loading properties from a config file...", ""); getPropertiesFromConfigFile(); printOut("Finished loading",""); } private void getPropertiesFromConfigFile() { Properties props = new Properties(); try { props.load(new FileInputStream("agent.properties")); FT_THRESHOLD = Long.parseLong(props.getProperty("FT_THRESHOLD")); FT_THRESHOLD_MARGIN = Long.parseLong(props.getProperty("FT_THRESHOLD_MARGIN ")); } catch(java.io.FileNotFoundException fnfex) { printOut("CANNOT FIND PROPERTIES FILE :", fnfex); } catch(java.io.IOException ioex) { printOut("IOEXCEPTION OCCURED :", ioex); } } } My class is loading its own .properties file under the same folder. why would the Java environment complains about such a denial of access? Must I config the emarket.client.EmarketSandbox class, which is not written by me and stored inside the emarket.jar, to access my agent.properties file? Any hints or suggestions is much appreciated. Many thanks in advance.
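
    If the sandbox honours the standard Java policy mechanism, the file read has to be granted explicitly once a SecurityManager is installed. A hypothetical policy entry is sketched below (whether the host jar consults a policy file at all is an assumption); it is passed to the JVM with -Djava.security.policy:

        // agent.policy (hypothetical) - allow reading the agent's config file.
        // A codeBase clause could narrow this grant to the agent's own classes.
        grant {
            permission java.io.FilePermission "agent.properties", "read";
        };

        // launched as:
        // java -Djava.security.policy=agent.policy -jar "emarket.jar" ../tournament 100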

  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question.

    Code description: I have a header file "inline.h" and two translation units, "main.cpp" and "tran.cpp". Details of the code are below.

    inline.h:

        #ifndef __HEADER__
        #include <stdio.h>
        extern inline int func1(void) { return 5; }
        static inline int func2(void) { return 6; }
        inline int func3(void) { return 7; }
        #endif

    main.c file details are below:

        #define <stdio.h>
        #include <inline.h>
        int main(int argc, char *argv[])
        {
            printf("%d\n", func1());
            printf("%d\n", func2());
            printf("%d\n", func3());
            return 0;
        }

    tran.cpp file details (note that the functions are not inline here):

        #include <stdio.h>
        int func1(void) { return 500; }
        int func2(void) { return 600; }
        int func3(void) { return 700; }

    Question: The above code does not compile with gcc, whereas it compiles with g++ (assuming you make the gcc-related changes, like renaming the files to .c, not using any C++ headers, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference exists between the compilers?

    When you run the program (g++ compiled) by creating two separate compilation units (main.o and tran.o) and linking them into an executable a.out, the output obtained is 500 6 700. Why does the compiler pick up the definition of the function which is not inline? Since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, once the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me, in detailed steps, the process of compilation and linking that leads to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for valuable input.
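
    As an aside, the header above tests __HEADER__ but never defines it, so the guard does nothing. For reference, a header arrangement that is safe to include from many C translation units looks roughly like this - a sketch of the C99 inline model; GNU89 inline semantics differ, which is part of why gcc and g++ disagree here:

        /* inline.h - include guard actually defined */
        #ifndef INLINE_H
        #define INLINE_H

        static inline int func2(void) { return 6; }  /* one private copy per TU */

        inline int func3(void) { return 7; }         /* C99 inline definition   */

        #endif

        /* inline.c - exactly one translation unit emits the external
           definition of func3, for calls the compiler chooses not to inline: */
        #include "inline.h"
        extern inline int func3(void);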

  • Where is the method call in the EXE file?

    - by Victor Hurdugaci
    Introduction After watching this video from LIDNUG, about .NET code protection http://secureteam.net/lidnug_recording/Untitled.swf (especially from 46:30 to 57:30), I would to locate the call to a MessageBox.Show in an EXE I created. The only logic in my "TrialApp.exe" is: public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { MessageBox.Show("This is trial app"); } } Compiled on the Release configuration: http://rapidshare.com/files/392503054/TrialApp.exe.html What I do to locate the call Run the application in WinDBG and break after the message box appears. Get the CLR stack with !clrstack: 0040e840 5e21350b [InlinedCallFrame: 0040e840] System.Windows.Forms.SafeNativeMethods.MessageBox(System.Runtime.InteropServices.HandleRef, System.String, System.String, Int32) 0040e894 5e21350b System.Windows.Forms.MessageBox.ShowCore(System.Windows.Forms.IWin32Window, System.String, System.String, System.Windows.Forms.MessageBoxButtons, System.Windows.Forms.MessageBoxIcon, System.Windows.Forms.MessageBoxDefaultButton, System.Windows.Forms.MessageBoxOptions, Boolean) 0040e898 002701f0 [InlinedCallFrame: 0040e898] 0040e934 002701f0 TrialApp.Form1.Form1_Load(System.Object, System.EventArgs) Get the MethodDesc structure (using the address of Form1_Load) !ip2md 002701f0 MethodDesc: 001762f8 Method Name: TrialApp.Form1.Form1_Load(System.Object, System.EventArgs) Class: 00171678 MethodTable: 00176354 mdToken: 06000005 Module: 00172e9c IsJitted: yes CodeAddr: 002701d0 Transparency: Critical Source file: D:\temp\TrialApp\TrialApp\Form1.cs @ 22 Dump the IL of this method (by MethodDesc) !dumpil 001762f8 IL_0000: ldstr "This is trial app" IL_0005: call System.Windows.Forms.MessageBox::Show IL_000a: pop IL_000b: ret So, as the video mentioned, the call to to Show is 5 bytes from the beginning of the method implementation. Now I open CFFExplorer (just like in the video) and get the RVA of the Form1_Load method: 00002083. After this, I go to Address Converter (again in CFF Explorer) and navigate to offset 00002083. There we have: 32 72 01 00 00 70 28 16 00 00 0A 26 2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 In the video is mentioned that the first 12 bytes are for the method header so I skip them 2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 5 bytes from the beginning of the implementation should be the opcode for method call (28). Unfortunately, is not there. 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02 Questions: What am I doing wrong? Why there is no method call at that position in the file? Or maybe the video is missing some information... Why the guy in that video replaces the call with 9 zeros?
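
    One piece the video glosses over (offered here as a sketch based on the ECMA-335 method-body layout, so double-check against the spec): only "fat" method headers are 12 bytes. Small bodies get a 1-byte "tiny" header, and the low two bits of the first header byte tell them apart. The first byte quoted above is 0x32, whose low bits are 0x2, i.e. a tiny header - so the IL begins at offset 1, and the call opcode 0x28 sits 5 bytes after that, at offset 6, which matches the bytes shown.

        #include <stdio.h>

        /* Sketch: given the first bytes of a .NET method body, return the
           offset where the IL starts (ECMA-335 II.25.4). */
        static unsigned il_offset(const unsigned char *body)
        {
            if ((body[0] & 0x03) == 0x02)            /* CorILMethod_TinyFormat */
                return 1;
            if ((body[0] & 0x03) == 0x03)            /* CorILMethod_FatFormat  */
                return ((body[1] >> 4) & 0x0F) * 4;  /* header size in dwords  */
            return 0;                                /* not a method header    */
        }

        int main(void)
        {
            const unsigned char sample[] = { 0x32, 0x72, 0x01, 0x00, 0x00, 0x70, 0x28 };
            printf("IL starts at offset %u\n", il_offset(sample));  /* prints 1 */
            return 0;
        }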

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the called by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this: /** Set up the map from the given filename and position */ protected void open() throws IOException { // Set up buffer, is this all the flexibility we'll need? channel = new FileInputStream(file).getChannel(); MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()); map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes? map = map1; // assumes the host writing the files is little-endian (x86), ought to be configurable map.order(java.nio.ByteOrder.LITTLE_ENDIAN); map.position(position); } and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)
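
    For what it's worth, a halfway house before abandoning java.nio completely is to swap the MappedByteBuffer for an explicitly read buffer: FileChannel.read surfaces NFS trouble as an IOException rather than a fault in compiled code. A rough, hypothetical sketch (Java 6 style; windowing over multi-GB files is left out):

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.channels.FileChannel;

        public final class RegionReader {
            // Read [position, position + length) into a ByteBuffer without mmap.
            public static ByteBuffer readRegion(String path, long position, int length)
                    throws IOException {
                RandomAccessFile raf = new RandomAccessFile(path, "r");
                try {
                    FileChannel channel = raf.getChannel();
                    ByteBuffer buf = ByteBuffer.allocate(length);
                    channel.position(position);
                    while (buf.hasRemaining() && channel.read(buf) >= 0) {
                        // keep reading until the region is full or EOF
                    }
                    buf.flip();
                    buf.order(ByteOrder.LITTLE_ENDIAN); // match the little-endian writer
                    return buf;
                } finally {
                    raf.close();
                }
            }
        }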

  • Binding functions of derived class with luabind

    - by Anamon
    I am currently developing a plugin-based system in C++ which provides a Lua scripting interface, for which I chose to use luabind. I'm using Lua 5 and luabind 0.9, both statically linked and compiled with MSVC++ 8. I am now having trouble binding functions with luabind when they are defined in a derived class, but not its parent class. More specifically, I have an abstract base class called 'IPlugin' from which all plugin classes inherit. When the plugin manager initialises, it registers that class and its functions like this: luabind::open(L); luabind::module(L) [ luabind::class_("IPlugin") .def("start", (void(IPlugin::*)())&IPlugin::start) ]; As it is only known at runtime what effective plugin classes are available, I had to solve loading plugins in a kind of roundabout way. The plugin manager exposes a factory function to Lua, which takes the name of a plugin class and a desired object name. The factory then creates the object, registers the plugin's class as inheriting from the 'IPlugin' base class, and immediately calls a function on the created object that registers itself as a global with the Lua state, like this: void PluginExample::registerLuaObject(lua_State *L, string a_name) { luabind::globals(L)[a_name] = (PluginExample*)this; } I initially did this because I had problems with Lua determining the most derived class of the object, as if I register it from the StreamManager it is only known as a subtype of 'IPlugin' and not the specific subtype. I'm not sure anymore if this is even necessary though, but it works and the created object is subsequently accessible from Lua under 'a_name'. The problem I have, though, is that functions defined in the derived class, which were not declared at all in the parent class, cannot be used. Virtual functions defined in the base class, such as 'start' above, work fine, and calling them from Lua on the new object runs the respective redefined code from the 'PluginExample' class. But if I add a new function to 'PluginExample', here for example a function taking no arguments and returning void, and register it like this: luabind::module(L) [ luabind::class_("PluginExample") .def(luabind::constructor()) .def("func", &PluginExample::func) ]; calling 'func' on the new object yields the following Lua runtime error: No matching overload found, candidates: void func(PluginExample&) I am correctly using the ':' syntax so the 'self' argument is not needed and it seems suddenly Lua cannot determine the derived type of the object anymore. I am sure I am doing something wrong, probably having to do with the two-step binding required by my system architecture, but I can't figure out where. I'd much appreciate some help =)
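
    One thing worth double-checking (an educated guess, since the full registration code isn't shown): luabind only knows that the plugin class derives from IPlugin if the base is named when the derived class is registered, along these lines:

        // Sketch: register the derived class together with its base so that
        // luabind can resolve calls on either static type.
        luabind::module(L)
        [
            luabind::class_<PluginExample, IPlugin>("PluginExample")
                .def(luabind::constructor<>())
                .def("func", &PluginExample::func)
        ];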

  • Loading a SWF dynamically causes previously loaded SWFs to misbehave

    - by Aaron
    I have run into a very strange problem with Flash and Flex. It appears that under certain circumstances, movie clips from a SWF loaded at runtime (using Loader) cannot be instantiated if another SWF has been loaded in the mean time. Here is the complete code for a program that reproduces the error. It is compiled using mxmlc, via Ensemble Tofino: package { import flash.display.*; import flash.events.*; import flash.net.*; import flash.system.*; public class DynamicLoading extends Sprite { private var testAppDomain:ApplicationDomain; public function DynamicLoading() { var request:URLRequest = new URLRequest("http://localhost/content/test.swf"); var loader:Loader = new Loader(); loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onTestLoadComplete); loader.load(request); } private function onTestLoadComplete(e:Event):void { var loaderInfo:LoaderInfo = LoaderInfo(e.target); testAppDomain = loaderInfo.applicationDomain; // To get the error, uncomment these lines... //var request:URLRequest = new URLRequest("http://localhost/content/tiny.swf"); //var loader:Loader = new Loader(); //loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onTinyLoadComplete); //loader.load(request); // ...and comment this one: onTinyLoadComplete(); } private function onTinyLoadComplete(e:Event = null):void { var spriteClass:Class = Class(testAppDomain.getDefinition("TopSymbol")); var sprite:Sprite = Sprite(new spriteClass()); sprite.x = sprite.y = 200; addChild(sprite); } } } With the second loading operation commented out as shown above, the code works. However, if the second loading operation is uncommented and onTinyLoadComplete runs after the second SWF is loaded, the line containing new spriteClass() fails with the following exception: TypeError: Error #1034: Type Coercion failed: cannot convert flash.display::MovieClip@2dc8ba1 to SubSymbol. at flash.display::Sprite/constructChildren() at flash.display::Sprite() at flash.display::MovieClip() at TopSymbol() at DynamicLoading/onTinyLoadComplete()[C:\Users\...\TestFlash\DynamicLoading.as:38] test.swf and tiny.swf were created in Flash CS4. test.swf contains two symbols, both exported for ActionScript, one called TopSymbol and one called SubSymbol. SubSymbol contains a simple graphic (a scribble) and TopSymbol contains a single instance of SubSymbol. tiny.swf contains nothing; it is the result of publishing a new, empty ActionScript 3 project. If I modify test.swf so that SubSymbol is not exported for ActionScript, the error goes away, but in our real project we need the ability to dynamically load sprite classes that contain other, exported sprite classes as children. Any ideas as to what is causing this, or how to fix it?

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so: class MyClass: public Base { public: MyClass& operator =(MyClass const& rhs); virtual MyClass& operator =(Base const& rhs); }; // automatically gets defined, so we make it call the virtual function below MyClass& MyClass::operator =(MyClass const& rhs); { return (*this = static_cast<Base const&>(rhs)); } MyClass& MyClass::operator =(Base const& rhs); { assert(typeid(rhs) == typeid(*this)); // assigning to different types is a logical error MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs); try { // allocate new variables Base::operator =(rhs); } catch(...) { // delete the allocated variables throw; } // assign to member variables } The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this: class MyClass: public Base { public: operator =(MyClass const& rhs); // etc virtual inline MyClass& operator =(Base const& rhs) { assert(typeid(rhs) == typeid(*this)); return this->set(static_cast<Base const&>(rhs)); } private: MyClass& set(Base const& rhs); // same basic thing }; But I've been wondering if I could check the types at compile-time. I looked into Boost.TypeTraits, and I came close by doing BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>));, but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly -- I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile-time.
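
    For the static side of the check only, a Boost-era sketch is shown below. It cannot replace the typeid assertion, because the dynamic type behind a Base& argument is simply not visible at compile time; all it can do is reject calls whose static types already differ:

        #include <boost/static_assert.hpp>
        #include <boost/type_traits/is_same.hpp>

        // Sketch: refuses to compile unless both operands have the same static type.
        template <typename Lhs, typename Rhs>
        void checked_assign(Lhs& lhs, Rhs const& rhs)
        {
            BOOST_STATIC_ASSERT((boost::is_same<Lhs, Rhs>::value));
            lhs = rhs;
        }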

  • C++ Program Always Crashes While doing a std::string assign

    - by bbazso
    I have been trying to debug a crash in my application that crashes (i.e. asserts a * glibc detected free(): invalid pointer: 0x000000000070f0c0 **) while I'm trying to do a simple assign to a string. Note that I'm compiling on a linux system with gcc 4.2.4 with an optimization level set to -O2. With -O0 the application no longer crashes. E.g. std::string abc; abc = "testString"; but if I changed the code as follows it no longer crashes std::string abc("testString"); So again I scratched my head! But the interesting pattern was that the crash moved later on in the application, AGAIN at another string. I found it weird that the application was continuously crashing on a string assign. A typical crash backtrace would look as follows: #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #1 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #2 0x00000000004d8cb7 in people_streamingserver_sighandler (signum=6) at src/peoplestreamingserver.cpp:487 #3 <signal handler called> #4 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #5 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #6 0x00007f2c26680ce0 in ?? () from /lib64/libc.so.6 #7 0x00007f2c270ca7a0 in std::string::assign (this=0x7f2c21bc8d20, __str=<value optimized out>) at /home/bbazso/ThirdParty/sources/gcc-4.2.4/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/basic_string.h:238 #8 0x00007f2c21bd874a in PEOPLESProtocol::GetStreamName (this=<value optimized out>, pRawPath=0x2342fd8 "rtmp://127.0.0.1/mp4:pop.mp4", lStreamName=@0x7f2c21bc8d20) at /opt/trx-HEAD/gcc/4.2.4/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/basic_string.h:491 #9 0x00007f2c21bd9daa in PEOPLESProtocol::SignalProtocolCreated (pProtocol=0x233a4e0, customParameters=@0x7f2c21bc8de0) at peoplestreamer/src/peoplesprotocol.cpp:240 This was really weird behavior and so I started to poke around further in my application to see if there was some sort of memory corruption (either heap or stack) error that could be occurring that could be causing this weird behavior. I even checked for ptr corruptions and came up empty handed. In addition to visual inspection of the code I also tried the following tools: Valgrind using both memcheck and exp-ptrcheck electric fence libsafe I compiled with -fstack-protector-all in gcc I tried MALLOC_CHECK_ set to 2 I ran my code through lint checks as well as cppcheck (to check for mistakes) And I stepped through the code using gdb So I tried a lot of stuff and still came up empty handed. So I was wondering if it could be something like a linker issue or a library issue of some sort that could be causing this problem. Are there any know issues with the std::string that make is susceptible to crashing in -O2 or maybe it has nothing to do with the optimization level? But the only pattern that I can see thus far in my problem is that it always seems to crash on a string and so I was wondering if anyone knew of any issues that my be causing this type of behavior. Thanks a lot!

  • jcuda library usage problem

    - by user513164
    Hi, I'm very new to Java and Linux. I have some code taken from the jcuda examples. The code is the following:

        import jcuda.CUDA;
        import jcuda.driver.CUdevprop;
        import jcuda.driver.types.CUdevice;

        public class EnumDevices {
            public static void main(String args[]) {
                // Init CUDA Driver
                CUDA cuda = new CUDA(true);
                int count = cuda.getDeviceCount();
                System.out.println("Total number of devices: " + count);
                for (int i = 0; i < count; i++) {
                    CUdevice dev = cuda.getDevice(i);
                    String name = cuda.getDeviceName(dev);
                    System.out.println("Name: " + name);
                    int version[] = cuda.getDeviceComputeCapability(dev);
                    System.out.println("Version: " + String.format("%d.%d", version[0], version[1]));
                    CUdevprop prop = cuda.getDeviceProperties(dev);
                    System.out.println("Clock rate: " + prop.clockRate + " MHz");
                    System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
                }
            }
        }

    I'm using Ubuntu as my operating system. I compiled it with the following command:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices

    and got the following error:

        error: Class names, 'EnumDevices', are only accepted if annotation processing is explicitly requested
        1 error

    I don't know what the meaning of this error is or what I should do to compile the program, so I changed the compile command to

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices.java

    and got the following errors:

        EnumDevices.java:36: clockRate is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
                System.out.println("Clock rate: " + prop.clockRate + " MHz");
        EnumDevices.java:37: maxThreadsPerBlock is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
                System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
        2 errors

    Now I'm completely confused and don't know what to do. How do I compile this program? How do I install and use the jcuda package? How do I use a package that has only jar files and .so files, where the jar files have no manifest? Please help! Many thanks in advance.
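
    For what it's worth, the first error just means javac was handed a name without the .java extension, and -cp has to name the jar file(s) themselves (or use a dir/* wildcard), not the folder containing them; the .so files are found through java.library.path. Roughly as follows - the jar file name is a guess, so use whatever actually ships in the JCuda archive:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64/jcuda-0.3.2.jar EnumDevices.java

        java -cp .:/home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64/jcuda-0.3.2.jar \
             -Djava.library.path=/home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 \
             EnumDevices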

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to compile and run?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than any other platform--including Windows cygwin For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, Mac Ports, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs an ImageMagic independently of MacPorts issues tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap Now here's something interesting. The error message shows it's looking in /opt/local/lib/libiconv.2.dylib. otools -L shows that this does implement 8.0.0: tppllc-Mac-Pro:.libs swirsky$ otool -L /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /usr/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0) tppllc-Mac-Pro:.libs swirsky$ And, for good measure, I set the DYLD_LIBRARY_PATH to make sure this directory is the one for dynamic libraries. So even though I do have a library that provides 8.0.0, it's being seen as 7.0.0! Any ideas why this would happen? So here's my question: Is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions that have static libraries baked in so I don't have to worry about these issue/

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have a auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options acitvated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does the GCC do behind the scenes that costs so much memory? (I compiled with deactivating all optimizations as well and got the same results) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
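
    For reference, a sketch of the consuming side of the objcopy route described above. The exact symbol names depend on the input file name and the output format (hence the leading-underscore discrepancy already mentioned), so treat the names below as assumptions and confirm them with nm on the generated object file:

        /* produced with something like:
           objcopy -I binary -O pe-i386 -B i386 inbuilt_training_data.bin inbuilt_training_data.o */
        #include <stdio.h>

        extern char binary_inbuilt_training_data_bin_start[];
        extern char binary_inbuilt_training_data_bin_end[];

        int main(void)
        {
            const char *data = binary_inbuilt_training_data_bin_start;
            size_t size = (size_t)(binary_inbuilt_training_data_bin_end -
                                   binary_inbuilt_training_data_bin_start);
            printf("embedded blob: %lu bytes\n", (unsigned long)size);
            /* parse `data` here instead of compiling a 40 MB initializer */
            return 0;
        }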

  • segfault on vector<struct>

    - by Andre
    Hello, I created a struct to hold some data and then declared a vector to hold that struct. But when I do a push_back I get damn segfault and I have no idea why! My struct is defines as: typedef struct Group { int codigo; string name; int deleted; int printers; int subpage; /*included this when it started segfaulting*/ Group(){ name.reserve(MAX_PRODUCT_LONG_NAME); } ~Group(){ name.clear(); } Group(const Group &b) { codigo = b.codigo; name = b.name; deleted = b.deleted; printers = b.printers; subpage = b.subpage; } /*end of new stuff*/ }; Originally, the struct didn't have the copy, constructor or destructor. I added them latter when I read this post below. http://stackoverflow.com/questions/676575/seg-fault-after-is-item-pushed-onto-stl-container but the end result is the same. There is one this that is bothering me as hell! When I first push some data into the vector, everything goes fine. Later on in the code when I try to push some more data into the vector, my app just segfaults! The vector is declared vector<Group> Groups and is a global variable to the file where I am using it. No externs anywhere else, etc... I can trace the error to: _M_deallocate(this->_M_impl._M_start, this->_M_impl._M_end_of_storage- this->_M_impl._M_start); in vector.tcc when I finish adding/copying the last element to the vector.... As far as I can tell. I shouldn't be needing anything to do with a copy constructor as a shallow copy should be enough for this. I'm not even allocating any space (but I did a reserve for the string to try out). I have no idea what the problem is! I'm running this code on OpenSuse 10.2 with gcc 4.1.2 I'm not really to eager to upgrade gcc because of backward compatibility issues... This code worked "perfectly" on my windows machine. I compiled it with gcc 3.4.5 mingw without any problems... help! --- ... --- :::EDIT::: I push data Group tmp_grp; (...) tmp_grp.name = "Nova "; tmp_grp.codigo=GetGroupnextcode(); tmp_grp.deleted=0; tmp_grp.printers=0; tmp_grp.subpage=0; Groups.push_back(tmp_grp);

  • Java problem: can't find image file

    - by user363035
    I am a student working on a homework project. I spent DAYS trying to get the following code to display an image on my new windows 7 laptop. I compiled it and ran it on my old xp pc and it worked! I really want to use my laptop. Any suggestions on how to get it to display the image? import java.awt.*; import java.applet.*; import java.awt.event.*; import java.awt.image.*; public class MoveIt extends Applet implements ActionListener { // set variables and componets private Image cup; Panel keypad = new Panel(); public int top = 15; public int left = 15; private Button keysArray[]; public void init() { cup = getImage(getDocumentBase(), "cup.gif"); Canvas myCanvas = new Canvas(); keysArray = new Button[5]; setLayout(new BorderLayout(5,5)); setBackground(Color.blue); // set up keypad layout keypad.setLayout(new BorderLayout(0,0)); keysArray[0] = new Button("Up"); keysArray[1] = new Button("Left"); keysArray[2] = new Button("Center"); keysArray[3] = new Button("Right"); keysArray[4] = new Button("Down"); // add buttons to the keypad panel keypad.add(keysArray[0], BorderLayout.NORTH); keysArray[0].addActionListener(this); keypad.add(keysArray[1], BorderLayout.EAST); keysArray[1].addActionListener(this); keypad.add(keysArray[2], BorderLayout.CENTER); keysArray[2].addActionListener(this); keypad.add(keysArray[3], BorderLayout.WEST); keysArray[3].addActionListener(this); keypad.add(keysArray[4], BorderLayout.SOUTH); keysArray[4].addActionListener(this); // add canvas and keypad to the BorderLayout add(myCanvas, BorderLayout.NORTH); add(keypad, BorderLayout.SOUTH); } public void paint(Graphics g) { g.drawImage( cup, left, top, this ); } public void actionPerformed(ActionEvent e) { // test for menu item clicks String arg = e.getActionCommand(); if (arg == "Up") top -=15; else if (arg == "Down") top +=15; else if (arg == "Left") left -=15; else if (arg == "Right") left +=15; else { top = 60; left =125; } repaint(); } }

  • RNDC fails: permission denied

    - by pawz
    Named works great. It creates a pid in /var/run/named/named.pid as expected. It is listening on port 953 as shown by the log: Apr 20 14:42:38 guchuko named[9115]: command channel listening on 127.0.0.1#953 But whenever I try to run "rndc reload" I get: rndc: 'reload' failed: permission denied What file is it being denied permission to ? It doesn't log anything so I don't know why it's not working. I've compiled bind 9.4-ESV-R1 from source and I've patched it with the mysql mod. my named.conf: options { directory "/var/bind"; forwarders { 203.82.213.101; 203.188.144.1; }; listen-on-v6 { none; }; listen-on { 127.0.0.1; 192.168.0.6; }; pid-file "/var/run/named/named.pid"; }; logging { channel simple_log { file "/var/log/named.log" versions 3 size 5m; severity debug 5; print-time yes; print-severity yes; print-category yes; }; category default { simple_log; }; }; zone "." IN { type hint; file "named.ca"; }; zone "localhost" IN { type master; file "pri/localhost.zone"; allow-update { none; }; notify no; }; include "/etc/rndc.key" my rndc.conf options { default-server 127.0.0.1; default-key "rndc-key"; }; server 127.0.0.1 { key "rndc-key"; }; include "/etc/rndc.key"; my rndc.key: key "rndc-key" { algorithm hmac-md5; secret "XFc8C+yCLK0mIheTSBj41g=="; };

  • Unexpected output from Bubblesort program with MSVC vs TCC

    - by Sujith S Pillai
    One of my friends sent this code to me, saying it doesn't work as expected:

        #include<stdio.h>
        void main()
        {
            int a [10] ={23, 100, 20, 30, 25, 45, 40, 55, 43, 42};
            int sizeOfInput = sizeof(a)/sizeof(int);
            int b, outer, inner, c;
            printf("Size is : %d \n", sizeOfInput);
            printf("Values before bubble sort are : \n");
            for ( b = 0; b < sizeOfInput; b++)
                printf("%d\n", a[b]);
            printf("End of values before bubble sort... \n");
            for ( outer = sizeOfInput; outer > 0; outer-- )
            {
                for ( inner = 0 ; inner < outer ; inner++)
                {
                    printf ( "Comparing positions: %d and %d\n",inner,inner+1);
                    if ( a[inner] > a[inner + 1] )
                    {
                        int tmp = a[inner];
                        a[inner] = a [inner+1];
                        a[inner+1] = tmp;
                    }
                }
                printf ( "Bubble sort total array size after inner loop is %d :\n",sizeOfInput);
                printf ( "Bubble sort sizeOfInput after inner loop is %d :\n",sizeOfInput);
            }
            printf ( "Bubble sort total array size at the end is %d :\n",sizeOfInput);
            for ( c = 0 ; c < sizeOfInput; c++)
                printf("Element: %d\n", a[c]);
        }

    I am using the Microsoft Visual Studio command line tool to compile this on a Windows XP machine: cl /EHsc bubblesort01.c

    My friend gets the correct output on a dinosaur machine (the code is compiled with TCC there). My output is unexpected: the array mysteriously grows in size partway through. If you change the code so that the variable sizeOfInput is renamed to sizeOfInputt, it gives the expected results! A search at the Microsoft Visual C++ Developer Center gives no results for "sizeOfInput". I am not a C/C++ expert, and am curious to find out why this happens - any C/C++ experts who can "shed some light" on this?

    Unrelated note: I seriously thought of rewriting the whole thing to use quicksort or merge sort before posting it here. But, after all, it is not Stooge sort...

    Edit: I know the code is not correct (it reads beyond the last element), but I am curious why the variable name makes a difference.
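
    A sketch of the two loops with the out-of-bounds read removed, in case it helps to see it spelled out: on the first pass the original inner loop reaches inner == sizeOfInput - 1 and compares a[inner] with a[inner + 1], i.e. a[10], one element past the end of the array. Which neighbouring variable that stray read (and possible swap write) hits depends on how the compiler lays out the locals, which is a plausible reason why merely renaming sizeOfInput changes the symptoms.

        /* inner must stop at outer - 1 so that a[inner + 1] stays in bounds */
        for (outer = sizeOfInput; outer > 1; outer--)
        {
            for (inner = 0; inner < outer - 1; inner++)
            {
                if (a[inner] > a[inner + 1])
                {
                    int tmp = a[inner];
                    a[inner] = a[inner + 1];
                    a[inner + 1] = tmp;
                }
            }
        }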

  • Multiline editable textarea in SVG

    - by Timo
    I'm trying to implement multiline editable textfield in SVG. I have the following code in http://jsfiddle.net/ca4d3/ : <svg width="1000" height="1000" overflow="scroll"> <g transform="rotate(5)"> <rect width="300" height="400" fill="#22DD22" fill-opacity="0.5"/> </g> <foreignObject x="10" y="10" overflow="visible" width="10000" height="10000" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"> <p style="display:table-cell;padding:10px;border:1px solid red; background-color:white;opacity:0.5;font-family:Verdana; font-size:20px;white-space: pre; word-wrap: normal; overflow: visible; overflow-y: visible; overflow-x:visible;" contentEditable="true" xmlns="http://www.w3.org/1999/xhtml"> Write here some text. Be smart and select some word. If you wanna be really COOL, paste here something cool! </p> </foreignObject> </svg> In newest Chrome, Safari and Firefox the code works in some way, but in Opera and IE 9 not. The goal is that: 0) Works in newest Chrome, Safari, Firefox, Opera and IE and if ever possible in some pads. 1) White-spaces are preserved and text wraps only on newline char (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *). 2) The textfield is editable (in the same reliable and stabile way as textareas and contenteditable p elements in html) and height and width is expanded to fit text (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *). 3) Texfield can be transformed (rotated, skewed, translated) while maintaining text editability (Tested rotation, but not work in any browser *). EDIT: Foreignobject rotation works on Firefox 15.0.1, but not in Safari 5.1.7 (6534.57.2), Chrome 22.0.1229.79, Opera 12.02, IE 9. Tested on Mac OS X 10.6.8. 4) Textfield can be clipped and masked while not necessarily maintaining text editability (not yet tested). *) using above code These all can be achieved using Flash, but Flash has so severe problems that it is not suitable for my purposes (after every little change in code, all have to be compiled again using Flex, which is slow, font size has limits, tracking technique is pixeloriented, not relative to em size etc.) and there still are differences across platforms. And I want to give a try to SVG! GUESTION: Can I achieve my goals 0-4 with current SVG support in browsers? Is coming SVG 2.0 for some help in this case? EDIT: Changed display:table to display:table-cell (and added new jsfiddle), because display:table made the field to loses focus when pressed arrow-up on first text row.

  • maven-compiler-plugin exclude

    - by easyrider
    Hi, I have a following Problem. I would like to exclude some .java files (*/jsfunit/.java) during test-compile phace and on the other side i would like to include them during compile phace (id i start tomact with tomcat:run goal) My pom.xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> <!-- <excludes> <exclude>**/*JSFIntegration*.java</exclude> </excludes> --> </configuration> <executions> <!-- <execution> <id>default-compile</id> <phase>compile</phase> <goals> <goal>compile</goal> </goals> <configuration> <includes> <include>**/jsfunit/*.java</include> </includes> </configuration> </execution>--> <execution> <id>default-testCompile</id> <phase>test-compile</phase> <configuration> <excludes> <exclude>**/jsfunit/*.java</exclude> </excludes> </configuration> <goals> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> But it does not work : exclude in default-testCompile execution does not filter these classes. If i remove the comments then all classes matched */jsfunit/.java would be compiled but only if i touch them! Please help! Thanx in advance

  • JSF inner datatable not respecting rendered condition of outer table.

    - by Marc
    <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="OuterItems" value="#{valueList.values}" var="item" border="0"> <h:column rendered="#{item.typeA"> <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems" value="#{item.options}" var="option" border="0"> <h:column > <h:outputText value="Option: #{option.displayValue}"/> </h:column> </h:dataTable> </h:column> <h:column rendered="#{item.typeB"> <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems" value="#{item.demands}" var="demand" border="0"> <h:column > <h:outputText value="Demand: #{demand.displayValue}"/> </h:column> </h:dataTable> </h:column> </h:dataTable> public class Item{ ... public boolean isTypeA(){ return this instanceof TypeA; } public boolean isTypeB(){ return this instanceof TypeB; } ... } public class typeA extends Item(){ ... public List getOptions(){ .... } ... } public class typeB extends Item(){ ... public List getDemands(){ ... } .... } I'm having an issue with JSF. I've abstracted the problem out here, and I'm hoping someone can help me understand how what I'm doing fails. I'm looping over a list of Items. These Items are actually instances of the subclasses TypeA and TypeB. For Type A, I want to display the options, for Type B I want to display the demands. When rendering the page for the first time, this works fine. However, when I post back to the page for some action, I get an error: [3/26/10 12:52:32:781 EST] 0000008c SystemErr R javax.faces.FacesException: Error getting property 'options' from bean of type TypeB at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:89) at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java(Compiled Code)) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:91) at com.ibm.faces.portlet.FacesPortlet.processAction(FacesPortlet.java:193) My grasp on the JSF lifecyle is very rough. At this point, i understand there is an error in the ApplyRequestValues Phases which is very early and so the previous state is restored and nothing changes. What I don't understand is that in order to fufill the condition for rendering "item.typeA" that object has to be an instance of TypeA. But here, it looks like that object passed the condition even though it was an instance of TypeB. It is like it is evaluating the inner dataTable (InnerItems) before evaluating the outer (outerItems). My working assumption is that I just don't understand how/when the rendered attribute is actually evaluated.

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to testing a concrete object with this sort of structure. class Database { public: Database(Server server) : server_(server) {} int Query(const char* expression) { server_.Connect(); return server_.ExecuteQuery(); } private: Server server_; }; i.e. it has no virtual functions, let alone a well-defined interface. I want to a fake database which calls mock services for testing. Even worse, I want the same code to be either built against the real version or the fake so that the same testing code can both: Test the real Database implementation - for integration tests Test the fake implementation, which calls mock services To solve this, I'm using a templated fake, like this: #ifndef INTEGRATION_TESTS class FakeDatabase { public: FakeDatabase() : realDb_(mockServer_) {} int Query(const char* expression) { MOCK_EXPECT_CALL(mockServer_, Query, 3); return realDb_.Query(); } private: // in non-INTEGRATION_TESTS builds, Server is a mock Server with // extra testing methods that allows mocking Server mockServer_; Database realDb_; }; #endif template <class T> class TestDatabaseContainer { public: int Query(const char* expression) { int result = database_.Query(expression); std::cout << "LOG: " << result << endl; return result; } private: T database_; }; Edit: Note the fake Database must call the real Database (but with a mock Server). Now to switch between them I'm planning the following test framework: class DatabaseTests { public: #ifdef INTEGRATION_TESTS typedef TestDatabaseContainer<Database> TestDatabase ; #else typedef TestDatabaseContainer<FakeDatabase> TestDatabase ; #endif TestDatabase& GetDb() { return _testDatabase; } private: TestDatabase _testDatabase; }; class QueryTestCase : public DatabaseTests { public: void TestStep1() { ASSERT(GetDb().Query(static_cast<const char *>("")) == 3); return; } }; I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: Whether there's a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
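
    Two things that may be worth trying before the temporary-file fallback, both hedged because DOS C runtimes differ: many of them expose a chsize()-style call in io.h, and at the DOS level INT 21h function 40h (write) with CX = 0 truncates or extends the file at the current file pointer. A sketch relying on that zero-length-write behaviour through the runtime's low-level handle functions follows; verify that the Digital Mars write() really passes a zero count through to INT 21h before depending on it.

        #include <fcntl.h>
        #include <io.h>
        #include <stdio.h>

        /* Truncate (or extend) `path` to exactly `new_size` bytes. */
        static int set_file_size(const char *path, long new_size)
        {
            int handle = open(path, O_RDWR | O_BINARY);
            if (handle < 0)
                return -1;
            if (lseek(handle, new_size, SEEK_SET) != new_size ||
                write(handle, "", 0) != 0)   /* INT 21h/40h with CX=0: set EOF here */
            {
                close(handle);
                return -1;
            }
            return close(handle);
        }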

  • What is the difference between NULL in C++ and null in Java?

    - by Stephano
    I've been trying to figure out why C++ is making me crazy typing NULL. Suddenly it hits me the other day; I've been typing null (lower case) in Java for years. Now suddenly I'm programming in C++ and that little chunk of muscle memory is making me crazy. Wikiperipatetic defines C++ NULL as part of the stddef: A macro that expands to a null pointer constant. It may be defined as ((void*)0), 0 or 0L depending on the compiler and the language. Sun's docs tells me this about Java's "null literal": The null type has one value, the null reference, represented by the literal null, which is formed from ASCII characters. A null literal is always of the null type. So this is all very nice. I know what a null pointer reference is, and thank you for the compiler notes. Now I'm a little fuzzy on the idea of a literal in Java so I read on... A literal is the source code representation of a fixed value; literals are represented directly in your code without requiring computation. There's also a special null literal that can be used as a value for any reference type. null may be assigned to any variable, except variables of primitive types. There's little you can do with a null value beyond testing for its presence. Therefore, null is often used in programs as a marker to indicate that some object is unavailable. Ok, so I think I get it now. In C++ NULL is a macro that, when compiled, defines the null pointer constant. In Java, null is a fixed value that any non-primitive can be assigned too; great for testing in a handy if statement. Java does not have pointers, so I can see why they kept null a simple value rather than anything fancy. But why did java decide to change the all caps NULL to null? Furthermore, am I missing anything here?
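
    A small illustration of the practical difference (a sketch, and the exact outcome depends on how the implementation defines NULL): because C++'s NULL is an integral zero constant, it takes part in integer overload resolution, which Java's null literal can never do - and it is part of why C++11 later added nullptr.

        #include <cstddef>
        #include <iostream>

        void f(int)   { std::cout << "f(int)\n"; }
        void f(char*) { std::cout << "f(char*)\n"; }

        int main()
        {
            f(0);        // always the int overload
            f(NULL);     // int overload if NULL is 0; ambiguous if NULL is 0L
            // f(nullptr);  // C++11: unambiguously the pointer overload
            return 0;
        }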
