Search Results

Search found 21111 results on 845 pages for 'null pointer'.


  • Mixing secure & unsecure channels

    - by user305023
    I am unable to use an unsecure channel once a secure channel has already been registered. The code below works only if on the client side, the unsecured channel is registered before. Is it possible to mix secure and unsecure channels without any constraint on the registration order ? using System; using System.Collections; using System.Runtime.Remoting; using System.Runtime.Remoting.Channels; using System.Runtime.Remoting.Channels.Tcp; public class SampleObject : MarshalByRefObject { public DateTime GetTest() { return DateTime.Now; } } public class SampleObject2 : MarshalByRefObject { public DateTime GetTest2() { return DateTime.Now; } } static class ProgramClient { private static TcpClientChannel RegisterChannel(bool secure, string name, int priority) { IDictionary properties = new Hashtable(); properties.Add("secure", secure); properties.Add("name", name); properties.Add("priority", priority); var clientChannel = new TcpClientChannel(properties, null); ChannelServices.RegisterChannel(clientChannel, false); return clientChannel; } private static void Secure() { RegisterChannel(true, "clientSecure", 2); var testSecure = (SampleObject2)Activator.GetObject(typeof(SampleObject2), "tcp://127.0.0.1:8081/Secured.rem"); Console.WriteLine("secure: " + testSecure.GetTest2().ToLongTimeString()); } private static void Unsecure() { RegisterChannel(false, "clientUnsecure", 1); var test = (SampleObject)Activator.GetObject(typeof(SampleObject), "tcp://127.0.0.1:8080/Unsecured.rem"); Console.WriteLine("unsecure: " + test.GetTest().ToLongTimeString()); } internal static void MainClient() { Console.Write("Press Enter to start."); Console.ReadLine(); // Works only in this order Unsecure(); Secure(); Console.WriteLine("Press ENTER to end"); Console.ReadLine(); } } static class ProgramServer { private static TcpServerChannel RegisterChannel(int port, bool secure, string name) { IDictionary properties = new Hashtable(); properties.Add("port", port); properties.Add("secure", secure); properties.Add("name", name); //properties.Add("impersonate", false); var serverChannel = new TcpServerChannel(properties, null); ChannelServices.RegisterChannel(serverChannel, secure); return serverChannel; } private static void StartUnsecure() { RegisterChannel(8080, false, "unsecure"); RemotingConfiguration.RegisterWellKnownServiceType(typeof(SampleObject), "Unsecured.rem", WellKnownObjectMode.Singleton); } private static void StartSecure() { RegisterChannel(8081, true, "secure"); RemotingConfiguration.RegisterWellKnownServiceType(typeof(SampleObject2), "Secured.rem", WellKnownObjectMode.Singleton); } internal static void MainServer() { StartUnsecure(); StartSecure(); Console.WriteLine("Unsecure: 8080\n Secure: 8081"); Console.WriteLine("Press the enter key to exit..."); Console.ReadLine(); } } class Program { static void Main(string[] args) { if (args.Length == 1 && args[0] == "server") ProgramServer.MainServer(); else ProgramClient.MainClient(); } }


  • Hooking thread exit

    - by mackenir
    Is there a way for me to hook the exit of managed threads (i.e. run some code on a thread, just before it exits?) I've developed a mechanism for hooking thread exit that works for some threads. Step 1: develop a 'hook' STA COM class that takes a callback function and calls it in its destructor. Step 2: create a ThreadStatic instance of this object on the thread I want to hook, and pass the object a managed delegate converted to an unmanaged function pointer. The delegate then gets called on thread exit (since the CLR calls IUnknown::Release on all STA COM RCWs as part of thread exit). This mechanism works on, for example, worker threads that I create in code using the Thread class. However, it doesn't seem to work for the application's main thread (be it a console or windows app). The 'hook' COM object seems to be deleted too late in the shutdown process and the attempt to call the delegate fails. (The reason I want to implement this facility is so I can run some native COM code on the exiting thread that works with STA COM objects that were created on the thread, before it's 'too late' (i.e. before the thread has exited, and it's no longer possible to work with STA COM objects on that thread.))
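
    A native C++ aside that may clarify the mechanism being emulated here: a thread_local object's destructor runs on the thread that constructed it, just before that thread finishes. This is only a sketch of the native analogue, not an answer for the managed main-thread case described above.

        #include <cstdio>
        #include <thread>

        struct ThreadExitHook {
            ~ThreadExitHook() { std::puts("thread is exiting"); }  // runs at thread exit
        };

        // Each thread that touches 'hook' gets its own instance, destroyed when that thread exits.
        thread_local ThreadExitHook hook;

        int main() {
            std::thread worker([] {
                (void)&hook;  // first touch constructs the thread-local, arming the exit hook
                std::puts("worker running");
            });
            worker.join();    // "thread is exiting" is printed on the worker thread before join returns
            (void)&hook;      // the main thread's instance is destroyed during normal program teardown
        }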


  • Compiler optimization causing the performance to slow down

    - by aJ
    I have a strange problem with the following piece of code:

        template<class index, class policy>
        inline int CBase<index, policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem
            for (int y = 0; y < height; y++)
            {
                // pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for (int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if (someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop executes nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change the method suddenly takes 500 ms for the same number of executions (5 times slower). The behavior is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2. The other compiler options used are: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed /O2; after that the function became normal again and takes 100 ms as with the old code. Could anyone throw some light on this strange behavior? Why should a compiler optimization slow things down when I remove an unused variable? Note: the assembly code looked the same before and after the change.


  • Running 32 bit assembly code on a 64 bit Linux & 64 bit Processor : Explain the anomaly.

    - by claws
    Hello, I'm in an interesting problem.I forgot I'm using 64bit machine & OS and wrote a 32 bit assembly code. I don't know how to write 64 bit code. This is the x86 32-bit assembly code for Gnu Assembler (AT&T syntax) on Linux. //hello.S #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write) , %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT) , %ebx movl $hellostr , %ecx movl $(helloend-hellostr) , %edx int $0x80 movl $(SYS_exit), %eax //void _exit(int status); xorl %ebx, %ebx int $0x80 ret Now, This code should run fine on a 32bit processor & 32 bit OS right? As we know 64 bit processors are backward compatible with 32 bit processors. So, that also wouldn't be a problem. The problem arises because of differences in system calls & call mechanism in 64-bit OS & 32-bit OS. I don't know why but they changed the system call numbers between 32-bit linux & 64-bit linux. asm/unistd_32.h defines: #define __NR_write 4 #define __NR_exit 1 asm/unistd_64.h defines: #define __NR_write 1 #define __NR_exit 60 Anyway using Macros instead of direct numbers is paid off. Its ensuring correct system call numbers. when I assemble & link & run the program. $cpp hello.S hello.s //pre-processor $as hello.s -o hello.o //assemble $ld hello.o // linker : converting relocatable to executable Its not printing helloworld. In gdb its showing: Program exited with code 01. I don't know how to debug in gdb. using tutorial I tried to debug it and execute instruction by instruction checking registers at each step. its always showing me "program exited with 01". It would be great if some on could show me how to debug this. (gdb) break _start Note: breakpoint -10 also set at pc 0x4000b0. Breakpoint 8 at 0x4000b0 (gdb) start Function "main" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Temporary breakpoint 9 (main) pending. Starting program: /home/claws/helloworld Program exited with code 01. (gdb) info breakpoints Num Type Disp Enb Address What 8 breakpoint keep y 0x00000000004000b0 <_start> 9 breakpoint del y <PENDING> main I tried running strace. This is its output: execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0 write(0, NULL, 12 <unfinished ... exit status 1> Explain the parameters of write(0, NULL, 12) system call in the output of strace? What exactly is happening? I want to know the reason why exactly its exiting with exitstatus=1? Can some one please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine. EDIT: After reading Paul R's answer. I checked my files claws@claws-desktop:~$ file ./hello.o ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped claws@claws-desktop:~$ file ./hello ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?
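
    As a side note on the syscall-number difference called out above (write = 1 and exit = 60 on x86-64, versus 4 and 1 on i386), here is a minimal C-style sketch that goes through the libc syscall() wrapper instead of hand-written assembly; SYS_write and SYS_exit expand to whatever numbers the target architecture uses, which is exactly what the macros in the .S file are meant to guarantee. Assumes Linux with glibc.

        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void) {
            const char msg[] = "hello world\n";
            syscall(SYS_write, 1, msg, sizeof msg - 1);  /* fd 1 = stdout; SYS_write is 1 on x86-64, 4 on i386 */
            syscall(SYS_exit, 0);                        /* SYS_exit is 60 on x86-64, 1 on i386 */
            return 0;                                    /* not reached */
        }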


  • How does virtual inheritance solve the diamond problem?

    - by cambr
        class A { public: void eat() { cout << "A"; } };
        class B : virtual public A { public: void eat() { cout << "B"; } };
        class C : virtual public A { public: void eat() { cout << "C"; } };
        class D : public B, C { public: void eat() { cout << "D"; } };

        int main() {
            A *a = new D();
            a->eat();
        }

    I understand the diamond problem, and the above piece of code does not have that problem. How exactly does virtual inheritance solve it? What I understand: when I say A *a = new D();, the compiler wants to know whether an object of type D can be assigned to a pointer of type A, but it has two paths it could follow and cannot decide by itself. So how does virtual inheritance resolve the issue (i.e. help the compiler take the decision)?
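
    A minimal sketch of what the virtual keyword changes: with a plain (non-virtual) diamond the most-derived class contains two base subobjects, so the upcast itself is ambiguous and refuses to compile; with virtual inheritance the two middle classes share a single base subobject, so there is only one conversion path for the compiler to take. The type names below are illustrative, not the question's classes.

        #include <iostream>

        struct Base { int value; };

        // Plain diamond: two Base subobjects end up inside BothPlain.
        struct LeftPlain  : Base {};
        struct RightPlain : Base {};
        struct BothPlain  : LeftPlain, RightPlain {};

        // Virtual diamond: LeftV and RightV agree to share one Base subobject.
        struct LeftV  : virtual Base {};
        struct RightV : virtual Base {};
        struct BothV  : LeftV, RightV {};

        int main() {
            // Base* bad = new BothPlain();  // error: 'Base' is an ambiguous base of 'BothPlain'
            BothV d;
            Base* good = &d;                 // fine: exactly one Base subobject, one conversion path
            good->value = 42;
            std::cout << good->value << "\n";
        }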


  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has a unsigned char* buffer that contains the data to write. I have got somewhat working save it disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); // must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); // as *.tga void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } // as *.bmp array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; } Update FT_Bitmap *bitmap = &face->glyph->bitmap; // stride must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat); array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); target->UnlockBits(bitmapData);
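
    One detail worth isolating from the code above: glyph->pitch (the number of bytes FreeType stores per row) is generally not the same as the 4-byte-aligned stride a Windows bitmap expects, so copying "bytes" straight out of glyph->buffer can shear or truncate the image. The TGA path already copies row by row; below is a hedged sketch of doing the same for the buffer behind LockBits, assuming an 8-bit grayscale glyph with a positive pitch (the helper name and parameters are illustrative, not tested code).

        #include <cstring>

        // Copy an 8-bit grayscale glyph row by row, padding each destination row
        // to a 32-bit-aligned stride. 'src' would be glyph->buffer, 'srcPitch'
        // glyph->pitch, and 'dst' the pixels behind bitmapData->Scan0.
        void CopyGlyph(const unsigned char* src, int srcPitch,
                       unsigned char* dst, int width, int height)
        {
            const int depth  = 8;                                  // bits per pixel
            const int stride = ((width * depth + 31) & ~31) >> 3;  // bytes per destination row

            for (int y = 0; y < height; ++y) {
                std::memcpy(dst + y * stride,   // aligned destination row
                            src + y * srcPitch, // packed source row
                            width);             // only 'width' meaningful bytes per row
                // the bytes between width and stride are left as padding
            }
        }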


  • LLVM JIT segfaults. What am I doing wrong?

    - by bugspy.net
    It is probably something basic because I am just starting to learn LLVM.. The following creates a factorial function and tries to git and execute it (I know the generated func is correct because I was able to static compile and execute it). But I get segmentation fault upon execution of the function (in EE-runFunction(TheF, Args)) #include <iostream> #include "llvm/Module.h" #include "llvm/Function.h" #include "llvm/PassManager.h" #include "llvm/CallingConv.h" #include "llvm/Analysis/Verifier.h" #include "llvm/Assembly/PrintModulePass.h" #include "llvm/Support/IRBuilder.h" #include "llvm/Support/raw_ostream.h" #include "llvm/ExecutionEngine/JIT.h" #include "llvm/ExecutionEngine/GenericValue.h" using namespace llvm; Module* makeLLVMModule() { // Module Construction LLVMContext& ctx = getGlobalContext(); Module* mod = new Module("test", ctx); Constant* c = mod->getOrInsertFunction("fact64", /*ret type*/ IntegerType::get(ctx,64), IntegerType::get(ctx,64), /*varargs terminated with null*/ NULL); Function* fact64 = cast<Function>(c); fact64->setCallingConv(CallingConv::C); /* Arg names */ Function::arg_iterator args = fact64->arg_begin(); Value* x = args++; x->setName("x"); /* Body */ BasicBlock* block = BasicBlock::Create(ctx, "entry", fact64); BasicBlock* xLessThan2Block= BasicBlock::Create(ctx, "xlst2_block", fact64); BasicBlock* elseBlock = BasicBlock::Create(ctx, "else_block", fact64); IRBuilder<> builder(block); Value *One = ConstantInt::get(Type::getInt64Ty(ctx), 1); Value *Two = ConstantInt::get(Type::getInt64Ty(ctx), 2); Value* xLessThan2 = builder.CreateICmpULT(x, Two, "tmp"); //builder.CreateCondBr(xLessThan2, xLessThan2Block, cond_false_2); builder.CreateCondBr(xLessThan2, xLessThan2Block, elseBlock); /* Recursion */ builder.SetInsertPoint(elseBlock); Value* xMinus1 = builder.CreateSub(x, One, "tmp"); std::vector<Value*> args1; args1.push_back(xMinus1); Value* recur_1 = builder.CreateCall(fact64, args1.begin(), args1.end(), "tmp"); Value* retVal = builder.CreateBinOp(Instruction::Mul, x, recur_1, "tmp"); builder.CreateRet(retVal); /* x<2 */ builder.SetInsertPoint(xLessThan2Block); builder.CreateRet(One); return mod; } int main(int argc, char**argv) { long long x; if(argc > 1) x = atol(argv[1]); else x = 4; Module* Mod = makeLLVMModule(); verifyModule(*Mod, PrintMessageAction); PassManager PM; PM.add(createPrintModulePass(&outs())); PM.run(*Mod); // Now we going to create JIT ExecutionEngine *EE = EngineBuilder(Mod).create(); // Call the function with argument x: std::vector<GenericValue> Args(1); Args[0].IntVal = APInt(64, x); Function* TheF = cast<Function>(Mod->getFunction("fact64")) ; /* The following CRASHES.. */ GenericValue GV = EE->runFunction(TheF, Args); outs() << "Result: " << GV.IntVal << "\n"; delete Mod; return 0; }
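
    One common cause of a crash at exactly this point, hedged because it depends on the LLVM version in use: if the native target is never initialized, EngineBuilder::create() can hand back a null ExecutionEngine (or quietly fall back to the interpreter), and the later runFunction call faults. Below is a defensive sketch of the engine setup; note that the header for InitializeNativeTarget() has moved between releases (llvm/Target/TargetSelect.h in older ones, llvm/Support/TargetSelect.h later).

        #include <cstdio>
        #include <string>
        #include "llvm/Target/TargetSelect.h"               // or llvm/Support/TargetSelect.h on newer LLVM
        #include "llvm/ExecutionEngine/ExecutionEngine.h"
        #include "llvm/ExecutionEngine/JIT.h"               // forces the JIT to be linked in

        llvm::ExecutionEngine* createJit(llvm::Module* Mod) {
            llvm::InitializeNativeTarget();                 // without this, no native JIT is available

            std::string err;
            llvm::ExecutionEngine* EE = llvm::EngineBuilder(Mod)
                                            .setErrorStr(&err)
                                            .setEngineKind(llvm::EngineKind::JIT)
                                            .create();
            if (!EE) {                                      // a null engine here would explain a crash later
                std::fprintf(stderr, "Failed to construct ExecutionEngine: %s\n", err.c_str());
                return 0;
            }
            return EE;
        }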


  • WCF: Configuring Known Types

    - by jerbersoft
    I want to know as to how to configure known types in WCF. For example, I have a Person class and an Employee class. The Employee class is a sublass of the Person class. Both class are marked with a [DataContract] attribute. I dont want to hardcode the known type of a class, like putting a [ServiceKnownType(typeof(Employee))] at the Person class so that WCF will know that Employee is a subclass of Person. Now, I added to the host's App.config the following XML configuration: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.runtime.serialization> <dataContractSerializer> <declaredTypes> <add type="Person, WCFWithNoLibrary, Version=1.0.0.0,Culture=neutral,PublicKeyToken=null"> <knownType type="Employee, WCFWithNoLibrary, Version=1.0.0.0,Culture=neutral, PublicKeyToken=null" /> </add> </declaredTypes> </dataContractSerializer> </system.runtime.serialization> <system.serviceModel> ....... </system.serviceModel> </configuration> I compiled it, run the host, added a service reference at the client and added some code and run the client. But an error occured: The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://www.herbertsabanal.net:person. The InnerException message was 'Error in line 1 position 247. Element 'http://www.herbertsabanal.net:person' contains data of the 'http://www.herbertsabanal.net/Data:Employee' data contract. The deserializer has no knowledge of any type that maps to this contract. Add the type corresponding to 'Employee' to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding it to the list of known types passed to DataContractSerializer.'. Please see InnerException for more details. Below are the data contracts: [DataContract(Namespace="http://www.herbertsabanal.net/Data", Name="Person")] class Person { string _name; int _age; [DataMember(Name="Name", Order=0)] public string Name { get { return _name; } set { _name = value; } } [DataMember(Name="Age", Order=1)] public int Age { get { return _age; } set { _age = value; } } } [DataContract(Namespace="http://www.herbertsabanal.net/Data", Name="Employee")] class Employee : Person { string _id; [DataMember] public string ID { get { return _id; } set { _id = value; } } } Btw, I didn't use class libraries (WCF class libraries or non-WCF class libraries) for my service. I just plain coded it in the host project. I guess there must be a problem at the config file (please see config file above). Or I must be missing something. Any help would be pretty much appreciated.


  • Haskell Linear Algebra Matrix Library for Arbitrary Element Types

    - by Johannes Weiß
    I'm looking for a Haskell linear algebra library that has the following features:

    - Matrix multiplication
    - Matrix addition
    - Matrix transposition
    - Rank calculation
    - Matrix inversion (a plus)

    and has the following properties:

    - Arbitrary element (scalar) types, in particular element types that are not Storable instances. My elements are an instance of Num, and additionally the multiplicative inverse can be calculated. The elements mathematically form a finite field (GF(2^256)). That should be enough to implement the features mentioned above.
    - Arbitrary matrix sizes. I'll probably need something like 100x100, but the matrix sizes will depend on the user's input, so it should not be limited by anything other than the memory or the computational power available.
    - As fast as possible, but I'm aware that a library for arbitrary elements will probably not perform like a C/Fortran library that does the work (interfaced via FFI), because of the indirection of arbitrary (non-Int, non-Double or similar) types. At least one pointer gets dereferenced when an element is touched.
    - Written in Haskell. This is not a real requirement for me, but since my elements are not Storable instances the library has to be written in Haskell.

    I already tried very hard and evaluated everything that looked promising (most of the libraries on Hackage directly state that they won't work for me). In particular I wrote test code using:

    - hmatrix: assumes Storable elements.
    - Vec: but the documentation states: "Low Dimension: Although the dimensionality is limited only by what GHC will handle, the library is meant for 2, 3 and 4 dimensions. For general linear algebra, check out the excellent hmatrix library and blas bindings."

    I looked into the code and the documentation of many more libraries, but nothing seems to suit my needs :-(.

    Update: Since there seems to be nothing, I started a project on GitHub which aims to develop such a library. The current state is very minimalistic, it is not optimized for speed at all, and only the most basic functions have tests and therefore should work. But should you be interested in using it or helping out developing it: contact me (you'll find my mail address on my web site) or send pull requests.


  • My button background seems stretched.

    - by Kyle Sevenoaks
    Hi, I have a button as made for you to see here. Looks fine, right? Well, on the live site, euroworker.no/shipping, it seems stretched. It renders fine in Chrome, IE and Safari; I thought it might have been a FF issue, but I loaded the fiddle into FF and it seems fine. Quick ref CSS and HTML:

        #fortsett_btn {
            background-image: url(http://euroworker.no/public/upload/fortsett.png?1269434047);
            background-repeat: no-repeat;
            background-position: left;
            background-color: none;
            border: none;
            outline;none;
            visibility: visible;
            position: relative;
            z-index: 2;
            width: 106px;
            height: 25px;
            cursor: pointer;
        }

    And the HTML:

        <button type="submit" class="submit" id="fortsett_btn" title="Fortsett" value="">&nbsp;</button>

    I wonder what's up with it.


  • TableView Cells unresponsive

    - by John Donovan
    I have a TableView and I wish to be able to press several cells one after the other and have messages appear. At the moment the cells are often unresponsive. I found some code for a similar problem that someone was kind enough to post; however, although he claimed it worked 100%, it doesn't work for me. The app won't even build. Here's the code:

        -(UIView*) hitTest:(CGPoint)point withEvent:(UIEvent*)event {
            // check to see if the hit is in this table view
            if ([self pointInside:point withEvent:event]) {
                UITableViewCell* newCell = nil;

                // hit is in this table view, find out
                // which cell it is in (if any)
                for (UITableViewCell* aCell in self.visibleCells) {
                    if ([aCell pointInside:[self convertPoint:point toView:aCell] withEvent:nil]) {
                        newCell = aCell;
                        break;
                    }
                }

                // if it touched a different cell, tell the previous cell to resign
                // this gives it a chance to hide the keyboard or date picker or whatever
                if (newCell != activeCell) {
                    [activeCell resignFirstResponder];
                    self.activeCell = newCell; // may be nil
                }
            }

            // return the super's hitTest result
            return [super hitTest:point withEvent:event];
        }

    With this code I get a warning that my viewController may not respond to pointsInside:withEvent (it's a TableViewController). I also get some errors: "request for member 'visibleCells' in something not a structure or a union", "incompatible type for argument 1 of pointInsideWithEvent", "expression does not have a valid object type" and similar. I must admit I'm not so good at reading other people's code, but I was wondering whether the problems here are obvious, and if so, whether anyone could give me a pointer; it would be greatly appreciated.


  • Does GC guarantee that cleared References are enqueued to ReferenceQueue in topological order?

    - by Dimitris Andreou
    Say there are two objects, A and B, and there is a pointer A.x --> B, and we create, say, WeakReferences to both A and B with an associated ReferenceQueue. Assume that both A and B become unreachable. Intuitively, B cannot be considered unreachable before A is. In such a case, do we somehow get a guarantee that the respective references will be enqueued in the intuitive (topological, when there are no cycles) order in the ReferenceQueue, i.e. ref(A) before ref(B)? I don't know - what if the GC marked a bunch of objects as unreachable and then enqueued them in no particular order? I was reviewing Finalizer.java of Guava and saw this snippet:

        private void cleanUp(Reference<?> reference) throws ShutDown {
            ...
            if (reference == frqReference) {
                /*
                 * The client no longer has a reference to the
                 * FinalizableReferenceQueue. We can stop.
                 */
                throw new ShutDown();
            }

    frqReference is a PhantomReference to the ReferenceQueue in use, so if this is GC'ed, no Finalizable{Weak, Soft, Phantom}References can be alive, since they reference the queue. So they have to be GC'ed before the queue itself can be GC'ed - but still, do we get the guarantee that these references will be enqueued to the ReferenceQueue in the order they get "garbage collected" (as if they were GC'ed one by one)? The code implies that there is some kind of guarantee; otherwise unprocessed references could theoretically remain in the queue. Thanks


  • Do overlays/tooltips work correctly in Emacs for Windows?

    - by Cheeso
    I'm using Flymake on C# code, Emacs v22.2.1 on Windows. The Flymake stuff has been working well for me. For those who don't know, you can read an overview of Flymake, but the quick story is that Flymake repeatedly builds the source file you are currently working on in the background for the purpose of doing syntax checking. It then highlights the compiler warnings and errors in the current buffer. Flymake didn't work for C# initially, but I "monkey-patched" it and it works nicely now. If you edit C# in Emacs, I highly recommend using Flymake. The only problem I have is with the UI. Flymake highlights the errors and warnings nicely, and then inserts "overlays" with the full error or warning text. If I hover the mouse pointer over the highlighted line in code, the overlay pops up. But the overlay (tooltip) is truncated and doesn't display correctly. Flymake seems to be doing the right thing, and the overlay seems to do the right thing; it's the tooltip that is displayed incorrectly. Do overlay tooltips work correctly in Emacs for Windows? Where do I look to fix this?


  • How to mmap the stack for the clone() system call on linux?

    - by Joseph Garvin
    The clone() system call on Linux takes a parameter pointing to the stack for the new created thread to use. The obvious way to do this is to simply malloc some space and pass that, but then you have to be sure you've malloc'd as much stack space as that thread will ever use (hard to predict). I remembered that when using pthreads I didn't have to do this, so I was curious what it did instead. I came across this site which explains, "The best solution, used by the Linux pthreads implementation, is to use mmap to allocate memory, with flags specifying a region of memory which is allocated as it is used. This way, memory is allocated for the stack as it is needed, and a segmentation violation will occur if the system is unable to allocate additional memory." The only context I've ever heard mmap used in is for mapping files into memory, and indeed reading the mmap man page it takes a file descriptor. How can this be used for allocating a stack of dynamic length to give to clone()? Is that site just crazy? ;) In either case, doesn't the kernel need to know how to find a free bunch of memory for a new stack anyway, since that's something it has to do all the time as the user launches new processes? Why does a stack pointer even need to be specified in the first place if the kernel can already figure this out?
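
    The piece that resolves the "but mmap takes a file descriptor" confusion is MAP_ANONYMOUS: it asks for plain zero-filled memory with no file behind it, and the fd argument is simply -1. Below is a hedged, Linux-specific sketch of allocating a stack that way and handing its top to clone(), since the x86 stack grows downward (error handling kept minimal; compiled as C++ with g++, which defines _GNU_SOURCE for the clone declaration).

        #include <cstdio>
        #include <csignal>
        #include <sched.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        static int child_fn(void*) {
            std::puts("child running on an mmap'd stack");
            return 0;
        }

        int main() {
            const size_t stack_size = 1024 * 1024;

            // Anonymous mapping: no file descriptor involved (-1), pages are
            // allocated and zeroed lazily as the child actually touches them.
            void* mem = mmap(nullptr, stack_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
            if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }
            char* stack = static_cast<char*>(mem);

            // The x86 stack grows downward, so clone() gets the high end of the region.
            pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, nullptr);
            if (pid == -1) { std::perror("clone"); return 1; }

            waitpid(pid, nullptr, 0);
            munmap(mem, stack_size);
            return 0;
        }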


  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around and root through the backtrace to try to identify a memory issue. In this case, I'm not sure how to proceed. Firstly, the backtrace only gives 3 frames where I would expect a lot more. For those frames, all the function parameters presented appear to be completely invalid; they are not what I would expect. Some pointer parameters have the following associated with them: "Cannot access memory at address". Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as being clean. umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.
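
    To make the stack-corruption theory concrete, here is a deliberately broken sketch: a function that writes past a local buffer can overwrite its own saved frame pointer and return address, and those of its callers, which is exactly how a debugger ends up showing a short backtrace full of "Cannot access memory at address" (assuming a typical downward-growing stack and no stack protector).

        #include <cstring>

        void victim() {
            char buf[16];
            // Writes 64 bytes into a 16-byte buffer: the overflow runs over the
            // saved frame pointer and return address, so the unwinder later sees
            // garbage instead of the real caller frames.
            std::memset(buf, 0x41, 64);
        }   // "returning" to 0x4141414141414141 usually faults; the core's backtrace is junk

        int main() {
            victim();
            return 0;
        }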


  • Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

    - by Igor
    Hi, I have an issue with clipplanes in my application that I can reproduce in a sample from DirectX SDK (February 2010). I added a clipplane to the HLSLwithoutEffects sample: ... D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f ); ... void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj) { D3DXMATRIXA16 m = view * proj; D3DXMatrixInverse( &m, NULL, &m ); D3DXMatrixTranspose( &m, &m ); D3DXPLANE plane; D3DXPlaneNormalize( &plane, &g_Plane ); D3DXPLANE clipSpacePlane; D3DXPlaneTransform( &clipSpacePlane, &plane, &m ); DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane ); } void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext ) { // Update the camera's position based on user input g_Camera.FrameMove( fElapsedTime ); // Set up the vertex shader constants D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = *g_Camera.GetWorldMatrix(); mView = *g_Camera.GetViewMatrix(); mProj = *g_Camera.GetProjMatrix(); mWorldViewProj = mWorld * mView * mProj; g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj ); g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime ); SetupClipPlane( mView, mProj ); } void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext ) { // If the settings dialog is being shown, then // render it instead of rendering the app's scene if( g_SettingsDlg.IsActive() ) { g_SettingsDlg.OnRender( fElapsedTime ); return; } HRESULT hr; // Clear the render target and the zbuffer V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) ); // Render the scene if( SUCCEEDED( pd3dDevice->BeginScene() ) ) { pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration ); pd3dDevice->SetVertexShader( g_pVertexShader ); pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) ); pd3dDevice->SetIndices( g_pIB ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 ); V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices, 0, g_dwNumIndices / 3 ) ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 ); RenderText(); V( g_HUD.OnRender( fElapsedTime ) ); V( pd3dDevice->EndScene() ); } } When I rotate the camera I have different visual results when using hardware and software vertex processing. In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera. If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText. I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs. Is it a DirectX issue? Or incorrect usage of clipplanes? Thanks Igor


  • Concat wchar_t Unicode strings in C?

    - by Doori Bar
    I'm a beginner, I play with FindFirstFileW() of the winapi - C. The unicoded path is: " \\?\c:\Français\", and I would like to concat "*" to this path of type wchar_t (then I will use it as an arg for FindFirstFileW()). I made two test cases of mine, the first is ansi_string() which seem to work fine, the second is unicode_string() - which I don't quite understand how should I concat the additional "*" char to the unicoded path. I write the strings to a file, because I'm not able to print Unicoded characters to stdout. Note: my goal is to learn, which means I'll appreciate guidance and references to the appropriate resources regards my scenario, I'm very much a beginner and this is my first attempt with Unicode. Thanks, Doori Bar #include <stdio.h> #include <stdlib.h> #include <wchar.h> #include <string.h> #include <errno.h> void *error_malloc(int size); void ansi_string(char **str1, char **str2); void unicode_string(wchar_t **wstr1, wchar_t **wstr2); void unicode_string(wchar_t **wstr1, wchar_t **wstr2) { /* assign wstr1 with the path: \\?\c:\Français\ */ *wstr1 = error_malloc((wcslen(L"\\\\?\\c:\\Français\\")+1) *sizeof(**wstr1)); wcscpy(*wstr1,L"\\\\?\\c:\\Français\\"); /* concat wstr1+"*" , assign wstr2 with: \\?\c:\Français\* */ *wstr2 = error_malloc((wcslen(*wstr1) + 1 + 1) * sizeof(**wstr1)); /* swprintf(*wstr2,"%ls*",*wstr1); */ /* how should I concat wstr1+"*"? */ wcscpy(*wstr2,L"\\\\?\\c:\\Français\\"); } void ansi_string(char **str1, char **str2) { /* assign str1 with the path: c:\English\ */ *str1 = error_malloc(strlen("c:\\English\\") + 1); strcpy(*str1,"c:\\English\\"); /* concat str1+"*" , assign str2 with: c:\English\* */ *str2 = error_malloc(strlen(*str1) + 1 + 1); sprintf(*str2,"%s*",*str1); } void *error_malloc(int size) { void *ptr; int errornumber; if ((ptr = malloc(size)) == NULL) { errornumber = errno; fprintf(stderr,"Error: malloc(): %d; Error Message: %s;\n", errornumber,strerror(errornumber)); exit(1); } return ptr; } int main(void) { FILE *outfile; char *str1; char *str2; wchar_t *wstr1; wchar_t *wstr2; if ((outfile = fopen("out.bin","w")) == NULL) { printf("Error: fopen failed."); return 1; } ansi_string(&str1,&str2); fwrite(str2, sizeof(*str2), strlen(str2), outfile); printf("strlen: %d\n",strlen(str2)); printf("sizeof: %d\n",sizeof(*str2)); free(str1); free(str2); unicode_string(&wstr1,&wstr2); fwrite(wstr2, sizeof(*wstr2), wcslen(wstr2), outfile); printf("wcslen: %d\n",wcslen(wstr2)); printf("sizeof: %d\n",sizeof(*wstr2)); free(wstr1); free(wstr2); fclose(outfile); return 0; }


  • $.noConflict() not working with mootools slide show. Need help !

    - by Shantanu Gupta
    I am working on a site http://tapasya.co.in where i just impemented mootools slideshow. But I noticed that menubar that i was using stopped working, it was supposed to drop horizontaly but it is not being displayed now. I have used jquery for it. Please see the source of the web page. What can be the problem ? Mootools conflicting with javascript or some other problem. If I tries to use $.noConflict() it throws me an error in script Uncaught TypeError: Object function (B,C){if(B&&B.$family&&B.uid){return B}var A=$type(B);return($[A])?$[A](B,C,this.document):null} has no method 'noConflict' I tried the given solution below. But it is not working. <script type="text/javascript" src="<%= ResolveUrl("~/Js/jquery-1.3.2.min.js") %>" ></script> <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/mootools.js") %>"></script> <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/slideshow.js") %>"></script> <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/lightbox.js") %>"></script> <script type="text/javascript"> // <![CDATA[ $.noConflict(); var timeout = 500; var closetimer = 0; var ddmenuitem = 0; function ddmenu_open(){ ddmenu_canceltimer(); ddmenu_close(); ddmenuitem = $(this).find('ul').css('visibility', 'visible'); } function ddmenu_close(){ if(ddmenuitem) ddmenuitem.css('visibility', 'hidden'); } function ddmenu_timer(){ closetimer = window.setTimeout(ddmenu_close, timeout); } function ddmenu_canceltimer(){ if(closetimer){ window.clearTimeout(closetimer); closetimer = null; }} $(document).ready(function(){ $('#ddmenu > li').bind('mouseover', ddmenu_open) $('#ddmenu > li').bind('mouseout', ddmenu_timer) }); document.onclick = ddmenu_close; // ]]> </script> <script type="text/javascript"> //<![CDATA[ window.addEvent('domready', function(){ var data = { '1.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' }, '2.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' }, '3.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' }, '4.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' } }; // Note the use of "linked: true" which tells Slideshow to auto-link all slides to the full-size image. //http://code.google.com/p/slideshow/wiki/Slideshow#Options: var mootoolsSlideshow = new Slideshow('divbanner', data, {loader:true,captions: true, delay: 5000,controller: false, height: 370,linked: false, hu: '<%= ResolveUrl("~/Scripts/SlideShow/Images/") %>', thumbnails: true, width: 1002}); // Here we create the Lightbox instance. // In this case we will use the "close" and "open" callbacks to pause our show while the modal window is visible. var box = new Lightbox({ 'onClose': function(){ this.pause(false); }.bind(mootoolsSlideshow), 'onOpen': function(){ this.pause(true); }.bind(mootoolsSlideshow) }); }); //]]> </script>


  • Passing VB Callback function to C dll - noob is stuck.

    - by WaveyDavey
    Callbacks in VB (from C dll). I need to pass a vb function as a callback to a c function in a dll. I know I need to use addressof for the function but I'm getting more and more confused as to how to do it. Details: The function in the dll that I'm passing the address of a callback to is defined in C as : PaError Pa_OpenStream( PaStream** stream, const PaStreamParameters *inputParameters, const PaStreamParameters *outputParameters, double sampleRate, unsigned long framesPerBuffer, PaStreamFlags streamFlags, PaStreamCallback *streamCallback, void *userData ); where the function is parameter 7, *streamCallback. The type PaStreamCallback is defines thusly: typedef int PaStreamCallback( const void *input, void *output, unsigned long frameCount, const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void *userData ); In my vb project I have: Private Declare Function Pa_OpenStream Lib "portaudio_x86.dll" _ ( ByVal stream As IntPtr _ , ByVal inputParameters As IntPtr _ , ByVal outputParameters As PaStreamParameters _ , ByVal samprate As Double _ , ByVal fpb As Double _ , ByVal paClipoff As Long _ , ByVal patestCallBack As IntPtr _ , ByVal data As IntPtr) As Integer (don't worry if I've mistyped some of the other parameters, I'll get to them later! Let's concentrate on the callback for now.) In module1.vb I have defined the callback function: Function MyCallback( ByVal inp As Byte, _ ByVal outp As Byte, _ ByVal framecount As Long, _ ByVal pastreamcallbacktimeinfo As Byte, _ ByVal pastreamcallbackflags As Byte, _ ByVal userdata As Byte) As Integer ' do clever things here End Function The external function in the dll is called with err = Pa_OpenStream( ptr, _ nulthing, _ outputParameters, _ SAMPLE_RATE, _ FRAMES_PER_BUFFER, _ clipoff, _ AddressOf MyCallback, _ dataptr) This is broken in the declaration of the external function - it doesn't like the type IntPtr as a function pointer for AddressOf. Can anyone show me how to implement passing this callback function please ? Many thanks David


  • Python - help on custom wx.Python (pyDev) class

    - by Wallter
    I have been hitting a dead end with this program. I am trying to build a class that will let me control the BIP's of a button when it is in use. so far this is what i have (see following.) It keeps running this weird error TypeError: 'module' object is not callable - I, coming from C++ and C# (for some reason the #include... is so much easier) , have no idea what that means, Google is of no help so... I know I need some real help with sintax and such - anything woudl be helpful. Note: The base code found here was used to create a skeleton for this 'custom button class' Custom Button import wx from wxPython.wx import * class Custom_Button(wx.PyControl): # The BMP's # AM I DOING THIS RIGHT? - I am trying to get empty 'global' # variables within the class Mouse_over_bmp = None #wxEmptyBitmap(1,1,1) # When the mouse is over Norm_bmp = None #wxEmptyBitmap(1,1,1) # The normal BMP Push_bmp = None #wxEmptyBitmap(1,1,1) # The down BMP Pos_bmp = wx.Point(0,0) # The posisition of the button def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, pos, size, text="", id=-1, **kwargs): wx.PyControl.__init__(self,parent, id, **kwargs) # The conversions, hereafter, were to solve another but. I don't know if it is # necessary to do this since the source being given to the class (in this case) # is a BMP - is there a better way to prevent an error that i have not # stumbled accost? # Set the BMP's to the ones given in the constructor self.Mouse_over_bmp = wx.Bitmap(wx.Image(MOUSE_OVER_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Norm_bmp = wx.Bitmap(wx.Image(NORM_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Push_bmp = wx.Bitmap(wx.Image(PUSH_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Pos_bmp = self.pos self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown) self.Bind(wx.EVT_LEFT_UP, self._onMouseUp) self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave) self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter) self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground) self.Bind(wx.EVT_PAINT,self._onPaint) self._mouseIn = self._mouseDown = False def _onMouseEnter(self, event): self._mouseIn = True def _onMouseLeave(self, event): self._mouseIn = False def _onMouseDown(self, event): self._mouseDown = True def _onMouseUp(self, event): self._mouseDown = False self.sendButtonEvent() def sendButtonEvent(self): event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId()) event.SetInt(0) event.SetEventObject(self) self.GetEventHandler().ProcessEvent(event) def _onEraseBackground(self,event): # reduce flicker pass def _onPaint(self, event): dc = wx.BufferedPaintDC(self) dc.SetFont(self.GetFont()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.Clear() dc.DrawBitmap(self.Norm_bmp) # draw whatever you want to draw # draw glossy bitmaps e.g. 
dc.DrawBitmap if self._mouseIn: # If the Mouse is over the button dc.DrawBitmap(self, self.Mouse_over_bmp, self.Pos_bmp, useMask=False) if self._mouseDown: # If the Mouse clicks the button dc.DrawBitmap(self, self.Push_bmp, self.Pos_bmp, useMask=False) Main.py import wx import Custom_Button from wxPython.wx import * ID_ABOUT = 101 ID_EXIT = 102 class MyFrame(wx.Frame): def __init__(self, parent, ID, title): wxFrame.__init__(self, parent, ID, title, wxDefaultPosition, wxSize(400, 400)) self.CreateStatusBar() self.SetStatusText("Program testing custom button overlays") menu = wxMenu() menu.Append(ID_ABOUT, "&About", "More information about this program") menu.AppendSeparator() menu.Append(ID_EXIT, "E&xit", "Terminate the program") menuBar = wxMenuBar() menuBar.Append(menu, "&File"); self.SetMenuBar(menuBar) self.Button1 = Custom_Button(self, parent, -1, "D:/Documents/Python/Normal.bmp", "D:/Documents/Python/Clicked.bmp", "D:/Documents/Python/Over.bmp", wx.Point(200,200), wx.Size(300,100)) EVT_MENU(self, ID_ABOUT, self.OnAbout) EVT_MENU(self, ID_EXIT, self.TimeToQuit) def OnAbout(self, event): dlg = wxMessageDialog(self, "Testing the functions of custom " "buttons using pyDev and wxPython", "About", wxOK | wxICON_INFORMATION) dlg.ShowModal() dlg.Destroy() def TimeToQuit(self, event): self.Close(true) class MyApp(wx.App): def OnInit(self): frame = MyFrame(NULL, -1, "wxPython | Buttons") frame.Show(true) self.SetTopWindow(frame) return true app = MyApp(0) app.MainLoop() Errors (and traceback) /home/wallter/python/Custom Button overlay/src/Custom_Button.py:8: DeprecationWarning: The wxPython compatibility package is no longer automatically generated or actively maintained. Please switch to the wx package as soon as possible. I have never been able to get this to go away whenever using wxPython any help? from wxPython.wx import * Traceback (most recent call last): File "/home/wallter/python/Custom Button overlay/src/Main.py", line 57, in <module> app = MyApp(0) File "/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7978, in __init__ self._BootstrapApp() File "/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7552, in _BootstrapApp return _core_.PyApp__BootstrapApp(*args, **kwargs) File "/home/wallter/python/Custom Button overlay/src/Main.py", line 52, in OnInit frame = MyFrame(NULL, -1, "wxPython | Buttons") File "/home/wallter/python/Custom Button overlay/src/Main.py", line 32, in __init__ wx.Point(200,200), wx.Size(300,100)) TypeError: 'module' object is not callable I have tried removing the "wx.Point(200,200), wx.Size(300,100))" just to have the error move up to the line above. Have I declared it right? help?

    Read the article

  • understanding valgrind output

    - by sbsp
    Hi, I made a post earlier asking about checking for memory leaks etc. I did say I wasn't too familiar with the terminal in Linux, but someone told me it was easy with Valgrind. I have managed to get it running but am not too sure what the output means. Glancing over it, all looks good to me, but I would like to run it past you experienced folk for confirmation if possible. The output is as follows:

        ^C==2420==
        ==2420== HEAP SUMMARY:
        ==2420==     in use at exit: 2,240 bytes in 81 blocks
        ==2420==   total heap usage: 82 allocs, 1 frees, 2,592 bytes allocated
        ==2420==
        ==2420== LEAK SUMMARY:
        ==2420==    definitely lost: 0 bytes in 0 blocks
        ==2420==    indirectly lost: 0 bytes in 0 blocks
        ==2420==      possibly lost: 0 bytes in 0 blocks
        ==2420==    still reachable: 2,240 bytes in 81 blocks
        ==2420==         suppressed: 0 bytes in 0 blocks
        ==2420== Reachable blocks (those to which a pointer was found) are not shown.
        ==2420== To see them, rerun with: --leak-check=full --show-reachable=yes
        ==2420==
        ==2420== For counts of detected and suppressed errors, rerun with: -v
        ==2420== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 13 from 8)

    Is all good here? The only thing concerning me is the "still reachable" part. Is that OK? Thanks everyone.
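
    On the "still reachable" category: it means that at exit there was still a live pointer to the block, so nothing was actually lost; it is very often one-time allocations made by libraries, or memory that the ^C interrupt prevented from being freed. A tiny sketch of the difference between the two categories, as Valgrind would classify them:

        #include <cstdlib>

        static char* g_buffer;          // a live pointer at exit keeps its block "still reachable"

        int main() {
            g_buffer = static_cast<char*>(std::malloc(2240));   // never freed, but still pointed to
            char* lost = static_cast<char*>(std::malloc(100));  // overwriting the only pointer...
            lost = nullptr;                                     // ...makes this block "definitely lost"
            (void)lost;
            return 0;
        }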


  • Replacing/Extending Visual Studio's Generate Stub in Visual Studio 2010

    - by devoured elysium
    When we write the name of a method that doesn't exist, Visual Studio 2010 asks us if we'd like to generate a method stub with that name. What I'd like to know is whether it is possible to replace that stub-generating command with one of my own. I have never done any kind of extensibility programming for Visual Studio, so I have a couple of questions: How hard is it? Is it something I can learn in a couple of nights, or is it something that'll make me "lose" a lot of time? It seems to me that there isn't a lot of support for that kind of programming, as generally people are not that interested in developing solutions that extend the Visual Studio IDE; I searched on SO and it doesn't appear to have many threads about extending Visual Studio. I don't know how the generate-method-stub feature works in Visual Studio, but I just want to turn it into something a bit more flexible and useful. Has anyone dealt with this kind of thing before who can give me a pointer on where to start? I know of the MS VSX site, but it has a lot of resources and can be overwhelming for someone as new to the subject as I am. What technology will I need to use? T4? Maybe I'll need to know a lot about the code, like Visual Studio does, so I can know other methods' type arguments, names, etc. Is that what T4 is for? Thanks


  • Foreign Key constraint violation when persisting a many-to-one class

    - by tieTYT
    I'm getting an error when trying to persist a many to one entity: Internal Exception: org.postgresql.util.PSQLException: ERROR: insert or update on table "concept" violates foreign key constraint "concept_concept_class_fk" Detail: Key (concept_class_id)=(Concept) is not present in table "concept_class". Error Code: 0 Call: INSERT INTO concept (concept_key, description, label, code, concept_class_id) VALUES (?, ?, ?, ?, ?) bind = [27, description_1, label_1, code_1, Concept] Query: InsertObjectQuery(com.mirth.results.entities.Concept[conceptKey=27]) at com.sun.ejb.containers.BaseContainer.checkExceptionClientTx(BaseContainer.java:3728) at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3576) at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1354) ... 101 more Here is the method that tries to persist it. I've put a comment where the line is: @Override public void loadConcept(String metaDataFilePath, String dataFilePath) throws Exception { try { ConceptClassMetaData conceptClassMetaData = (ConceptClassMetaData) ModelSerializer.getInstance().fromXML(FileUtils.readFileToString(new File(metaDataFilePath), "UTF8")); em.executeNativeQuery(conceptClassMetaData.getCreateStatement()); ConceptClassRow conceptClassRow = conceptClassMetaData.getConceptClassRow(); ConceptClass conceptClass = em.findByPrimaryKey(ConceptClass.class, conceptClassRow.getId()); if (conceptClass == null) { conceptClass = new ConceptClass(conceptClassRow.getId()); } conceptClass.setLabel(conceptClassRow.getLabel()); conceptClass.setOid(conceptClassRow.getOid()); conceptClass.setDescription(conceptClassRow.getDescription()); conceptClass = em.merge(conceptClass); DataParser dataParser = new DataParser(conceptClassMetaData, dataFilePath); for (ConceptModel conceptModel : dataParser.getConceptRows()) { ConceptFilter<Concept> filter = new ConceptFilter<Concept>(Concept.class); filter.setCode(conceptModel.getCode()); filter.setConceptClass(conceptClass.getLabel()); List<Concept> concepts = em.findAllByFilter(filter); Concept concept = new Concept(); if (concepts != null && !concepts.isEmpty()) { concept = concepts.get(0); } concept.setCode(conceptModel.getCode()); concept.setDescription(conceptModel.getDescription()); concept.setLabel(conceptModel.getLabel()); concept.setConceptClass(conceptClass); concept = em.merge(concept); //THIS LINE CAUSES THE ERROR! } } catch (Exception e) { e.printStackTrace(); throw e; } } ... Here are how the two entities are defined: @Entity @Table(name = "concept") @Inheritance(strategy=InheritanceType.JOINED) @DiscriminatorColumn(name="concept_class_id", discriminatorType=DiscriminatorType.STRING) public class Concept extends KanaEntity { @Id @Basic(optional = false) @Column(name = "concept_key") protected Integer conceptKey; @Basic(optional = false) @Column(name = "code") private String code; @Basic(optional = false) @Column(name = "label") private String label; @Column(name = "description") private String description; @JoinColumn(name = "concept_class_id", referencedColumnName = "id") @ManyToOne private ConceptClass conceptClass; ... @Entity @Table(name = "concept_class") public class ConceptClass extends KanaEntity { @Id @Basic(optional = false) @Column(name = "id") private String id; @Basic(optional = false) @Column(name = "label") private String label; @Column(name = "oid") private String oid; @Column(name = "description") private String description; .... 
And also, what's important is the sql that's being generated: INSERT INTO concept_class (id, oid, description, label) VALUES (?, ?, ?, ?) bind = [LOINC_TEST, 2.16.212.31.231.54, This is a meta data file for LOINC_TEST, loinc_test] INSERT INTO concept (concept_key, description, label, code, concept_class_id) VALUES (?, ?, ?, ?, ?) bind = [27, description_1, label_1, code_1, Concept] The reason this is failing is obvious: It's inserting the word Concept for the concept_class_id. It should be inserting the word LOINC_TEST. I can't figure out why it's using this word. I've used the debugger to look at the Concept and the ConceptClass instance and neither of them contain this word. I'm using eclipselink. Does anyone know why this is happening?


  • Where's the Win32 resource for the mouse cursor for dragging splitters?

    - by Luther Baker
    I am building a custom Win32 control/widget and would like to change the cursor to a horizontal "splitter" symbol when hovering over a particular vertical line in the control, i.e. I want to drag this vertical line (splitter bar) left and right (west and east). Of the system cursors (OCR_*), the only cursor that makes sense is OCR_SIZEWE. Unfortunately, that is the big, awkward cursor the system uses when resizing a window. Instead, I am looking for the cursor that is about 20 pixels tall and around 3 or 4 pixels wide, with two small arrows pointing left and right. I can easily draw this and include it as a resource in my application, but the cursor itself is so prevalent that I wanted to be sure I wasn't missing something. For example, when you use the COM drag-and-drop mechanism (CLSID_DragDropHelper, IDropTarget, etc.) you implicitly have access to the "drag" icon (the little box under the pointer). I didn't see an explicit OCR_* constant for this guy, so likewise, if I can't find this splitter cursor outright, I am wondering if it is part of a COM object or something else in the Win32 lib.
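
    For what it's worth, there does not appear to be a stock OCR_*/IDC_* cursor with that small splitter shape; controls that show it generally carry their own .cur resource. Below is a hedged sketch of wiring such a private resource in, where IDC_SPLITTER_WE and IsOverSplitter are made-up names standing in for your own resource ID and hit test.

        #include <windows.h>

        // Hypothetical names: IDC_SPLITTER_WE is your own cursor resource ID,
        // IsOverSplitter() is whatever hit test the control already performs.
        #define IDC_SPLITTER_WE 201
        extern bool IsOverSplitter(HWND hwnd);

        LRESULT CALLBACK SplitterWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg) {
            case WM_SETCURSOR:
                if (LOWORD(lParam) == HTCLIENT && IsOverSplitter(hwnd)) {
                    static HCURSOR s_split = (HCURSOR)LoadImage(
                        GetModuleHandle(NULL), MAKEINTRESOURCE(IDC_SPLITTER_WE),
                        IMAGE_CURSOR, 0, 0, LR_DEFAULTSIZE);
                    SetCursor(s_split);
                    return TRUE;           // report that the cursor has been set
                }
                break;
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }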


  • OSX: How to get a volume name (or bsd name) from a IOUSBDeviceInterface or location id

    - by LG
    Hi, I'm trying to write an app that associates a particular USB string descriptor (of a USB mass storage device) with its volume or bsd name. So the code goes through all the connected USB devices, gets the string descriptors and extracts information from one of them. I would like to get the volume name of those USB devices. I can't find the right API to do that. I have tried to do that: DASessionRef session = DASessionCreate(kCFAllocatorDefault); DADiskRef disk_ref = DADiskCreateFromIOMedia(kCFAllocatorDefault, session, usb_device_ref); const char* name = DADiskGetBSDName(disk_ref); But the DADiskCreateFromIOMedia function returned nil, I assume the usb_device_ref that I passed was not compatible with the io_service_t that the function is expecting. So how can I get the volume name of a USB device? Could I use the location id to do that? Thanks for reading. -L FOO_Result result = FOO_SUCCESS; mach_port_t master_port; kern_return_t k_result; io_iterator_t iterator = 0; io_service_t usb_device_ref; CFMutableDictionaryRef matching_dictionary = NULL; // first create a master_port if (FOO_FAILED(k_result = IOMasterPort(MACH_PORT_NULL, &master_port))) { fprintf(stderr, "could not create master port, err = %d\n", k_result); goto cleanup; } if ((matching_dictionary = IOServiceMatching(kIOUSBDeviceClassName)) == NULL) { fprintf(stderr, "could not create matching dictionary, err = %d\n", k_result); goto cleanup; } if (FOO_FAILED(k_result = IOServiceGetMatchingServices(master_port, matching_dictionary, &iterator))) { fprintf(stderr, "could not find any matching services, err = %d\n", k_result); goto cleanup; } while (usb_device_ref = IOIteratorNext(iterator)) { IOReturn err; IOCFPlugInInterface **iodev; // requires <IOKit/IOCFPlugIn.h> IOUSBDeviceInterface **dev; SInt32 score; err = IOCreatePlugInInterfaceForService(usb_device_ref, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID, &iodev, &score); if (err || !iodev) { printf("dealWithDevice: unable to create plugin. ret = %08x, iodev = %p\n", err, iodev); return; } err = (*iodev)->QueryInterface(iodev, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID), (LPVOID*)&dev); (*iodev)->Release(iodev); // done with this FOO_String string_value; UInt8 string_index = 0x1; FOO_Result result = FOO_SUCCESS; CFStringRef device_name_as_cf_string; do { if (FOO_SUCCEEDED(result = FOO_GetStringDescriptor(dev, string_index, 0, string_value))) { printf("String at index %i is %s\n", string_index, string_value.GetChars()); // extract the command code if it is the FOO string if (string_value.CompareN("FOO:", 4) == 0) { FOO_Byte code = 0; FOO_HexToByte(string_value.GetChars() + 4, code); // Get other relevant information from the device io_name_t device_name; UInt32 location_id = 0; // Get the USB device's name. err = IORegistryEntryGetName(usb_device_ref, device_name); device_name_as_cf_string = CFStringCreateWithCString(kCFAllocatorDefault, device_name, kCFStringEncodingASCII); err = (*dev)->GetLocationID(dev, &location_id); // TODO: get volume or BSD name // add the device to the list break; } } string_index++; } while (FOO_SUCCEEDED(result)); err = (*dev)->USBDeviceClose(dev); if (err) { printf("dealWithDevice: error closing device - %08x\n", err); (*dev)->Release(dev); return; } err = (*dev)->Release(dev); if (err) { printf("dealWithDevice: error releasing device - %08x\n", err); return; } IOObjectRelease(usb_device_ref); // no longer need this reference }

