Search Results

Search found 25362 results on 1015 pages for 'compiling from source'.

  • C++ stringstream, string, and char* conversion confusion

    - by Graphics Noob
    My question boils down to: where does the string returned by stringstream.str().c_str() live in memory, and why can't it be assigned to a const char*? This code example will explain it better than I can:

        #include <string>
        #include <sstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            stringstream ss("this is a string\n");
            string str(ss.str());
            const char* cstr1 = str.c_str();
            const char* cstr2 = ss.str().c_str();
            cout << cstr1   // Prints correctly
                 << cstr2;  // ERROR, prints out garbage
            system("PAUSE");
            return 0;
        }

    The assumption that stringstream.str().c_str() could be assigned to a const char* led to a bug that took me a while to track down. For bonus points, can anyone explain why replacing the cout statement with

        cout << cstr1             // Prints correctly
             << ss.str().c_str()  // Prints correctly
             << cstr2;            // Prints correctly (???)

    prints the strings correctly? I'm compiling in Visual Studio 2008.
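
    A hedged explanation of the likely cause: ss.str() returns a temporary std::string, so the pointer obtained from that temporary's c_str() dangles as soon as the full expression ends, while str.c_str() survives because str is a named object. A minimal sketch of the usual fix:

        #include <iostream>
        #include <sstream>
        #include <string>

        int main()
        {
            std::stringstream ss("this is a string\n");
            // Copy the temporary into a named string first; c_str() then
            // points into storage that outlives the statement.
            const std::string s = ss.str();
            const char* cstr = s.c_str();  // valid while s is alive and unmodified
            std::cout << cstr;
            return 0;
        }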

  • MPI hypercube broadcast error

    - by luvieere
    I've got a one to all broadcast method for a hypercube, written using MPI:

        one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype)
        {
          MPI_Status status;
          int mask, partner;
          int mask2 = ((1 << n) - 1) ^ (1 << n-1);

          for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1)
          {
            if (rank & mask2 == 0)
            {
              partner = rank ^ mask;
              if (rank & mask)
                MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status);
              else
                MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD);
            }
          }
        }

    Upon calling it from main:

        int main( int argc, char **argv )
        {
          int n, rank;
          MPI_Init (&argc, &argv);
          MPI_Comm_size (MPI_COMM_WORLD, &n);
          MPI_Comm_rank (MPI_COMM_WORLD, &rank);

          one2allbcast(floor(log(n) / log (2)), rank, "message", sizeof(message), MPI_CHAR);

          MPI_Finalize();
          return 0;
        }

    compiling and executing on 8 nodes, I receive a series of errors reporting that processes 1, 3, 5, 7 were stopped before the point of receiving any data:

        MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD)
        Rank (1, MPI_COMM_WORLD): Call stack within LAM:
        Rank (1, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (1, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD)
        Rank (3, MPI_COMM_WORLD): Call stack within LAM:
        Rank (3, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (3, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD)
        Rank (5, MPI_COMM_WORLD): Call stack within LAM:
        Rank (5, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (5, MPI_COMM_WORLD):  - main()
        MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD)
        Rank (7, MPI_COMM_WORLD): Call stack within LAM:
        Rank (7, MPI_COMM_WORLD):  - MPI_Recv()
        Rank (7, MPI_COMM_WORLD):  - main()

    Where do I go wrong?
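
    One detail worth flagging (an observation about C operator precedence, not a verified diagnosis of the LAM failure): == binds tighter than &, so the guard rank & mask2 == 0 parses as rank & (mask2 == 0), which is almost never the intended test. A minimal illustration:

        #include <stdio.h>

        int main(void)
        {
            int rank = 4, mask2 = 3;
            /* Parses as rank & (mask2 == 0), i.e. 4 & 0, which is always 0. */
            printf("%d\n", rank & mask2 == 0);    /* prints 0 */
            /* The intended test, with explicit parentheses: */
            printf("%d\n", (rank & mask2) == 0);  /* prints 1: no bits in common */
            return 0;
        }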

  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module, containing a structure of size 34, using the standard command:

        make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules

    The sizeof(some_structure) is coming out as 36 instead of 34, i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed:

        make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules
        make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE'
        test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \
        echo; \
        echo "  ERROR: Kernel configuration is invalid."; \
        echo "         include/linux/autoconf.h or include/config/auto.conf are missing."; \
        echo "         Run 'make oldconfig && make prepare' on kernel src to fix it."; \
        echo; \
        /bin/false)
        mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/*
        make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src
        gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
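
    For what it's worth, the usual way to forbid padding with gcc is a packed attribute; this is a general sketch (the original structure isn't shown in the post, so the fields here are hypothetical):

        #include <stdio.h>

        struct plain {                  /* compiler may insert padding */
            unsigned short a;           /* 2 bytes */
            unsigned int   b;           /* 4 bytes, typically aligned to 4 */
        };

        struct no_pad {
            unsigned short a;
            unsigned int   b;
        } __attribute__((packed));      /* gcc extension: no padding between members */

        int main(void)
        {
            printf("%zu %zu\n", sizeof(struct plain), sizeof(struct no_pad));
            /* commonly prints "8 6" on x86 */
            return 0;
        }

    Note that packed layouts can force unaligned member accesses, which some architectures penalize or fault on, so this is a trade-off rather than a free fix.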

  • Blackberry Apps - Importing a code-signed jar into an application project

    - by Eric Sniff
    Hi everyone, I'm working on a library project that BlackBerry Java developers can import into their projects. It uses protected RIM APIs which require that it be code-signed, which I have done. But I can't get my jar imported and working with a simple HelloWorld app. I'm using the BlackBerry JDE Eclipse plug-in. Here is what I have tried:

    First: building myLibProject with BlackBerry_JDE_PluginFull_1.0.0.67 into a JAR, signing it, and importing it into a BlackBerry_JDE_PluginFull_1.0.0.67 application project -- I get a class-not-found error while compiling the application project.

    Next: I imported myLibProject into a BlackBerry_JDE_PluginFull_1.1.1.* library project, built it into a jar, signed it, and imported it into a BlackBerry_JDE_PluginFull_1.1.1.* application project. It built this time, but while loading up the simulator to test it I get the following error before the simulator can load up, and it crashes the simulator:

        Access violation reading from 0xFFFFFFC

    Other stuff I've tried: I also tried importing the jar into its own project and having the HelloWorld app project reference that project. If I include the src in my application project it works fine... but I'm looking for a way to deploy this as compiled code. Any ideas? Or help?

  • Good practices: How to reuse .csproj and .sln files to create your MSBuild script for CI?

    - by Gishu
    What is the painless/maintainable way of using MSBuild as your build runner? (Forgive the length of this post.)

    I was just trying my hand at TeamCity (which I must say is awesome w.r.t. learning curve and out-of-the-box functionality). I got an SVN + MSBuild + NUnit + NCover combo working. I was curious as to how moderate-to-large projects are using MSBuild - I've just pointed MSBuild to my main .sln file. I spent some time with NAnt some years ago and I found MSBuild to be a bit obtuse. The docs are too dense/detailed for a beginner.

    MSBuild seems to have some special magic to handle .sln files; I tried my hand at writing a custom build script by hand, linking/including .csproj files in order (such that I could have custom pre/post build tasks). However it threw up (citing duplicate target imports). I'm assuming most devs wouldn't want to go messing around with MSBuild proj files - they'd be making changes to the .csproj and .sln files. Is there some tool / MSBuild task that reverse-engineers a new script from an existing .sln + its .csproj files that I'm unaware of? If I'm using MSBuild just to do the compile step, I might as well use NAnt with an exec task to MSBuild for compiling the solution?

    I've this nagging feeling that I'm missing something obvious. My end goal here is to have an MSBuild build script which:
    - builds the solution (acts as a build script instead of a compile step)
    - allows custom pre/post tasks (e.g. call NUnit to run an NUnit project, which seems to be not yet supported via the TeamCity web UI)
    - stays out of the way of the developers making changes to the solution
    - has no redundancy; shouldn't require devs to make the same change in two places
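
    One common pattern worth sketching (an illustration under assumptions - Main.sln, build.proj, and the target names here are invented, not taken from this post): a thin wrapper project that delegates compilation to the untouched solution via the MSBuild task and hangs pre/post targets around it.

        <!-- build.proj: CI entry point; devs keep editing Main.sln as usual -->
        <Project DefaultTargets="CI" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

          <Target Name="PreBuild">
            <Message Text="version stamping, clean, etc. goes here" />
          </Target>

          <Target Name="Compile" DependsOnTargets="PreBuild">
            <!-- Reuses the solution, so the project list is never duplicated -->
            <MSBuild Projects="Main.sln" Properties="Configuration=Release" />
          </Target>

          <Target Name="Test" DependsOnTargets="Compile">
            <Exec Command="nunit-console.exe Tests\bin\Release\Tests.dll" />
          </Target>

          <Target Name="CI" DependsOnTargets="Test" />
        </Project>

    Because the wrapper only references the .sln, developers never touch it for day-to-day changes and nothing is declared twice.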

  • How does symbol binding work for shared libraries in Linux?

    - by bbazso
    When compiling a C++ program with g++ -O0, I noticed that my binary does not contain the symbol for the empty string (basic_string):

        _S_empty_rep_storage

    When I compile this same program with -O2, I notice that the aforementioned symbol is indeed contained within the binary (using nm on the bin):

        00000000006029a0 V _ZNSs4_Rep20_S_empty_rep_storageE@@GLIBCXX_3.4

    My application uses several .so (dynamic libraries), and when my application loads I notice that several of these .so files bind as follows (I set LD_DEBUG=all and ran my program):

        28596: binding file /home/bbazso/usr/local/lib/mydynamiclib.so [0] to /usr/lib64/libstdc++.so.6 [0]: normal symbol `_ZNSs4_Rep20_S_empty_rep_storageE' [GLIBCXX_3.4]
        28596: binding file /home/bbazso/usr/local/lib/mydynamiclib.so [0] to /home/bbazso/workspace/mytestapplication [0]: normal symbol `_ZNSs4_Rep20_S_empty_rep_storageE' [GLIBCXX_3.4]
        28596: binding file /home/bbazso/workspace/mytestapplication [0] to /usr/lib64/libstdc++.so.6 [0]: normal symbol `_ZNSs4_Rep20_S_empty_rep_storageE' [GLIBCXX_3.4]

    But I also noticed that one of my .so files only binds as follows:

        28087: binding file /home/bbazso/usr/local/lib/anotherdynamiclib.so [0] to /usr/lib64/libstdc++.so.6 [0]: normal symbol `_ZNSs4_Rep20_S_empty_rep_storageE' [GLIBCXX_3.4]

    but never binds to the binary (mytestapplication) as shown above for mydynamiclib.so. So I was wondering what this actually means. Does it mean that anotherdynamiclib.so will use a different symbol for the empty string than the rest of the application? I guess what I'm really asking is how symbol binding works in the context of the example above. Thanks!

  • GCC emits extra code for boost::shared_ptr dereference

    - by Checkers
    I have the following code:

        #include <boost/shared_ptr.hpp>

        struct Foo
        {
            int a;
        };

        static int A;

        void func_shared(const boost::shared_ptr<Foo> &foo) {
            A = foo->a;
        }

        void func_raw(Foo * const foo) {
            A = foo->a;
        }

    I thought the compiler would create identical code, but for the shared_ptr version an extra, seemingly redundant instruction is emitted:

        Disassembly of section .text:

        00000000 <func_raw(Foo*)>:
           0: 55                push   ebp
           1: 89 e5             mov    ebp,esp
           3: 8b 45 08          mov    eax,DWORD PTR [ebp+8]
           6: 5d                pop    ebp
           7: 8b 00             mov    eax,DWORD PTR [eax]
           9: a3 00 00 00 00    mov    ds:0x0,eax
           e: c3                ret
           f: 90                nop

        00000010 <func_shared(boost::shared_ptr<Foo> const&)>:
          10: 55                push   ebp
          11: 89 e5             mov    ebp,esp
          13: 8b 45 08          mov    eax,DWORD PTR [ebp+8]
          16: 5d                pop    ebp
          17: 8b 00             mov    eax,DWORD PTR [eax]
          19: 8b 00             mov    eax,DWORD PTR [eax]
          1b: a3 00 00 00 00    mov    ds:0x0,eax
          20: c3                ret

    I'm just curious: is this necessary, or is it just an optimizer shortcoming? Compiling with g++ 4.1.2, -O3 -DNDEBUG.
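
    A plausible reading of the extra load, reasoning from the signatures rather than from GCC internals: func_raw receives the Foo* itself, while func_shared receives a reference to the shared_ptr object, i.e. one more level of indirection that must be peeled off to reach the stored pointer. A structural sketch (shared_ptr_like is a simplified stand-in, not Boost's actual definition):

        #include <iostream>

        struct Foo { int a; };

        // Layout sketch: a shared_ptr holds (at least) the raw pointee pointer.
        struct shared_ptr_like { Foo* px; /* reference-count pointer omitted */ };

        int raw(Foo* f)                     { return f->a; }     // arg -> a
        int smart(const shared_ptr_like& p) { return p.px->a; }  // arg -> px -> a: one extra load

        int main() {
            Foo f = { 42 };
            shared_ptr_like p = { &f };
            std::cout << raw(&f) << ' ' << smart(p) << '\n';
            return 0;
        }

    On that reading, the extra mov looks unavoidable given the calling convention, rather than an optimizer shortcoming.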

  • Help with SDL_mixer (no sound)

    - by Kaizoku
    Hello, I have this strange problem with SDL_mixer, it doesn't want to play music. It doesn't throw any error, it just skips it. Any advice? I am compiling on linux with libvorbis.

    audio.h:

        #ifndef AUDIO_H
        #define AUDIO_H

        #include <string>
        #include <SDL/SDL_mixer.h>

        class Audio
        {
        private:
            Mix_Music *music;
        public:
            Audio();
            virtual ~Audio();
        public:
            void setMusic(std::string path);
            void playMusic();
        };

        #endif /* AUDIO_H */

    audio.cpp:

        #include "Audio.h"
        #include <stdexcept>

        Audio::Audio()
        {
            if (0 == Mix_Init(MIX_INIT_OGG))
                throw std::runtime_error(Mix_GetError());
            if (-1 == Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 4096))
                throw std::runtime_error(Mix_GetError());
        }

        Audio::~Audio()
        {
            Mix_FreeMusic(music);
            Mix_Quit();
        }

        void Audio::setMusic(std::string path)
        {
            music = Mix_LoadMUS(path.c_str());
            if (NULL == music)
                throw std::runtime_error(Mix_GetError());
        }

        void Audio::playMusic()
        {
            if (NULL != music)
            {
                if (-1 == Mix_PlayMusic(music, -1))
                    throw std::runtime_error(Mix_GetError());
            }
        }
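
    One non-obvious property worth checking (a hedged guess, since the calling code isn't shown in the post): Mix_PlayMusic only starts playback on the audio thread, so if the program returns right after calling it, nothing audible ever happens. A minimal standalone check, assuming a local test file named music.ogg:

        #include <SDL/SDL.h>
        #include <SDL/SDL_mixer.h>

        int main(int argc, char **argv)
        {
            SDL_Init(SDL_INIT_AUDIO);
            Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 4096);

            Mix_Music *m = Mix_LoadMUS("music.ogg");   // hypothetical test file
            if (m == NULL)
                return 1;                              // inspect Mix_GetError() here

            Mix_PlayMusic(m, -1);
            SDL_Delay(10000);     // keep the process alive so playback is audible

            Mix_FreeMusic(m);
            Mix_CloseAudio();
            SDL_Quit();
            return 0;
        }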

  • valgrind complains doing a very simple strtok in C

    - by monkeyking
    Hi, I'm trying to tokenize a string by loading an entire file into a char[] using fread. For some strange reason it is not always working, and valgrind complains in this very small sample program.

    Given an input file test.txt:

        first second

    and the following program:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <sys/stat.h>

        //returns the filesize in bytes
        size_t fsize(const char* fname){
          struct stat st;
          stat(fname,&st);
          return st.st_size;
        }

        int main(int argc, char *argv[]){
          FILE *fp = NULL;
          if(NULL==(fp=fopen(argv[1],"r"))){
            fprintf(stderr,"\t-> Error reading file:%s\n",argv[1]);
            return 0;
          }
          char buffer[fsize(argv[1])];
          fread(buffer,sizeof(char),fsize(argv[1]),fp);
          char *str = strtok(buffer," \t\n");
          while(NULL!=str){
            fprintf(stderr,"token is:%s with strlen:%lu\n",str,strlen(str));
            str = strtok(NULL," \t\n");
          }
          return 0;
        }

    compiling like:

        gcc test.c -std=c99 -ggdb

    running like:

        ./a.out test.txt

    thanks
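
    A hedged diagnosis: fread does not NUL-terminate, so buffer holds raw bytes rather than a C string, and strtok can read past the end - exactly the kind of invalid read valgrind reports. A sketch of the usual fix, allocating one extra byte and terminating:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char *argv[]){
          if(argc < 2) return 1;
          FILE *fp = fopen(argv[1], "r");
          if(fp == NULL) return 1;

          fseek(fp, 0, SEEK_END);
          long size = ftell(fp);                     /* stand-in for the stat() helper */
          rewind(fp);

          char *buffer = (char*)malloc(size + 1);    /* +1 for the terminator */
          size_t got = fread(buffer, 1, size, fp);
          buffer[got] = '\0';                        /* strtok needs a terminated string */

          for(char *tok = strtok(buffer, " \t\n"); tok != NULL; tok = strtok(NULL, " \t\n"))
            fprintf(stderr, "token is:%s with strlen:%zu\n", tok, strlen(tok));

          free(buffer);
          fclose(fp);
          return 0;
        }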

  • Visual C++ Testing problem

    - by JamesMCCullum
    Hi there. I have installed Visual Assert and cfix. I have been using Visual Studio and programming in C++/CLI. I have a working chess game program that works perfectly by itself, and I have been studying testing and have many examples (with tutorials) I have found on the net that compile and run in Visual Studio. But as soon as I try to implement those tests on my chess game, I get this problem:

        1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------
        1>Compiling...
        1>stdafx.cpp
        1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
        1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C4394: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol should not be marked with __declspec(allocate)
        1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C2393: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol cannot be allocated in segment '.CRT$XCX'
        1>C:\Program Files\VisualAssert\include\cfixpe.h(244) : error C2440: 'initializing' : cannot convert from 'void (__cdecl *)(void)' to 'const CFIX_CRT_INIT_ROUTINE'
        1>        Address of a function yields __clrcall calling convention in /clr:pure and /clr:safe; consider using __clrcall in target type
        1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
        1>Build log was saved at "file://c:\Users\james\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm"
        1>ChessRound1 - 4 error(s), 0 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Any ideas what I'm doing wrong? I'm working with Windows Forms and have a heap of .cpp source files. Any help would be appreciated. Thanks.

  • Installing Gtk2 on portable strawberry

    - by XoR
    I downloaded "strawberry-perl-5.12.2.0-portable" and "gtk+-bundle_2.22.1-20101227_win32". I extracted strawberry-perl in some directory and there I put gtk folder with gtk stuff. In portableshell.bat I changed Path env and added: "%drivep%\gtk\bin;%drivep%\gtk\lib;". Don't ask me why I added lib directory, I saw that some guy added it in some website. When I run in portableshell command: "pkg-config --libs --cflags gtk+-2.0" I get: c:\test>pkg-config --libs --cflags gtk+-2.0 -mms-bitfields -Ic:/test/gtk/include/gtk-2.0 -Ic:/test/gtk/lib/gtk-2.0/include - Ic:/test/gtk/include/atk-1.0 -Ic:/test/gtk/include/cairo -Ic:/test/gtk/include/g dk-pixbuf-2.0 -Ic:/test/gtk/include/pango-1.0 -Ic:/test/gtk/include/glib-2.0 -Ic :/test/gtk/lib/glib-2.0/include -Ic:/test/gtk/include -Ic:/test/gtk/include/free type2 -Ic:/test/gtk/include/libpng14 -Lc:/test/gtk/lib -lgtk-win32-2.0 -lgdk-wi n32-2.0 -latk-1.0 -lgio-2.0 -lpangowin32-1.0 -lgdi32 -lpangocairo-1.0 -lgdk_pixb uf-2.0 -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lglib-2.0 -lintl All folders looks fine, I also have complete log of compiling glib here. It looks like it doesn't compile because pkg-config gives bad data, or something. Does anyone have some idea how to make this thing work?

  • VS 2008 does not understand .resource files

    - by Dmitry
    I'm trying to add globalization support to my C# application. According to MSDN, there should be one embedded resource file for the neutral culture and satellite DLLs with resource files for other cultures. I've created two satellite DLLs without any problems and got my app to automatically load the right one using ResourceManager. But I can't embed the default neutral-culture resource file into my executable.

    When I remove all satellite DLLs, or set the culture to some culture I don't have a satellite DLL for, I get the exception "Could not find any resources appropriate for the specified culture or the neutral culture." when the application attempts to create the ResourceManager. It looks like VS 2008 does not include my .resources file in the main assembly. I've tried different ways to get the resource file embedded: compiling it with resgen.exe from a text file and adding it to the project; changing its name to add a second .resources extension; creating a .resx file with the same name; etc. And I still don't see a way to get the resource file embedded and used by ResourceManager - I'm getting the same exception.

    What is the right way to add a default neutral-culture resource file to an application in VS 2008?

  • MSBuild "Wrapper" fails while VS2010 "Pure" compile succeeds for MFC application in CruiseControl.NE

    - by ee
    The Overview

    I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (devenv) works, but a wrapper MSBuild script run via the CCNet MSBuild task fails with errors like:

        error RC1015: cannot open include file 'winres.h'.
        error C1083: Cannot open include file: 'afxwin.h': No such file or directory
        error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question

    How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild + VS2010 + MFC + CCNet?)

    Background Details

    We have successfully upgraded an MFC application (an .exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment.

    I did a full installation of VS2010 (Professional) on the build server. In this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.

    My Assumptions

    This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.
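
    One avenue worth testing (a hedged sketch, not a confirmed fix for this setup; the installation path below is the VS2010 default and may differ): devenv sets up the VC++ environment implicitly, while a bare MSBuild invocation may not, so running the wrapper from a shell primed by vcvarsall.bat can restore INCLUDE/LIB/PATH for the MFC and SDK headers.

        rem ci-build.cmd - a hypothetical wrapper invoked by the CCNet task
        call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" x86
        msbuild wrapper.proj /p:Configuration=Release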

  • Problems using a library in Xcode

    - by Pablo
    Hi! I'm currently developing an application for iPhone and I need to use a library initially dedicated to a Linux environment. Since I'm using a Mac (with Snow Leopard and an Intel Core Duo), I guessed it would be possible to use this library in my app. My library has three files: a .h file, a .a file, and a .so file (both the .a and the .so are in /Developer/usr/lib). In addition, I have included the .h in my code and I've added the .a in Xcode as a framework (and it works, because Xcode finds the .so when compiling).

    For your info, when I use the "file" command on the .so file, I get:

        ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped

    When I compile for the Xcode Simulator, I have a warning and an error. The warning is:

        In /Developer/usr/lib/mylib.so, file was built for unsupported file format which is not the architecture being linked (i386)

    The error is:

        "_mylib_fct", referenced from:
        -[MyAppAppDelegate applicationDidBecomeActive:] in MyAppAppDelegate.o
        Symbol(s) not found
        Collect2: ld returned 1 exit status

    When I compile for the Device 3.0 with architecture armv6, I also have the same error, but the warning is quite different:

        In /Users/Pablo/MyApp/mylib.a, file is not of required architecture

    I've been trying for days to solve this and make the app work with this lib, and I don't understand why the linker is complaining. Is it a 32/64-bit issue? How can I deal with that? Your help will be very appreciated. Thanks!

  • Building a python module and linking it against a MacOSX framework

    - by madflo
    I'm trying to build a Python extension on Mac OS X 10.6 and to link it against several frameworks (i386 only). I made a setup.py file using distutils and the Extension object. In order to link against my frameworks, my LDFLAGS env var should look like:

        LDFLAGS = -lc -arch i386 -framework fwk1 -framework fwk2

    As I did not find any 'framework' keyword in the Extension module documentation, I used the extra_link_args keyword instead:

        Extension('test',
                  define_macros = [('MAJOR_VERSION', '1'), ('MINOR_VERSION', '0')],
                  include_dirs = ['/usr/local/include', 'include/', 'include/vitale'],
                  extra_link_args = ['-arch i386', '-framework fwk1', '-framework fwk2'],
                  sources = "testmodule.cpp",
                  language = 'c++'
        )

    Everything is compiling and linking fine. If I remove the -framework entries from extra_link_args, my linker fails, as expected. Here are the last two lines produced by python setup.py build:

        /usr/bin/g++-4.2 -arch x86_64 -arch i386 -isysroot / -L/opt/local/lib -arch x86_64 -arch i386 -bundle -undefined dynamic_lookup build/temp.macosx-10.6-intel-2.6/testmodule.o -o build/lib.macosx-10.6-intel-2.6/test.so -arch i386 -framework sgdosx -framework srtosx -framework ssvosx -framework stsosx

    Unfortunately, the .so that I just produced is unable to find several symbols provided by these frameworks. I tried to check the linked frameworks with otool. None of them is appearing:

        $ otool -L test.so
        test.so:
            /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0)
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1)

    Here is the output of otool run on a test binary, made with g++ and ld using the LDFLAGS described at the top of my post. In this example, the -framework options did work:

        $ otool -L vitaosx
        vitaosx:
            /Library/Frameworks/sgdosx.framework/Versions/A/sgdosx (compatibility version 1.0.0, current version 1.0.0)
            /Library/Frameworks/ssvosx.framework/Versions/A/ssvosx (compatibility version 1.0.0, current version 1.0.0)
            /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0)
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1)

    May this issue be linked to the "-undefined dynamic_lookup" flag on the linking step? I'm a little bit confused by the few lines of documentation that I'm finding on Google. Cheers,

  • compile ICS/JB camera application - native library jni-mosaic error

    - by liorry
    I would like to use the panorama mode that the ICS/JB camera application has. I've downloaded the source code (with resources) and managed to solve all code compilation errors, but as soon as I start the application on my device (running JB), I get this error:

        10-25 14:42:53.617: E/AndroidRuntime(23147): FATAL EXCEPTION: GLThread 2586
        10-25 14:42:53.617: E/AndroidRuntime(23147): java.lang.UnsatisfiedLinkError: Native method not found: com.app.camera.panorama.MosaicRenderer.reset:(IIZ)V
        10-25 14:42:53.617: E/AndroidRuntime(23147): at com.app.camera.panorama.MosaicRenderer.reset(Native Method)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at com.app.camera.panorama.MosaicRendererSurfaceViewRenderer.onSurfaceChanged(MosaicRendererSurfaceViewRenderer.java:49)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505)
        10-25 14:42:53.617: E/AndroidRuntime(23147): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240)

    I do have a libjni-mosaic lib, located in armeabi-v7a/armeabi/x86, and it seems to load fine, but it probably doesn't contain the methods the MosaicRenderer expects. I also tried compiling the CyanogenMod camera app (https://github.com/CyanogenMod/android_packages_apps_Camera/tree/ics) but I get the same error. The camera itself works, for stills and video recording, but as soon as I change to panorama mode, it crashes. Can anyone maybe point me to the right jni-mosaic lib, or to what I'm doing wrong? Do I need to do something in order to make my app use the JNI/.so files?

  • Actual long double precision does not agree with std::numeric_limits

    - by dmb
    Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while

        std::numeric_limits<long double>::max_exponent10 = 4932

    as is expected, when a long double is actually set to a value with an exponent greater than 308, it becomes inf - i.e. in reality it only has 64-bit precision instead of 80-bit. Also, sizeof() is showing long doubles to be 16 bytes, which they should be. Finally, using <float.h> gives the same results as <limits>. Does anyone know where the discrepancy might be?

        long double x = 1e308, y = 1e309;
        cout << std::numeric_limits<long double>::max_exponent10 << endl;
        cout << x << '\t' << y << endl;
        cout << sizeof(x) << endl;

    gives:

        4932
        1e+308  inf
        16
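
    A hedged observation about the snippet itself: 1e309 has no suffix, so it is a double literal, and it overflows to inf as a double before it is ever stored in the long double. The wider exponent range is only reachable with the L suffix:

        #include <iostream>
        #include <limits>

        int main()
        {
            long double bad  = 1e309;   // double literal: overflows to inf first
            long double good = 1e309L;  // L suffix: parsed as a long double literal
            std::cout << bad << '\n' << good << '\n'
                      << std::numeric_limits<long double>::max_exponent10 << '\n';
            return 0;
        }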

  • How can I add the "--watch" flag to this TextMate snippet?

    - by Jannis
    I love TextMate as my editor for all things web, and so I'd like to use a snippet with style.less files to take advantage of the .less way of compiling .css files on the fly, using the native

        $ lessc {filepath} --watch

    as suggested in the LESS documentation (link). My current TextMate snippet (thanks to whoever wrote the LESS TM bundle!) works well for writing the currently opened .less file to the .css file, but I'd like to take advantage of the --watch parameter so that every change to the .less file gets automatically compiled into the .css file. This works well when using the Terminal command line, so I am sure it must be possible to use it in an adapted version of the current LESS command for TextMate, since that only invokes the command to compile the file.

    So how do I add the --watch flag to this command?

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\"")

    I assume it should be something like:

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\" --watch")

    But doing so only crashes TextMate.app. Any ideas would be much appreciated. Thanks for reading. Jannis

  • floating exception using icc compiler

    - by Hristo
    I'm compiling my code via the following command:

        icc -ltbb test.cxx -o test

    Then when I run the program:

        time ./mp6 100 > output.modified
        Floating exception
        4.871u 0.405s 0:05.28 99.8% 0+0k 0+0io 0pf+0w

    I get a "Floating exception". The following is the code in C++ that I had before the exception appeared, and after:

        // before
        if (j < E[i]) {
            temp += foo(0, trr[i], ex[i+j*N]);
        }

        // after
        temp += (j < E[i])*foo(0, trr[i], ex[i+j*N]);

    This is Boolean algebra... so (j < E[i]) is either going to be a 0 or a 1, so the multiplication would result in either 0 or the foo() result. I don't see why this would cause a floating exception. This is what foo() does:

        int foo(int s, int t, int e) {
            switch(s % 4) {
                case 0: return abs(t - e)/e;
                case 1: return (t == e) ? 0 : 1;
                case 2: return (t < e) ? 5 : (t - e)/t;
                case 3: return abs(t - e)/t;
            }
            return 0;
        }

    foo() isn't a function I wrote, so I'm not too sure what it does... but I don't think the problem is with the function foo(). Is there something about Boolean algebra that I don't understand, or something that works differently in C++ than I know of? Any ideas why this causes an exception?

    Thanks, Hristo
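
    A hedged reading of why the rewrite changes behavior: multiplication does not short-circuit, so foo() is now evaluated even when j < E[i] is false, and with s == 0 it computes abs(t - e)/e, an integer division that raises SIGFPE ("Floating exception") whenever e is 0. A minimal reproduction of the pattern:

        #include <cstdio>
        #include <cstdlib>

        static int foo(int t, int e) { return std::abs(t - e) / e; } // dies when e == 0

        int main()
        {
            int j = 5, E = 3, temp = 0;   // j < E is false here

            // Branch form: foo() is never called, so e == 0 is harmless.
            if (j < E) temp += foo(7, 0);

            // Arithmetic form: foo() runs regardless of the condition,
            // so the integer division by zero raises SIGFPE.
            temp += (j < E) * foo(7, 0);

            std::printf("%d\n", temp);    // never reached
            return 0;
        }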

  • Problem: Vectorizing Code with Intel Visual FORTRAN for X64

    - by user313209
    I'm compiling my Fortran 90 code using Intel Visual Fortran on Windows Server 2003 Enterprise x64 Edition. When I compile the code for the 32-bit architecture using the automatic and manual vectorizing options, the code compiles and is vectorized, and when I run it on an 8-core system the compiled code uses 70% of the CPU, which shows me that the vectorizing is working. But when I compile the code with the 64-bit compiler, it says that the code is vectorized, yet when I run it it only shows CPU usage of about 12%, which is full usage of one core out of 8. So while the compiler says that the code is vectorized, the vectorization is not working.

    And it's strange to me, because it's on an x64 Edition of Windows and I was expecting to see the reverse result. I thought it should be better to run code compiled for the 64-bit architecture on 64-bit Windows. Does anyone have any idea why the 64-bit compiled version is not able to use the full power of multiple cores? Thanks in advance for your responses.

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working.

    Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (Again: I don't want to install another version of mysql on the box, want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files?

    This is the error I get (the actual error is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
        at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory

  • split string error in a compiled VB.NET class

    - by Andy Payne
    I'm having some trouble compiling some VB code I wrote to split a string based on a set of predefined delimiters (comma, semicolon, colon, etc.). I have successfully written some code that can be loaded inside a custom VB component (I place this code inside a VB.NET component in a plug-in called Grasshopper) and everything works fine. For instance, let's say my incoming string is "123,456". When I feed this string into the VB code I wrote, I get a new list where the first value is "123" and the second value is "456".

    However, I have been trying to compile this code into its own class so I can load it inside Grasshopper separately from the standard VB component. When I try to compile this code, it isn't separating the string into a new list with two values. Instead, I get a message that says "System.String []". Do you guys see anything wrong in my compiled code? You can find a screenshot image of my problem at the following link: click to see image

    This is the VB code for the compiled class:

        Public Class SplitString
            Inherits GH_Component

            Public Sub New()
                MyBase.New("Split String", "Split", "Splits a string based on delimeters", "FireFly", "Serial")
            End Sub

            Public Overrides ReadOnly Property ComponentGuid() As System.Guid
                Get
                    Return New Guid("3205caae-03a8-409d-8778-6b0f8971df52")
                End Get
            End Property

            Protected Overrides ReadOnly Property Internal_Icon_24x24() As System.Drawing.Bitmap
                Get
                    Return My.Resources.icon_splitstring
                End Get
            End Property

            Protected Overrides Sub RegisterInputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_InputParamManager)
                pManager.Register_StringParam("String", "S", "Incoming string separated by a delimeter like a comma, semi-colon, colon, or forward slash", False)
            End Sub

            Protected Overrides Sub RegisterOutputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_OutputParamManager)
                pManager.Register_StringParam("Tokenized Output", "O", "Tokenized Output")
            End Sub

            Protected Overrides Sub SolveInstance(ByVal DA As Grasshopper.Kernel.IGH_DataAccess)
                Dim myString As String
                DA.GetData(0, myString)

                myString = myString.Replace(",", "|")
                myString = myString.Replace(":", "|")
                myString = myString.Replace(";", "|")
                myString = myString.Replace("/", "|")
                myString = myString.Replace(")(", "|")
                myString = myString.Replace("(", String.Empty)
                myString = myString.Replace(")", String.Empty)

                Dim parts As String() = myString.Split("|"c)
                DA.SetData(0, parts)
            End Sub
        End Class

    This is the custom VB code I created inside Grasshopper:

        Private Sub RunScript(ByVal myString As String, ByRef A As Object)
            myString = myString.Replace(",", "|")
            myString = myString.Replace(":", "|")
            myString = myString.Replace(";", "|")
            myString = myString.Replace("/", "|")
            myString = myString.Replace(")(", "|")
            myString = myString.Replace("(", String.Empty)
            myString = myString.Replace(")", String.Empty)

            Dim parts As String() = myString.Split("|"c)
            A = parts
        End Sub

  • A queue in C using structs and dynamic memory allocation (linked list)

    - by Martin Pugh
    I am tasked with making a queue data structure in C, as a linked list. Our lecturer gave us a large amount of code to implement a stack, but we have to adapt it to create a queue. The code our lecturer gave us ends up not compiling and segfaulting at the exact same point as the code I wrote for the queue. I'm very new to structs, malloc and C in general, so there could be something painfully obvious I've overlooked. Here is the code I am using:

        #include <stdio.h>
        #include <stdlib.h>

        struct node{
            int data;           //contains the actual data
            struct node *prev;  //pointer to previous node (Closer to front)
            struct node *next;  //pointer to next node (Closer to back)
        };
        typedef struct node *Nodepointer;

        struct queue{
            Nodepointer front;
            Nodepointer back;
        };
        typedef struct queue *Queuepointer;

        main(){
            Queuepointer myqueue;  //create a queue called myqueue
            init(myqueue);         //initialise the queue
            Nodepointer new = (Nodepointer)malloc(sizeof(struct node));
            myqueue->front = new;
        }

        int init(Queuepointer q){
            q = (Queuepointer)malloc(sizeof(struct queue));
            q->front = NULL;
            q->back = NULL;
        }

    The idea is that the queue struct 'contains' the first and last nodes in a queue, and when a node is created, myqueue is updated. However, I cannot even get to that part (pop and push are written but omitted for brevity). The code is segfaulting at the line myqueue->front = new; with the following gdb output:

        Program received signal SIGSEGV, Segmentation fault.
        0x08048401 in main () at queue.c:27
        27        myqueue->front = new;

    Any idea what I'm doing wrong?
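
    A hedged pointer at the likely culprit: C passes the Queuepointer by value, so init() assigns malloc's result to its own local copy, and main's myqueue is never initialized - dereferencing it then segfaults. A sketch of one fix, passing the address of the caller's pointer:

        #include <stdlib.h>

        struct node { int data; struct node *prev, *next; };
        struct queue { struct node *front, *back; };
        typedef struct queue *Queuepointer;

        void init(Queuepointer *q){            /* note: pointer to the pointer */
            *q = (Queuepointer)malloc(sizeof(struct queue));
            (*q)->front = NULL;
            (*q)->back = NULL;
        }

        int main(void){
            Queuepointer myqueue;
            init(&myqueue);                    /* main's pointer is now updated */
            struct node *n = (struct node*)malloc(sizeof(struct node));
            myqueue->front = n;                /* no longer segfaults */
            free(n);
            free(myqueue);
            return 0;
        }

    Returning the freshly allocated Queuepointer from init() would work equally well.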

  • Unable to display Google Map through API in Flex SDK

    - by Spero.ShiroPetto
    Hello, I am using mxmlc to compile the examples from Google to get started using the Google Maps API in Flex 4, but after compiling the swf file the map does not load.

    I've registered for an API key, and downloaded and included the Maps SDK in the xml config file used at compile time (C:\sdk\flex4\frameworks\flex-config.xml):

        <external-library-path>
            <path-element>libs/google/maps/lib/map_flex_1_18.swc</path-element>
        </external-library-path>

    Foo.mxml:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <maps:Map xmlns:maps="com.google.maps.*" id="map"
                      mapevent_mapready="onMapReady(event)"
                      width="100%" height="100%" key="{KEY}"/>
            <mx:Script>
                <![CDATA[
                    import com.google.maps.LatLng;
                    import com.google.maps.Map;
                    import com.google.maps.MapEvent;
                    import com.google.maps.MapType;

                    private function onMapReady(event:Event):void {
                        this.map.setCenter(new LatLng(40.736072,-73.992062), 14, MapType.NORMAL_MAP_TYPE);
                    }
                ]]>
            </mx:Script>
        </mx:Application>

    Any tips on where to go from here? I can compile a basic Flex project without problems, and it displays the components I put in, so I'd imagine it's something to do with the API key? Thanks for the help.

  • Building an extension framework for a Rails app

    - by obvio171
    I'm starting research on what I'd need in order to build a user-level plugin system (like WordPress plugins) for a Rails app, so I'd appreciate some general pointers/advice. By user-level plugin I mean a package a user can extract into a folder and have it show up on an admin interface, allowing them to add some extra configuration and then activate it.

    What is the best way to go about doing this? Is there any other open-source project that does this already? What does Rails itself already offer for programmer-level plugins that could be leveraged? Any Rails plugins that could help me with this?

    A plugin would have to be able to:
    - run its own migrations (with this? it's undocumented)
    - have access to my models (plugins already do)
    - have entry points for adding content to views (can be done with content_for and yield)
    - replace entire views or partials (how?)
    - provide its own admin and user-facing views (how?)
    - create its own routes (or maybe just announce its presence and let me create the routes for it, to avoid plugins stepping on each other's toes)

    Anything else I'm missing? Also, is there a way to limit which tables/actions the plugin has access to concerning migrations and models, and also limit their access to routes (maybe letting them include, but not remove, routes)?

    P.S.: I'll try to keep this updated, compiling stuff I figure out and relevant answers, so as to have a sort of guide for others.
