Search Results

Search found 2264 results on 91 pages for 'avr gcc'.

Page 63/91

  • How to create Python module distribution to gracefully fall-back to pure Python code

    - by Craig McQueen
    I have written a Python module, and I have two versions: a pure Python implementation and a C extension. I've written the __init__.py file so that it tries to import the C extension, and if that fails, it imports the pure Python code (is that reasonable?). Now, I'd like to know the best way to distribute this module (e.g. how to write setup.py) so it can be easily used by people with or without the facility to build, or use, the C extension, just by running:

        python setup.py install

    My experience is limited, but I see two possible cases:

      • The user does not have MS Visual Studio or the GCC compiler suite installed on their machine to build the C extension
      • The user is running IronPython, Jython, or anything other than CPython

    I have only used CPython, so I'm not sure how I could distribute this module so that it would work smoothly and be easy to install on those platforms, if they're unable to use the C extension.
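
    A sketch of one common approach (module and file names below are placeholders, not from the post): declare the extension as usual, but subclass build_ext so that a failed compile degrades to the pure-Python version instead of aborting the install; several packages on PyPI use this pattern.

        import platform
        import sys
        from distutils.core import setup, Extension
        from distutils.command.build_ext import build_ext
        from distutils.errors import (CCompilerError, DistutilsExecError,
                                      DistutilsPlatformError)

        class optional_build_ext(build_ext):
            """Let the C extension build fail without failing the install."""
            def run(self):
                try:
                    build_ext.run(self)
                except DistutilsPlatformError:
                    self._unavailable()

            def build_extension(self, ext):
                try:
                    build_ext.build_extension(self, ext)
                except (CCompilerError, DistutilsExecError):
                    self._unavailable()

            def _unavailable(self):
                sys.stderr.write("C extension not built; "
                                 "using the pure Python implementation.\n")

        # Only offer the extension on CPython; IronPython/Jython get pure Python.
        ext_modules = []
        if platform.python_implementation() == 'CPython':
            ext_modules = [Extension('mymodule._speedups',       # placeholder name
                                     sources=['src/speedups.c'])]

        setup(name='mymodule', version='1.0', packages=['mymodule'],
              ext_modules=ext_modules,
              cmdclass={'build_ext': optional_build_ext})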

    Read the article

  • Linking LAPACK/BLAS libraries

    - by Daniel Bremberg
    Background: I am working on a project written in a mix of C and Fortran 77, and I now need to link the LAPACK/BLAS libraries into the project (all in a Linux environment). The LAPACK in question is version 3.2.1 (including BLAS) from netlib.org. The libraries were compiled using the top-level Makefile (make lapacklib and make blaslib).

    Problem: During linking, error messages claimed that certain (not all) BLAS routines called from LAPACK routines were undefined. This gave me some headache, but the problem was eventually solved when (in the Makefile) the order of appearance of the libraries to be linked was changed.

    Code: In the following, (a) gives errors while (b) does not. The linking is performed by (c).

        (a) LIBS = $(LAPACK)/blas_LINUX.a $(LAPACK)/lapack_LINUX.a
        (b) LIBS = $(LAPACK)/lapack_LINUX.a $(LAPACK)/blas_LINUX.a
        (c) gcc -Wall -O -o $@ project.o project.a $(LIBS)

    Question: What could be the reason for the undefined references to only some routines, and what makes the order of appearance relevant?
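
    For context, a sketch of why order matters (assuming the GNU linker): ld makes a single left-to-right pass over static archives and extracts only the members that satisfy symbols that are undefined at that point. With (a), the BLAS archive is scanned before any LAPACK member has asked for BLAS symbols, so only the BLAS routines already referenced by project.o get pulled in; the rest stay out, hence "some" undefined references.

        # user of symbols first, provider second
        LIBS = $(LAPACK)/lapack_LINUX.a $(LAPACK)/blas_LINUX.a

        # if a circular dependency ever appears, grouping makes ld rescan
        # the archives until no new members are extracted
        LIBS = -Wl,--start-group $(LAPACK)/lapack_LINUX.a $(LAPACK)/blas_LINUX.a -Wl,--end-group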

    Read the article

  • How to install Python ssl module on Windows?

    - by Jader Dias
    The Google App Engine Launcher tells me:

        WARNING appengine_rpc.py:399 ssl module not found.
        Without the ssl module, the identity of the remote host cannot be verified, and
        connections may NOT be secure. To fix this, please install the ssl module from
        http://pypi.python.org/pypi/ssl .

    I downloaded the package and it contained a setup.py file. I ran:

        python setup.py install

    and then got "Python was built with Visual Studio 2003; blablabla use MinGW32". Then I installed MinGW32, and now the compilation doesn't work. The end of the compilation errors contains:

        ssl/_ssl2.c:1561: error: `CRYPTO_LOCK' undeclared (first use in this function)
        error: command 'gcc' failed with exit status 1

    What should I do?

    Read the article

  • Error using traits class: "expected constructor destructor or type conversion before '&' token"

    - by Mark
    I have a traits class that's used for printing out different character types:

        template <typename T>
        class traits {
        public:
            static std::basic_ostream<T>& tout;
        };

        template<> std::ostream& traits<char>::tout = std::cout;
        template<> std::wostream& traits<unsigned short>::tout = std::wcout;

    gcc (g++) version 3.4.5 (yes, somewhat old) is throwing an error:

        expected constructor destructor or type conversion before '&' token

    And I'm wondering if there's a good way to resolve this. (It's also angry about _O_WTEXT, so if anyone's got some insight into that, I'd also appreciate it.)
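
    A workaround sketch (mine, not from the post): expose the stream through a specialized member function instead of a static reference member, which avoids the member-specialization definition that old g++ rejects. Note, too, that std::wcout is a basic_ostream<wchar_t>, so a traits<unsigned short> specialization can only bind to it on toolchains where wchar_t is unsigned short.

        #include <iostream>

        template <typename T>
        struct traits {
            static std::basic_ostream<T>& tout();
        };

        template <>
        std::basic_ostream<char>& traits<char>::tout() { return std::cout; }

        template <>
        std::basic_ostream<wchar_t>& traits<wchar_t>::tout() { return std::wcout; }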

    Read the article

  • Using Objective-C blocks with old compiler

    - by H2CO3
    I'm using the open-source iPhone toolchain on Linux for developing for jailbroken iPhones. I'd like to take advantage of the new (4.0, 5.0) iOS SDK features, but I can't, as my old build of GCC doesn't understand the ^ block syntax. I noticed that blocks are just of type id (struct objc_object *). I know this from two resources: first, class-dump reports them as id; second, Apple docs clarify that "blocks can be retained". My question is, how can I take advantage of blocks using this knowledge? I thought of something like:

        // this is in SDK 4.x/5.x
        - (void) doSomethingWithBlock:(int (^)(int))block;

        // and I modify it like:
        - (void) doSomethingWithBlock:(id)block;

    The question is: HOW TO ACTUALLY CALL IT? How do I create blocks? I can, of course, create function pointers (IMPs in particular), but how do I achieve the object-like memory layout?
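
    A sketch based on the blocks ABI document that Clang publishes (whether the open toolchain's runtime matches it exactly is an assumption): a block is a pointer to a structure whose invoke member is the underlying function, and invoke takes the block itself as its first argument.

        /* layout per the "Block Implementation Specification" */
        struct Block_literal {
            void *isa;                   /* e.g. &_NSConcreteGlobalBlock */
            int   flags;
            int   reserved;
            void (*invoke)(void *, ...); /* first argument is the block */
            void *descriptor;            /* size plus copy/dispose helpers */
            /* captured variables follow */
        };

        /* calling an int(^)(int) received as id, by hand: */
        int call_int_block(void *block, int arg) {
            struct Block_literal *lit = (struct Block_literal *)block;
            return ((int (*)(void *, int))lit->invoke)(block, arg);
        }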

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using the header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors?

    I am assuming, though, that the size of the types contained is consistent across the two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
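
    A sketch of the usual belt-and-braces version of that idea (the pack pragma is an extension, but one both GCC and MSVC accept; verifying it for any other compiler is on you):

        #include <stdint.h>

        #pragma pack(push, 1)            /* understood by GCC and MSVC */
        typedef struct {
            uint32_t version;
            uint16_t flags;
            uint16_t count;
            double   values[8];          /* IEEE 754 on both compilers */
        } shm_record;
        #pragma pack(pop)

        /* compile-time size check in plain C89: breaks the build, not the protocol */
        typedef char shm_record_size_check[(sizeof(shm_record) == 72) ? 1 : -1];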

    Read the article

  • Mono mkbundle issue

    - by Sean
    Hello, I am trying to bundle the Mono runtime with a simple C# (console) program that does nothing but 'hello world'. Got Cygwin, configured it all, though it fails:

        $ mkbundle -o x2 x.exe --deps -z
        OS is: Windows
        Sources: 1 Auto-dependencies: True
        embedding: C:\cygwin\home\Sean\x.exe
        compression ratio: 31.71%
        embedding: C:\PROGRA~2\MONO-2~1.1\lib\mono\4.0\mscorlib.dll
        compression ratio: 34.68%
        Compiling:
        as -o temp.o temp.s
        gcc -mno-cygwin -g -o x2 -Wall temp.c `pkg-config --cflags --libs mono-2|dos2unix` -lz temp.o
        temp.c: In function `main':
        temp.c:173: warning: implicit declaration of function `g_utf16_to_utf8'
        temp.c:173: warning: assignment makes pointer from integer without a cast
        temp.c:188: warning: assignment makes pointer from integer without a cast
        /tmp/ccu8fTcQ.o: In function `main':
        /home/Sean/temp.c:173: undefined reference to `_g_utf16_to_utf8'
        /home/Sean/temp.c:188: undefined reference to `_g_utf16_to_utf8'
        collect2: ld returned 1 exit status
        [Fail]

    Tried a post with a similar problem to mine here, but it didn't work either. My configuration:

      • Windows 7 64-bit
      • Mono 2.8.1
      • Latest Cygwin

    Not sure what's the story here. Help appreciated.

    Read the article

  • how to temporarily set makeprg in vim

    - by Haiyuan Zhang
    In the normal case, when I use vim's make utility, I set makeprg to use the Makefile of the project I'm currently working on. Since the project will usually last for weeks or even longer, I don't need to change the setting of makeprg very often. But sometimes I need to write some "foobar" code, either for practicing my C++ skills or for prototyping some primitive idea in my mind. So whenever I switch to the "foobar" mode of vim usage, I need to comment out the original makeprg setting and add a new one, as follows:

        au FileType c set makeprg=gcc\ %
        au FileType cpp set makeprg=g++\ %

    which is really very inconvenient. When I go back to the "normal project mode" of vim usage, I need to change back to the original setting, back and forth... What I want to know from you guys is: is it possible to set makeprg temporarily? For example, define a function which first sets a local value of makeprg, then calls make, and before returning automatically restores makeprg to the value it had before the function call. (A sketch of such a function follows below.)
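
    A minimal sketch of exactly that (try/finally needs vim 6.2 or newer, and the gcc/g++ choice is an assumption based on the autocmds above):

        function! ScratchMake()
            let s:saved_makeprg = &makeprg
            try
                let &makeprg = (&filetype ==# 'cpp' ? 'g++ %' : 'gcc %')
                make
            finally
                let &makeprg = s:saved_makeprg
            endtry
        endfunction
        " usage: :call ScratchMake()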

    Read the article

  • help using pcap library to sniff packets

    - by scatman
    I am using the pcap sample code to create my own sniffer. I downloaded their sample sniffer and it's working on Windows but not on Linux. I am using the gcc compiler on both machines, and I have only pcap.h included. The error is: "dereferencing pointer to incomplete type". The netmask is causing the error; it is the mask of the first address of the interface:

        u_int netmask = ((struct sockaddr_in *)(d->addresses->netmask))->sin_addr.S_un.S_addr;

    Any solutions?
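
    A sketch of the Linux-friendly version (assuming d is a pcap_if_t* from pcap_findalldevs): struct sockaddr_in remains an incomplete type until <netinet/in.h> is included, which is exactly the error gcc reports, and the S_un union is Windows-specific; POSIX spells the same field s_addr.

        #include <pcap.h>
        #include <netinet/in.h>   /* completes struct sockaddr_in on Linux */

        static u_int first_netmask(const pcap_if_t *d) {
            if (d->addresses != NULL && d->addresses->netmask != NULL)
                return ((struct sockaddr_in *)d->addresses->netmask)->sin_addr.s_addr;
            return 0;
        }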

    Read the article

  • Cython - properly declaring C funs

    - by deepblue
    I'm having trouble running a bare example. I'm using this to declare a function in Cython, coming from the cinterf.h header:

        cdef extern from 'cinterf.h':
            int xsb_init_string(char* p_xsb_path)

    The declaration in the C header file is:

        DllExport extern int call_conv xsb_init_string(char *);

    Both DllExport and call_conv are macros defined elsewhere and resolve to GCC compiler directives. Do I have to use those as well inside the cdef to fully match the declaration? When I call xsb_init_string() as:

        xsb_init_string('some string')

    the Python interpreter gives me:

        ImportError: ./py_ext.so: undefined symbol: xsb_init_string

    Am I declaring the xsb_init_string() signature properly inside the cdef?
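
    A hypothetical setup.py sketch (library name and paths are guesses): an undefined symbol at import time usually means the compiled extension was never linked against the library that defines xsb_init_string, rather than the cdef block being wrong.

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        setup(
            cmdclass={'build_ext': build_ext},
            ext_modules=[Extension(
                'py_ext',
                sources=['py_ext.pyx'],
                libraries=['xsb'],                  # hypothetical library name
                library_dirs=['/path/to/xsb/lib'],  # hypothetical path
                include_dirs=['/path/to/xsb/include'],
            )],
        )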

    Read the article

  • PECL install fails

    - by James
    Hey! I have browsed every Google result and read all the forum posts about this error, but I cannot solve it. When using PECL install for anything, I always end up getting this error:

        checking whether the C compiler works... configure: error: cannot run C compiled programs.

    Everything else succeeds up to that point, then bam! I'm using CentOS 4.3, PEAR is the latest stable version, and GCC is a stable and recent version. Everything is working as it should, but the C compiler always seems to error. I've tried to give /tmp the right privileges for the operation by temporarily remounting it with exec enabled:

        mount -o remount,exec,suid /tmp

    But that doesn't work. I've literally tried everything that has been suggested, to no avail. Any ideas?

    Read the article

  • C compiler selection in cabal package

    - by ony
    Today I tried a different C compiler (Clang) for the C code I use in my Haskell library and found that I can gain a speed increase, compared with my system compiler (GCC 4.4.3), from 426.404 Gbit/s to 0.823 Tbit/s. So I decided to add some flags to control the way the C source file is compiled (i.e. something like use-clang, use-intel, etc.). A snippet of the cabal package description file:

        C-Sources: c_lib/tiger.c
        Include-Dirs: c_lib
        Install-Includes: tiger.h
        if flag(debug)
          GHC-Options: -debug -Wall -fno-warn-orphans
          CPP-Options: -DDEBUG
          CC-Options: -DDEBUG -g
        else
          GHC-Options: -Wall -fno-warn-orphans

    The question is: which options in the description file need to be modified to change the C compiler used to compile c_lib/tiger.c? I did find only CC-Options.
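
    A sketch of one way to wire that up (the flag name is made up; -pgmc is GHC's "use this command as the C compiler" flag, which also applies to the C-Sources that GHC compiles for a package):

        Flag use-clang
          Description: Compile the bundled C sources with Clang
          Default:     False

        Library
          C-Sources:    c_lib/tiger.c
          Include-Dirs: c_lib
          if flag(use-clang)
            GHC-Options: -pgmc clang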

    Read the article

  • Compiling the icu sqlite extension statically linked to icu.

    - by Georg
    I want to compile the ICU SQLite extension statically linked to ICU. This is what I've tried; maybe the mistake is obvious to you:

        cd icu/source
        ./runConfigureICU Linux --enable-static --with-packaging-format=archive
        ...
        make
        cd ../../icu-sqlite
        gcc -o libSqliteIcu.so -shared icu.c -I../icu/source/common -I../icu/source/i18n \
            -L ../icu/source/lib -lsicuuc -lsicui18n -lsicudata
        ...
        sqlite3
        .load "libSqliteIcu.so"
        Undefined symbol utf8_countTrailBytes

    Files:

      • icu sqlite extension: download icu.c from sqlite.org
      • ICU 4.2.1: download ICU4C from icu-project.org

    My requirements:

      • Runs on Linux & Windows
      • Only one file that I have to distribute: libSqliteIcu.so

    Any idea what else I can try?

    Documentation:

      • SQLite ICU extension's readme
      • ICU's readme
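
    One hedged debugging step (assuming GNU ld): when linking with -shared, unresolved symbols are left for load time by default, which is why the failure only appears at .load. Asking the linker to resolve everything up front turns this into a link-time error listing exactly what's missing:

        gcc -o libSqliteIcu.so -shared -Wl,--no-undefined icu.c \
            -I../icu/source/common -I../icu/source/i18n \
            -L ../icu/source/lib -lsicuuc -lsicui18n -lsicudata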

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger; works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most catalog, file, and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server, and then allow the file server to serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140, and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates, and signed them on the master using the option --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?
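
    For reference, a hedged sketch (that these settings are absent here is my assumption): when the master runs behind an SSL-terminating proxy, Puppet identifies clients by the DN and verification-result headers the proxy injects, and falls back to the source IP when it can't, which would match the "defaulting to no access for 10.209.47.31" log line. The documented puppet.conf settings naming those headers are:

        # puppet.conf on the master; the header names must match the
        # passenger_set_cgi_param lines in the Nginx vhost above
        [master]
        ssl_client_header        = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY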

    Read the article

  • Problem installing Ruby 1.9.2 with RVM on OSX 10.4

    - by questionmark
    Hi, I successfully installed Ruby 1.8.7 with RVM on OS 10.4. However, when I try to install 1.9.2, I get the following error:

        make: *** [libruby.1.9.1.dylib] Error 1

    Installation:

        [qm]$ rvm install 1.9.2
        /Users/qm/.rvm/rubies/ruby-1.9.2-p136, this may take a while depending on your cpu(s)...
        ruby-1.9.2-p136 - #fetching
        ruby-1.9.2-p136 - #downloading ruby-1.9.2-p136, this may take a while depending on your connection...
        ruby-1.9.2-p136 - #extracting ruby-1.9.2-p136 to /Users/qm/.rvm/src/ruby-1.9.2-p136
        ruby-1.9.2-p136 - #extracted to /Users/qm/.rvm/src/ruby-1.9.2-p136
        ruby-1.9.2-p136 - #configuring
        ruby-1.9.2-p136 - #compiling
        Error running 'make ', please read /Users/qm/.rvm/log/ruby-1.9.2-p136/make.log
        There has been an error while running make. Halting the installation.

    Looking at the end of the make log:

        MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1
        /usr/libexec/gcc/powerpc-apple-darwin8/4.0.1/libtool: internal link edit command failed
        make: *** [libruby.1.9.1.dylib] Error 1

    Thanks for any help/suggestions!

    Read the article

  • Makefile error: Unexpected end of line seen

    - by Winston C. Yang
    Trying to install Git, I ran configure and make, but got the following error message:

        make: Fatal error in reader: Makefile, line 221: Unexpected end of line seen

    The Makefile looks like:

        218: GIT-VERSION-FILE: FORCE
        219:     @$(SHELL_PATH) ./GIT-VERSION-GEN
        220: -include GIT-VERSION-FILE
        221:
        222: uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')

    What's causing the error? The following information may or may not be relevant: I tried to install Git 1.7.0.3 on SunOS 5.9 (Solaris 9) in a directory in my account. The gcc version is 3.4.2 (older than the version of 3.4.6 stated by sunfreeware.com). I don't have root privileges.

    Read the article

  • cross-compiling autoconf-based tools with mingw on Mac OS X

    - by paleozogt
    I'd like to cross-compile some open-source libraries (libiconv, gettext, glib2) for Windows using MinGW on Mac OS X. I've installed MinGW on the Mac with MacPorts, but now I'm not sure what to give to the configure script so that it will work. The cross-compilation tutorials I've seen all talk about makefiles, but no one mentions what to give autoconf-based projects. I'm configuring like this:

        ./configure --prefix=/opt/local/i386-mingw32 --host=i586-mingw32msvc

    but it doesn't seem to take. While the configure will pass, running make will give this error:

        i686-apple-darwin9-gcc-4.0.1: no input files

    I thought the --host argument to configure was supposed to tell it to use the MinGW compiler? I'm not sure what's going on here.
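
    A sketch of what usually fixes this (the exact triplet is an assumption; check /opt/local/bin for the real prefix of the MacPorts MinGW gcc): configure derives the compiler name as <host>-gcc and silently falls back to the native gcc when no such binary is on the PATH, which is how the Apple compiler ends up being invoked.

        ./configure --prefix=/opt/local/i386-mingw32 \
                    --host=i386-mingw32 \
                    --build=i686-apple-darwin9 \
                    CC=i386-mingw32-gcc \
                    RANLIB=i386-mingw32-ranlib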

    Read the article

  • Disassemble Microsoft Visual Studio 2003 compiler output

    - by Carl Norum
    I'm seeing what I think is strange behaviour from object files output by the Microsoft Visual Studio 2003 tools. The file utility tells me:

        asmfile.obj: 80386 COFF executable not stripped - version 30821

    for objects created by the assembler, but for objects coming from C files, I get just:

        cfile.obj: data

    Using Microsoft's dumpbin utility and the objdump I got from Cygwin, I can disassemble the assembly-built file, but I get no useful results from either utility for the C-built files. I have a couple of questions related to this difference:

      • What is the object file format generated by the MSVC 2003 compiler?
      • How can I disassemble that object file?

    I am particularly interested in getting the disassembly in AT&T syntax - I'm doing a port of a large source base to make it work with GCC, and I would like to use this method as a shortcut for some of the inline assembly routines in the project. Thanks!

    Read the article

  • Can't import objc in Python on OS X 10.6.3 Snow Leopard - libiconv.2.dylib?

    - by James
    On OS X 10.6.3 Snow Leopard:

        % python
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import objc
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/__init__.py", line 22, in <module>
            _update()
          File "/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/__init__.py", line 19, in _update
            import _objc
        ImportError: dlopen(/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/_objc.so, 2): Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/_objc.so
          Reason: Incompatible library version: _objc.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    What do I need to do?

    Read the article

  • CPU and profiling not supported for remote jvisualvm session

    - by yawn
    When monitoring a remote app (using jstatd), I can neither profile nor monitor CPU consumption. Heap monitoring (provided I do not use G1) works. jvisualvm provides the message "Not supported for this JVM." in the CPU graph window. Is there anything missing in my setup? The google showed up next to no results.

    The local environment (Mac OS X 10.6):

        java version "1.6.0_15"
        Java(TM) SE Runtime Environment (build 1.6.0_15-b03-219)
        Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-90, mixed mode)

    The remote environment (Linux version 2.6.16.27-0.9-smp (gcc version 4.1.0 (SUSE Linux))):

        java version "1.6.0_16"
        Java(TM) SE Runtime Environment (build 1.6.0_16-b01)
        Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed mode)

    Local monitoring works as advertised.

    Read the article

  • HOM with Objective C

    - by Coxer
    Hey, I am new to Objective-C, but I tried to use HOM (higher-order messaging) in order to iterate over an NSArray and append a string to each element. Here is my code:

        void print(NSArray *array) {
            NSEnumerator *enumerator = [array objectEnumerator];
            id obj;
            while (nil != (obj = [enumerator nextObject])) {
                printf("%s\n", [[obj description] cString]);
            }
        }

        int main(int argc, const char *argv[]) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSArray *names = [NSArray arrayWithObjects:@"John", @"Mary", @"Bob", nil];
            NSArray *names_concat = [[names collect] stringByAppendingString:@" Doe"];
            print(names_concat);
            [pool release];
        }

    What is wrong with this code? My compiler (gcc) says NSArray may not respond to "-collect".
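
    A hedged note with a plain-Foundation rewrite: -collect is not a Foundation method; it comes from third-party HOM categories (e.g. Marcel Weiher's HOM framework), so unless such a category is compiled in and linked, gcc's warning is accurate and the call will fail at runtime. Without HOM, the same loop looks like:

        NSMutableArray *names_concat = [NSMutableArray arrayWithCapacity:[names count]];
        NSEnumerator *e = [names objectEnumerator];
        NSString *name;
        while (nil != (name = [e nextObject])) {
            [names_concat addObject:[name stringByAppendingString:@" Doe"]];
        }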

    Read the article

  • Error With Foundation.h

    - by Nathan Campos
    Hello, I'm learning Objective-C on Linux (Ubuntu), but when I tried to compile my application that needs the Foundation headers, I got an error saying that the file cannot be found, even though I have installed the GNUstep development package (gnustep-devel). Here is my code:

        // Fraction.h
        #import <Foundation/NSObject.h>

        @interface Fraction: NSObject {
            int numerator;
            int denominator;
        }
        - (void) print;
        - (void) setNumerator: (int) n;
        - (void) setDenominator: (int) d;
        - (void) numerator;
        - (void) denominator;
        @end

    And here is the console log:

        ubuntu@eeepc:~$ gcc main.m -o frac -lobjc
        In file included from main.m:3:
        Fraction.h:2:26: error: objc/NSObject.h: No such file or directory
        In file included from main.m:3:
        Fraction.h:4: error: cannot find interface declaration for 'NSObject', superclass of 'Fraction'
        ubuntu@eeepc:~$

    What do I need to do?
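
    A sketch of the usual GNUstep invocation (assuming gnustep-devel put gnustep-config on the PATH): the Foundation headers are not on gcc's default include path, so pull in the flags GNUstep exports and link against gnustep-base rather than bare -lobjc.

        gcc `gnustep-config --objc-flags` main.m -o frac \
            -lgnustep-base -lobjc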

    Read the article

  • Convert "this" to a reference-to-pointer

    - by Austin Hyde
    Just stumbled onto this problem (the title says it all). Let's say I have a struct:

        struct Foo {
            void bar() {
                do_baz(this);
            }
            void do_baz(Foo*& pFoo) {
                pFoo->p_sub_foo = new Foo; // for example
            }
            Foo* p_sub_foo;
        };

    GCC tells me:

        temp.cpp: In member function ‘void Foo::bar()’:
        temp.cpp:3: error: no matching function for call to ‘Foo::do_baz(Foo* const)’
        temp.cpp:5: note: candidates are: void Foo::do_baz(Foo*&)

    So, how do I convert what is apparently a const Foo* to a Foo*&?
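
    A sketch of the usual fix: this is an rvalue (you may not reseat it), so it can never bind to a non-const Foo*&. Copy it into a named pointer and pass that:

        struct Foo {
            Foo* p_sub_foo;
            void do_baz(Foo*& pFoo) { pFoo->p_sub_foo = new Foo; }
            void bar() {
                Foo* self = this;  // ordinary lvalue holding the same address
                do_baz(self);      // do_baz may reseat 'self'; 'this' never moves
            }
        };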

    Read the article

  • Problem Building dschaefer / android-box2d

    - by Qwark
    I'm trying to build dschaefer's android-box2d, and did follow the recipe. I get this error when trying to build TestBox2d with Eclipse:

        make all
        /cygdrive/c/android/android-ndk-r3/build/prebuilt/windows/arm-eabi-4.2.1/bin/arm-eabi-ld \
            -nostdlib -shared -Bsymbolic --no-undefined \
            -o obj/libtest.so obj/test.o -L../box2d/lib/android -lbox2d \
            -L/cygdrive/c/android/android-ndk-r3/build/platforms/android-3/arch-arm/usr/lib \
            -llog -lc -lstdc++ -lm \
            /cygdrive/c/android/android-ndk-r3/build/prebuilt/windows/arm-eabi-4.2.1/lib/gcc/arm-eabi/4.2.1/interwork/libgcc.a
        /cygdrive/c/android/android-ndk-r3/build/prebuilt/windows/arm-eabi-4.2.1/bin/arm-eabi-ld: cannot find -lbox2d
        make: *** [obj/libtest.so] Error 1

    The only thing I changed was the path to the NDK in TestBox2d/Makefile. There are others with the same problem HERE, but I do not know how to fix it.

    Read the article
