Search Results

Search found 4068 results on 163 pages for 'intel gma'.


  • MySQLdb not INSERTING, _mysql does fine.

    - by Mad_Casual
    Okay, I log onto the MySQL command-line client as root. I then open or otherwise run a Python app using the MySQLdb module as root. When I check the results using Python (IDLE), everything looks fine. When I use the MySQL command-line client, no INSERT has occurred. If I change things around to _mysql instead of MySQLdb, everything works fine. I'd appreciate any clarification(s). "Works" until IDLE/the virtual machine is reset:

        import MySQLdb
        db = MySQLdb.connect(user='root', passwd='*******', db='test')
        cursor = db.cursor()
        cursor.execute("""INSERT INTO test VALUES ('somevalue');""",)

    Works:

        import _mysql
        db = _mysql.connect(user='root', passwd='*******', db='test')
        db.query("INSERT INTO test VALUES ('somevalue');")

    System info: Intel x86, WinXP, Python 2.5, MySQL 5.1.41, MySQL-Python 1.2.2
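
    A plausible explanation, offered as an assumption rather than a confirmed diagnosis: MySQLdb turns autocommit off and runs statements inside a transaction, while the low-level _mysql wrapper sends the query straight to the server. On a transactional table (e.g. InnoDB), the uncommitted row is visible to the inserting session but not to the command-line client, and it vanishes when the connection closes. A minimal sketch of the likely fix:

        import MySQLdb

        db = MySQLdb.connect(user='root', passwd='*******', db='test')
        cursor = db.cursor()
        cursor.execute("INSERT INTO test VALUES ('somevalue');")
        db.commit()  # without this, the open transaction is rolled back on disconnect

        # Alternatively, enable autocommit for the whole connection:
        # db.autocommit(True)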

    Read the article

  • Multi-platform development from one computer

    - by iama
    I am planning to build a new development computer for both Windows and Linux platforms. On Windows, my development would be primarily in .NET/C#/IIS/MSSQL Server. On Linux (preferably Ubuntu), my development would be in Ruby and Python. I am thinking of buying a laptop pre-installed with Windows 7, with 4GB RAM, an Intel Core 2 Duo, and a 320 GB HD, and running two VMs, one for Windows development and one for Linux, with the host OS as my workstation. Of course, I would be running DBs and web servers on the respective platforms. Is this a typical setup? My only concern is running two VMs side by side; I am not sure this configuration would be optimal. The alternative would be to do my Windows development on the host Windows 7 OS. What are your thoughts?

    Read the article

  • Is there any .Net JIT Support from chip vendors?

    - by NoMoreZealots
    I know that ARM actually has some support for Java, and Sun obviously does, but I haven't really seen any references to a chip vendor supporting a .Net JIT compiler. I know IBM and Intel both support C compilers, as do TI and many of the embedded chip vendors. When you think of it, a JIT compiler is just the last stages of compilation and optimization, which you would think would be a good match for a chip vendor's expertise. Perhaps a standardized plug-in compilation engine for the VM would make sense. Microsoft is targeting .Net to embedded Windows platforms as well, so they are fair game. Pete

    Read the article

  • What threading analysis tools do you recommend?

    - by glutz78
    My primary IDE is Visual Studio 2005 and I have a large C/C++ project. I'm interested in what thread analysis tools are recommended. By that I mean a tool, static or dynamic, to help find race conditions, deadlocks, and the like. So far I've casually researched the following:

    1. Intel Thread Checker: I don't believe that it ties into VS 2005?
    2. Valgrind/Helgrind: free.
    3. Coverity: this is a costly tool, if I understand correctly.

    Does anyone have experience with any of these, or with others? I'd much appreciate any advice. Thank you.

    Read the article

  • What is the difference between a 32-bit and 64-bit processor?

    - by JJG
    I have been trying to read up on 32-bit and 64-bit processors (http://en.wikipedia.org/wiki/32-bit_processing). My understanding is that a 32-bit processor (like x86) has registers 32 bits wide. I'm not sure what that means. Does it have special "memory spaces" that can store integer values up to 2^32? I don't want to sound stupid, but I have no idea about processors. I'm assuming 64 bits is, in general, better than 32 bits, although my computer now (one year old, Win 7, Intel Atom) has a 32-bit processor.
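
    As a side note, register width loosely means the size of the values the processor handles and addresses in one native word. A minimal Python sketch (assuming a Python 2.6+ interpreter built for the same word size as the machine it runs on) that makes the difference visible:

        import struct
        import sys

        # Size of a native pointer: 32 bits on a 32-bit build, 64 on a 64-bit one.
        print(struct.calcsize("P") * 8)

        # Largest value of a native signed word: 2**31 - 1 vs 2**63 - 1.
        print(sys.maxsize)

        # A 32-bit address space tops out at 2**32 bytes, i.e. 4 GiB of
        # directly addressable memory.
        print(2 ** 32)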

    Read the article

  • Linux program in FreeBSD

    - by Alex Farber
    Trying to run my program on the FreeBSD OS, I get the following results:

        $ ./myprogram
        ELF binary type "0" not known
        ./myprogram: 1: Syntax error: "&" unexpected (expecting ")")
        $ file myprogram
        myprogram: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped

    The program is built with GCC on an Ubuntu computer. What can I do? Can I build the program for FreeBSD on my Ubuntu computer by changing some build options, or do I need to build it on FreeBSD? Maybe there is some way to convert the executable to a format recognized by FreeBSD?

    Read the article

  • Parallel Programming. Boost's MPI, OpenMP, TBB, or something else?

    - by unknownthreat
    Hello, I am a total novice in parallel programming, but I do know how to program in C++. Now, I am looking around for a parallel programming library. I just want to give it a try, just for fun, and right now I have found three APIs, but I am not sure which one I should stick with. Right now, I see Boost's MPI, OpenMP, and TBB. For anyone who has experience with any of these three APIs (or any other parallelism API), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?

    Read the article

  • How to specify execution time of x86 and PowerPC instructions?

    - by Goofy
    Hello! I have to approximate the execution time of PowerPC and x86 assembler code. I understand that I cannot compute it exactly; it depends on many factors (the current processor state, the fact that x86 processors decode instructions into internal micro-instructions, memory access time depending on whether code is fetched from the cache or from slower memory, etc.). I found some information in the Intel Optimization Reference Manual (Appendix C), but it does not provide information about all general-purpose instructions. Is there any complete reference for this? What about PowerPC processors? Where can I find such information?

    Read the article

  • gdb stack strangeness

    - by aaa
    Hi, I get this weird backtrace (sometimes):

        (gdb) bt
        #0  0x00002b36465a5d4c in AY16_Loop_M16 () from /opt/intel/mkl/10.0.3.020/lib/em64t/libmkl_mc.so
        #1  0x00000000000021da in ?? ()
        #2  0x00000000000021da in ?? ()
        #3  0xbf3e9dec2f04aeff in ?? ()
        #4  0xbf480541bd29306a in ?? ()
        #5  0xbf3e6017955273e8 in ?? ()
        #6  0xbf442b937c2c1f37 in ?? ()
        #7  0x3f5580165832d744 in ?? ()
        ...

    Any ideas why I can't see the symbols? Compiled with debugging symbols, of course. The same session gives symbols at other points.

    Read the article

  • How to retrieve a bigint from a database and place it into an Int64 in SSIS

    - by b0fh
    I ran into this problem a couple of years back and am hoping there has been a fix I just don't know about. I am using an 'Execute SQL Task' in the Control Flow of an SSIS package to retrieve a 'bigint' ID value. The task is supposed to place this in an Int64 SSIS variable, but I am getting the error:

        The type of the value being assigned to variable "User::AuditID" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.

    When I brought this to MS's attention a couple of years back, they stated that I had to work around it by placing the bigint into an SSIS Object variable and then converting the value to Int64 as needed. Does anyone know if this has been fixed, or do I still have to work around this mess?

    Edit, server stats: Product: Microsoft SQL Server Enterprise Edition; Operating System: Microsoft Windows NT 5.2 (3790); Platform: NT INTEL X86; Version: 9.00.1399.06

    Read the article

  • Possible to distribute an MPI (C++) program across the internet rather than within a LAN cluster?

    - by Ben
    Hi there, I've written some MPI code which works flawlessly on large clusters. Each node in the cluster has the same CPU architecture and has access to a networked (i.e. 'common') file system (so that each node can execute the actual binary). But consider this scenario: I have a machine in my office with a dual-core processor (Intel) and a machine at home with a dual-core processor (AMD). Both machines run Linux, and both machines can successfully compile and run the MPI code locally (i.e. using 2 cores). Now, is it possible to link the two machines together via MPI, so that I can utilise all 4 cores, bearing in mind the different architectures, and bearing in mind the fact that there are no shared (networked) filesystems? If so, how? Thanks, Ben.

    Read the article

  • Increasing the JVM maximum heap size for memory intensive applications

    - by Alceu Costa
    I need to run a memory-intensive Java application that uses more than 2GB, but I am having problems increasing the maximum heap size. So far, I have tried the following approaches:

    1. Setting the -Xmx parameter, e.g. -Xmx3000m. This approach fails at the creation of the JVM. From what I've googled, it looks like -Xmx must be less than 2GB.
    2. Using the -XX:+AggressiveHeap option. When I try this approach I get a 'Not enough memory' error which says that the heap size is 1273.4 MB, even though my computer has 8GB of memory.

    Is there another approach I can try to increase the maximum heap size of the JVM? Here's a summary of the computer specs: OS: Windows 7 (64-bit); Processor: Intel Core i7 (2.66 GHz); Memory: 8 GB

    Read the article

  • Why is Decimal('0') > 9999.0 True in Python?

    - by parxier
    This is somehow related to my question Why is '' > 0 True in Python? In Python 2.6.4:

        >>> Decimal('0') > 9999.0
        True

    From the answer to my original question I understand that when comparing objects of different types in Python 2.x, the types are ordered by their name. But in this case:

        >>> type(Decimal('0')).__name__ > type(9999.0).__name__
        False

    Why is Decimal('0') > 9999.0 == True then?

    UPDATE: I usually work on Ubuntu (Linux 2.6.31-20-generic #57-Ubuntu SMP Mon Feb 8 09:05:19 UTC 2010 i686 GNU/Linux, Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) [GCC 4.4.1] on linux2). On Windows (WinXP Professional SP3, Python 2.6.4 (r264:75706, Nov 3 2009, 13:23:17) [MSC v.1500 32 bit (Intel)] on win32) my original statement works differently:

        >>> Decimal('0') > 9999.0
        False

    I am even more puzzled now. %-(
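
    A hedged note on what is probably happening (an assumption based on CPython 2's documented fallback, not verified against these exact builds): before Python 2.7, Decimal does not know how to order itself against float, so CPython falls back to its arbitrary default ordering for mismatched types, and that fallback can differ between builds and platforms. Converting both sides into one numeric domain makes the comparison well defined everywhere:

        from decimal import Decimal

        # Compare within a single numeric type instead of relying on the fallback.
        print(Decimal('0') > Decimal(str(9999.0)))   # False on any platform
        print(float(Decimal('0')) > 9999.0)          # False on any platform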

    Read the article

  • Flash player debugger plugin crashes on Mac OS [tried many version of the plugin and browsers, all o

    - by Ali
    Dear all, I recently started using Mac OS X for a Flex/ActionScript project and am having a problem with the Flash Player debugger plugin for the browsers. OS X: 10.6.3. Browsers I tried: Firefox, Safari, and Chrome. Flash Player debugger: "Flash Player 10 Plugin content debugger (Intel-based Macs)". Whenever I open a page containing Flash content, my browser crashes due to the Flash Player plugin's crash. I checked the version of my Flash Player debug plugin with http://kb2.adobe.com/cps/155/tn_15507.html and, as the version checker is written in Flash, my browser crashes a few seconds later. I am using version 10,0,42,2 (debug edition: yes). This is what I see in the crash log:

        Exception Type:  EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x000000001887e3e4

    Any ideas how I can track down this issue? Cheers, -A

    Read the article

  • [OpenCV] cvFilter2D works very very slow in Ubuntu 9.10 amd64?

    - by Hong
    Hi, has anyone tried cvFilter2D under 64-bit Linux? Recently, when I was trying to port some code to the amd64 version of Ubuntu 9.10, I found that cvFilter2D works really slowly. The version is OpenCV 2.0. The code is as follows:

        CvMat *mat_src    = cvCreateMat(128, 128, CV_32FC1);
        CvMat *mat_dest   = cvCreateMat(128, 128, CV_32FC1);
        CvMat *mat_kernel = cvCreateMat(25, 25, CV_32FC1);
        // initialization ...
        cvFilter2D((CvMat*)mat_src, (CvMat*)mat_dest, (CvMat*)mat_kernel,
                   cvPoint((25-1)/2, (25-1)/2));
        // It needs approximately 100 ms to finish that...

    My CPU is a 2.4 GHz Intel. However, OpenCV 1.1pre only costs me 3 ms for the same code...

    Read the article

  • python json_encode throws KeyError exception

    - by MattM
    In a unit test case that I am running, I get a KeyError exception on the 4th json object in the json text below. I went through the sub-objects and found that it was the "cpuid" object that is the offending object, but I am completely at a loss as to what is wrong with the formatting.

        response = self.app.post(
            '/machinestats',
            params=dict(record=self.json_encode([
                {"type": "crash", "instance_id": "xxx", "version": "0.2.0",
                 "build_id": "unknown", "crash_text": "Gah!"},
                {"type": "machine_info", "machine_info": "I'm awesome.",
                 "version": "0.2.0", "build_id": "unknown", "instance_id": "yyy"},
                {"machine_info": "Soup", "crash_text": "boom!", "version": "0.2.0",
                 "build_id": "unknown", "instance_id": "zzz", "type": "crash"},
                {"build_id": "unknown",
                 "cpu_brand": "intel",
                 "cpu_count": 4,
                 "cpuid": {
                     "00000000": {"eax": 123, "ebx": 456, "ecx": 789, "edx": 321},
                     "00000001": {"eax": 123, "ebx": 456, "ecx": 789, "edx": 321}},
                 "driver_installed": True,
                 "instance_id": "yyy",
                 "version": "0.2.0",
                 "machine_info": "I'm awesome.",
                 "os_version": "linux",
                 "physical_memory_mib": 1024,
                 "product_loaded": True,
                 "type": "machine_info",
                 "virtualization_advertised": True}
            ])))
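
    Without seeing json_encode itself it is hard to say what raises the KeyError, but a quick way to corner it is to encode each record on its own and catch the failure. A minimal sketch using the standard-library json module (an assumption; the post's custom json_encode helper may behave differently):

        import json

        records = [
            {"type": "crash", "instance_id": "xxx", "version": "0.2.0",
             "build_id": "unknown", "crash_text": "Gah!"},
            # ... the remaining three records from the test above ...
        ]

        for i, record in enumerate(records):
            try:
                json.dumps(record)
                print("record %d encoded fine" % i)
            except Exception as exc:
                print("record %d failed: %r" % (i, exc))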

    Read the article

  • How to Use Eclipse to Debug JNI code (Java & C/C++)

    - by tkryger
    While I can debug my application with the Eclipse JDT debugger for Java code and GDB for C code, I would prefer to use a single tool for all my debugging. I found several projects that enable "mixed-mode" debugging in Eclipse and include support for single-stepping between Java and native code:

    Intel's Integrated Debugger for Java/JNI Environments
    Mariot Chauvin's Summer of Code project: Support Seamless Debugging between JDT & CDT

    Unfortunately, one claims to be pre-release quality and the other is currently unmaintained. Are there any plug-ins that bring mixed-mode debugging functionality to Eclipse in a reliable way, or should I continue to use two separate debuggers?

    Read the article

  • How to manipulate GL.BindFramebuffer to bind GL_EXT_framebuffer

    - by Alan
    I'm trying to change the framebuffer object from GL_ARB_framebuffer and force it to use GL_EXT_framebuffer, since my system is not compatible with the first one. Where in the solution do I need to implement this, and how? More information on my problem: whenever I create a new Windows OpenGL project from Visual Studio using MonoGame, I get the error "cannot find entry point glBindFramebuffer in opengl32.dll", since the framebuffer it uses is GL_ARB_framebuffer, which is only supported in OpenGL 3. In a GitHub post I read, they suggest a patch where you force the framebuffers to use GL_EXT_framebuffer, but I don't know how to force them to use the EXT version instead of the ARB one. By the way, I'm using a Mobile Intel 4 Series card, which only supports OpenGL v2, and ARB needs OpenGL v3.

    Read the article

  • Cocoa Screensaver Framework error message

    - by Veljko Skarich
    Hi, I'm trying to make a screen saver using the Cocoa ScreenSaver framework. The project builds fine and generates the .saver file, but when I try to run it in the preferences test window, it displays the error message: "You cannot use the screen saver with this version of Mac OS X. Please contact the vendor to get a newer version of the screen saver." I have the Xcode settings at Release | x86_64, and I am running OS X 10.6.6 on a 2.4 GHz Intel Core i5 MacBook Pro. I've searched around online, and most of the solutions to this error message come down to making sure the build is 64-bit, which the x86_64 setting should indeed take care of. I am trying to play a QT movie in the screensaver, if that makes any difference. I am at a loss; any help would be appreciated. Thank you.

    Read the article

  • Do Django tests run slower on the Mac compared to Linux?

    - by Thierry Lam
    I'm currently developing my Django projects on both:

    Mac OS X 10.5, 32-bit
    Ubuntu Server 9.10, 64-bit (1 CPU, 512MB RAM)

    Both of the above OSes are using: Python 2.6.4, Django 1.1.1, MySQL 5.1. Running 12 tests for one of my applications takes:

    Mac: 57.513s
    Linux: 30.935s

    EDIT: Mac hardware spec: MacBook Pro, 2.2 GHz Intel Core 2 Duo, 3GB RAM

    I'm running the Ubuntu OS on the same Mac above through VMware Fusion 2.0.6. You might argue that Ubuntu Server 64-bit is faster, but I have observed a similar speed difference on the Ubuntu 8.10 32-bit desktop edition. Even if I turn off my Linux VM and other Mac applications, I still experience the slowness. Has anyone else experienced this Django test speed difference across these two OSes?

    Read the article

  • Which is faster when animating the UI: a Control or a Picture?

    - by Christopher Walker
    I'm working with and testing on a computer built with the following: 1 GB RAM (now 1.5 GB), a 1.7 GHz Intel Pentium processor, and ATI Mobility Radeon X600 graphics. I need to scale/transform controls and make them flow smoothly. Currently I'm manipulating the size and location of a control every 24-33ms (30fps), ±3px. When I add a 'fade' effect to an image, it fades in and out smoothly, but it is only 25x25 px in size. The control is 450x75 px to 450x250 px in size. In 2D games such as Bejeweled 3, the sprites animate with no choppy animation. So, as the title would suggest: which is easier/faster on the processor: animating a bitmap (rendering it to the parent control during animation) or animating the control itself?

    Read the article

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
            where f1 = filter g [1..(quot n 4)]
                  f2 = filter g [(quot n 4)+1..(quot n 2)]
                  g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with:

        findDivisors.exe +RTS -s -N2 -RTS

    I have found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
            where g z = n `rem` z == 0

    My processor is a dual-core Core 2 Duo from Intel. I was wondering if there can be any improvement in the above code, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)

    and

        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are these converted, overflowed, dud, GC'd, and fizzled, and how can they help to improve the time?

    Read the article

  • Can EPD Python and MacPorts Python coexist on OS X (matplotlib)?

    - by bjoern
    I've been using MacPorts Python 2.6 on OS X 10.6. I am considering also installing the Enthought Python Distribution (EPD) on the same machine because it comes preconfigured with matplotlib and other nice data analysis and visualization packages. Can the two Python distributions coexist peacefully on the same machine? What potential problems will I have to look out for (e.g., environment variables)? I know that building matplotlib through MacPorts is an option, but the process is lengthy (on the order of a full day) and there are open questions about compiling some dependencies on 64-bit Intel. I would like to know about the tradeoffs before committing to one of the two approaches.
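
    Broadly, the two distributions install into separate prefixes, and whichever python binary comes first on PATH wins, so most conflicts come down to which interpreter (and which site-packages) a given shell actually picked up. A small standard-library sketch for checking that from inside Python:

        import sys

        # Which interpreter binary is actually running.
        print(sys.executable)

        # The installation prefix, e.g. a /opt/local/... path for MacPorts
        # or an EPD directory for Enthought.
        print(sys.prefix)

        # Where imports are resolved from; mixing site-packages across
        # distributions (e.g. via PYTHONPATH) is a common source of trouble.
        print(sys.path)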

    Read the article

  • Linux binary built for 2.0 kernel wouldn't execute on 2.6.x kernel.

    - by lorin
    I was installing a binary Linux application on Ubuntu 9.10 x86_64. The app shipped with an old version of gzip (1.2.4) that was compiled for a much older kernel:

        $ file gzip
        gzip: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.0.0, stripped

    I wasn't able to execute this program. If I tried, this happened:

        $ ./gzip
        -bash: ./gzip: No such file or directory

    ldd was similarly unhappy with this binary:

        $ ldd gzip
        not a dynamic executable

    This isn't a showstopper for me, since my installation has a working version of gzip I can use. But I'm curious: what's the most likely source of this problem? A corrupted file? Or a binary incompatibility due to being built for a much older {kernel,libc,...}?

    Read the article
