Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 771 of 1113

  • Python ImportError when executing 'import.py', but not when executing 'python import.py'

    - by Martin Del Vecchio
    I am running Cygwin Python version 2.5.2. I have a three-line source file, called import.py:

        #!/usr/bin/python
        import xml.etree.ElementTree as ET
        print "Success!"

    When I execute "python import.py", it works:

        C:\Temp>python import.py
        Success!

    When I run the Python interpreter and type the commands, it works:

        C:\Temp>python
        Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
        [GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> #!/usr/bin/python
        ... import xml.etree.ElementTree as ET
        >>> print "Success!"
        Success!
        >>>

    But when I execute "import.py" on its own, it does not work:

        C:\Temp>which python
        /usr/bin/python

        C:\Temp>import.py
        Traceback (most recent call last):
          File "C:\Temp\import.py", line 2, in ?
            import xml.etree.ElementTree as ET
        ImportError: No module named etree.ElementTree

    When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux, and it works fine on Linux. Any ideas? Thanks.
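
    One way to narrow this down: when "import.py" is launched by name from cmd.exe, the interpreter registered in the Windows file association for .py files runs it, which is not necessarily Cygwin's /usr/bin/python, and a different interpreter means a different sys.path. A quick check is to temporarily print the interpreter details from inside the script and compare the two ways of running it:

        # diagnostic sketch: add temporarily to the top of import.py
        import sys
        print(sys.executable)   # which python binary is running this script
        print(sys.version)      # its version string
        print(sys.path)         # where it will look for xml.etree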

    Read the article

  • "No bootable device - insert boot disk" after restart on Ubuntu 10.04 b1 update

    - by anjanesh
    I was installing updates on my Ubuntu 10.04 beta 1 64-bit PC when, after a reboot, I get:

        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting Intel Boot Agent.
        No bootable device - insert boot disk and press any key

    How did my boot record disappear? My BIOS boot settings are:

        Boot Menu Type            : Normal
        Boot Device Priority      : <CD/DVD-ROM Drive> <Hard Disk Drive> <Floppy Drive> <Ethernet>
        Hard Disk Drive Order     : No Hard Disk Drive
        CD/DVD ROM Drive Order    : <PT-TSSTcorp CDDV>
        Removable Drive Order     : No Removable Drive
        Boot to Optical Devices   : <Enable>
        Boot to Removable Devices : <Enable>
        Boot to Network           : <Enable>
        USB Boot                  : <Enable>

    Read the article

  • not able to use g++ from Fedora

    - by eSKay
        $ yum list | grep gcc
        arm-gp2x-linux-gcc.i686        4.1.2-11.fc12              @fedora
        arm-gp2x-linux-gcc-c++.i686    4.1.2-11.fc12              @fedora
        gcc.i686                       4.4.3-4.fc12               @updates
        libgcc.i686                    4.4.3-4.fc12               @updates
        avr-gcc.i686                   4.4.2-2.fc12               updates
        avr-gcc-c++.i686               4.4.2-2.fc12               updates
        compat-gcc-34.i686             3.4.6-18                   fedora
        compat-gcc-34-c++.i686         3.4.6-18                   fedora
        compat-gcc-34-g77.i686         3.4.6-18                   fedora
        compat-libgcc-296.i686         2.96-143                   fedora
        gcc-c++.i686                   4.4.3-4.fc12               updates
        gcc-gfortran.i686              4.4.3-4.fc12               updates
        gcc-gnat.i686                  4.4.3-4.fc12               updates
        gcc-java.i686                  4.4.3-4.fc12               updates
        gcc-objc.i686                  4.4.3-4.fc12               updates
        gcc-objc++.i686                4.4.3-4.fc12               updates
        mingw32-gcc.i686               4.4.1-3.fc12               fedora
        mingw32-gcc-c++.i686           4.4.1-3.fc12               fedora
        mingw32-gcc-gfortran.i686      4.4.1-3.fc12               fedora
        mingw32-gcc-objc.i686          4.4.1-3.fc12               fedora
        mingw32-gcc-objc++.i686        4.4.1-3.fc12               fedora
        msp430-gcc.i686                3.2.3-3.20090210cvs.fc12

    gcc works fine on .c files but fails on .cpp files, saying:

        gcc: error trying to exec 'cc1plus': execvp: No such file or directory

    g++ fails, saying:

        g++: Command not found.

    What should I do to be able to compile C++ files?

    Read the article

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi There, I've been wracking my brains trying to get Windows 7 authenticating against an MIT Kerberos 5 realm (which is running on an Arch Linux server). I've done the following on the server (aka dc1):

        - Installed and configured an NTP time server
        - Installed and configured DHCP and DNS (set up for the domain tnet.loc)
        - Installed Kerberos from source
        - Set up the database
        - Configured the keytab
        - Set up the ACL file with: *@TNET.LOC *
        - Added a policy for my user and my machine:

            addpol users
            addpol admin
            addpol hosts
            ank -policy users [email protected]
            ank -policy admin tom/[email protected]
            ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE

    I then did the following on the Windows 7 client (aka wdesk3):

        - Made sure the IP address was supplied by my DHCP server and that dc1.tnet.loc pings OK
        - Set the internet time server to my Linux server (aka dc1.tnet.loc)
        - Used ksetup to configure the realm:

            ksetup /SetRealm TNET.LOC
            ksetup /AddKdc dc1.tnet.loc
            ksetip /SetComputerPassword MYPASSWORDHERE
            ksetip /MapUser * *

        - After some googling I found that DES encryption is disabled by Windows 7 by default, so I turned on the policy to support DES encryption over Kerberos
        - Rebooted the Windows client

    However, after doing all that I still cannot log in from my Windows client. :( Looking at the logs on the server, the request looks fine and everything works great; I think the issue is that the response from the KDC is not recognized by the Windows client, and a generic login error appears: "Login Failure: User name or password is invalid". (I tail'ed the server log, so I know the requests happen when the Windows machine attempts the login.) If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server. But I can't find any error logs on the Windows machine; does anyone know where these are? If I try:

        runas /netonly /user:[email protected] cmd.exe

    everything works (although nothing appears in the server logs, so I'm wondering if it touches the server at all for this?), but if I run:

        runas /user:[email protected] cmd.exe

    I get the same authentication error. Any Kerberos gurus out there who can give me some ideas as to what to try next? Pretty please?

    Read the article

  • Enable VT-x in HP 8300 elite

    - by lang2
    I have an HP 8300 Elite (Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz). I'm trying to run a virtual machine via VirtualBox, but every time I start the VM it says:

        VT-x is disabled in the BIOS. (VERR_VMX_MSR_VMXON_DISABLED)

    My lscpu output is like this:

        Architecture:          x86_64
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                8
        On-line CPU(s) list:   0-7
        Thread(s) per core:    2
        Core(s) per socket:    4
        Socket(s):             1
        NUMA node(s):          1
        Vendor ID:             GenuineIntel
        CPU family:            6
        Model:                 58
        Stepping:              9
        CPU MHz:               1600.000
        BogoMIPS:              6784.74
        Virtualization:        VT-x
        L1d cache:             32K
        L1i cache:             32K
        L2 cache:              256K
        L3 cache:              8192K
        NUMA node0 CPU(s):     0-7

    I went into the BIOS, but the settings that can be tweaked are very limited and I couldn't find the VT-x option. Does anybody know how to enable it on this setup?
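
    Note that lscpu above already reports "Virtualization: VT-x", so the CPU itself supports the feature; what is missing is the firmware toggle, which on HP business desktops usually sits under a Security or System Configuration style menu rather than the main boot pages. A quick sanity check from a Linux shell, sketched below, only confirms that the vmx CPU flag is exposed to the running kernel; it cannot tell you whether the BIOS has the feature locked off:

        # sketch: confirm the vmx CPU flag is visible to the running kernel
        import re

        with open("/proc/cpuinfo") as f:
            cpuinfo = f.read()

        print("vmx flag present: %s" % bool(re.search(r"\bvmx\b", cpuinfo)))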

    Read the article

  • Does this laptop have high enough specifications for gaming? [closed]

    - by Grant
    Here's the laptop. It wouldn't be for hardcore gaming, mostly things like the new Deus Ex game, Mirror's Edge, Portal 2, etc. I need to replace my current (broken) laptop and thought this would be a good opportunity to get to play some of these games. My current laptop is really only lacking in its graphics card (Intel Series 4 chipset). If this laptop isn't good enough, I would really appreciate suggestions. I won't be able to get a desktop, otherwise I would, and I can't spend more than $1,000 on my new laptop.

    Read the article

  • Java Runtime.getRuntime().exec() alternatives

    - by twilbrand
    I have a collection of webapps that are running under Tomcat. Tomcat is configured to have as much as 2 GB of memory using the -Xmx argument. Many of the webapps need to perform a task that ends up making use of the following code:

        Runtime runtime = Runtime.getRuntime();
        Process process = runtime.exec(command);
        process.waitFor();
        ...

    The issue we are having is related to the way this "child process" is created on Linux (Red Hat 4.4 and CentOS 5.4). It's my understanding that an amount of memory equal to the amount Tomcat is using needs to be free in the pool of physical (non-swap) system memory for this child process to be created. When we don't have enough free physical memory, we get this:

        java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
            ... 28 more

    My questions are:

        1) Is it possible to remove the requirement that an amount of memory equal to the parent process be free in physical memory? I'm looking for an answer that lets me specify how much memory the child process gets, or that allows Java on Linux to use swap for this.

        2) What are the alternatives to Runtime.getRuntime().exec() if no solution to #1 exists? I could only think of two, neither of which is very desirable: JNI (very undesirable), or rewriting the program we are calling in Java and making it its own process that the webapp communicates with somehow. There have to be others.

        3) Is there another side to this problem that I'm not seeing that could potentially fix it? Lowering the amount of memory used by Tomcat is not an option. Increasing the memory on the server is always an option, but seems like more of a band-aid.

    The servers are running Java 6.
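
    One way to flesh out the second alternative mentioned in question 2 (keeping the called program behind its own small, long-lived helper process, so the expensive fork happens from a tiny process instead of the 2 GB JVM) is sketched below in Python purely for illustration; the port number and the newline-delimited protocol are assumptions, not anything from the original setup:

        # command_runner.py: minimal sketch of a small helper process that runs
        # commands on behalf of a large parent, so fork() happens from this tiny
        # process instead of the 2 GB JVM. Port and protocol are illustrative.
        import socket
        import subprocess

        HOST, PORT = "127.0.0.1", 9010  # assumed values

        def main():
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((HOST, PORT))
            server.listen(5)
            while True:
                conn, _ = server.accept()
                with conn, conn.makefile("rw") as stream:
                    command = stream.readline().strip()
                    if not command:
                        continue
                    # Run the requested command from this small process.
                    result = subprocess.run(command, shell=True,
                                            capture_output=True, text=True)
                    stream.write("%d\n%s" % (result.returncode, result.stdout))
                    stream.flush()

        if __name__ == "__main__":
            main()

    The Tomcat side would then open a local socket, write the command line, and read back the exit status and output, instead of calling Runtime.getRuntime().exec() from the 2 GB process.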

    Read the article

  • Cannot install Windows 7 on new PC (BSOD during installation)

    - by andronz
    Yesterday I received new hardware and tried to assemble all of it. But while installing Windows 7 x64 I got a BSOD twice, with an error message like this: "Your PC has rebooted unexpectedly, installation will continue after reboot." I checked everything and tried to install one more time. It was almost successful; Windows installed, but I cannot format my disk, open cmd or Control Panel, boot from DVD, etc. It's the first time I have assembled a PC, and I don't know whether the problem is in my hardware or in Windows. My components:

        MSI Z77A-G43
        Intel i5 3570K
        Seagate Barracuda (plugged into the 6 Gb/s SATA 3 port; the board also has 3 Gb/s SATA ports)
        Corsair XMS3 (4x4 GB)
        HIS IceQ

    Also, in the BIOS I changed the RAM frequency to DDR3 1600 and the voltage to 1.65 V.

    Read the article

  • Programming activities for high school kids who have no idea what CS or programming is

    - by pointdxt
    I work at a small high school in a very high-poverty area. There are only a handful of seniors thinking about applying to be an engineer of some sort in college, and only one kid who applied for Computer Science (he has a couple of acceptances so far!). He's been talking to me a lot since I majored in Computer Science as well, and he is very excited about it. Unfortunately, our school doesn't have a Computer Science course of any kind, so he asks me a lot of stuff. I want to help him out since he's really excited about majoring in CS, but I don't know where to begin. I could say "put Linux on a computer, go online and research stuff" like I did, but this kid needs some direction; he doesn't even know what Linux is, let alone have a spare computer around for that sort of thing. He doesn't know much about CS but is keenly interested in having a computer do all sorts of things, and I don't know how to help him in a meaningful way. Any advice? I'm not a teacher at the school, so I'm not a great educator; I do IT at the school.

    Read the article

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32-bit mode when compiled as an i386/x86_64 universal binary and executed on my 64-bit 10.6.2 MacPro1,1.

    My Python setup: With the help of this answer, I built a 32/64-bit Intel universal binary of Python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make some universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions in hyperjeff's article, only with a more up-to-date numpy (1.4.0) and skipping the bit about moving numpy aside briefly during the installation of scipy. Now, everything except scipy seems to be working as far as I can tell, and I can indeed select between 32- and 64-bit mode using arch -i386 python and arch -x86_64 python.

    The error: scipy complains in 32-bit mode:

        $ arch -x86_64 python -c "import scipy.interpolate; print 'success'"
        success
        $ arch -i386 python -c "import scipy.interpolate; print 'success'"
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
            from interpolate import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
            import scipy.special as spec
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module>
            from basic import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module>
            from _cephes import *
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_
          Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so
          Expected in: flat namespace
          in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so

    Attempt at tracking down the problem: It looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32-bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries:

        $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/
        $ file _cephes.so specfun.so
        _cephes.so: Mach-O universal binary with 2 architectures
        _cephes.so (for architecture i386):     Mach-O bundle i386
        _cephes.so (for architecture x86_64):   Mach-O 64-bit bundle x86_64
        specfun.so: Mach-O universal binary with 2 architectures
        specfun.so (for architecture i386):     Mach-O bundle i386
        specfun.so (for architecture x86_64):   Mach-O 64-bit bundle x86_64

    Ho hum. I'm stuck. Things I may try but haven't figured out how yet include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32-bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated.

    (edit) More details to address questions: I'm using gfortran (GNU Fortran from GCC 4.2.1, Apple Inc. build 5646). Python 2.6.4 was installed more-or-less like so:

        cd /tmp
        curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2
        tar xf Python-2.6.4.tar.bz2
        cd Python-2.6.4
        # Now replace buggy pythonw.c file with one that supports the "arch" command:
        curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c
        ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel
        make -j4
        sudo make frameworkinstall

    Scipy 0.7.1 was installed pretty much as described here, but it boils down to a simple sudo python setup.py install. It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau:

        $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
        00000000000d4950 T _aswfa_
        000000000011e4b0 d _oblate_aswfa_data
        000000000011e510 d _oblate_aswfa_nocv_data
        (snip)
        $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
                 U _aswfa_
        0002e96c d _oblate_aswfa_data
        0002e99c d _oblate_aswfa_nocv_data
        (snip)

    however, I can't yet explain its absence.
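
    A quick way to confirm, from inside each interpreter, which architecture is actually running and which _cephes binary gets picked up is a diagnostic along the lines sketched below; it only reports facts, it does not address the missing _aswfa_ symbol itself:

        # run under both `arch -i386 python` and `arch -x86_64 python`
        import struct
        import platform

        print(struct.calcsize("P") * 8)      # 32 for an i386 process, 64 for x86_64
        print(platform.architecture())

        try:
            from scipy.special import _cephes
            print(_cephes.__file__)          # which binary was loaded
        except ImportError as exc:
            print(exc)                       # the dlopen/symbol error in 32-bit mode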

    Read the article

  • low-cost RAID NAS for home use?

    - by gravyface
    I have a noisy, power-hungry Pentium 4 based Ubuntu server that I want to replace with a nice, low-power mini-ITX/Intel Atom-based machine to do my network services (DHCP, DNS, IPSec, Web/mail, FTP, etc.). I am also thinking of a (hopefully) equally low-powered NAS using NFS over GbE, with at least 1 TB of space and a RAID 5 (preferred) or RAID 0 (likely) configuration for redundancy, plus a couple of spare disks I can swap in as needed down the road. Would I be better off getting a full-sized ATX mobo/case and configuring the RAID internally? I really want to keep power consumption down as much as possible, as I leave my home server up 24/7.

    Read the article

  • Mac OS X Server 10.6 - Apple's software mirrored RAID worth it?

    - by Arko
    Hi, I am installing an Intel Xserve (quad-core Xeon) with Snow Leopard Server (10.6) on two 80 GB 7200 rpm SATA HDs. I created a mirrored RAID set from those two drives using Disk Utility, and all went fine. I was then asking myself whether this is really a good idea. I know that a hardware RAID system would be better, but what about this software RAID? Do you have any feedback on this? Will it work fine if one HD breaks down? Does this affect performance? [UPDATE] In short: hardware RAID is better than software RAID, which is better than none. Thank you all for the answers; they were very helpful, especially Gordon's script to monitor failures, as Apple's software RAID is pretty silent about a drive failure.
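
    Regarding the monitoring script mentioned in the update, the rough shape of such a check could look like the sketch below; the exact status strings to match in diskutil's output and the way the alert is delivered are assumptions to adapt, not a definitive implementation:

        # sketch: periodic health check for an Apple software RAID mirror
        import subprocess

        def looks_unhealthy(report):
            markers = ("Degraded", "Failed", "Offline")  # assumed status strings
            return any(marker in report for marker in markers)

        report = subprocess.check_output(["diskutil", "appleRAID", "list"]).decode("utf-8", "replace")
        if looks_unhealthy(report):
            # replace with the mail/notification mechanism of your choice
            print("WARNING: the mirrored RAID set may need attention:")
            print(report)

    Run periodically from launchd or cron, a check like this surfaces a failed mirror member that would otherwise stay silent.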

    Read the article

  • Unusual Restart

    - by Nikit Batale
    I am currently running Windows XP Professional on my computer. My configuration is Intel Dual Core 3.00 GHz, 512 MB RAM, 160 GB HDD. Sometimes, the computer just restarts without any particular reason. After restarting, I get an error report, "Windows has just recovered from a serious error." Such incidents happen once in 5-6 days. I don't face any other problems apart from this. Also, I don't face this problem in Ubuntu Karmic Koala though.

    Read the article

  • CentOS 5.5 APIC issue on ESX 4.1 & ML115

    - by Adnan
    Hi, I've just installed vSphere 4.1 on an HP ProLiant ML115 G5 Quad-core and am trying to install CentOS 5.5 as a guest system. However, when the guest boots up I get a calibrate_APIC_clock warning and a kernel panic message. I've come across this knowledge base article on the vmware website which suggests moving the guest onto another Intel based host (!). Funnily enough I don't have a collection of spare host servers sitting around, so can anyone suggest another solution? Alternatively, would installing an earlier version of CentOS get around this issue, or would a yum update put me back to square one? How about BIOS settings, could anything be tweaked there? Thanks.

    Read the article

  • Dual monitors through 1 HDMI port

    - by Carlos
    I currently have a Dell Studio XPS 13 laptop connected to a 24" HP monitor (w2448hc). I'm thinking of getting a second one, but am wondering what I need for the setup (hardware-wise). I was also wondering if there is any downside to it, or something I should be aware of: for example, image quality loss, GPU overloading, or anything important I should know. More than anything I'm interested in your advice. Also, the monitors have built-in speakers (HDMI sound output); will the sound be reproduced by only one monitor or both? Specs:

        Model:    Dell Studio XPS 13
        OS:       Genuine Windows® 7 Home Premium 64-bit
        CPU:      Intel® Core™ 2 Duo P8600 (2.4 GHz, 3 MB L2 cache, 1067 MHz FSB)
        Chipset:  NVIDIA® GeForce® MCP79MX
        RAM:      4 GB 1067 MHz DDR3 SDRAM
        Graphics: SLI NVIDIA® GeForce® 9500M - 256 MB

    Thanks for your advice; if there is anything additional I need to buy and you have a personal preference, pass along the brand name so I can check it out. Thanks!

    Read the article

  • Dolby Digital Live (DDL) on Asus Rampage II Gene (Creative X-Fi Extreme)

    - by kevyn
    Hi there, I have an Asus Rampage II Gene motherboard which has X-Fi Extreme audio built in. I can get it to work with Windows 7 OK using the Creative drivers; however, when I try to install the DDL/DTS add-on pack from Creative I get the error message:

        "There are no supported audio device available. You need to close the application. Click OK to close the application now"

    I don't understand it, because I have the Creative software installed OK and it supports the sound without any problems. In Device Manager the audio device comes up as 'High Definition Audio Device' and uses driver 6.1.7600.16385 from Microsoft. I tried using the Creative drivers, which show up as 'SoundMAX HD Audio', but these do not allow any of the Creative products to run properly. Please, can anyone offer any help, or even just confirm that DDL can work with my onboard sound?

        Windows 7 Ultimate 64-bit
        6 GB DDR3
        XFX GS8800 384 MB
        Asus Rampage II Gene
        Intel i7 920 (2.66)

    Read the article

  • C: Running Unix configure file in Windows

    - by Shiftbit
    I would like to port a few applications that I use on Linux to Windows. In particular I have been working on wdiff, a program that compares the differences between two files word by word. Currently I have been able to successfully compile the program on Windows through Cygwin. However, I would like to run the program natively on Windows, similar to the UnixUtils project. How would I go about porting Unix utilities to a Windows environment? My tentative guess is to manually create the ./configure file so that I can create a proper makefile. Am I on the right track? Has anyone had experience porting GNU software to Windows?

    Update: I've compiled it on Code::Blocks and I get these errors:

        wdiff.c|226|error: `SIGPIPE' undeclared (first use in this function)
        readpipe.c:71: undefined reference to `_pipe'
        readpipe.c:74: undefined reference to `_fork'

    (SIGPIPE is a Linux/POSIX signal that is not supported by Windows; is there an equivalent?)

        wdiff.c|1198|error: `PRODUCT' undeclared (first use in this function)

    PRODUCT is defined in the configure.in file; hard-coding it would probably be the fastest solution.

    Outcome: MSYS took care of the configure problems, but MinGW couldn't solve the POSIX issues. I attempted to use pthreads as recommended by mrjoltcola; however, after several hours I couldn't get it to compile or link using the provided libraries. I think if this had worked it would have been the solution I was after. Special mention to Michael Madsen for MSYS.

    Read the article

  • What is the minimum power supply wattage needed for a Pentium 2.4GHz system?

    - by scottmarlowe
    It looks like I've got a dead Antec True 330 power supply in an older desktop that has an Intel D845PESV motherboard, a Pentium 2.4GHz processor, 2 dvd/cd writers, 2 hard drives, and other typical devices. I have an even older computer that is not being used that has a 200W power supply. Can a 200W power supply drive what I've listed above? Or, put another way, what is the minimum power supply specs for the above system? (A new 350W power supply will run me $30--so buying a new one is not a problem--but I'm curious about the question nonetheless).
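
    For a rough sense of the arithmetic, the components listed add up to something like the sketch below; the per-component wattages are generic, assumed figures, not measurements or specs for this particular system:

        # back-of-the-envelope estimate; per-component wattages are assumptions
        estimated_draw_watts = {
            "Pentium 4-class 2.4 GHz CPU": 60,
            "motherboard + RAM": 35,
            "2 hard drives (~10 W each)": 20,
            "2 DVD/CD writers (~20 W each at peak)": 40,
            "fans, USB and other typical devices": 15,
        }
        print(sum(estimated_draw_watts.values()))  # ~170 W peak with these guesses

    With those guesses a 200 W unit is borderline, which is why a modest 300-350 W replacement is the usual safe choice.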

    Read the article

  • Win7 glass-like Aero feature - unable to enable

    - by user24752
    I have a 4-month-old PC with Windows 7 Home Premium x64 (Windows Experience Index 5.4):

        Intel i5 processor
        6 GB memory
        nVidia GT220 video card

    During games, Windows reported a shortage of system resources, so it switched the desktop back to the "Windows 7 Basic" theme. After game-over, I could switch back to the normal theme and enjoy all the Aero eye candy. However, lately the glass-like window transparency feature got disabled, and I have found no way to enable it again. There is a troubleshooting option in Control Panel saying "Find and fix problems with transparency and other visual effects"; if I launch that, it does not find anything. The Event Viewer is full of the following warnings:

        The Desktop Window Manager is experiencing heavy resource contention.
        Scenario : The Desktop Window Manager responsiveness has degraded.

    Taskbar, window borders, etc.: none of the other transparent features work, and I cannot turn them on. Any thoughts?

    Read the article

  • Wrong number of args passed to: repl$repl

    - by grm
    Hi, I am having trouble with a Compojure "Getting Started" example that I do not seem to understand. When I run the example from http://weavejester.github.com/compojure/docs/getting-started.html I get the following error at the lein repl step:

        ~/hello-www> lein repl src/hello_www/core.clj
        Exception in thread "main" java.lang.IllegalArgumentException: Wrong number of args passed to: repl$repl (NO_SOURCE_FILE:0)
            at clojure.lang.Compiler.eval(Compiler.java:5359)
            at clojure.lang.Compiler.eval(Compiler.java:5311)
            at clojure.core$eval__4350.invoke(core.clj:2364)
            at clojure.main$eval_opt__6502.invoke(main.clj:228)
            at clojure.main$initialize__6506.invoke(main.clj:247)
            at clojure.main$script_opt__6526.invoke(main.clj:263)
            at clojure.main$main__6544.doInvoke(main.clj:347)
            at clojure.lang.RestFn.invoke(RestFn.java:483)
            at clojure.lang.Var.invoke(Var.java:381)
            at clojure.lang.AFn.applyToHelper(AFn.java:180)
            at clojure.lang.Var.applyTo(Var.java:482)
            at clojure.main.main(main.java:37)
        Caused by: java.lang.IllegalArgumentException: Wrong number of args passed to: repl$repl
            at clojure.lang.AFn.throwArity(AFn.java:439)
            at clojure.lang.AFn.invoke(AFn.java:43)
            at clojure.lang.Var.invoke(Var.java:369)
            at clojure.lang.AFn.applyToHelper(AFn.java:165)
            at clojure.lang.Var.applyTo(Var.java:482)
            at clojure.core$apply__3776.invoke(core.clj:535)
            at leiningen.core$_main__59$fn__61.invoke(core.clj:94)
            at leiningen.core$_main__59.doInvoke(core.clj:91)
            at clojure.lang.RestFn.applyTo(RestFn.java:138)
            at clojure.core$apply__3776.invoke(core.clj:535)
            at leiningen.core$_main__59.invoke(core.clj:97)
            at user$eval__67.invoke(NO_SOURCE_FILE:1)
            at clojure.lang.Compiler.eval(Compiler.java:5343)
            ... 11 more

    I have tried both the stable and the development version of lein without any success. Any ideas on what I could look for next? I get the same result on both Linux and Cygwin. When I run it manually, it seems to work fine on Linux:

        java -cp "lib/*" clojure.main src/hello_www/core.clj
        2010-05-17 19:34:17.280::INFO:  Logging to STDERR via org.mortbay.log.StdErrLog
        2010-05-17 19:34:17.281::INFO:  jetty-6.1.14
        2010-05-17 19:34:17.382::INFO:  Started [email protected]:8080

    Read the article

  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all three languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any DB libraries etc., only string and math operations. I observed performance on 30k records with SQL like:

        select function(txt) from _tmp_perf_test

    changing function to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio did not improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that did not help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on the Informix server on Windows? More info about Java on the servers:

        c:\Informix\extend\krakatoa\jre\bin>java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
        J9VM - 20081126_26240_lHdSMr
        JIT  - 20081112_1511ifx1_r8
        GC   - 200811_07)
        JCL  - 20081129

        [root@informix11 bin]# ./java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
        J9VM - 20071004_14218_LHdSMr
        JIT  - 20070820_1846ifx1_r8
        GC   - 200708_10)
        JCL  - 20071025

    Read the article

  • Mechanical mouse using USB-to-PS/2 Adapter freezes occasionally

    - by izn
    I am using an AOpen PS/2 mechanical mouse in Ubuntu 11.10 with a Staples USB-to-PS/2 adapter on my Intel DP67DE motherboard. The mouse is more comfortable for my hand, as it has a lower height than optical mice. Occasionally the mouse cursor freezes, and often I have to unplug it from the USB port and plug it back in to unfreeze it. This happens with all the USB ports. I've been using the adapter for a few weeks now and it seems to be happening more often recently. What might be happening, and is there anything that can be done to fix this?

    Read the article

  • Sony Vaio E14 upgraded to Windows 8.1 has fan on full all the time

    - by dav_i
    Having a bit of a nightmare after upgrading to 8.1: the fan is constantly on maximum. This question and other sources around the net suggest upgrading the AMD driver. I have upgraded both graphics drivers as far as I can:

        AMD Radeon HD 7600M     - v13.150.102.0
        Intel HD Graphics 4000  - v10.18.10.3316

    and I am still having the issue. I'm at a loss as to what to do now! Any help would be greatly appreciated.

    Update: Upgrading the AMD Radeon HD 7600M Series driver to 13.152.1.1000 seems to have fixed the issue on startup; however, putting the laptop to sleep and resuming causes the fan to go full speed at all times again.

    Read the article
