Search Results

Search found 591 results on 24 pages for 'smp'.


  • How to calibrate ASUS k52f battery on Ubuntu?

    - by cutalion
    I'm not sure whether the problem is in software or my battery is dying; I'll move my question to another forum if it's not an SO question. I have a problem with the battery on my laptop, an ASUS K52F: it shows incorrect information about its capacity. When I unplug the charger it can work for some time, but then it powers off without any warning. Sometimes it powers off right after I unplug the charger. Here is some info I could get: > uname -a Linux alligator 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux > acpi -i Battery 0: Charging, 99%, 18:25:15 until charged Battery 0: design capacity 5235 mAh, last full capacity 69964 mAh = 100% > cat /sys/class/power_supply/BAT0/uevent POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Charging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=9246000 POWER_SUPPLY_POWER_NOW=176000 POWER_SUPPLY_ENERGY_FULL_DESIGN=48400000 POWER_SUPPLY_ENERGY_FULL=646822000 POWER_SUPPLY_ENERGY_NOW=643588000 POWER_SUPPLY_MODEL_NAME=K52F-44 POWER_SUPPLY_MANUFACTURER=ASUSTek POWER_SUPPLY_SERIAL_NUMBER= I noticed that POWER_SUPPLY_ENERGY_NOW and POWER_SUPPLY_ENERGY_FULL are greater than POWER_SUPPLY_ENERGY_FULL_DESIGN. I don't think that's ok :) I can run any additional commands.
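
    A quick way to sanity-check what the kernel is reporting is to read the same sysfs file quoted above and compare the energy counters against the design value. This is only a diagnostic sketch; the BAT0 path is taken from the question and may differ on other machines:

        #!/bin/bash
        # Compare the battery's reported energy counters with its design value.
        # Readings above ENERGY_FULL_DESIGN usually mean the ACPI/EC data is bogus,
        # which points at firmware or battery electronics rather than Ubuntu itself.
        UEVENT=/sys/class/power_supply/BAT0/uevent
        design=$(grep '^POWER_SUPPLY_ENERGY_FULL_DESIGN=' "$UEVENT" | cut -d= -f2)
        full=$(grep '^POWER_SUPPLY_ENERGY_FULL=' "$UEVENT" | cut -d= -f2)
        now=$(grep '^POWER_SUPPLY_ENERGY_NOW=' "$UEVENT" | cut -d= -f2)
        echo "design=$design full=$full now=$now"
        echo "last full charge = $(( full * 100 / design ))% of design capacity"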

    Read the article

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to the new one, then wiped the old one clean. Then I realized that I had forgotten to copy the private key out of .ssh that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new Volume from the Snapshot of the AMI, started up a public instance and attached the Volume to it, and edited the sshd_config to allow password login. I then unmounted the volume, detached it, made a snapshot of it, and made a new AMI from the snapshot. The new AMI launches, but never passes the Status Checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem? Edit: Adding some of the console output Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize NET: Registered protocol family 2 Registering block device major 8 XENBUS: Timeout connecting to devices! Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
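
    For what it's worth, a simpler route than re-baking the AMI is usually to put a fresh public key onto the original volume while it is attached to the rescue instance, then attach the volume back where it came from. A hedged sketch only — the device name /dev/xvdf1 and the ec2-user login are assumptions; adjust them to the actual volume and AMI user:

        # On the rescue instance, with the volume from the snapshot attached as /dev/xvdf1
        sudo mkdir -p /mnt/rescue
        sudo mount /dev/xvdf1 /mnt/rescue
        # Append the new laptop's public key to the original user's authorized_keys
        cat ~/.ssh/id_rsa.pub | sudo tee -a /mnt/rescue/home/ec2-user/.ssh/authorized_keys
        sudo umount /mnt/rescue
        # Then detach the volume and attach it to the original instance as its root device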

    Read the article

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OSX and Linux are connecting fine. There are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database-heavy) run mostly ok, only occasionally getting timeout messages. I'm constantly getting Maximum execution time of 30 seconds exceeded when functions such as mysql_query() are run in this particular project. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is: Windows 7 Professional 64bit XAMPP 1.7.7 (PHP 5.3.8) The output of uname -a on the Linux server is: Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux I've tried the following, with no success: Disabling Windows firewall Switching between using a persistent and a normal connection In my.cnf, adding skip-name-resolve Increasing wait_timeout Enabling bind-address I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?

    Read the article

  • Centos 6.2 postfix install dependency issues

    - by Mishari
    I am administering a VPS running cPanel and I'm trying to install postfix. The redhat-release file says the version is CentOS release 6.2 (Final) and uname -a says: Linux server.mydomain.com 2.6.32-220.el6.i686 #1 SMP Tue Dec 6 16:15:40 GMT 2011 i686 i686 i386 GNU/Linux This is how I'm installing postfix (I had tried to solve the problem earlier by installing epel). # yum install postfix Loaded plugins: fastestmirror, security Loading mirror speeds from cached hostfile * epel: mirror.cogentco.com Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package postfix.i686 2:2.6.6-2.2.el6_1 will be installed --> Processing Dependency: mysql-libs for package: 2:postfix-2.6.6-2.2.el6_1.i686 --> Finished Dependency Resolution Error: Package: 2:postfix-2.6.6-2.2.el6_1.i686 (centos-burstnet) Requires: mysql-libs You could try using --skip-broken to work around the problem Attempts to install mysql-libs tell me that several files conflict with "MySQL-server-5.1.61-0.glibc23.i386". I'm not sure why or how this is happening; does anyone know how to resolve it? Surely CentOS 6.2 could not have shipped with a broken postfix.
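
    Before forcing anything, it may help to see exactly which package yum thinks satisfies the dependency and which already-installed cPanel package collides with it. A purely diagnostic sketch; the package names come from the error output above:

        # What does the repository offer for the missing dependency?
        yum provides mysql-libs
        # Which MySQL packages has cPanel already installed outside of yum's dependency view?
        rpm -qa | grep -i mysql
        # List what the installed server package ships, to see where the file conflicts come from
        rpm -ql MySQL-server-5.1.61-0.glibc23 | grep -i lib | head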

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; it seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low): *** system and process activity since boot *** PID RDDSK WRDSK WCANCL DSK CMD 1/18 2176 1.7G 7.3G 854.4M 39 mysqld 671 1248K 3.0G 0K 13 flush-8:0 566 0K 1.1G 0K 5 jbd2/sda2-8 2401 124.2M 529.1M 22408K 3 crond 2032 2.2G 502.0M 0K 12 nginx 2360 425.8M 115.3M 4188K 2 httpd flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the hdd (after mysql). From what I saw on Google this could be caused by some ext4-related bug; the current kernel is: Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
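
    flush-8:0 and jbd2/sda2-8 are the kernel's dirty-page writeback thread and the ext4 journal thread, so they mostly reflect write pressure generated by the applications above them (mysqld in particular). If the kernel cannot be upgraded, one thing sometimes worth experimenting with is how aggressively dirty pages are flushed. This is a hedged sketch, not a definitive fix — the numbers are illustrative and the remounted filesystem is an assumption:

        # Inspect the current writeback tuning
        sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs
        # Example: start background writeback earlier and cap dirty memory lower,
        # which tends to smooth out large write bursts instead of letting them pile up
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=15
        # For ext4, a longer journal commit interval batches jbd2 writes
        # (trade-off: more data at risk on a crash); here shown for the root filesystem
        mount -o remount,commit=30 /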

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the vm through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? Command line is (slightly reformatted for readability): /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \ -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \ -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \ -monitor chardev:monitor -localtime -boot c \ -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \ -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \ -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \ -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
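
    Rather than replaying the raw kvm command line (which libvirt knows nothing about), the usual approach is to let libvirt start and stop the domain itself — then virt-manager sees it as running and the console keeps working. A brief sketch using the domain name from the question:

        # Start the domain automatically whenever libvirtd comes up with the host
        virsh autostart BORON
        # Start/stop it manually (e.g. from an init script) through libvirt, not kvm directly
        virsh start BORON
        virsh shutdown BORON
        # Confirm that libvirt (and therefore virt-manager) sees it
        virsh list --all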

    Read the article

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on my physical machine. It is not available, even though I have enabled it in my bios, forced it in grub, and triple checked my kernel to include HPET in its compilation. Motherboard: Supermicro X9DRW Processor: 2x Intel(R) Xeon(R) CPU E5-2640 SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03) Distro: CentOS 6.3 Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux Grub: hpet=force clocksource=hpet .config file: CONFIG_HPET_TIMER=y CONFIG_HPET_EMULATE_RTC=y CONFIG_HPET=y dmesg | grep hpet: Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet cat /sys/devices/system/clocksource/clocksource0/current_clocksource: tsc cat /sys/devices/system/clocksource/clocksource0/available_clocksource: tsc jiffies What is even more confusing, is that I have about a dozen other machines that utilize the same kernel .config, and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help with getting HPET available. Thanks in advance!

    Read the article

  • RTL8188CE doesn't connect to any wifi access points

    - by Drakmail
    I'm using NetworkManager to connect. I also tried iwconfig; the results are the same. I even tried to connect to an open access point — same result. More information: Drakmail@thinkpad-x220:~$ lspci | grep Network | grep -v Ethernet 03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) Drakmail@thinkpad-x220:~$ uname -a Linux thinkpad-x220 3.1.0 #1 SMP PREEMPT Wed Oct 26 02:19:49 UTC 2011 x86_64 Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz GenuineIntel GNU/Linux Drakmail@thinkpad-x220:~$ dmesg | tail -n 10 [ 846.901574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 906.812461] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 966.728810] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 1026.639676] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 1030.925574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin At this moment I tried to connect to an open wifi AP: [ 1031.252403] wlan0: direct probe to 00:24:8c:55:fa:ed (try 1/3) [ 1031.451943] wlan0: direct probe to 00:24:8c:55:fa:ed (try 2/3) [ 1031.651658] wlan0: direct probe to 00:24:8c:55:fa:ed (try 3/3) [ 1031.851354] wlan0: direct probe to 00:24:8c:55:fa:ed timed out [ 1086.544960] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin My distribution: Drakmail@thinkpad-x220:~$ cat /etc/*version AgiliaLinux release 8.0.0 (Sammy) (Something between Slackware and Arch Linux). Also, I saw that the wifi module tries to load the firmware file too often. Any ideas what the problem could be?
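
    One thing frequently suggested for the rtl8192ce driver of that era is disabling its power-saving options, which were known to break association with some access points. A hedged sketch — check with modinfo first that the parameter names below actually exist in this kernel build:

        # See which parameters this driver exposes
        modinfo rtl8192ce | grep parm
        # If ips/fwlps power saving is available, try switching it off persistently
        echo "options rtl8192ce ips=0 fwlps=0" | sudo tee /etc/modprobe.d/rtl8192ce.conf
        sudo modprobe -r rtl8192ce && sudo modprobe rtl8192ce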

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!) $ cat /dev/zero | less From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would be limited in such a way. In addition, I couldn't find any documentation via Google on the subject. Any light shed on this quite surprising discovery would be great! System information: quad-core Intel i7, 8GB RAM. $ uname -a Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux $ less --version less 458 (GNU regular expressions) Copyright (C) 1984-2012 Mark Nudelman less comes with NO WARRANTY, to the extent permitted by law. For information about the terms of redistribution, see the file named README in the less distribution. Homepage: http://www.greenwoodsoftware.com/less $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty
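
    It is worth checking whether the process is being killed by a per-process resource limit or by the kernel, rather than by less itself — a purely diagnostic sketch:

        # Are there address-space or data-segment limits in this shell?
        ulimit -a
        # Did the kernel's OOM killer fire around the time less died?
        dmesg | grep -iE 'out of memory|oom|killed process' | tail
        # In another terminal, watch less grow while the test runs
        watch -n1 'ps -o pid,vsz,rss,comm -C less'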

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2 and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output: Mem: 7136868k total, 5272300k used, 1864568k free, 256876k buffers Swap: 1048572k total, 0k used, 1048572k free, 2526504k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ SWAP COMMAND 4121 jboss 20 0 5913m 603m 14m S 0.7 8.7 3:59.90 5.2g java 22730 root 20 0 2394m 4012 1976 S 2.0 0.1 4:20.57 2.3g PassengerHelper 20564 rails 20 0 2539m 220m 9828 S 0.3 3.2 0:23.58 2.3g java 1423 nscd 20 0 877m 1464 972 S 0.0 0.0 0:03.89 876m nscd You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here's the results of uname -a: Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
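
    One likely explanation: in this version of top the SWAP column is simply VIRT minus RES, i.e. everything not resident (mapped files, reserved-but-untouched heap), not memory actually written to the swap device — which is why it can exceed the 1G of configured swap while "Swap: 0k used" stays at zero. The kernel's own per-process accounting can be read directly; a small sketch using the jboss PID from the output above:

        # Per-process swap as the kernel accounts it (VmSwap is present on reasonably recent kernels)
        grep VmSwap /proc/4121/status
        # Fallback: sum the Swap: lines from smaps
        awk '/^Swap:/ {kb += $2} END {print kb " kB actually swapped"}' /proc/4121/smaps
        # System-wide sanity check
        free -m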

    Read the article

  • Static IP addressing issue in Ubuntu on BeagleBoneBlack Rev C

    - by Stringfellow
    I have my BBB configured to use a static IP address using the following in the file /etc/network/interfaces: allow-hotplug eth0 iface eth0 inet static address 192.168.0.1 netmask 255.255.255.0 network 192.168.0.0 This seems to work ok on boot, but when the ethernet cable is unplugged and then plugged back in, I lose the IP address. Any ideas what's going on here? Another weird symptom: if I boot the BBB with the network cable plugged in but the switch it's plugged into turned off, I'll get my static IP. But when I turn the switch on, I'll get a DHCP-assigned address, even though I have it configured with a static IP address. One last thing: if I ifdown eth0, the interface will be gone when I do an ifconfig. If I wait a few seconds, though, and then re-run ifconfig, it will reappear, without an IP address. (Before I disabled IPv6, I used to get an IPv4 DHCP address in this case... weird.) When that happens, I get a message like this in /var/log/messages: Apr 23 20:32:06 beaglebone kernel: [ 737.170172] libphy: 4a101000.mdio:00 - Link is Up - 100/Full Apr 23 20:32:06 beaglebone kernel: [ 737.170304] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Here's my uname -a: root@beaglebone:/etc# uname -a Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux Any ideas what's going on here?
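
    A DHCP lease appearing on a link-up event suggests that something other than /etc/network/interfaces (e.g. connman or a stray DHCP client, both common on BeagleBone images) is also managing eth0. A few hedged checks to find out who owns the interface; which of these services exists depends on the image:

        # Is another network manager or DHCP client running?
        ps aux | grep -E 'connman|dhclient|udhcpc|wicd' | grep -v grep
        # Watch the log live while replugging the cable to see what reconfigures eth0
        tail -f /var/log/messages
        # If this image runs connman, stopping/disabling it lets /etc/network/interfaces win
        systemctl status connman 2>/dev/null || service connman status 2>/dev/null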

    Read the article

  • 2.6.9 Kernel on virtual server (non upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for personal use). The product I'm currently looking at offers IMO fair pricing, very good hardware, etc. The only problem is that I won't be able to upgrade to a kernel newer than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous I/O), which could speed up some server I/O. Should I expect problems with that old kernel version in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing/things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging at ca. 400 MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris PS Exact kernel version: 2.6.9-023stab051.3-smp

    Read the article

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some cpus for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets: $ cset set --list -r cset: Name CPUs-X MEMs-X Tasks Subs Path ------------ ---------- - ------- - ----- ---- ---------- root 0-23 y 0-1 y 279 2 / system 0,2,4,6,8,10 n 0 n 202 0 /system shield 1,3,5,7,9,11 n 1 n 0 2 /shield RealtimeTest1 1,3,5,7 n 1 n 0 4 /shield/RealtimeTest1 thread1 3 n 1 n 1 0 /shield/RealtimeTest1/thread1 thread2 5 n 1 n 1 0 /shield/RealtimeTest1/thread2 main 1 n 1 n 1 0 /shield/RealtimeTest1/main I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the cpus I requested: /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done ./thread1/tasks 17651 ------------ ./main/tasks 17649 ------------ ./thread2/tasks 17654 ------------ Further, if I use sched_getaffinity, it reports what cpuset does - that thread1 is on cpu 3 and thread2 is on cpu 5. However, if I run top -p 17649 -H with f,j to bring up the last used cpu, it shows that thread 1 is running on thread 2's cpu, and main thread is running on a cpu in the system cpuset (Note that thread 17654 is running FIFO, hence thread 17651 is blocked) PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND 17654 root -2 0 54080 35m 7064 R 100 0.4 5:00.77 3 RealtimeTest 17649 root 20 0 54080 35m 7064 S 0 0.4 0:00.05 2 RealtimeTest 17651 root 20 0 54080 35m 7064 R 0 0.4 0:00.00 3 RealtimeTest Also, looking at /proc/17649/task to find the last_cpu each of its tasks ran on: /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done 17649 is on 2 17651 is on 3 17654 is on 3 So cpuset and sched_getaffinity reports one thing, but reality is another I would say that cpuset is not working? My machine configuration is: $ cat /etc/SuSE-release SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 1 $ uname -a Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
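
    One thing worth ruling out before concluding that cpuset is broken: top's "last used CPU" column is a sampled snapshot and can lag, while the affinity actually applied to each thread can be read straight from the scheduler. A quick cross-check sketch using the thread IDs above:

        # Ask the scheduler directly which CPUs each task may run on
        for tid in 17649 17651 17654; do
            taskset -cp "$tid"
            grep Cpus_allowed_list /proc/$tid/status
        done
        # Sample the current CPU of every thread a few times rather than trusting one top refresh
        for i in 1 2 3; do
            ps -o tid,psr,comm -Lp 17649
            sleep 1
        done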

    Read the article

  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do for this server. I need it to run PHP 5.3 and a corresponding version of MySQL. I received a client today through work that is using Fedora Core 6, running 10 very small websites on a very hodge-podge setup. My original idea was just to upgrade to PHP 5.3. I have yum (version 3.0.8 installed) reconfigured for the Fedora archive. The latest version of PHP it allows is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it. Since it is about 6-8 years old, I'm not sure if it will even support the newest version of Fedora. The server specs are: Parallels Plesk Panel version 9.5.4 Operating system Linux 2.6.9-023stab048.4-smp CPU GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz (10GB disk space and 1GB of memory). I use Fedora for my personal server, so I'm a little familiar with it. I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?

    Read the article

  • One single page showing 3 requests (also printing the headers)

    - by Korcholis
    Someone in my studio designed a webpage some years ago, and now the client decided to change the server (he moved to a Linux Apache server running Gen2 SMP, 64-bit, PHP version 5.3.8, standard MySQL version 5). It suddenly started to do weird things. When clicking on a link that requires login, the page redirects you to the login page using the header() function in PHP. Curiously, the page shows this: OK The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [no address given] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. HTTP/1.1 200 OK Date: Mon, 15 Oct 2012 17:27:32 GMT Server: Apache/2.2.22 (Unix) FrontPage/5.0.2.2635 X-Powered-By: PHP/5.3.8 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Keep-Alive: timeout=5, max=399 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html 232c Then the page itself, and then another header: 0 1f4 OK The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [no address given] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. 0 What's most intriguing is that if you refresh the page or hit enter on the URL, it loads correctly. I've been checking the logs, and the only complaint is about a missing favicon. I also checked the .htaccess; everything was correct (RewriteBase was / as intended, and the only other thing there is a rule that rewrites ^en/ requests to request?lang=en). Has anyone faced something like this? Edit: IE doesn't trigger these two headers. This is getting weirder.

    Read the article

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command: # apt-get install libstdc++6=4.9.0-8 -V Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libstdc++6 (4.8.2-16) 0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded. It gives no error, but apt keeps the version that was already installed, and it also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror. (If I try to run "apt-get download libstdc++6=4.9.0-8" it will download exactly the desired version.) System info: # cat /etc/issue.net "Debian GNU/Linux jessie/sid" # uname -a Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux. # dpkg -l |egrep -i "apt|dpkg" ii apt 0.9.16.1 amd64 commandline package manager ii dpkg 1.17.6 amd64 Debian package management system Any suggestions?
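
    apt-get download bypasses dependency resolution, so it is not proof that 4.9.0-8 is installable. It helps to first confirm that apt even considers that version a candidate, and then let a simulated install explain what it would do — a short diagnostic sketch; the gcc-4.9-base package named in the last line is an assumption about how this Debian snapshot splits the GCC runtime:

        # Which versions does apt know about, and from which repositories?
        apt-cache policy libstdc++6
        apt-cache madison libstdc++6
        # Simulate the install to see what apt would really do and why
        apt-get install -s libstdc++6=4.9.0-8
        # libstdc++6 normally requires the matching gcc base package at the same version
        apt-get install -s libstdc++6=4.9.0-8 gcc-4.9-base=4.9.0-8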

    Read the article

  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed: $dpkg -l|grep erlang ii erlang 1:13.b.3-dfsg-2ubuntu2 Concurrent, real-time, distributed function ii erlang-appmon 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application monitor ii erlang-asn1 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP modules for ASN.1 support ii erlang-base 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP virtual machine and base applica ii erlang-common-test 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application for automated testin ii erlang-debugger 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application for debugging and te ii erlang-dev 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP development libraries and header [... many more] Erlang seems to work: $ erl Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false] Eshell V5.7.4 (abort with ^G) 1> I downloaded lfe from github and checked out 0.5.2: git clone http://github.com/rvirding/lfe.git cd lfe git checkout -b local0.5.2 e207eb2cad $ configure configure: command not found $ make mkdir -p ebin erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]' #mv src/*.html doc/ Must be something stupid i missed :o $ sudo make install make: *** No rule to make target `install'. Stop. $ erl -noshell -noinput -s lfe_boot start {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}} Crash dump was written to: erl_crash.dump init terminating in do_boot () Is there an example how I would create a hello world source file and compile and run it?

    Read the article

  • Ruby through RVM fails

    - by TheLQ
    In constant battle to install Ruby 1.9.2 on an RPM system (OS is based off of CentOS), I'm trying again with RVM. So once I install it, I then try to use it: [root@quackwall ~]# rvm use 1.9.2 Using /usr/local/rvm/gems/ruby-1.9.2-p136 [root@quackwall ~]# ruby bash: ruby: command not found [root@quackwall ~]# which ruby /usr/bin/which: no ruby in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) Now that's interesting; rvm info says something completely different: [root@quackwall bin]# rvm info ruby-1.9.2-p136: system: uname: "Linux quackwall.highwow.lan 2.6.18-194.8.1.v5 #1 SMP Thu Jul 15 01:14:04 EDT 2010 i686 i686 i386 GNU/Linux" bash: "/bin/bash => GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)" zsh: " => not installed" rvm: version: "rvm 1.2.2 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]" ruby: interpreter: "ruby" version: "1.9.2p136" date: "2010-12-25" platform: "i686-linux" patchlevel: "2010-12-25 revision 30365" full_version: "ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]" homes: gem: "/usr/local/rvm/gems/ruby-1.9.2-p136" ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136" binaries: ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/ruby" irb: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/irb" gem: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/gem" rake: "/usr/local/rvm/gems/ruby-1.9.2-p136/bin/rake" environment: PATH: "/usr/local/rvm/gems/ruby-1.9.2-p136/bin:/usr/local/rvm/gems/ruby-1.9.2-p136@global/bin:/usr/local/rvm/rubies/ruby-1.9.2-p136/bin:bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/rvm/bin" GEM_HOME: "/usr/local/rvm/gems/ruby-1.9.2-p136" GEM_PATH: "/usr/local/rvm/gems/ruby-1.9.2-p136:/usr/local/rvm/gems/ruby-1.9.2-p136@global" MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.2-p136" IRBRC: "/usr/local/rvm/rubies/ruby-1.9.2-p136/.irbrc" RUBYOPT: "" gemset: "" So I have RVM that says one thing and bash which says another. Any suggestions on how to get this working?
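
    RVM works by manipulating PATH from a shell function, so if that function is not loaded in root's shell, "rvm use" can print the gem path without actually exporting anything. A hedged sketch of the usual checks for a system-wide /usr/local/rvm install — the loader path is the conventional one and may differ:

        # rvm must be a shell function, not just a binary found on PATH
        type rvm | head -n 1
        # For multi-user installs the loader is normally dropped here; source it in this shell
        source /etc/profile.d/rvm.sh
        # Select the ruby and make it the default for new shells
        rvm use 1.9.2 --default
        which ruby && ruby -v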

    Read the article

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day, then, bam, without warning, it will throw this error. Sometimes seemingly in the middle of doing nothing. It will happen at seemingly random times during the day. I checked to see if anything else was running on the machine, like scheduled backups or something, but found nothing. The machine has enough physical memory (2GB, with about 1GB free for a 3-500MB load), and has a sufficient -Xmx specified. According to our sysadmin, the problem is that the RAM that the kernel uses as a disk cache (apparently all but 8MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel if enough memory is available before allocating and finds that it is insufficient, resulting in a crash. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache. Has anyone else run into this issue, and if so, what was the error, and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2 Linux 2.6.16.60-0.42.9-smp in VMware ESX.
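
    For what it's worth, page cache is reclaimable and is normally dropped automatically under allocation pressure, so the disk-cache explanation is worth testing rather than assuming; the failure can equally come from commit limits (overcommit settings) or from the JVM running out of its own heap or native space. A few hedged checks — the java flags are standard JDK 6 options, and app.jar is only a placeholder name:

        # How much memory is genuinely free vs. merely sitting in cache, and how close is the commit limit?
        grep -E 'MemFree|^Cached|Committed_AS|CommitLimit' /proc/meminfo
        # Strict overcommit (mode 2) can fail large JVM allocations even while cache is reclaimable
        cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
        # Make the next OutOfMemoryError say where it ran out
        java -Xmx512m -verbose:gc -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar app.jar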

    Read the article

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
    #!/usr/bin/env perl use warnings; use strict; use 5.012; use XML::LibXML::Reader; my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' ) or die $!; while ( $reader->read ) { say $reader->name; } At the end of the output from this script I get this error-messages: * glibc detected * perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 * ======= Backtrace: ========= /lib64/libc.so.6[0x7fb84952fc76] ... ======= Memory map: ======== 00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl ... Is this due a bug? perl -V: Summary of my perl5 (revision 5 version 12 subversion 0) configuration: Platform: osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux ' config_args='-Dnoextensions=ODBM_File' hint=recommended, useposix=true, d_sigaction=define useithreads=undef, usemultiplicity=undef useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef use64bitint=define, use64bitall=define, uselongdouble=undef usemymalloc=n, bincompat5005=undef Compiler: cc='cc', ccflags ='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64', optimize='-O2', cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include' ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers='' intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16 ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8 alignbytes=8, prototype=define Linker and Libraries: ld='cc', ldflags =' -fstack-protector -L/usr/local/lib' libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64 libs=-lnsl -ldl -lm -lcrypt -lutil -lc perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a gnulibc_version='2.10.1' Dynamic Linking: dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E' cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector' Characteristics of this binary (from libperl): Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF Built under linux Compiled at Apr 15 2010 13:25:46 @INC: /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.12.0 /usr/local/lib/perl5/5.12.0/x86_64-linux /usr/local/lib/perl5/5.12.0 .

    Read the article

  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server: [[email protected]][~]% mongod --dbpatmongod --dbpath /home/demas/temp/ Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5 Mon Apr 19 09:44:18 git version: nogitversion Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41 Mon Apr 19 09:44:18 waiting for connections on port 27017 Mon Apr 19 09:44:18 web admin interface listening on port 28017 I have created documents with the console client: [[email protected]][~]% mongo MongoDB shell version: 1.4.0 url: test connecting to: test type "help" for help > db.some.find(); { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" } { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 } Now I am trying to work with MongoDb from Java: import com.mongodb.*; import java.net.UnknownHostException; public class test1 { public static void main(String[] args) { System.out.println("Start"); try { Mongo m = new Mongo("localhost", 27017); DB db = m.getDB("test"); DBCollection coll = db.getCollection("some"); coll.insert(makeDocument(10, "James", "male")); System.out.println("Finish"); } catch (UnknownHostException ex) { ex.printStackTrace(); } catch (MongoException ex) { ex.printStackTrace(); } } public static BasicDBObject makeDocument(int id, String name, String gender) { BasicDBObject doc = new BasicDBObject(); doc.put("id", id); doc.put("name", name); doc.put("gender", gender); return doc; } } But execution stops at the line coll.insert(): [[email protected]][~/dev/study/java/mongodb]% javac test1.java [[email protected]][~/dev/study/java/mongodb]% java test1 Start There are no messages from the mongodb server about an accepted connection. Why?

    Read the article

  • SD card initialization using SPI interface

    - by Tobias
    I get invalid response Codes from my SD Card(CMD8, CMD55, CMD41) Init routine: SDCS = 1; // MMC deaktiviert SPI1CON1bits.SMP = 0; SPI1CON1bits.CKE = 1; SPI1CON1bits.MSTEN = 1; SPI1CON1bits.CKP = 0; SPI1STATbits.SPIEN = 1; for(i=0;i<10;i++) SPI(0xFF); // RESET unsigned char rr=Command(CMD0,0); SDCS=1; // MMC deactivated /*OK response == 1*/ r=Command(CMD8,0); // check voltage SDCS=1; /* response == 0xC1 ?!? */ r = Command(CMD58,0); // READ_OCR unsigned char ocr1 = SPI(0xFF); unsigned char ocr2 = SPI(0xFF); unsigned char ocr3 = SPI(0xFF); unsigned char ocr4 = SPI(0xFF); unsigned char ocr5 = SPI(0xFF); /* r = 0xF8; ?!? ocr1 = 0x0F; ocr2 = 0xFF; ocr3 = 0xFF; ocr4 = 0xFF; ocr5 = 0xFF; */ SDCS=1; // INIT unsigned char rrr = 0; i=10000; do { rrr=Command(55,0); // Next is APP CMD SDCS=1; if(r) break; }while(--i>0); /* OK response == 1 */ // APP CMD 41 with OCR = 0x0F?? You can read the response codes in the comments. Is it possible the response code to CMD8 is 0xC1? Bit 7 should be 0, right? Is it a hardware error?

    Read the article

  • Why does gcc generate verbose assembly code?

    - by Jared Nash
    I have a question about the assembly code generated by GCC (the -S option). Since I am new to assembly language and know very little about it, the question will be very primitive. Still, I hope somebody will answer: Suppose I have this C code: main(){ int x = 15; int y = 6; int z = x - y; return 0; } If we look at the assembly code (especially the part corresponding to int z = x - y ), we see: main: ... subl $16, %esp movl $15, -4(%ebp) movl $6, -8(%ebp) movl -8(%ebp), %eax movl -4(%ebp), %edx movl %edx, %ecx subl %eax, %ecx movl %ecx, %eax movl %eax, -12(%ebp) ... Why doesn't GCC generate something like this instead, which does less copying around? main: ... movl $15, -4(%ebp) movl $6, -8(%ebp) movl -8(%ebp), %edx movl -4(%ebp), %eax subl %edx, %eax movl %eax, -12(%ebp) ... P.S. Linux zion-5 2.6.32-21-generic #32-Ubuntu SMP Fri Apr 16 08:10:02 UTC 2010 i686 GNU/Linux gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
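
    The shuffling is simply what gcc emits at -O0, where every C variable lives on the stack and values are loaded into registers and stored back around each operation, with no attempt to reuse registers. Turning the optimizer on produces code much closer to the hand-written version — a quick way to see this (sub.c is a placeholder name for the file containing the snippet above):

        # Compare unoptimized and optimized output
        gcc -S -O0 -o sub_O0.s sub.c
        gcc -S -O2 -o sub_O2.s sub.c
        diff -u sub_O0.s sub_O2.s | less
        # Note: at -O2 the whole computation is dead code (z is never used), so it disappears
        # entirely; print z or mark the variables volatile to keep a real subl in the output.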

    Read the article

  • Is there a modern free D?VCS that can ignore mainframe sequence numbers?

    - by Brent.Longborough
    I'm looking at migrating a large suite of IBM Assembler Language programs from a VCS based on "filenames include version numbers" to a modern VCS which will give me, among other things, the ability to branch and merge. These files have 80-column records, the last 8 columns being an almost-meaningless sequence number. For a number of reasons which I don't really want to waste space by going into, I need the VCS to ignore (but hopefully preserve in some well-defined manner) the sequence number columns, and to diff and patch based only on the contents of the first 72 columns. Any ideas? Just to clarify "ignore but preserve": I accept it's a bit vague, as I haven't fully collected my ideas yet. It would be something along the lines of this: "When merging/patching, if one side has sequence numbers, output them; if more than one side has sequence numbers, use those present in file (1|2|3)". Why do I want to preserve sequence numbers? First, they really are sequence numbers. Second, I want to reintegrate this stuff back onto the mainframe, where sequence numbers can be terribly significant. (Those of you who know what "SMP/E" means will understand. Those who don't, be happy, but tremble...) I've just realised I hadn't accepted an answer. Difficult choice, but @Noldorin comes closest to where I have to go.
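
    With git specifically, one way to approximate "ignore but preserve" is a clean/smudge filter pair: what git stores, diffs and merges is only columns 1-72, and sequence numbers are regenerated in columns 73-80 on checkout. A rough sketch only — the filter name, file pattern and renumbering scheme are all assumptions, and regenerating is only acceptable because the numbers really are plain sequence numbers:

        # .gitattributes: apply the filter to the assembler sources (pattern is an example)
        echo '*.asm filter=seqnum' >> .gitattributes
        # clean: strip columns 73-80 before content is hashed/diffed
        git config filter.seqnum.clean 'cut -c1-72'
        # smudge: pad to 72 columns and write fresh 8-digit sequence numbers on checkout
        git config filter.seqnum.smudge "awk '{ printf \"%-72.72s%08d\\n\", \$0, NR*100 }'"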

    Read the article

  • gdb: Cannot find new threads: generic error

    - by Alexander Gladysh
    When I run GDB against a program which loads a .so that is linked to pthreads, GDB reports the error "Cannot find new threads: generic error". I'm probably missing something in my Ubuntu configuration (it was installed from a minimal install). Any clues? $ gdb --args lua -lluarocks.require GNU gdb (GDB) 7.0-ubuntu Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /usr/bin/lua...(no debugging symbols found)...done. (gdb) run Starting program: /usr/bin/lua -lluarocks.require Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio require 'ev' [Thread debugging using libthread_db enabled] Cannot find new threads: generic error (gdb) q A debugging session is active. Inferior 1 [process 4986] will be killed. Quit anyway? (y or n) y This function gets called on require 'ev': http://github.com/brimworks/lua-ev/blob/master/lua_ev.c#L25-65 Additional information about my system: $ uname -a Linux localhost 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 04:38:19 UTC 2010 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 9.10 Release: 9.10 Codename: karmic
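
    This error usually means gdb's libthread_db cannot cope with the libpthread that appears at runtime — common when the inferior only pulls in pthreads via a dlopen()ed .so, as lua-ev does here. A couple of things commonly suggested on Ubuntu of that vintage, hedged because the exact package and library path depend on the system:

        # Debug symbols for glibc give gdb a matching libthread_db to work with
        sudo apt-get install libc6-dbg
        # Or point gdb explicitly at the libthread_db it should use (the path is an example)
        gdb -ex 'set libthread-db-search-path /lib' --args lua -lluarocks.require
        # Workaround: have libpthread present from the start instead of arriving via dlopen
        LD_PRELOAD=libpthread.so.0 gdb --args lua -lluarocks.require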

    Read the article
