Search Results

Search found 4385 results on 176 pages for 'gnu flex'.

Page 159/176

  • Macports install of ack doesn't create correct executable

    - by user1664196
    I am trying to install the p5-app-ack port from MacPorts, but it seems it doesn't create an /opt/local/bin/ack binary at the end:

      $ sudo port search *app-ack
      Password:
      p5-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories
      p5.8-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories
      p5.10-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories
      p5.12-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories
      p5.14-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories
      p5.16-app-ack @1.960.0 (perl)
          A grep replacement that ignores .svn/CVS/blib directories

      Found 6 ports.

      $ perl --version
      This is perl 5, version 12, subversion 4 (v5.12.4) built for darwin-thread-multi-2level
      Copyright 1987-2010, Larry Wall
      Perl may be copied only under the terms of either the Artistic License or the
      GNU General Public License, which may be found in the Perl 5 source kit.
      Complete documentation for Perl, including FAQ lists, should be found on this
      system using "man perl" or "perldoc perl". If you have access to the Internet,
      point your browser at http://www.perl.org/, the Perl Home Page.

      $ sudo port install p5-app-ack
      --->  Computing dependencies for p5-app-ack
      --->  Cleaning p5-app-ack
      --->  Updating database of binaries: 100.0%
      --->  Scanning binaries for linking errors: 35.0%
      --->  No broken files found.
      $
      $ ls /opt/local/bin/ac*
      /opt/local/bin/ack-5.12      /opt/local/bin/aclocal      /opt/local/bin/aclocal-1.12
      /opt/local/bin/activation-client      /opt/local/bin/acyclic
      $ which ack
      $ ack
      -bash: ack: command not found

    Update: If I then try to install p5.12-app-ack afterwards, I get

      $ sudo port install p5.12-app-ack
      Password:
      --->  Computing dependencies for p5.12-app-ack
      --->  Cleaning p5.12-app-ack
      --->  Scanning binaries for linking errors: 100.0%
      --->  No broken files found.
      $
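    A hedged workaround, assuming the port really does install only the versioned wrapper shown in the listing above (/opt/local/bin/ack-5.12): check what the port actually shipped, then symlink the versioned binary to the plain name. The paths come from the output above; verify them before linking.

      $ port contents p5.12-app-ack | grep '/bin/'    # list exactly what was installed
      $ sudo ln -s /opt/local/bin/ack-5.12 /opt/local/bin/ack
      $ hash -r && which ack
      /opt/local/bin/ack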

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours: there seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):

      *** system and process activity since boot ***
      PID    RDDSK   WRDSK   WCANCL  DSK  CMD          1/18
      2176   1.7G    7.3G    854.4M  39   mysqld
      671    1248K   3.0G    0K      13   flush-8:0
      566    0K      1.1G    0K      5    jbd2/sda2-8
      2401   124.2M  529.1M  22408K  3    crond
      2032   2.2G    502.0M  0K      12   nginx
      2360   425.8M  115.3M  4188K   2    httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the I/O column, and they are the processes that write the most to the HDD (after MySQL). From what I found on Google this could be caused by some ext4-related bug. The current kernel is:

      Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel and they tried, but they now say the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
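    For context: flush-8:0 is the kernel writeback thread for the sda device and jbd2/sda2-8 is the ext4 journal thread, so they usually reflect write load generated elsewhere (here, most plausibly mysqld). A hedged tuning sketch; the sysctl values are illustrative starting points, not tested recommendations:

      # Watch who is actually generating writes, accumulated over time:
      iotop -oa
      # Make writeback start earlier and in smaller bursts (defaults are often 10/20):
      sysctl -w vm.dirty_background_ratio=5
      sysctl -w vm.dirty_ratio=10
      # Cut journal/metadata churn from atime updates (then remount or reboot):
      # /etc/fstab:  /dev/sda2  /  ext4  defaults,noatime  1 1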

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I do curl -v http://site-wordpress.com:81 I receive this result:

      * About to connect() to site-wordpress.com port 81 (#0)
      *   Trying ip... connected
      * Connected to site-wordpress.com (ip) port 81 (#0)
      GET / HTTP/1.1
      User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
      Host: site-wordpress.com:81
      Accept: */*
      < HTTP/1.1 301 Moved Permanently
      < Server: nginx/1.2.4
      < Date: Fri, 16 Nov 2012 16:28:19 GMT
      < Content-Type: text/html; charset=UTF-8
      < Transfer-Encoding: chunked
      < Connection: keep-alive
      < X-Pingback: The URL above/xmlrpc.php
      < Location: The URL above

    It seems this line in my fastcgi_params is causing grief:

      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but a blank page. This is my config:

      server {
          listen 81;
          server_name site-wordpress.com;
          root /var/www/html/site;
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;
          index index.php;

          if (!-e $request_filename){
              rewrite ^(.*)$ /index.php break;
          }

          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
              fastcgi_index index.php;
              include /etc/nginx/fastcgi_params;
              include /etc/nginx/mime.types;
          }

          location ~ \.css {
              add_header Content-Type text/css;
          }

          location ~ \.js {
              add_header Content-Type application/x-javascript;
          }
      }

    This config works with the IP and port 80, but now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
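    One hedged reading of the trace: the 301 carries an X-Pingback header, which suggests it comes from WordPress itself rather than nginx, i.e. PHP is executing fine and WordPress is redirecting to its stored siteurl (probably still the port-80 URL). A sketch of how to check and update it; the database name "wordpress" and standard wp_options table are assumptions, adjust to your setup:

      mysql -u root -p wordpress -e \
        "SELECT option_name, option_value FROM wp_options
         WHERE option_name IN ('siteurl','home');"
      # If both still point at the old URL, switch them to the :81 address:
      mysql -u root -p wordpress -e \
        "UPDATE wp_options SET option_value = 'http://site-wordpress.com:81'
         WHERE option_name IN ('siteurl','home');"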

    Read the article

  • grub refuses to install to raid array

    - by ronno
    I have a software RAID 0 setup, dual-booting Windows 7 and Ubuntu 12.04. The GRUB bootloader that is already on the hard drive seems to work fine. However, since the latest package update for GRUB, it refuses to install the new version to the hard disk. grub-install throws the following error:

      /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map.
      Auto-detection of a filesystem of /dev/mapper/<raid name>_RAID0p9 failed.
      Try with --recheck.
      If the problem persists please report this together with the output of
      "/usr/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub"
      to <[email protected]>

    update-grub prints the same "/usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map." on every alternate line. I don't understand what exactly is going on. I'm afraid to reinstall the grub package because it might mess up the boot, which currently works fine. Is it safe to just ignore this?
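    The error points at a stale /boot/grub/device.map that predates the dmraid device names. A hedged sketch, assuming GRUB 1.99 as shipped with 12.04 (which still honours device.map); back the file up first, and note the target device name on the last line is illustrative:

      cp /boot/grub/device.map /boot/grub/device.map.bak
      grub-mkdevicemap                        # regenerate the map from current devices
      grub-probe --target=fs -v /boot/grub    # should now identify the filesystem
      # Only if the probe succeeds, let grub-install write the new image:
      grub-install --recheck /dev/mapper/<raid name>_RAID0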

    Read the article

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine:

      UDP (DNS), ICMP (ping) — I get an instant response
      TCP connections to other machines in the local network (e.g. I can ssh to a neighbouring laptop)
      everything is ok for other machines in my LAN

    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:

      % curl -v google.com
      * About to connect() to google.com port 80 (#0)
      *   Trying 173.194.39.105... * Connection timed out
      *   Trying 173.194.39.110... * Connection timed out
      *   Trying 173.194.39.97... * Connection timed out
      *   Trying 173.194.39.102... * Timeout
      *   Trying 173.194.39.98... * Timeout
      *   Trying 173.194.39.96... * Timeout
      *   Trying 173.194.39.103... * Timeout
      *   Trying 173.194.39.99... * Timeout
      *   Trying 173.194.39.101... * Timeout
      *   Trying 173.194.39.104... * Timeout
      *   Trying 173.194.39.100... * Timeout
      *   Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
      * Success
      * couldn't connect to host
      * Closing connection #0
      curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router that is connected to the ISP via PPPoE. Any advice?
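    A hedged way to narrow this down the next time it happens, before rebooting: confirm whether the SYNs actually leave the laptop, and rule out connection-tracking table exhaustion, which would match the symptoms (new TCP flows fail while UDP, ICMP and existing LAN traffic still work). The interface name is an assumption:

      # Do outgoing SYNs reach the wire? Watch while curl runs in another shell:
      tcpdump -ni wlan0 'tcp[tcpflags] & tcp-syn != 0 and host 173.194.39.105'
      # Is the conntrack table full? (would be logged as "nf_conntrack: table full")
      dmesg | grep -i conntrack
      cat /proc/sys/net/netfilter/nf_conntrack_count \
          /proc/sys/net/netfilter/nf_conntrack_max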

    Read the article

  • apache segmentation error

    - by lush
    I can't start Apache; it fails with the following errors:

      [root@web]# /etc/init.d/httpd start
      Starting httpd: /bin/bash: line 1: 19232 Segmentation fault  /usr/sbin/httpd
      [root@web]# /usr/sbin/apachectl -k start
      /usr/sbin/apachectl: line 102: 19919 Segmentation fault  $HTTPD $OPTIONS $ARGV

    I use the Webmin control panel and I've already tried re-installing Apache from scratch. Can someone advise what else I should try? Many thanks.

    UPDATE: Only one line is ever written to the error logs, and it doesn't seem very important:

      [Mon Nov 14 19:00:09 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)

    UPDATE 2: I've recently had the error below in the logs. It looks like some modules are incompatible, so I've disabled the fileinfo and mcrypt extensions in my php.ini. I should be able to start the web server without them.

      PHP Warning: PHP Startup: fileinfo: Unable to initialize module
      Module compiled with module API=20050922, debug=0, thread-safety=0
      PHP compiled with module API=20060613, debug=0, thread-safety=0
      These options need to match
      in Unknown on line 0
      PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0

    UPDATE 3:

      [root@web]# file /usr/sbin/httpd
      /usr/sbin/httpd: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, stripped
      [root@web]# uname -m
      x86_64
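    A hedged way to find which module is actually crashing, since the PHP warnings point at a mismatched extension: start httpd in single-process mode under gdb and grab a backtrace. The paths are the ones shown above; the gdb commands are standard:

      gdb /usr/sbin/httpd
      (gdb) run -X          # -X: single process, stays in the foreground
      # ... wait for the crash ...
      (gdb) bt              # the top frames usually name the guilty .so
      # Alternatively, bisect by disabling PHP entirely first:
      # comment out the LoadModule php5_module line in httpd.conf and retry.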

    Read the article

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things, I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday a HDD that was used for this particular server died. After installing Debian again, Dovecot decided to no longer listen on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems. dovecot -n:

      # 2.1.7: /etc/dovecot/dovecot.conf
      # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3
      auth_mechanisms = plain login
      disable_plaintext_auth = no
      first_valid_uid = 150
      last_valid_uid = 150
      mail_gid = mail
      mail_location = maildir:/var/vmail/%d/%n
      mail_uid = vmail
      namespace inbox {
        inbox = yes
        location =
        prefix =
      }
      passdb {
        args = /etc/dovecot/dovecot-sql.conf.ext
        driver = sql
      }
      plugin {
        sieve = ~/.dovecot.sieve
        sieve_dir = ~/sieve
      }
      service auth {
        unix_listener /var/spool/postfix/private/auth {
          group = postfix
          mode = 0660
          user = postfix
        }
        unix_listener auth-userdb {
          group = mail
          mode = 0666
          user = vmail
        }
      }
      service imap-login {
        inet_listener imaps {
          port = 993
          ssl = yes
        }
      }
      service pop3-login {
        inet_listener pop3s {
          port = 995
          ssl = yes
        }
      }
      ssl_cert = </etc/ssl/private/mail.crt
      ssl_key = </etc/ssl/private/mail.key
      userdb {
        args = /etc/dovecot/dovecot-sql.conf.ext
        driver = sql
      }
      protocol imap {
        mail_max_userip_connections = 25
      }

    UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for:

      Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled)

    Any ideas?
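    One hedged thing to check, given the fresh reinstall: in Dovecot 2.x on Debian the imap/pop3 listeners live in separate packages, and the dovecot -n output above shows no protocols line, so the core daemon may simply have nothing to serve. A sketch; the package names are from Debian's dovecot packaging and worth double-checking:

      doveconf protocols                  # empty or "none" would explain the silence
      dpkg -l 'dovecot*'                  # compare against what the old box had
      apt-get install dovecot-imapd dovecot-pop3d dovecot-sieve dovecot-mysql
      service dovecot restart && netstat -ltn | egrep ':(993|995) '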

    Read the article

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on my physical machine. It is not available, even though I have enabled it in the BIOS, forced it in GRUB, and triple-checked that my kernel was compiled with HPET support.

      Motherboard: Supermicro X9DRW
      Processor: 2x Intel(R) Xeon(R) CPU E5-2640
      SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)
      Distro: CentOS 6.3
      Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux
      Grub: hpet=force clocksource=hpet

    .config file:

      CONFIG_HPET_TIMER=y
      CONFIG_HPET_EMULATE_RTC=y
      CONFIG_HPET=y

    dmesg | grep hpet:

      Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
      Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet

      cat /sys/devices/system/clocksource/clocksource0/current_clocksource:
      tsc
      cat /sys/devices/system/clocksource/clocksource0/available_clocksource:
      tsc jiffies

    What is even more confusing is that I have about a dozen other machines that use the same kernel .config and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help with getting HPET available. Thanks in advance!
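    A hedged next step: when HPET is detected at all, early boot logs an ACPI line for it, so it's worth checking whether the firmware is exporting the device before suspecting the kernel. Sketch:

      # Did ACPI announce an HPET block? On working boxes this usually prints
      # something like "ACPI: HPET id: ... base: 0xfed00000".
      dmesg | grep -i 'acpi.*hpet'
      # Is an HPET table present in the firmware? (acpidump ships in pmtools/acpica)
      acpidump | grep -i hpet
      # If neither shows anything, the BIOS isn't exposing HPET to the OS,
      # which would point at firmware settings/version rather than the kernel.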

    Read the article

  • RTL8188CE doesn't connect to any wifi access points

    - by Drakmail
    I'm using NetworkManager to connect. I also tried iwconfig; the results are the same. I even tried to connect to an open access point — results are the same. More information:

      Drakmail@thinkpad-x220:~$ lspci | grep Network | grep -v Ethernet
      03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01)
      Drakmail@thinkpad-x220:~$ uname -a
      Linux thinkpad-x220 3.1.0 #1 SMP PREEMPT Wed Oct 26 02:19:49 UTC 2011 x86_64 Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz GenuineIntel GNU/Linux
      Drakmail@thinkpad-x220:~$ dmesg | tail -n 10
      [  846.901574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
      [  906.812461] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
      [  966.728810] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
      [ 1026.639676] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
      [ 1030.925574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

    At this moment I try to connect to an open wifi AP:

      [ 1031.252403] wlan0: direct probe to 00:24:8c:55:fa:ed (try 1/3)
      [ 1031.451943] wlan0: direct probe to 00:24:8c:55:fa:ed (try 2/3)
      [ 1031.651658] wlan0: direct probe to 00:24:8c:55:fa:ed (try 3/3)
      [ 1031.851354] wlan0: direct probe to 00:24:8c:55:fa:ed timed out
      [ 1086.544960] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

    My distribution:

      Drakmail@thinkpad-x220:~$ cat /etc/*version
      AgiliaLinux release 8.0.0 (Sammy)

    (Something between Slackware and Arch Linux.) Also, I noticed the wifi module tries to load the firmware file suspiciously often. Any ideas what it could be?
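    A hedged check: the repeating "Loading firmware file" line can mean the load is failing and being retried, so it's worth confirming the firmware blob actually exists where the driver looks for it. Sketch:

      # Is the blob on disk?
      ls -l /lib/firmware/rtlwifi/rtl8192cfw.bin
      # A failed request usually logs an error right after the load attempt:
      dmesg | grep -iA1 'rtl8192c'
      # If the file is missing, install the distribution's firmware package or
      # copy rtl8192cfw.bin from the linux-firmware tree into /lib/firmware/rtlwifi/.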

    Read the article

  • Static IP addressing issue in Ubuntu on BeagleBoneBlack Rev C

    - by Stringfellow
    I have my BBB configured to use a static IP address with the following in /etc/network/interfaces:

      allow-hotplug eth0
      iface eth0 inet static
          address 192.168.0.1
          netmask 255.255.255.0
          network 192.168.0.0

    This seems to work ok on boot, but when the ethernet cable is unplugged and then plugged back in, I lose the IP address. Any ideas what's going on here?

    Another weird symptom: if I boot the BBB with the network cable connected but the switch it's plugged into turned off, I'll get my static IP. But when I turn the switch on, I'll get a DHCP-assigned address, even though I have it configured with a static IP address.

    One last thing: if I ifdown eth0, the interface will be gone when I run ifconfig. If I wait a few seconds and then re-run ifconfig, it will reappear, without an IP address. (Before I disabled IPv6, I used to get an IPv4 DHCP address in this case... weird.) When that happens, I get messages like this in /var/log/messages:

      Apr 23 20:32:06 beaglebone kernel: [  737.170172] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
      Apr 23 20:32:06 beaglebone kernel: [  737.170304] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    Here's my uname -a:

      root@beaglebone:/etc# uname -a
      Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux

    Any ideas what's going on here?
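    A hedged explanation: the symptoms (static IP until the link comes up, then a DHCP lease) look like a second network manager racing ifupdown on link events, and BeagleBone images of that era commonly ship connman or a udhcpc hook. A sketch to find and silence the competitor; the connman config is an assumption, verify it applies to your image:

      # Who is holding a DHCP client on eth0?
      ps aux | egrep 'connman|dhclient|udhcpc|wicd' | grep -v grep
      # If connman is running, tell it to leave eth0 alone (or remove the package):
      # /etc/connman/main.conf:
      #   [General]
      #   NetworkInterfaceBlacklist=eth0,usb0
      # then: systemctl restart connman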

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation onto Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much more than the amount of available swap, or (if you add it all up) more than the available disk. Here's a sample top output:

      Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
      Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

        PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   SWAP COMMAND
       4121 jboss  20   0 5913m 603m  14m S  0.7  8.7  3:59.90  5.2g java
      22730 root   20   0 2394m 4012 1976 S  2.0  0.1  4:20.57  2.3g PassengerHelper
      20564 rails  20   0 2539m 220m 9828 S  0.3  3.2  0:23.58  2.3g java
       1423 nscd   20   0  877m 1464  972 S  0.0  0.0  0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 GB of swap space, which is definitely impossible since there's only 1 GB allocated and none is being used (probably because there's still 1.8 GB of RAM free). And here are the results of uname -a:

      Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap, or maybe even of total memory usage, isn't what it appears to be... Any help would be appreciated!
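    For what it's worth, top's SWAP column is not read from the kernel at all: procps computes it as VIRT minus RES, so every mapped-but-not-resident byte (mmapped JARs, thread stacks, files) counts as "swap". The kernel's real per-process figure is VmSwap in /proc. A quick hedged check, using the PID from the sample above:

      # Real swap use per process (VmSwap needs kernel >= 2.6.34):
      grep VmSwap /proc/4121/status
      # Sum it across everything and compare with top's Swap: line:
      awk '/VmSwap/ {sum += $2} END {print sum " kB"}' /proc/[0-9]*/status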

    Read the article

  • Boot stuck at blinking cursor before GRUB - only works via BIOS boot menu

    - by delta1
    I have a new box running Debian Squeeze. GRUB is installed on /dev/sda, but when booting up I just get a blinking cursor before the GRUB menu. I can only boot to GRUB successfully when I choose boot options (during POST) and select that specific drive! I have made sure the correct drive is set to boot first in the BIOS. So GRUB works, but the system won't boot from that drive automatically. Any ideas on what could cause this? Drives sda/b/c are all 2TB (sda runs the system, with b/c as RAID device md0) with the following partitions:

      $ cat /proc/partitions
      major minor    #blocks  name
         8     0 1953514584  sda
         8     1        977  sda1
         8     2    9765625  sda2
         8     3    6445313  sda3
         8     4 1937302627  sda4
         8    32 1953514584  sdc
         8    16 1953514584  sdb
         9     0 1953513424  md0

    but # fdisk -l /dev/sda gives

      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1      243202  1953514583+  ee  GPT

    Any insight into this strange behaviour would be greatly appreciated.
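    A hedged check, given the GPT label: on a BIOS machine, GRUB 2 needs a small "BIOS boot" partition (the bios_grub flag) to embed its core image on GPT disks; the tiny 977-block sda1 above may already be it, but it's worth confirming. Sketch:

      # Does sda1 carry the bios_grub flag?
      parted /dev/sda print
      # If no partition shows "bios_grub", flag the small one and reinstall GRUB:
      parted /dev/sda set 1 bios_grub on
      grub-install /dev/sda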

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with one database on my server (note: all other databases work fine). When I try to export it with mysqldump I get this error:

      # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
      mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; then it disappeared somehow without me doing anything to resolve it. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0

    MySQL version:

      mysql> SHOW VARIABLES LIKE "%version%";
      +-------------------------+-------------------+
      | Variable_name           | Value             |
      +-------------------------+-------------------+
      | protocol_version        | 10                |
      | version                 | 5.1.66-0+squeeze1 |
      | version_comment         | (Debian)          |
      | version_compile_machine | x86_64            |
      | version_compile_os      | debian-linux-gnu  |
      +-------------------------+-------------------+
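    A hedged way to get more signal: error 2013 during SHOW TABLE STATUS often means mysqld itself is crashing and restarting, frequently on a corrupt .frm or InnoDB page, so check the server's error log right after a failed dump. Sketch, with the table and database names from the post:

      # Did mysqld restart at the moment of the failed dump?
      tail -n 50 /var/log/mysql/error.log   # path varies; Debian may log via syslog
      # Check the suspect table without opening the whole DB in phpMyAdmin:
      mysqlcheck -u root -p databasename apps
      # Worst case, dump everything except the broken table, then rescue it alone:
      mysqldump -u root -p --ignore-table=databasename.apps databasename > rest.sql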

    Read the article

  • Truecrypt and hidden volumes

    - by user51166
    I would like to know the opinion of users who do (or don't) use the hidden volume encryption feature of TrueCrypt. Personally, until now I have never used this feature: on Windows I encrypt the system drive as a standard volume; on GNU/Linux I encrypt using LUKS, which is TrueCrypt's equivalent of a standard volume. For data I use the standard volume approach as well.

    I read that this feature is nice and all, but it isn't really used by most people. Do you use it or not? Why? Do you only store VERY sensitive data inside it, or what else? Because technically speaking, a hidden volume which has (almost) the same size as the outer one doesn't make sense: the outer volume will be encrypted but no data will be on it, which will appear very strange. So not only does one have to plan which data to store where, one even has to remember each time to mount the outer volume with hidden volume protection (otherwise there'll be data loss when writing to it). It's a bit messy: hidden OS + outer OS + outer volume + hidden volume = 4 partitions :(

    Similar question about the hidden operating system (which I don't use [yet]).

    Read the article

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some CPUs for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets:

      $ cset set --list -r
      cset:
               Name         CPUs-X     MEMs-X  Tasks  Subs  Path
      -------------  ------------ -  ------ -  -----  ----  ----------
               root          0-23 y     0-1 y    279     2  /
             system  0,2,4,6,8,10 n       0 n    202     0  /system
             shield  1,3,5,7,9,11 n       1 n      0     2  /shield
      RealtimeTest1       1,3,5,7 n       1 n      0     4  /shield/RealtimeTest1
            thread1             3 n       1 n      1     0  /shield/RealtimeTest1/thread1
            thread2             5 n       1 n      1     0  /shield/RealtimeTest1/thread2
               main             1 n       1 n      1     0  /shield/RealtimeTest1/main

    I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the CPUs I requested:

      /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done
      ./thread1/tasks
      17651
      ------------
      ./main/tasks
      17649
      ------------
      ./thread2/tasks
      17654
      ------------

    Further, if I use sched_getaffinity, it reports what cpuset does — that thread1 is on CPU 3 and thread2 is on CPU 5. However, if I run top -p 17649 -H with f,j to bring up the last-used CPU, it shows that thread1 is running on thread2's CPU, and the main thread is running on a CPU in the system cpuset (note that thread 17654 is running FIFO, hence thread 17651 is blocked):

        PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  P COMMAND
      17654 root   -2   0 54080  35m 7064 R  100  0.4  5:00.77 3 RealtimeTest
      17649 root   20   0 54080  35m 7064 S    0  0.4  0:00.05 2 RealtimeTest
      17651 root   20   0 54080  35m 7064 R    0  0.4  0:00.00 3 RealtimeTest

    Also, looking at /proc/17649/task to find the last_cpu each of its tasks ran on:

      /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done
      17649 is on 2
      17651 is on 3
      17654 is on 3

    So cpuset and sched_getaffinity report one thing, but reality is another. I would say that cpuset is not working? My machine configuration is:

      $ cat /etc/SuSE-release
      SUSE Linux Enterprise Server 11 (x86_64)
      VERSION = 11
      PATCHLEVEL = 1
      $ uname -a
      Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
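    One hedged observation: top's P column and the corresponding field in /proc/<tid>/stat are only "processor last ran on", which is sampled and can be stale for a task blocked in a syscall. A sketch that reads the live affinity mask instead, using the TIDs from above:

      # Effective affinity as the scheduler sees it (per thread, not per process):
      taskset -cp 17651     # expect: pid 17651's current affinity list: 3
      taskset -cp 17654     # expect: 5
      # Cross-check which cpuset each thread really belongs to:
      cat /proc/17651/cpuset /proc/17654/cpuset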

    Read the article

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1) and would have weekly snapshots taken. I've already done this with ZFS: the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, at this point I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon that my alternatives are:

    btrfs: it seems to be exactly what I need, since it has snapshots and commands that let you easily replicate zfs send. Yet all documentation mentions that it's still experimental. I can't seem to find any actual reports on its reliability or usability issues. Can you point me to any information that could clarify whether it would be a possible choice? I have a large preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, whether it's a silly idea to use it, etc. The question that I cannot get an answer to is what "experimental" means.

    lvm snapshots and ext4: preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space plus 200 GB additionally for snapshots. I have also found it unreliable: a failed metadata rewrite results in an unreadable PV. I'm wondering how btrfs would compare here.

    A single filesystem (ext4) on a RAID-1 array with custom COW snapshots using hardlinks (like cp -al). That's my current preference if I can't use btrfs.

    So: how experimental is btrfs, which should I choose, and do I have any other options? What if I don't keep external incremental backups; would that affect my choice?
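    For reference, the ZFS-style workflow described above maps onto btrfs roughly like this. A minimal sketch, assuming /home is a btrfs subvolume and a btrfs-progs recent enough to have send/receive (which arrived well after the basic snapshot support, so on older kernels only the snapshot half applies):

      # Weekly read-only snapshot of the home subvolume:
      btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%Y%m%d)
      # Replicate it to external media, zfs-send style:
      btrfs send /home/.snapshots/home-20121201 | btrfs receive /mnt/backup/
      # Later weeks can ship only the delta against the previous snapshot:
      btrfs send -p /home/.snapshots/home-20121201 \
                    /home/.snapshots/home-20121208 | btrfs receive /mnt/backup/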

    Read the article

  • How do I re-configure grub options

    - by Ash
    I've searched this topic and I can't find an article with a plausible solution to my problem. I installed Windows 7 first, with 100 GB of disk space, and created the necessary partitions via Windows. Then I installed Ubuntu 12.04 on the remaining 400 GB of disk space. During the Ubuntu installation I installed the Ubuntu boot loader on /dev/sda3 (Windows, as expected, never granted me a pre-boot option for which OS I wanted to boot). So I re-installed Ubuntu on that /dev/sda3 partition, overriding the Windows 7 loader. Now when I boot Windows 7, it runs GNU GRUB, like an infinite loop. How can I reconfigure GRUB to say:

      /dev/sda is the bootloader.
      /dev/sda2 is Windows.
      /dev/sda3 is Ubuntu.

    Re-installing Windows and my partitions isn't an option, and purchasing software for Windows isn't an option (there's a reason I use Linux: it's not because it's free, it's because you don't have to install lots of extra software to get things working, and overall it's a robust OS).
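    A hedged repair path from the working Ubuntu install (or a live CD chrooted into it): put GRUB in the MBR of the whole disk so it owns the boot, and let os-prober add the Windows entry. The device names follow the layout described above; double-check them with lsblk first:

      sudo grub-install /dev/sda      # the whole disk, not /dev/sda3
      sudo update-grub                # os-prober should list "Windows 7 (loader)"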

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost, with

      EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly"

    in make.conf, so that each host only pulls down binary packages. All works well from that side. I use eix to check software versions for upgrades, but have hit a problem where eix sees an available version ahead of what is available on the binhost. Using glibc as an example:

      ietpl [VE] / # emerge -s glibc
      Searching...
      [ Results for search key : glibc ]
      [ Applications found : 1 ]
      *  sys-libs/glibc
            Latest version available: 2.14.1-r3
            Latest version installed: 2.14.1-r3
            Homepage:
            Description: GNU libc6 (also called glibc2) C library
            License: LGPL-2

    Then eix reports a higher version available:

      ietpl [VE] / # export LASTVERSION='{last}<version>{}'
      ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
      sys-libs glibc [2.14.1-r3] [2.15-r2]

    What I'm after is for eix to report the latest available version as 2.14.1-r3, like emerge does. I have a feeling this is possible, since without any formatting eix returns

      Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s

    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought the binary flag would do it, but that returns no matches:

      --binary  Match packages with *.tbz2 files.

    Read the article

  • Eclipse throwing error when copying and pasting

    - by hoffmandirt
    I am using Eclipse 3.5 SR2 for Java EE developers. Each time I press Ctrl+C or Ctrl+V for the first time after I open a file, I get an error. After I close the error, I can successfully copy and paste. The error message made me believe it was related to the Mylyn plugin, but I uninstalled it and it made no difference. Has anyone else experienced this problem? I also have the Subclipse, Adobe Flex Builder, and Maven plugins installed.

      The 'org.eclipse.mylyn.tasks.ui.hyperlinks.detectors.url' extension from plug-in 'org.eclipse.mylyn.tasks.ui' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector.
      Plug-in org.eclipse.mylyn.tasks.ui was unable to load class org.eclipse.mylyn.internal.tasks.ui.editors.TaskUrlHyperlinkDetector.
      An error occurred while automatically activating bundle org.eclipse.mylyn.tasks.ui (520).

      The 'org.eclipse.mylyn.java.hyperlink.detector.stack' extension from plug-in 'org.eclipse.mylyn.java.tasks' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector.
      Plug-in org.eclipse.mylyn.java.tasks was unable to load class org.eclipse.mylyn.internal.java.tasks.JavaStackTraceHyperlinkDetector.
      org/eclipse/mylyn/tasks/ui/AbstractTaskHyperlinkDetector

      The 'org.eclipse.mylyn.tasks.ui.hyperlinks.detectors.task' extension from plug-in 'org.eclipse.mylyn.tasks.ui' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector.
      Plug-in org.eclipse.mylyn.tasks.ui was unable to load class org.eclipse.mylyn.internal.tasks.ui.editors.TaskHyperlinkDetector.
      An error occurred while automatically activating bundle org.eclipse.mylyn.tasks.ui (520).

    Read the article

  • MacPorts - Installing Port, Dependencies Failed

    - by Louis
    I am attempting to install xulrunner on OS X 10.6.3 using the following:

      sudo port install xulrunner

    However, I am receiving the following error:

      nat-10-200-136-126:phoneyc-new $ sudo port install xulrunner
      --->  Computing dependencies for xulrunner
      --->  Activating zlib @1.2.5_0
      Error: The following dependencies failed to build: gconf dbus-glib glib2 zlib gtk-doc docbook-xml
      docbook-xml-4.1.2 xmlcatmgr docbook-xml-4.2 docbook-xml-4.3 docbook-xml-4.4 docbook-xml-4.5
      docbook-xml-5.0 docbook-xsl gnome-doc-utils iso-codes libxslt libxml2 p5-xml-parser py26-libxml2
      python26 bzip2 db46 gdbm openssl readline sqlite3 tk Xft2 fontconfig freetype xrender xorg-libX11
      xorg-bigreqsproto xorg-inputproto xorg-kbproto xorg-libXau xorg-xproto xorg-libXdmcp
      xorg-util-macros xorg-xcmiscproto xorg-xextproto xorg-xf86bigfontproto xorg-xtrans
      xorg-renderproto tcl xorg-libXScrnSaver xorg-libXext xorg-scrnsaverproto rarian getopt intltool
      gnome-common p5-getopt-long p5-pathtools p5-scalar-list-utils gtk2 atk cairo libpixman libpng
      jasper jpeg pango shared-mime-info tiff xorg-libXcomposite xorg-compositeproto xorg-libXfixes
      xorg-fixesproto xorg-libXcursor xorg-libXdamage xorg-damageproto xorg-libXi xorg-libXinerama
      xorg-xineramaproto xorg-libXrandr xorg-randrproto orbit2 libidl policykit heimdal lcms
      libcanberra gstreamer bison flex gzip texinfo lzmautils libvorbis libogg libnotify nss
      xorg-libXt xorg-libsm xorg-libice
      Error: Status 1 encountered during processing. Before reporting a bug, first run the command
      again with the -d flag to get complete output.
      nat-10-200-136-126:phoneyc-new$

    I am unsure how to correct this issue, so any help would be much appreciated!
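    A hedged way to get at the real failure: the error names the whole dependency chain, but only the first broken port matters, and in the transcript above the activation of zlib is the first thing that goes wrong. A sketch; whether zlib is really the culprit is a guess until the -d output confirms it:

      sudo port clean zlib && sudo port -d install zlib   # -d shows the actual build error
      # If zlib activates cleanly this time, retry the original install:
      sudo port install xulrunner
      # A stale ports tree is another common cause:
      sudo port selfupdate && sudo port upgrade outdated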

    Read the article

  • Thrift, .NET, Cassandra - Is this the right combination?

    - by Vadi
    I've been evaluating a technology stack for developing a social-network-based application. Below is the stack I think could be well suited for this type of application:

    GUI -- ASP.NET MVC, Flash (Flex)

    Business Services -- Thrift-based services. One of the advantages of using Thrift is to solve the scaling problems that will come in the future when the user base increases rapidly. All the business logic can be exposed as services using REST, JSON, etc. This also allows us to go with C++- or Erlang-based services when the situation demands.

    Database -- MySQL, Cassandra. MySQL can be used for storing the data which needs to be persisted. Cassandra will be used for storing global identifiers to the persisted data. Since Cassandra is also very good at scaling by introducing more nodes, this will leverage the Thrift-based services as well. There is also native support between Cassandra and Thrift.

    Cache Server -- Memcached. Any requests from the Business Services will only talk to Memcached if non-dirty data is required; otherwise background jobs will invalidate the cache from the database.

    The questions are: Is Thrift, the open-sourced one, production-ready? Is it the right stack for the services layer when the application (GUI) is primarily developed in ASP.NET and the DB is MySQL? Are there any other caveats anyone here has experienced? One of the main objectives behind this stack is to scale up easily with more nodes; it also lets us use Linux boxes, which will reduce our cost significantly. Thoughts please...

    Read the article

  • WPF Control validation

    - by Jon
    Hi all, I'm developing a WPF GUI framework and have had bad experiences with two-way binding and lots of unneeded events being fired (mainly in Flex), so I have gone down the route of having bindings (strings that represent object paths) in my controls. When a view is requested to be displayed, the controller loads the view, gets the needed entities (using the bindings) from the DB, and populates the controls with the correct values. This has a number of advantages, e.g. lazy loading, default undo behaviour, etc. When the data in the view needs to be saved, the view is passed back to the controller again, which basically does the reverse, i.e. re-populates the entities from the view if values have changed.

    However, I have run into problems when I try to validate the components. Each entity has attributes on its properties that define validation rules which the controller can access easily and validate the data from the view against. The actual validation of data is fine. The problem comes when I want the GUI control to display validation error information. If I try changing the style, I get errors saying styles cannot be changed once in use. Is there a way in C# to fire off the normal WPF validation mechanism and just provide it with the validation errors the controller has found?

    Thanks in advance
    Jon

    Read the article

  • IE History Tracking, IFRAMES, and Cross Domain error...

    - by peiklk
    So here's the deal. We have a Flash application that runs within an HTML file. For one page we call a legacy reporting system in ASP.NET that sits within an IFRAME. This page then communicates back to the Flash application using cross-domain scripting (document.domain = "domain" is set in both pages). THIS ALL WORKS.

    Now the kicker. Flash has history tracking enabled. This loads the history.js file, which creates a div tag to store page changes so the back and forward buttons work in the browser. This works for Firefox and Chrome, as they create a div tag. HOWEVER, in Internet Explorer, history.js creates another IFRAME (instead of a DIV) called ie_historyFrame. When the ScriptResource.axd code attempts to access this with:

      var frameDoc = this._historyFrame.contentWindow.document;

    we get an "Access is Denied" error message. ARGH! We've tried getting a handle to this IFRAME and inserting the document.domain code. FAIL. We've tried editing the historytemplate.html file that Flex also uses, to include document.domain... FAIL. I've tried editing the underlying ASP.NET page to disable history tracking in the ScriptManager control. FAIL.

    I'm at my wit's end on this one. We have users who need to use IE to access this site. They are big clients who we cannot tell to just use Firefox. Any suggestions would be greatly appreciated.

    Read the article

  • How do I play back a WAV in ActionScript?

    - by Jeremy White
    Please see the class I have created at http://textsnip.com/51013f for parsing a WAVE file in ActionScript 3.0. This class correctly pulls apart info from the file header and fmt chunks, isolates the data chunk, and creates a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:

      [Embed(source="some_sound.wav", mimeType="application/octet-stream")]
      public var sound_class:Class;
      public var wave:WaveFile = new WaveFile(new sound_class());

    After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc. Here are the two documents I'm using as a reference for the WAVE file format:

      http://www.lightlink.com/tjweber/StripWav/Canon.html
      https://ccrma.stanford.edu/courses/422/projects/WaveFormat/

    Right now, the file IS playing back! In real time, even! But... the sound is really distorted. What's going on?

    Read the article

  • AS3 - Unloaded AVM1 swfs trace out as unloaded but memory is not freed for the AVM2 machine

    - by puppbits
    I have a large project built in AS3. Part of its main functionality is to load and unload various AS2 SWFs. The problem is that the memory isn't freed up once they are unloaded. I have access to the AS2 code base and have destroyed all objects, stopped and killed timers and listeners, removed everything from the stage, and destroyed all the MovieClip.prototype extensions that were created. Everything looks clean, as the AS2 debugger shows no remnants of the objects after the destroy function is run.

    In AS3 I've closed the LocalConnection, cleaned all references/listeners to the AVM1Movie, and run Loader.unloadAndStop(). The trace in Flex says the SWF was unloaded, but looking at the Windows Task Manager, the memory usage never drops to where it was before the AS2 SWF was loaded. Each AS2 SWF can take up to 80 MB each time it's run, so memory gets eaten up fast after loading and unloading a few AS2 files.

    At this point, if the AS2 SWFs are unloaded, the only thing I can assume is left is MovieClip.prototype extensions and/or _global or _root variables added during the AS2 run time. But I've gone through those and can't find anything else that might be sticking around. Has anyone seen problems before with the AVM1 machine not freeing up its memory?

    Read the article
