Search Results

Search found 2301 results on 93 pages for 'cat'.


  • Where can I find /sbin/hotplug in my kernel source?

    - by Rahul
    I am reading about hotplug events, and I want to enable automatic loading and unloading of modules when a new device is added. I read that the kernel does this through the /sbin/hotplug script, but I cannot find that script anywhere in my kernel source tree. Can someone point me to where it lives? Also, when I run cat /proc/sys/kernel/hotplug the output is empty, and I am running the same kernel that I downloaded and built.
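
    A few quick checks, sketched below; the kernel option name and paths are the usual ones rather than anything taken from the question:

        cat /proc/sys/kernel/hotplug                         # empty output means no userspace helper is registered
        grep CONFIG_UEVENT_HELPER /boot/config-$(uname -r)   # kernel option that sets the default helper path
        echo /sbin/hotplug > /proc/sys/kernel/hotplug        # (as root) re-enable the legacy helper if you really want it

    On recent kernels /sbin/hotplug is not shipped in the source tree at all; udev listens for uevents over netlink and loads modules itself, which is why the proc file is usually empty.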


  • Saving files with libgdx

    - by Rudy_TM
    Writing my game in libgdx, I have reached the point where I need to save the player stats and the level info, but libgdx does not let me write a file inside the application's folder; only external storage (the SD card) is allowed. The problem is that I don't want my new file to be readable by just anyone, or, if it has to be visible, I'd like to turn it into a binary file so that nobody can read its contents. I just want to hide the file :P


  • Your Presence Matters at the PASS Summit

    - by AllenMWhite
    This year will be my tenth year attending the annual PASS Summit. It's in Seattle again this year, which to me is important because of the proximity to the Microsoft offices and the people on the SQL Server dev team, as well as the CX (formerly CAT) teams that help so many people get the most out of SQL Server. The conference is the biggest event in the world specifically focused on SQL Server. As a result, it's an opportunity to meet many people who are directly focused on the SQL Server platform,...(read more)


  • Booting the redis server with no errors

    - by Tylër
    Redis, however, usually starts with the following errors: tyler@tyler-vortex:~/pens$ ./src/redis-server [3690] Dec 01 10:56:05 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf' [3690] Dec 01 10:56:05 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 992. Other errors found: tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh Welcome to the redis service installer This script will help you easily set up a running redis server Please select the redis port for this instance: [6379] Selecting default: 6379 Please select the redis config file name [/etc/redis/6379.conf] Selected default - /etc/redis/6379.conf Please select the redis log file name [/var/log/redis_6379.log] Selected default - /var/log/redis_6379.log Please select the data directory for this instance [/var/lib/redis/6379] Selected default - /var/lib/redis/6379 Please select the redis executable path [/usr/local/bin/redis-server] cat: ./redis.conf.tpl: No such file or directory cat: ./redis_init_script.tpl: No such file or directory ERROR: Could not write init script to /tmp/6379.conf. Aborting! Furthermore, I would like to know how to configure it so that it does not consume so much RAM. I followed the memory configuration guidance for our website, but the "vm-*" settings do not exist in the redis.conf file: http://redis.io/topics/virtual-memory Do I have to create them? Edit: I installed it. After that, I believe I no longer have access via ./src/redis-server, because this happens: tyler@tyler-vortex:~$ cd redis/ tyler@tyler-vortex:~/redis$ ./src/redis-server [2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf' [2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use tyler@tyler-vortex:~/redis$ But there is another detail: redis now starts with the system. redis 127.0.0.1:6379> exit tyler@tyler-vortex:~/redis$ ./src/redis-cli redis 127.0.0.1:6379> exit ... but how can I now see the setup I had before installing it from the .sh script?
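
    A hedged guess at the two failures, with the directory layout assumed from the prompts in the question:

        # install_server.sh reads its *.tpl templates relative to the current directory,
        # so running it from inside utils/ usually avoids the "No such file or directory" errors:
        cd ~/redis/utils && sudo ./install_server.sh
        # "Address already in use" afterwards just means the installed service already owns port 6379:
        sudo netstat -tlnp | grep 6379        # which process is bound to the port
        redis-cli ping                        # talk to the running instance instead of starting a second one

    As for memory, the vm-* directives only exist in Redis builds that still ship virtual memory; if your redis.conf has no such settings, the maxmemory directive is the usual way to cap RAM usage.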


  • RAID md device is not removed from memory; how can I overcome this problem?

    - by santhosha
    I created a RAID 10 array and removed two member devices from md11 one by one. After that I tried to edit contents mounted from it (the mount stops responding), and when I try to remove the remaining devices it reports "device or resource busy" (the array is not removed from memory). Trying to terminate the processes does not work either, and for four days the resync has been stuck at 8.0% and does not change. cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj14 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec mdadm -D /dev/md11 /dev/md11: Version : 00.90.03 Creation Time : Sun Jan 16 16:20:01 2011 Raid Level : raid10 Array Size : 286743936 (273.46 GiB 293.63 GB) Device Size : 143371968 (136.73 GiB 146.81 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 11 Persistence : Superblock is persistent Update Time : Sun Jan 16 16:56:07 2011 State : active, degraded, resyncing Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K Rebuild Status : 8% complete UUID : 5e124ea4:79a01181:dc4110d3:a48576ea Events : 0.23 Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 4 8 145 2 faulty spare rebuilding /dev/sdj1 3 8 65 3 active sync /dev/sde1 umount /dev/md11 umount: /dev/md11: not mounted mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 kill -9 2128 kill -9 5018 kill -9 27605 kill -9 30562 kill -3 30591 mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj14 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
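
    A sketch of the usual escalation, using the device name from the question; everything else is an assumption:

        lsof /dev/md11                        # the mount/badblocks processes shown above must really exit first
        kill -9 $(lsof -t /dev/md11)          # -t prints bare PIDs, convenient for kill
        umount -l /dev/md11 2>/dev/null       # lazy unmount in case a mount point is wedged
        mdadm --stop /dev/md11                # only succeeds once nothing holds the device open
        cat /proc/mdstat                      # confirm md11 is gone

    If those processes are stuck in uninterruptible I/O (state D in ps), no signal will remove them, and a reboot is usually the only way to release the array.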


  • Trying to install ffmpeg-php and having installation issues.

    - by dallasclark
    I've installed ffmpeg successfully using the ffmpeginstaller 3 series (http://www.ffmpeginstaller.com/download). ffmpeg is working fine without any known issues with bash. The ffmpeginstaller is meant to install ffmpeg-php but it cannot be found and I receive an error when I execute php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 Looking at the '/usr/lib64/php/modules/' folder, it doesn't contain the ffmpeg.so file. I've tried to install ffmpeg-php manually but I receive the following error checking for ffmpeg headers... configure: error: ffmpeg headers not found. Make sure you've built ffmpeg as shared libs using the --enable-shared option Should I install ffmpeg with series 4 or 5 of ffmpeginstaller or does someone know how to fix this issue? Thanks in advance ! System Specs cat /etc/redhat-release CentOS release 5.5 (Final) cat /proc/version Linux version 2.6.18-028stab068.5 (root@rhel5-64-build) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP 5.2.13 (cli) (built: Mar 2 2010 18:08:48) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies Any other details you need, just let me know.
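
    A hedged sketch of the usual fix; the source directories and the ffmpeg-php version below are assumptions, not paths from the question:

        # rebuild ffmpeg with shared libraries so ffmpeg-php can find the headers and .so files
        cd ~/ffmpeg && ./configure --enable-shared --prefix=/usr/local && make && make install
        ldconfig                                                # refresh the shared-library cache (as root)
        cd ~/ffmpeg-php-0.6.0 && phpize && ./configure && make && make install
        echo 'extension=ffmpeg.so' > /etc/php.d/ffmpeg.ini      # CentOS-style per-extension config (as root)
        php -m | grep ffmpeg                                    # should list the module with no startup warning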


  • Xen HVM networking won't work

    - by Nathan
    I'm trying to get a Xen HVM network working using route however I am failing. Xen PV works fine using Ubuntu but when installing Ubuntu on HVM it fails to pick up the network. I'll let you know now that I'm not that experienced with Xen so I would appreciate any help. vm104 is the HVM thats causing me the problems, here is the configs that I believe should help resolve the problem. [root@eros vm104]# cat vm104.cfg import os, re arch = os.uname()[4] if re.search('64', arch): arch_libdir = 'lib64' else: arch_libdir = 'lib' kernel = '/usr/lib/xen/boot/hvmloader' builder = 'hvm' memory = 6000 shadow_memory = '8' cpu_weight = 256 name = 'vm104' vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0'] acpi = 1 apic = 1 vnc = 1 vcpus = 4 vncdisplay = 3 vncviewer = 0 vncconsole = 1 vnclisten = '217.118.x.y' vncpasswd = 'kCfb5S4tE7' serial = 'pty' disk = ['phy:/dev/vpsvg/vm104_img,hda,w', 'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r'] device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm' boot = 'cd' sdl = '0' usbdevice = 'tablet' pae=1 [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)" (xend-unix-server yes) (xend-unix-path /var/lib/xend/xend-socket) (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$') (network-script network-route) (vif-script vif-route) (network-script 'network-route netdev=eth0') (dom0-min-mem 256) (dom0-cpus 0) (vnc-listen '0.0.0.0') (vncpasswd '') (keymap 'en-us') The Windows install will not pick up the network - I've tried setting the IP manually by using the Xen servers IP as the gateway and setting the main IP in Windows but no luck. If anyone needs any more information let me know and I appreciate any input!


  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem:512M,max:512M in /etc/default/grub as follows: GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M" I've tried variations of that, with the same result. My question is this: With the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following: $ cat /proc/meminfo MemTotal: 432472 kB MemFree: 54144 kB Buffers: 17640 kB Cached: 220104 kB SwapCached: 30172 kB Active: 136500 kB Inactive: 167780 kB Active(anon): 6156 kB Inactive(anon): 60516 kB Active(file): 130344 kB Inactive(file): 107264 kB Unevictable: 52 kB Mlocked: 52 kB SwapTotal: 1794044 kB SwapFree: 1682012 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 39572 kB Mapped: 8048 kB Shmem: 136 kB Slab: 44324 kB SReclaimable: 22012 kB SUnreclaim: 22312 kB KernelStack: 1280 kB PageTables: 3840 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 2010280 kB Committed_AS: 329192 kB VmallocTotal: 34359738367 kB VmallocUsed: 313988 kB VmallocChunk: 34359417340 kB HardwareCorrupted: 0 kB AnonHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 524696 kB DirectMap2M: 0 kB top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because of the onboard graphics borrowing some memory, but have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount relatively high to the amount of RAM available?
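
    For what it is worth, roughly 90 MB of "missing" RAM is about what the kernel itself reserves; a few commands can show where it went (xl is assumed to be the toolstack on this Xen version):

        dmesg | grep -i 'Memory:'             # the boot log states how much the kernel reserved for its own code/data
        xl info | grep -i memory              # what the hypervisor actually granted dom0
        grep MemTotal /proc/meminfo           # MemTotal excludes kernel text, initramfs and reserved regions

    MemTotal is never the full dom0_mem figure, because kernel text and data, the initramfs and various reserved pages are subtracted before userspace ever sees them.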


  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows: [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined. My date is as follows: [root@machine cron.daily]# date Thu Aug 23 06:25:21 GMT 2012 Now, based on details in various forums, I tried to fix this by setting /etc/timezone to “+0800”, but it didn’t work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone. EDIT: Sadly, both changes are not working: [root@machine cron.daily]# cat /etc/TIMEZONE UTC Quanta’s suggestion: [root@machine cron.daily]# cat ~/.bash_profile # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export TZ=GMT export PATH [root@machine cron.daily]# source ~/.bash_profile [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined.
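
    A hedged sketch: ~/.bash_profile is not read when logwatch runs from cron, so TZ has to live somewhere system-wide (the file location below is an assumption about this distro):

        echo 'TZ=GMT' | sudo tee -a /etc/environment   # read by PAM/cron sessions, unlike ~/.bash_profile
        TZ=GMT ./0logwatch                             # one-off test with TZ exported explicitly

    Note that “+0800” is an offset, not a zone name; Date::Manip expects something like GMT or Etc/GMT.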


  • SSH over HTTPS with proxytunnel and nginx

    - by Thermionix
    I'm trying to setup an ssh over https connection using nginx. I haven't found any working examples, so any help would be appreciated! ~$ cat .ssh/config Host example.net Hostname example.net ProtocolKeepAlives 30 DynamicForward 8118 ProxyCommand /usr/bin/proxytunnel -p ssh.example.net:443 -d localhost:22 -E -v -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)" ~$ ssh [email protected] Local proxy ssh.example.net resolves to 115.xxx.xxx.xxx Connected to ssh.example.net:443 (local proxy) Tunneling to localhost:22 (destination) Communication with local proxy: -> CONNECT localhost:22 HTTP/1.0 -> Proxy-Connection: Keep-Alive -> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32) <- <html> <- <head><title>400 Bad Request</title></head> <- <body bgcolor="white"> <- <center><h1>400 Bad Request</h1></center> <- <hr><center>nginx/1.0.5</center> <- </body> <- </html> analyze_HTTP: readline failed: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host Nginx config on the server; ~$ cat /etc/nginx/sites-enabled/ssh upstream tunnel { server localhost:22; } server { listen 443; server_name ssh.example.net; location / { proxy_pass http://tunnel; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; } ssl on; ssl_certificate /etc/ssl/certs/server.cer; ssl_certificate_key /etc/ssl/private/server.key; } ~$ tail /var/log/nginx/access.log 203.xxx.xxx.xxx - - [08/Feb/2012:15:17:39 +1100] "CONNECT localhost:22 HTTP/1.0" 400 173 "-" "-"
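
    One hedged observation: stock nginx rejects the CONNECT method that proxytunnel sends, which matches the 400 in the log; a common workaround is to multiplex SSH and TLS on port 443 with sslh instead of proxying CONNECT through nginx (package name and Debian-style option format assumed):

        sudo apt-get install sslh
        # /etc/default/sslh: listen on 443, hand SSH to sshd and TLS on to nginx on another port
        # DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:4443"
        sudo service sslh restart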


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says: smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ... where that PCRE map looks like this: /./ FILTER lmtp:[127.0.0.1]:11124 This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution): for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done The symptom I noticed after doing all of this was that dspam was horrible at classifying spam — it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)


  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script: $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run Verifying archive integrity... All good. Uncompressing VirtualBox for Linux installation........... VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox Installing VirtualBox to /opt/VirtualBox tar: Record size = 8 blocks Python found: python, installing bindings... Building the VirtualBox kernel modules Error! Bad return status for module build on kernel: 3.2.6 (i686) Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information. ERROR: binary package for vboxhost: 4.1.14 not found Here's the log: $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686) Sun May 13 14:32:52 CEST 2012 make: Entering directory `/usr/src/linux-headers-3.2.6' /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'. Stop. make: Leaving directory `/usr/src/linux-headers-3.2.6' /usr/src/linux-headers-3.2.6/arch/x86/ directory: $ ls /usr/src/linux-headers-3.2.6/arch/x86/ Kconfig Makefile ia32 lguest mm pci tools video Kconfig.cpu boot kernel lib net platform um xen Kconfig.debug crypto kvm math-emu oprofile power vdso Makefile references on "cpu" $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu include $(srctree)/arch/x86/Makefile_32.cpu # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu) Before upgrading to 3.X I didn't have this problem, the script would install VB correctly. Any ideas on what might be causing this? Thanks in advance!
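
    The make.log points at a header package that is missing arch/x86/Makefile_32.cpu; a hedged sketch of restoring that one file from the matching kernel source (download URL and paths are assumptions):

        cd /usr/src
        wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.2.6.tar.bz2
        tar xjf linux-3.2.6.tar.bz2 linux-3.2.6/arch/x86/Makefile_32.cpu
        cp linux-3.2.6/arch/x86/Makefile_32.cpu /usr/src/linux-headers-3.2.6/arch/x86/
        sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run   # re-run the installer so DKMS rebuilds vboxhost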


  • Linux NIC Bonding Issue (CentOS 4 / RHEL 3)

    - by jinanwow
    I am having an issue with bonding NICs on CentOS 4. It appears the bonding driver does work, but it is stuck in round-robin mode and I am trying to get to active-backup. The current config is: ifcfg-bond0 DEVICE=bond0 IPADDR=192.168.204.18 NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none USERCTL=no TYPE=Bonding BONDING_OPTS="mode=1 miimon=100" ifcfg-eth1 DEVICE=eth1 BOOTPROTO=none ONBOOT=yes TYPE=Ethernet MASTER=bond0 SLAVE=yes ifcfg-eth3 DEVICE=eth3 ONBOOT=yes BOOTPROTO=none TYPE=Ethernet MASTER=bond0 SLAVE=yes cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005) Bonding Mode: load balancing (round-robin) MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth1 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:17:a4:8f:94:b1 Slave Interface: eth3 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:1b:21:56:b8:69 cat /etc/modprobe.conf alias eth0 tg3 alias eth1 tg3 alias eth3 e1000 alias eth2 e1000 alias bond0 bonding options bond0 mode=1 miimon=100 I have tried moving the bonding information out of the ifcfg-bond0 into the modprobe configuration file. It seems that it is stuck in RR and I am trying to get it into the Active-backup (mode 1) state. Any ideas what would be causing this issue?
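
    A hedged guess: the mode shown in /proc/net/bonding/bond0 is whatever the driver was loaded with, and the CentOS 4 initscripts generally do not honour BONDING_OPTS, so the module has to be reloaded after modprobe.conf is right:

        service network stop
        rmmod bonding                  # unload so the "options bond0 mode=1 miimon=100" line is re-read
        modprobe bond0                 # the alias in modprobe.conf pulls bonding back in with those options
        service network start
        cat /proc/net/bonding/bond0    # should now report fault-tolerance (active-backup) and a 100 ms miimon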


  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg after about half an hour after I turn on the computer: [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32 [ 1355.677973] Aborting journal on device sda2-8. [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152 [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119 /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry: UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1 System information: $ cat /proc/mounts | grep /dev/sd /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0 $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.04 DISTRIB_CODENAME=lucid DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS" $ uname -a Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux I've run memtest for 7 hours, it didn't found any memory errors. Any obvious ideas what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure to be set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
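
    A few non-destructive checks, sketched below with assumed package names, help separate an SSD fault from a filesystem or kernel bug:

        sudo smartctl -a /dev/sda                 # SMART error log and media wear indicators (smartmontools)
        sudo fsck.ext4 -fn /dev/sda2              # -n reports inconsistencies without writing anything
        sudo badblocks -sv /dev/sda               # read-only surface scan; does not overwrite data
        dmesg | grep -iE 'ata[0-9]|sda|error'     # low-level link or I/O errors would implicate the drive or cable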


  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine based on S3C2416 board. According to the specifications I have available there should be a 533 MHz ARM9 (ARM926EJ-S according to /proc/cpuinfo), however the software running on it "feels" slow, compared to the same software on my Android phone with a 528MHz ARM CPU. /proc/cpuinfo tells me that BogoMIPS is 266.24. I know that I should not trust BogoMIPS regarding performance ("Bogo" = bogus), however I would like to get a measurement on the actual CPU speed. On x86, I could use the rdtsc instruction to get the time stamp counter, wait a second (sleep(1)), read the counter again to get an approximation on the CPU speed, and according to my experience, this value was close enough to the real CPU speed. How can I find the actual CPU speed of given ARM processor? Update I found this simple Pi calculator, which I compiled both for my Android phone and the ARM board. The results are as follows: S3C2416 # cat /proc/cpuinfo Processor : ARM926EJ-S rev 5 (v5l) BogoMIPS : 266.24 Features : swp half fastmult edsp java ... #./pi_arm 10000 Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave ... 8.50 sec. (real time) Android # cat /proc/cpuinfo Processor : ARMv6-compatible processor rev 2 (v6l) BogoMIPS : 527.56 Features : swp half thumb fastmult edsp java # ./pi_android 10000 Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave ... 5.95 sec. (real time) So it seems that the ARM926EJ-S is slower than my Android phone, but not twice slower as I would expect by the BogoMIPS figures. I am still unsure about the clock speed of the ARM9 CPU.
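
    If the kernel on the board was built with cpufreq support, the real clock is exposed in sysfs; the paths below are the usual ones but are an assumption for this particular kernel:

        cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq   # current core clock in kHz
        cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
        dmesg | grep -iE 'clock|mhz'                                 # many board files print ARMCLK/HCLK at boot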


  • How to make gpg2 flush the stream?

    - by Vi
    I want slowly flowing data to be saved in encrypted form on a device that can be turned off abruptly. But gpg2 does not seem to flush its output frequently, and I get broken files when I try to read such a truncated file. vi@vi-notebook:~$ cat asdkfgmafl asdkfgmafl ggggg ggggg 2342 2342 cat behaves normally; I see the output right after the input. vi@vi-notebook:~$ gpg2 -er _Vi --batch ?pE??x...(more binary data here)....???-??.... asdfsadf 22223 sdfsdfasf Still no data... Still no output... ^C gpg: signal Interrupt caught ... exiting vi@vi-notebook:~$ gpg2 -er _Vi --batch /tmp/qqq skdmfasldf gkvmdfwwerwer zfzdfdsfl ^\ gpg: signal Quit caught ... exiting Quit vi@vi-notebook:~$ gpg2 < /tmp/qqq You need a passphrase to unlock the secret key for user: "Vitaly Shukela (_Vi) " 2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052) gpg: [don't know]: 1st length byte missing vi@vi-notebook:~$ # Where is my "skdmfasldf" How can I make gpg2 handle this case? I want it to emit enough output to reconstruct each incoming chunk of input. (Also, fsyncing after each output would be beneficial as an additional option.) Should I use another tool? (I need public-key encryption.)
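
    One hedged workaround: gpg2 compresses and buffers internally, so a single long-running stream cannot be flushed record by record; encrypting each chunk as its own small message keeps every completed chunk recoverable:

        # feed the slow data stream into this loop's stdin; key ID taken from the question
        while read -r line; do
            printf '%s\n' "$line" | gpg2 -er _Vi --batch >> /tmp/qqq   # one complete OpenPGP message per chunk
        done

    An abrupt power-off then costs at most the last partial chunk rather than the whole file.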


  • Midnight Commander Woes: Output while panels are active, and tab completion.

    - by Eddie Parker
    I'm trying out Midnight Commander (I loved Norton Commander back in the day!) and I'm finding two things hard to work out; I'm curious whether there are ways around them. 1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever; i.e., if the panels are visible and I cat something (e.g., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn. Is there any way to see the output? I've tried 'ctrl-o', but it appears to just give me a fresh sub-shell and wipes the previous output away. Pausing after every invocation is a bit irritating, so I'd rather not use that option. 2) Tab completion for commands: when mc is running, it consumes the Tab character for switching panels. Is there any way to get around this so I can still complete paths and whatnot on the command line? I'm running Cygwin, if that matters at all.
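
    Two hedged suggestions, from memory rather than the manual, so worth verifying on your build:

        # 1) Command-line completion inside mc: press Esc then Tab (Meta-Tab); plain Tab switches panels.
        # 2) Long output: pipe it through a pager yourself so it is not lost when the panels redraw:
        cat /proc/cpuinfo | less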


  • limits.conf to set memory limits

    - by Rupert Jipe
    I would like to limit any process from using more than 500 MB of RAM. AFAIK this is done using RSS in /etc/security/limits.conf but the process called gnome-panel apparently is using 618436 kB of VmRSS. How can this be ? /etc/security/limits.conf * hard rss 512000 username@debian:~$ cat /proc/3002/status Name: gnome-panel State: S (sleeping) Tgid: 3002 Pid: 3002 PPid: 2910 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 20 24 25 29 44 46 112 116 117 1000 1002 1003 VmPeak: 916636 kB VmSize: 916636 kB VmLck: 0 kB VmHWM: 618436 kB VmRSS: 618436 kB VmData: 601972 kB VmStk: 104 kB VmExe: 516 kB VmLib: 29232 kB VmPTE: 1760 kB Threads: 1 SigQ: 0/14001 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000020001000 SigCgt: 0000000180000000 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: 3 Cpus_allowed_list: 0-1 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 871965 nonvoluntary_ctxt_switches: 47553 PaX: PeMRs username@debian:~$ cat /proc/3002/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 0 bytes Max resident set 524288000 524288000 bytes Max processes 100 100 processes Max open files 1024 1024 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 14001 14001 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us
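
    A hedged explanation: on 2.6 kernels the RSS limit is accepted but not enforced, which is why the process sails past it; limiting address space ("as") is the setting that actually bites. The 1 GB figure below is an assumption, not a recommendation:

        # /etc/security/limits.conf
        # *    hard    as    1048576      # kB of virtual address space
        ulimit -v 1048576                 # the same limit applied to the current shell, handy for testing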


  • Why can't I get rid of default index.html even if I disable the default virtual host in Apache2?

    - by Emre Sevinç
    I have created a virtual host settings file and disabled the default site using a2dissite default (this is a pretty standard Ubuntu 10.04 installation). But no matter what I try, my Apache2 server keeps displaying the default index.html page instead of the index.php page that I set up in the virtual host file. Can someone help me figure out what I'm missing? Details follow: No default settings: ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 51 May 5 13:32 webmin.1273066327.conf -> /etc/apache2/sites-available/webmin.1273066327.conf lrwxrwxrwx 1 root root 34 May 30 11:03 www.accontax.be -> ../sites-available/www.accontax.be Contents of the relevant virtual host: cat /etc/apache2/sites-enabled/www.accontax.be <VirtualHost *> ServerName www.accontax.be ServerAlias accontax.be DirectoryIndex index.php DocumentRoot /var/www/drupal/ <Directory /var/www/drupal/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Contents of httpd.conf: cat /etc/apache2/httpd.conf Listen 80 NameVirtualHost * I also have these relevant lines in my apache2.conf: # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ When I visit http://www.accontax.be I expect the apache2 server to go to the /var/www/drupal subdirectory and start serving index.php, but it just keeps serving index.html from the /var/www directory. I have reloaded the configuration, restarted the server, and cleared my browser cache. Nothing changed. Probably I'm missing a simple yet crucial step, but I just could not find it.
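
    A short checklist, sketched with the standard Debian/Ubuntu Apache tools:

        sudo apache2ctl -S                                           # shows which vhost actually answers for www.accontax.be
        sudo apache2ctl configtest && sudo service apache2 reload    # syntax check, then a real reload
        ls -l /var/www/drupal/index.php                              # confirm the DocumentRoot really contains Drupal's index.php

    If apache2ctl -S still lists another *:80 vhost ahead of www.accontax.be, requests arriving under a different Host header (for example the bare IP) will fall through to it and serve /var/www/index.html.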


  • Is Subversion(SVN) supported on Ubuntu 10.04 LTS 32bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04, but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an SVN checkout, as if there is an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or can you see an error in my configuration? Below is my configuration: # fresh install of Ubuntu 10.04 LTS 32bit sudo apt-get install apache2 apache2-utils -y sudo apt-get install subversion libapache2-svn subversion-tools -y sudo mkdir /svn sudo svnadmin create /svn/DataTeam sudo svnadmin create /svn/ReportingTeam #Setup the svn config file sudo vi /etc/apache2/mods-available/dav_svn.conf #replace file with the following. <Location /svn> DAV svn SVNParentPath /svn/ AuthType Basic AuthName "Subversion Server" AuthUserFile /etc/apache2/dav_svn.passwd Require valid-user AuthzSVNAccessFile /etc/apache2/svn_acl </Location> sudo touch /etc/apache2/svn_acl #replace file with the following. [groups] dba_group = tom, jerry report_group = tom [DataTeam:/] @dba_group = rw [ReportingTeam:/] @report_group = rw #Start/Stop subversion automatically sudo /etc/init.d/apache2 restart cd /etc/init.d/ sudo touch subversion sudo cat 'svnserve -d -r /svn' > svnserve sudo cat '/etc/init.d/apache2 restart' >> svnserve sudo chmod +x svnserve sudo update-rc.d svnserve defaults #Add svn users sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry #Test by performing a checkout sudo svnserve -d -r /svn sudo /etc/init.d/apache2 restart svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
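
    Subversion itself is fine on 10.04; repeated credential prompts over http:// usually mean Apache cannot read the repository or the password file. A hedged sketch of the usual checks (user and group names assumed):

        sudo chown -R www-data:www-data /svn                     # mod_dav_svn runs as the Apache user
        sudo chown root:www-data /etc/apache2/dav_svn.passwd && sudo chmod 640 /etc/apache2/dav_svn.passwd
        sudo tail -f /var/log/apache2/error.log &                # watch why the 401/403 is issued during a test
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam --username tom

    Note that an http:// checkout goes through mod_dav_svn only; svnserve on port 3690 is a separate access path, so the hand-rolled init script is unrelated to the prompt loop.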


  • pdftotext not outputting hebrew characters

    - by Ofri Raviv
    I'm using Xpdf's pdftotext to get the text out of some hebrew pdf files on Ubuntu. On my local machine this worked fine. I then tried to do it on another machine and the hebrew characters don't show up in the text file. I verified that I have the language package (see below why I think so). Where else can I look for the problem? >> tail -2 /etc/xpdf/xpdfrc include /etc/xpdf/includes >> cat /etc/xpdf/includes # This file was automatically generated by /usr/sbin/update-xpdfrc. # Instead, add or remove files in /etc/xpdf/ then run # /usr/sbin/update-xpdfrc to regenerate this file. include /etc/xpdf/xpdfrc-latin2 include /etc/xpdf/xpdfrc-thai include /etc/xpdf/xpdfrc-greek include /etc/xpdf/xpdfrc-turkish include /etc/xpdf/xpdfrc-arabic include /etc/xpdf/xpdfrc-hebrew include /etc/xpdf/xpdfrc-cyrillic >> cat /etc/xpdf/xpdfrc-hebrew #----- begin Hebrew support package (2003-feb-16) unicodeMap ISO-8859-8 /usr/share/xpdf/hebrew/ISO-8859-8.unicodeMap unicodeMap Windows-1255 /usr/share/xpdf/hebrew/Windows-1255.unicodeMap #----- end Hebrew support package >> ls /usr/share/xpdf/hebrew/ ISO-8859-8.unicodeMap Windows-1255.unicodeMap
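
    A few hedged checks; the -cfg flag is Xpdf-specific and the file names below are placeholders:

        pdftotext -enc UTF-8 input.pdf out.txt          # force a Unicode output encoding
        pdftotext -cfg /etc/xpdf/xpdfrc input.pdf -     # make sure the xpdfrc with the Hebrew include is the one being read
        pdffonts input.pdf                              # non-embedded fonts are a common cause of dropped glyphs
        locale                                          # the terminal locale must also be able to render Hebrew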


  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works: ssh -X Z xclock ssh -X Z ssh -X Z xclock ssh -X Z ssh -X A xclock ssh -X N xclock ssh -X N ssh -X N xclock But this does not: ssh -X N ssh -X M xclock Error: Can't open display: The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share same NFS-homedir. N and M share the same NFS-homedir. N's sshd runs on a non standard port. $ grep X11 <(ssh Z cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes $ grep X11 <(ssh N cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work: terminal1$ ssh -L 8888:M:22 N terminal2$ ssh -X -p 8888 localhost xclock Error: Can't open display: A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say: debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null debug1: Requesting X11 forwarding with authentication spoofing. debug2: channel 0: request x11-req confirm 0 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
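
    A hedged sketch of things to compare between A (working) and M (broken), run from N; the log file locations are guesses:

        ssh -X M 'echo DISPLAY=$DISPLAY; which xauth'     # is DISPLAY set for non-interactive sessions, and is xauth on that PATH?
        grep -iE 'X11Forwarding|X11UseLocalhost|XAuthLocation' /etc/ssh/sshd_config   # run this on M itself
        ssh -X M 'ls -ld ~ ~/.Xauthority'                 # sshd silently skips xauth setup if it cannot write .Xauthority
        grep -i x11 /var/log/auth.log /var/log/secure 2>/dev/null   # sshd usually logs why forwarding was refused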


  • /dev/input/uinput Device appears to be 'broken'

    - by Adam Luchjenbroers
    I'm trying to setup Pystromo so that I can remap the keys on my Belkin N52TE gamepad. Pystromo basically captures the key strokes and then outputs the remapped keystrokes to the uinput device. However, at the moment it simply swallows the input and outputs absolutely nothing. I've tracked the issue to something being wrong with my uinput device, with the smoking gun being: # ls -l /dev/input/uinput crw-rw---- 1 root plugdev 10, 223 Dec 31 2009 /dev/input/uinput # cat /dev/input/uinput cat: /dev/input/uinput: No such device The uinput module is loaded, and can be clearly seen via lsmod. Anyone seen this before, or can think of something worth attempting? Current Setup Gentoo Linux Kernel 2.6.32 (Gentoo Sources 2.6.32-r1) HP DV7 Laptop Output dmesg dmesg | grep uinput does nothing, and no new lines appear if I run modprobe -r uinput && modprobe uinput. Yet the uinput module can clearly be seen when running lsmod: # lsmod | grep uinput uinput 6200 0 lsusb # lsusb Bus 005 Device 003: ID 050d:0200 Belkin Components Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 002: ID 1532:0101 Razer USA, Ltd Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 5986:0143 Acer, Inc Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 03f0:171d Hewlett-Packard Wireless (Bluetooth + WLAN) Interface [Integrated Module] Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub lsusb -v PasteBin Update Hmm, updating evdev and hal seems to have partially fixed it. /dev/input/uinput still can't be accessed but Pystromo is now remapping keys successfully. I'm a little bit mystified about what's going on here, but it seems that my understanding of how all this works is flawed. Since I've posted a bounty, I'll leave this here for someone to post an explanation for how user-space input devices work under the hood.


  • Wireless connection drops when wired computer starts a game.

    - by Skadlig
    Starting this week I have had a strange problem on my network. Some background on my setup: Internet is provided by an ADSL modem. A D-Link DIR-600 router is hooked up to the ADSL modem. My computer is hooked up to the router using a cat-5 cable. My wife's computer is hooked up using a wireless USB dongle, a TP-Link TL-WN821N. Both computers use Windows 7 64-bit Home Premium. Up until this week everything was normal; we could, for instance, play Dungeons & Dragons Online together without any network issues. Now every time I start DDO or any other network game, for instance L4D, the whole wireless network drops. I have confirmed that it's not just her computer by using a Samsung Galaxy Spica Android phone. Shutting down the game on my end restores the wireless connection automagically. My wife can start DDO without the net dropping, but if I plug a wireless network card into my computer and start up the game, the connection drops. So it seems like something my computer, and my computer only, does when starting a game makes the wireless connection write a sad note and kill itself, but for the life of me I can't figure out what that might be. I could hook her computer up using cat-5, but I would prefer not to do that. Does anyone have any suggestions as to what the problem might be, what I can do to fix it, or what I should do to get more data regarding what is happening?


  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware player 5.0.0. The Vm is running Ubuntu netbook 10.04. I am able to get the pipe working on the local machine but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine. #!/usr/bin/python import os # # Create a named pipe that will be used as the serial port on a VMware virtual machine SerialPipe = '/tmp/gpsd2NMEA' try: os.unlink(SerialPipe) except: pass os.mkfifo(SerialPipe) # # Open the named pipe NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK) # # Write a string to the named pipe NMEAtime = "235959" os.write(NMEApipe, str( '%s\n' % NMEAtime )) Test to see if the python program is working on the host machine (displays 235959 if data is passing through the pipe) $ cat /tmp/gpsd2NMEA 235959 Serial port as defined in the VMware .vmx file: serial0.present = "TRUE" serial0.startConnected = "TRUE" serial0.fileType = "pipe" serial0.fileName = "/tmp/gpsd2NMEA" serial0.pipe.endPoint = "client" serial0.autodetect = "FALSE" serial0.tryNoRxLoss = "TRUE" serial0.yieldOnMsrRead = "TRUE" Test to see if the serial port in the VM is receiving data $ cat /dev/ttyS0 or $ minicom -D /dev/ttyS0 or $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200 $ echo < /dev/ttyS0 None of these display any data from the python program.
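
    One hedged possibility: on a Linux host, VMware's pipe-backed serial port connects to a UNIX domain socket rather than a mkfifo FIFO, which would explain why the FIFO works locally but nothing reaches the guest. socat can bridge the two; the pty path below is an assumption:

        # listen where the VM's serial0 expects to connect, and expose a pty for the feeder script
        socat -d -d UNIX-LISTEN:/tmp/gpsd2NMEA,fork PTY,link=/tmp/nmea_pty,raw &
        echo '235959' > /tmp/nmea_pty       # test data; inside the guest, cat /dev/ttyS0 should now show it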

