Search Results

Search found 2649 results on 106 pages for 'signal slot'.


  • Cannot establish maximum resolution on ASUS PB278Q

    - by dentuzhik
    I've recently bought a brand new ASUS PB278Q monitor. When I connect it to my laptop, everything works great, except that I can't get the monitor's native resolution (2560x1440) working; the automatically chosen resolution is 1920x1080. My graphics card is an Nvidia GeForce GT 320M. Here's the output from lspci for it: ~$ lspci | grep VGA 02:00.0 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 320M] (rev a2) and also xrandr: ~$ xrandr Screen 0: minimum 8 x 8, current 3286 x 1437, maximum 8192 x 8192 VGA-0 disconnected (normal left inverted right x axis y axis) LVDS-0 connected primary 1366x768+0+669 (normal left inverted right x axis y axis) 344mm x 193mm 1366x768 60.0*+ HDMI-0 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 600mm x 340mm 1920x1080 60.0*+ 59.9 50.0 30.0 25.0 24.0 60.0 50.0 1680x1050 60.0 1440x900 59.9 1280x1024 75.0 60.0 1280x960 60.0 1280x800 59.8 1280x720 60.0 59.9 50.0 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 720x576 50.0 720x480 59.9 640x480 75.0 59.9 59.9 480x576 50.0 480x480 59.9 I have the proprietary drivers installed on my machine; here's the info about the monitor from nvidia-settings (I don't have enough reputation to post images, so here's the text): Chip Location: Internal Signal: TMDS Connection link: Single Native resolution: 2560x1440 Refresh rate: 60.00 Hz The monitor is connected to the laptop via an HDMI cable, and honestly I have no idea what HDMI version the cable is, or what version my graphics card's HDMI output is; I tried to find out how to figure that out on the web, but had no luck. Also, my video card has only VGA and HDMI outputs, so I can't test either a DVI-D cable or DisplayPort. So apparently there's some problem there, and at the very least I want to know exactly what's going on. I've tried to see if it's a Linux-specific problem, but Windows also gave me the same resolution by default. What I've already tried: connected through VGA (a silly one, of course it gave me 1920x1080); checked two HDMI cables (not sure if they're identical or not, as mentioned above); played around with xrandr and adding custom modes, which didn't help; and searched the web a lot, but couldn't find anything appropriate.
Actually xrandr gives me the following: ~$ cvt 2560 1440 60 # 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync ~$ xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync ~$ xrandr Screen 0: minimum 8 x 8, current 3286 x 1437, maximum 8192 x 8192 VGA-0 disconnected (normal left inverted right x axis y axis) LVDS-0 connected 1366x768+0+669 (normal left inverted right x axis y axis) 344mm x 193mm 1366x768 60.0*+ HDMI-0 connected primary 1920x1080+1366+0 (normal left inverted right x axis y axis) 600mm x 340mm 1920x1080 60.0*+ 59.9 50.0 30.0 25.0 24.0 60.0 50.0 1680x1050 60.0 1440x900 59.9 1280x1024 75.0 60.0 1280x960 60.0 1280x800 59.8 1280x720 60.0 59.9 50.0 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 720x576 50.0 720x480 59.9 640x480 75.0 59.9 59.9 480x576 50.0 480x480 59.9 2560x1440_60.00 (0x34f) 312.2MHz h: width 2560 start 2752 end 3024 total 3488 skew 0 clock 89.5KHz v: height 1440 start 1443 end 1448 total 1493 clock 60.0Hz ~$ xrandr --addmode HDMI-0 2560x1440_60.00 X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 18 (RRAddOutputMode) Serial number of failed request: 29 Current serial number in output stream: 30 What I intend to do next: Try another HDMI cable? Try HDMI to DVI-D cable? Try HDMI to DisplayPort cable? Another type of adapters? VGA to DVI-D? Buy another laptop with another graphic card. Damn. My ideas pretty much end here. Any ideas? Any explanations why it isn't working are appreciated.
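
    For reference, a minimal sketch of the same custom-mode procedure but with a reduced-blanking modeline (cvt -r), which needs a noticeably lower pixel clock and is therefore the variant usually tried on bandwidth-limited HDMI links; the timing numbers below are an assumed example, not values taken from this machine:

        # Generate a reduced-blanking modeline (lower pixel clock than plain CVT).
        # The numbers below are only an example of what cvt -r prints; use your own output.
        cvt -r 2560 1440 60
        # Modeline "2560x1440R" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync

        # Register the mode and attach it to the HDMI output.
        xrandr --newmode "2560x1440R" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
        xrandr --addmode HDMI-0 2560x1440R
        xrandr --output HDMI-0 --mode 2560x1440R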

    Read the article

  • KVM guest I/O is much slower than host I/O: is that normal?

    - by Evolver
    I have a Qemu-KVM host system setup on CentOS 6.3. Four 1TB SATA HDDs working in Software RAID10. Guest CentOS 6.3 is installed on separate LVM. People say that they see guest performance almost equal to host performance, but I don't see that. My i/o tests are showing 30-70% slower performance on guest than on host system. I tried to change scheduler (set elevator=deadline on host and elevator=noop on guest), set blkio.weight to 1000 in cgroup, change io to virtio... But none of these changes gave me any significant results. This is a guest .xml config part: <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/dev/vgkvmnode/lv2'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> There are my tests: Host system: iozone test # iozone -a -i0 -i1 -i2 -s8G -r64k random random KB reclen write rewrite read reread read write 8388608 64 189930 197436 266786 267254 28644 66642 dd read test: one process and then four simultaneous processes # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct 1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096 1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s 1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s 1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s 1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s dd write test: one process and then four simultaneous processes # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s 1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s 1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s 1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s Guest system: iozone test # iozone -a -i0 -i1 -i2 -s512M -r64k random random KB reclen write rewrite read reread read write 524288 64 93374 154596 141193 149865 21394 46264 dd read test: one process and then four simultaneous processes # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096 1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s 1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s dd write test: one process and then four simultaneous processes # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M 
count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct 1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s 1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s Is this a normal situation, or did I miss something?
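
    For anyone reproducing these numbers, the read benchmark above collapses into a short script; the device path is the one used on the host in the question and obviously needs adjusting elsewhere:

        #!/bin/sh
        # Direct (page-cache-bypassing) sequential reads: one stream, then four
        # concurrent streams at different offsets, exactly as in the tests above.
        DEV=/dev/vgkvmnode/lv2   # on the guest: /dev/mapper/VolGroup-lv_home

        dd if=$DEV of=/dev/null bs=1M count=1024 iflag=direct

        for skip in 1024 2048 3072 4096; do
            dd if=$DEV of=/dev/null bs=1M count=1024 iflag=direct skip=$skip &
        done
        wait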

    Read the article

  • How to make bridge networking with KVM work in Fedora 19

    - by netllama
    I'm attempting to get several virtual machines setup on a Fedora-19 host system, with the traditional bridge network devices (br0, br1, etc). I've done this many times before with older versions of Fedora (16, 14, etc), and it just works. However, for reasons that I cannot figure out, the bridge doesn't seem to be working in Fedora19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network). I'm referring to something as trivial as pinging. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM. My first question is simply, does anyone else have this working successfully in F19? And if so, what steps did you need to follow? I'm not using NetworkManager at all, its all the network service. There are no firewalls involved anywhere (iptables & firewall services are currently disabled). Here's the current host configuration: # brctl show bridge name bridge id STP enabled interfaces br0 8000.38eaa792efe5 no em2 vnet1 br1 8000.38eaa792efe6 no em3 br2 8000.38eaa792efe7 no em4 vnet0 virbr0 8000.525400db3ebf yes virbr0-nic # more /etc/sysconfig/network-scripts/ifcfg-em2 TYPE=Ethernet BRIDGE="br0" NAME=em2 DEVICE="em2" UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd ONBOOT=yes HWADDR=38:EA:A7:92:EF:E5 NM_CONTROLLED="no" # more /etc/sysconfig/network-scripts/ifcfg-br0 TYPE=Bridge NM_CONTROLLED="no" BOOTPROTO=dhcp NAME=br0 DEVICE="br0" ONBOOT=yes # ifconfig em2 ;ifconfig br0 em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link> ether 38:ea:a7:92:ef:e5 txqueuelen 1000 (Ethernet) RX packets 100093 bytes 52354831 (49.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 25321 bytes 15791341 (15.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device memory 0xf7d00000-f7e00000 br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.31.99.226 netmask 255.255.252.0 broadcast 10.31.99.255 inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link> ether 38:ea:a7:92:ef:e5 txqueuelen 0 (Ethernet) RX packets 19619 bytes 1963328 (1.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 11 bytes 1074 (1.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem): <interface type='bridge'> <mac address='52:54:00:26:22:9d'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> # ps -ef | grep qemu qemu 1491 1 82 13:25 ? 
00:42:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 I can provide additional information, if requested. thanks!
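
    One thing worth ruling out on a setup like this (an assumption on my part, not something established in the question) is that bridged frames are being passed through netfilter and silently dropped; a quick diagnostic sketch:

        # Which interfaces are enslaved to each bridge?
        brctl show

        # If the bridge module's netfilter hooks are enabled, bridged traffic is run
        # through iptables; a value of 1 here is a common reason guests become
        # unreachable from the LAN even when the rule set looks empty.
        sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null

        # Watch whether ARP/ICMP from the outside actually reaches the bridge and the
        # guest's tap device.
        tcpdump -n -i br0 'icmp or arp'
        tcpdump -n -i vnet1 'icmp or arp'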

    Read the article

  • Openldap/Sasl/GSSAPI on Debian: Key table entry not found

    - by badbishop
    The goal: to make an OpenLDAP server to authenticate using Kerberos V via GSSAPI Setup: several virtual machines running on freshly installed/updated Debian Squeeze A master KDC server kdc.example.com A LDAP server, running OpenLDAP ldap.example.com The problem: tom@ldap:~$ ldapsearch -b 'dc=example,dc=com' SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Key table entry not found) One might suggest to add that bloody keytab entry, but here's the real problem: ktutil: rkt /etc/ldap/ldap.keytab ktutil: list slot KVNO Principal ---- ---- --------------------------------------------------------------------- 1 2 ldap/[email protected] 2 2 ldap/[email protected] 3 2 ldap/[email protected] 4 2 ldap/[email protected] So, the entry as suggested by the OpenLDAP manual is there allright. Deleting and re-creating both service principal and the keytab on ldap.example.com didn't help, I get the same error. And before I make the keytab file readable by openldap, I get "Permission denied" error instead of the one in the subject. Which implies, that the right keytab file is being accessed, as set in /etc/default/slapd. I have my doubts about the following part of slapd config: root@ldap:~# cat /etc/ldap/slapd.d/cn\=config.ldif | grep -v "^#" dn: cn=config objectClass: olcGlobal cn: config olcArgsFile: /var/run/slapd/slapd.args olcLogLevel: 256 olcPidFile: /var/run/slapd/slapd.pid olcToolThreads: 1 structuralObjectClass: olcGlobal entryUUID: d6737f5c-d321-1030-9dbe-27d2a7751e11 olcSaslHost: kdc.example.com olcSaslRealm: EXAMPLE.COM olcSaslSecProps: noplain,noactive,noanonymous,minssf=56 olcAuthzRegexp: {0}"uid=([^/]*),cn=EXAMPLE.COM,cn=GSSAPI,cn=auth" "uid=$1,ou=People,dc=example,dc=com" olcAuthzRegexp: {1}"uid=host/([^/]*).example.com,cn=example.com,cn=gssapi,cn=auth" "cn=$1,ou=hosts,dc=example,dc=com" A HOWTO at https://help.ubuntu.com/community/OpenLDAPServer#Kerberos_Authentication mentiones vaguely: Also, it is frequently necessary to map the Distinguished Name (DN) of an authorized Kerberos client to an existing entry in the DIT. I fail to understand where in the tree this should be defined, what schema should be used, etc. After hours of googling, it's official: I'm stuck! Please, help. Other things checked: Kerberos as such works fine (I can ssh without using a password to any machine in this setup). That means there should be no DNS-related problems. ldapsearch -b 'dc=example,dc=com' -x works OK. SASL/GSSAPI has been tested using sasl-sample-server -m GSSAPI -s ldap and sasl-sample-client -s ldap -n ldap.example.com -u tom without errors: root@ldap:~# sasl-sample-server -m GSSAPI -s ldap Forcing use of mechanism GSSAPI Sending list of 1 mechanism(s) S: R1NTQVBJ Waiting for client mechanism... 
C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIPgot 'GSSAPI' Sending response... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cR Waiting for client reply... C: got '' Sending response... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4= Waiting for client reply... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw=got '?' Negotiation complete Username: tom Realm: (NULL) SSF: 56 sending encrypted message 'srv message 1' S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazx Waiting for encrypted message... C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpcztgot '' recieved decoded message 'client message 1' root@ldap:~# sasl-sample-client -s ldap -n ldap.example.com -u tom service=ldap Waiting for mechanism list from server... S: R1NTQVBJrecieved 6 byte message Choosing best mechanism from: GSSAPI returning OK: tom Using mechanism GSSAPI Preparing initial. Sending initial response... C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIP Waiting for server reply... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cRrecieved 156 byte message C: Waiting for server reply... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4=recieved 32 byte message Sending response... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw= Negotiation complete Username: tom SSF: 56 Waiting for encoded message... 
S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazxrecieved 78 byte message recieved decoded message 'srv message 1' sending encrypted message 'client message 1' C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpczt
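
    For context, the usual way slapd is pointed at its keytab on Debian (which is what the reference to /etc/default/slapd above is about) is the KRB5_KTNAME environment variable; a sketch of the relevant pieces, with the paths taken from the question:

        # /etc/default/slapd -- make the keytab visible to the slapd process
        export KRB5_KTNAME=/etc/ldap/ldap.keytab

        # The keytab must be readable by the user slapd runs as (openldap on Debian).
        chgrp openldap /etc/ldap/ldap.keytab
        chmod 640 /etc/ldap/ldap.keytab

        # Sanity checks: list the keytab entries and compare the KVNO with the KDC's.
        klist -k /etc/ldap/ldap.keytab
        kvno ldap/ldap.example.com@EXAMPLE.COM   # needs a valid TGT; compare the KVNOs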

    Read the article

  • Performing mechanical movements using computer

    - by Vi
    How can I make a computer (in particular, my laptop) perform some mechanical movements without buying anything over $5, soldering things inside the computer, or building big sophisticated circuits? Traditionally the CD-ROM tray is used to make a computer do some movement IRL via, for example, an SSH command, but in a laptop the tray is one-shot (unless manually reloaded) and also not a very comfortable [mis]usage. Some assisting circuits can be used too, as long as they are not complex; for example, a little motor that can run on USB power. Devices in my computer: DVD-ROM tray: one-time push. USB power: continuous power to a motor or LEDs or a relay that turns on something more powerful. Audio card: 3 outputs (modprobe alsa model=test can set Mic and Line-in as additional outputs); one controllable DC output (microphone) that can power up an LED and some electronic (maybe even mechanical?) relay; with sophisticated additional circuitry it could control a lot of devices with good precision; both input and output support; probably the most useful component in a computer for a radio ham. Modem: I don't know much about this one; it doesn't work because hsfmodem crashes the kernel if memory is >= 1GB. Maybe its "pick up" and "hang up" functions could switch power taken from a USB port on and off? Video card: VGA port? S-Video port? Will they be useful? Backlight: tunable, but probably not useful. CardBus (or similar) slot: probably nothing interesting for the task (is there?). AC adapter and battery: probably nothing programmable here. /* My AC adapter already has additional jacks to connect extra devices */ Keyboard: no use. Touchpad: a good sensor (synclient -m 1), but no output. Various LEDs inside the laptop: probably too weak, and they require soldering. Fans inside the laptop: poor control over them, requires soldering and is dangerous to tinker with. HDD (internal and external) that can be spun down and up (hdparm -Y, cat /dev/ubb), but connecting anything in series with its power line makes the HDD underpowered... and it's too complex. Have I missed something? Any ideas how to use the components described? Any other ideas? Maybe there are cheap, easily available /* in developing countries */ devices like "enhanced multimeters" that are controllable from a computer and can provide configurable output and measure current and other things? Things to help push many physical buttons with a computer: isn't this a simple idea and implementation, with a lot of uses in good hands?
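
    To make the software side concrete, the two outputs listed above that are trivially scriptable over SSH with no extra hardware are the optical-drive tray and HDD spin-down; a minimal sketch (device names are assumptions):

        #!/bin/sh
        # Mechanical actions available on a stock machine.

        # Push the DVD tray out, and pull it back in on drives that support it.
        eject /dev/sr0
        eject -t /dev/sr0

        # Spin an external or secondary disk down, then wake it with a small read.
        hdparm -Y /dev/sdb
        dd if=/dev/sdb of=/dev/null bs=512 count=1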

    Read the article

  • Centos/OVH: public IP on KVM virtual machine

    - by Sébastien
    Since a few days, I'm trying to configure my KVM vm to have a public IP address, without any success. First, I'm on OVH, and you need to know they don't allow networking from different mac addresses. I have so registered a virtual mac address associated with my failover IP Here's my configuration: Guest wanted IP: 46.105.40.x Host IP: 176.31.240.x Host configuration dummy0 interface: ifcfg-dummy0 BOOTPROTO=static IPADDR=10.0.0.1 NETMASK=255.0.0.0 ONBOOT=yes NM_CONTROLLED=no ARP=yes BRIDGE=br0 br0 bridge: ifcfg-br0 DEVICE=br0 TYPE=Bridge DELAY=0 ONBOOT=yes BOOTPROTO=static IPADDR=192.168.1.1 NETMASK=255.255.255.0 PEERDNS=yes NM_CONTROLLED=no ARP=yes Failover ip is redirected to the br0 bridge with ip route add 46.105.40.xxx dev br0 > cat /proc/sys/net/ipv4/ip_forward 1 > cat /proc/sys/net/ipv4/conf/vnet0/proxy_arp 1 > route -n Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 176.31.240.254 0.0.0.0 UG 0 0 0 eth0 46.105.40.x 0.0.0.0 255.255.255.255 UH 0 0 0 br0 176.31.240.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 Guest configuration: KVM: <interface type='bridge'> <mac address='02:00:00:30:22:05'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </interface> I've borrowed most of the OVH configuration here (in french, http://guides.ovh.com/BridgeClient) for the guest configuration eth0 interface: ifcfg-eth0 DEVICE="eth0" BOOTPROTO=none HWADDR="02:00:00:30:22:05" NM_CONTROLLED="yes" ONBOOT="yes" TYPE="Ethernet" UUID="e9138469-0d81-4ee6-b5ab-de0d7d17d1c8" USERCTL=no PEERDNS=yes IPADDR=46.105.40.xxx NETMASK=255.255.255.255 GATEWAY=176.31.240.254 ARP=yes For the routes, I have in route-eth0: 176.31.240.254 dev eth0 default via 176.31.240.254 dev eth0 With this configuration, I don't have any access to the internet. The only thing I can do is to ping the public ip of the host, nothing more. My final conclusion is that the route does not work, because, when, on the guest, I run ping 8.8.8.8, I have, on the host: > tcpdump -i vnet0 icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes 13:38:09.009324 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50183, seq 1, length 64 13:38:09.815344 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50183, seq 2, length 64 I never get the ping reply, only the request. It seems Guest - Host communication is fine. On eth0: > tcpdump -i eth0 icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 13:39:40.240561 IP 46-105-40-xxx.kimsufi.com > google-public-dns-a.google.com: ICMP echo request, id 50439, seq 1, length 64 13:39:40.250161 IP google-public-dns-a.google.com > 46-105-40-xxx.kimsufi.com: ICMP echo reply, id 50439, seq 1, length 64 I have the request and the reply on eth0, but reply is not forwarded to the bridge. I really don't understand why, I though it was the aim of the route to do that! IPtables is disabled on both host and guest. I really hope some of you will be able to help me! Many thanks in advance, Sébastien
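
    For readability, here are the host-side pieces of the routed-bridge setup described above gathered in one place (the failover IP is abbreviated exactly as in the question):

        #!/bin/sh
        # Host side, as configured above: enable forwarding and proxy ARP on the
        # guest's tap device, then point the failover IP at the bridge.
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo 1 > /proc/sys/net/ipv4/conf/vnet0/proxy_arp
        ip route add 46.105.40.xxx dev br0

        # Quick checks of the resulting state.
        ip route show
        brctl show br0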

    Read the article

  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi, will I get any genuine extra performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA: 2-port SATA, PCI-Express (x1), RAID 0/1. OK, it only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA connected 2TB hard drive, by dealing with two of the internal hard drives. My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green Drive (e-SATA) and one DVD drive can already be supported by the Intel P55 & JMicron controllers on the ASUS motherboard: the Intel P55 (controls six HDDs, configured as three x RAID 1), and the JMicron (controls two HDDs as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port (controlled by the JMicron)). Bigger picture details: I have an ASUS motherboard designed for the LGA1156 type processor and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor, and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply, a Corsair AX850, is more than sufficient for the system; the system will never need the full 850 watts (future: second graphics card). The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or an equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives with a better RAID level, for example RAID 5. My issue & assumption: I am trusting that the Intel P55 chipset can properly handle six drives, configured as three x RAID 1. I am assuming that the JMicron can handle, using its RED SATA ports, one RAID 1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1); white port 2 is not used. The e-SATA connection is from the JMicron straight through the motherboard to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? Or is it wise to take the pressure a little off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID 0. RAID 5 would be nice. The CPU, an Intel Core i7-870, will provide the clout. Context for nine drives: I am using virtualisation with Windows 7 Ultimate, with bootable VMs. The operating system gets a mirror, loaded apps get a mirror, the current design data is kept in another mirror, and another mirror is the backup one and/or VM territory. Then the external 2TB drive (via e-SATA) is the next layer of data security, and finally I use off-site data security. Thanks.

    Read the article

  • Slow disk transfer rate

    - by Nooklez
    I have problem with slow disk transfer rate. It's static files server for our website. I was making backup of data and noticed that tar is very slow. So I did hdparm -t and... hdparm -t /dev/sda3 /dev/sda3: Timing buffered disk reads: 6 MB in 4.70 seconds = 1.28 MB/sec It's low traffic hour now on our site, so huge I/O traffic is not a reason (iotop show less than 1 MB/s). It's RAID10 setup (2x2 SATA drives). Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy ------------------------------------------------------------------------------ u0 RAID-10 OK - - 64K 1396.96 W ON VPort Status Unit Size Type Phy Encl-Slot Model ------------------------------------------------------------------------------ p0 OK u0 698.63 GB SATA 0 - WDC WD7500AADS-00M2 p1 OK u0 698.63 GB SATA 1 - WDC WD7500AADS-00M2 p2 OK u0 698.63 GB SATA 2 - WDC WD7500AADS-00M2 p3 OK u0 698.63 GB SATA 3 - WDC WD7500AADS-00M2 We have recently changed almost all components of server (excluding 3ware controller + disks). And I think problems started since then. Can it be configuration problem or hardware? EDIT: I found something like that in dmesg [166843.625843] irq 16: nobody cared (try booting with the "irqpoll" option) [166843.625846] Pid: 0, comm: swapper Not tainted 3.1.5-gentoo #3 [166843.625847] Call Trace: [166843.625848] <IRQ> [<ffffffff810859d5>] __report_bad_irq+0x35/0xc1 [166843.625856] [<ffffffff81085cec>] note_interrupt+0x165/0x1e1 [166843.625859] [<ffffffff8108445f>] handle_irq_event_percpu+0x16f/0x187 [166843.625861] [<ffffffff810844a9>] handle_irq_event+0x32/0x51 [166843.625863] [<ffffffff8108640b>] handle_fasteoi_irq+0x75/0x99 [166843.625866] [<ffffffff810039d7>] handle_irq+0x83/0x8b [166843.625868] [<ffffffff810036ad>] do_IRQ+0x48/0xa0 [166843.625871] [<ffffffff8155082b>] common_interrupt+0x6b/0x6b [166843.625872] <EOI> [<ffffffff812981e8>] ? acpi_safe_halt+0x22/0x35 [166843.625877] [<ffffffff812981e2>] ? acpi_safe_halt+0x1c/0x35 [166843.625879] [<ffffffff81298216>] acpi_idle_do_entry+0x1b/0x2b [166843.625881] [<ffffffff81298276>] acpi_idle_enter_c1+0x50/0x99 [166843.625884] [<ffffffff813b792a>] cpuidle_idle_call+0xed/0x171 [166843.625886] [<ffffffff81001257>] cpu_idle+0x55/0x81 [166843.625888] [<ffffffff81532a69>] rest_init+0x6d/0x6f [166843.625891] [<ffffffff81aa1aca>] start_kernel+0x329/0x334 [166843.625893] [<ffffffff81aa12a6>] x86_64_start_reservations+0xb6/0xba [166843.625894] [<ffffffff81aa139c>] x86_64_start_kernel+0xf2/0xf9 [166843.625896] handlers: [166843.625898] [<ffffffff812dc8de>] twl_interrupt [166843.625900] Disabling IRQ #16 It's related to problem? EDIT #2: Based on feedback in comments, here is more informations. cat /proc/interrupts 16: 390813 0 0 0 IO-APIC-fasteoi 3w-sas Controller model: [ 1.095350] 3ware Storage Controller device driver for Linux v1.26.02.003. [ 1.095467] 3ware 9000 Storage Controller device driver for Linux v2.26.02.014. [ 1.095641] LSI 3ware SAS/SATA-RAID Controller device driver for Linux v3.26.02.000. [ 1.095787] 3w-sas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 1.095881] 3w-sas 0000:01:00.0: setting latency timer to 64 [ 1.910801] 3w-sas: scsi0: Found an LSI 3ware 9750-4i Controller at 0xfe560000, IRQ: 16. [ 2.216537] 3w-sas: scsi0: Firmware FH9X 5.08.00.008, BIOS BE9X 5.07.00.011, Phys: 8. [ 2.216836] scsi 0:0:0:0: Direct-Access LSI 9750-4i DISK 5.08 PQ: 0 ANSI: 5 And motherboard: description: Motherboard product: P8H67-M vendor: ASUSTeK Computer INC.
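
    The "irq 16: nobody cared ... Disabling IRQ #16" message in dmesg looks significant: /proc/interrupts shows IRQ 16 belongs to the 3w-sas controller, and a disabled controller interrupt would explain a card that still works but is extremely slow. A rough way to confirm it and to test the kernel's own irqpoll suggestion (a diagnostic sketch, not a fix):

        # Confirm which device owns IRQ 16 and whether its counter still increases
        # while the array is under load.
        grep -E '(^ *16:|3w-sas)' /proc/interrupts

        # dmesg suggests booting with irqpoll: append it to the kernel line in the
        # bootloader (GRUB) for one boot and compare.
        #   kernel /boot/vmlinuz-3.1.5-gentoo root=... irqpoll

        # Re-run the benchmark afterwards.
        hdparm -t /dev/sda3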

    Read the article

  • Apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: i have 3 nearly identical webserver machines serving the same high loaded dynamic website with simple load balancing over dns. The service has been working for over two ears with the same apache config. apache2, php5, ubuntu 8.04 linux 2.6.24-29-server My problem: since about two weeks i'm experiencing problems with this config. Nearly every day i have one small moment about 5 minutes, in which the website is unreachable. I'm still able to login to the servers over ssh. If i run htop, i see the machine simply doing nothing. i have about 1000 apache processes running, but no cpu activity. i've used the apache mod_status to debug this situation. the process scoreboard looks like this: _C.___K_______________________R._______.__K_K____K___C_______.__ _______C__________.___________________________________.________C _.____K__________K___K_WK_____._K_____________________________._ W______K__________K________.____________________._______C_______ _C_.__K__K____.._.._____________________________________C_______ _R___________K___.______C________.C_________.______._____C______ ____________KKC____K_____K__WC_________________C_____.__.____.__ _____________________C_________K______.____C______._____________ _.___C____.___.___________________________.K______.____K________ W__.___________________C.__.____K________K_______R_._.__._______ __C__C_.__________C__C_______._____W______________C_.___C_______ ____.______C_____________C________.____C____________.________._K __.__________.K_____________K_________._____C____.K__________KW_ __K.W________R_________._______.___W___________.____.__K_____W__ W___.___..________W____K Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process So the most of the processes are just waiting for connection. after about 5 minutes the situation will return to normal: i have lot least processes on every machine, the most workers have the "."-status (meaing they are open to process a request) and of course the website is reachable! so i'm trying to find something in the logs, but there is simply nothing... the apache access log is silent for about 4 minutes, the same is for the error log. i also can not figure out anything wrong in other system logs. the situation is the same on all 3 webservers (all of them have this load peak and unresposibility at the same time), so i do not thing this is hardware related. but i think, this might be related to some network (tcp) issue. any ideas? EDIT: some more information, that i have just discovered: it has just happened again. and i was able to verify that i'm also not able to connect locally when this problem occurs. i have made some connection statistics with the following command after it happend netstat -an|awk '/tcp/ {print $6}'|sort|uniq -c 109 CLOSE_WAIT 2652 ESTABLISHED 2 FIN_WAIT1 11 LAST_ACK 12 LISTEN 91 SYN_RECV 1 SYN_SENT 16 TIME_WAIT If i execute the same command some time later, i have something like this: 4 CLOSING 108 ESTABLISHED 18 FIN_WAIT1 182 FIN_WAIT2 37 LAST_ACK 12 LISTEN 50 SYN_RECV 11276 TIME_WAIT So in the normal situation i have only 100-200 open connections by clients beeing handled by apache in this moment. when i have this "crash", i have a lot more connections. what is the best way to analyse this? 
EDIT2: the important lines in apache2.conf are: KeepAlive On MaxKeepAliveRequests 20 KeepAliveTimeout 1 <IfModule mpm_prefork_module> ServerLimit 920 StartServers 30 MinSpareServers 80 MaxSpareServers 120 MaxClients 920 MaxRequestsPerChild 700 </IfModule> It is an apache2 prefork setup with mod_php. The server has 8GB RAM and a 4GB swap partition.
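
    Since the window only lasts about five minutes, it helps to have a small collector running ahead of time so the scoreboard and the TCP state counts are captured automatically the next time it happens; a rough sketch (URL and output path are assumptions):

        #!/bin/sh
        # Snapshot apache's scoreboard and the TCP connection states once a minute.
        OUT=/var/log/apache-snapshots
        mkdir -p "$OUT"
        while true; do
            TS=$(date +%Y%m%d-%H%M%S)
            curl -s "http://localhost/server-status?auto" > "$OUT/status-$TS.txt"
            netstat -ant | awk '/^tcp/ {print $6}' | sort | uniq -c > "$OUT/netstat-$TS.txt"
            sleep 60
        done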

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the computer on the spot, I saw the server booting (the CentOS progress bar runs almost all the way to the end) and eventually giving the following messages: md/raid:md2: cannot start dirty degraded array. md/raid:md2: failed to run raid set. md: pers->run() failed ... md/raid:md2: cannot start dirty degraded array. md/raid:md2: failed to run raid set. md: pers->run() failed ... Kernel panic - not syncing: Attempted to kill init! Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1 Call Trace: [<c083bfbc>] ? panic+0x68/0x11c [<c045a501>] ? do_exit+0x741/0x750 [<c045a54c>] ? do_group_exit+0x3c/0xa0 [<c045a5c1>] ? sys_exit_group+0x11/0x20 [<c083eba4>] ? syscall_call+0x7/0xb [<c083007b>] ? cmos_wake_setup+0x62/0x112 The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backup I have is of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember from the past that the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server and take it to the office, screw open the server, attach a CD-ROM drive to an IDE slot on the motherboard, and then boot it. I am hoping you guys could help me avoid this. I have looked a bit through the boot options, and I find the following entries when CentOS is about to boot and I interrupt the boot countdown: CentOS (2.6.32-279.1.1.el63.i686) CentOS Linux (2.6.32-71.29.1.el6.i686) centos (2.6.32-71.el6.i686) I think the first entry is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, press 'a' to modify kernel arguments, and press 'c' for a grub command line; the command line gives a grub prompt. But I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course: boot the machine, check the hard drive for errors, and mark the drive as clean.
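
    For what it's worth, the md driver has a boot parameter that tells it to start a dirty degraded RAID-5/6 array anyway, and it can be added from the GRUB menu with the 'a'/'e' keys mentioned above; treat the following as a sketch (the exact kernel line and root device are placeholders), and check the array read-only before letting anything write to it:

        # Appended to the kernel line of the default CentOS entry in GRUB:
        #   ro root=<your root device> md-mod.start_dirty_degraded=1

        # After it boots (ideally into single-user mode), inspect and check read-only:
        cat /proc/mdstat
        mdadm --detail /dev/md2
        fsck -n /dev/md2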

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them into the layout a native 7-disk RAID-6 would have had?
Thanks some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>
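
    The brute-force approach from the answer referenced above carries over to RAID-6: with the level, chunk size and metadata version already known, loop over candidate disk orders with --assume-clean (which rewrites only the superblocks, not the data area) and test each candidate read-only. Very roughly, and entirely at your own risk (ideally against overlays or dd images of the disks; mdadm may also ask for confirmation on each attempt):

        #!/bin/sh
        # Each line of orders.txt is one candidate order for the 7 slots, using the
        # device names from above and the word "missing" for the two absent members:
        #   /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        while read -r ORDER; do
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=7 \
                  --metadata=1.0 --chunk=64 $ORDER
            # A correct order should show a recognisable ext4 superblock and pass a
            # read-only fsck; anything else means move on to the next permutation.
            if fsck.ext4 -n /dev/md0 > /dev/null 2>&1; then
                echo "candidate order looks right: $ORDER"
            fi
        done < orders.txt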

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory so I upgraded it. The system was running fine for about 2 weeks. Yesterday the thing started acting erratic. A lot of spinning beach balls, delays, and then some errors saying files couldn't be read to or from the drive. I figured the drive was going because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode, and none of them were repeatable, just showing up all over the place in different regions of the scan. I went through the docs and they implied either an I/O cable was bad, a connection was damaged, or the logic board was bad. I tossed on my backup of Snow Leopard that I cloned from the original hard drive because I figured Mountain Lion was to blame and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated were showing up on every single one of them. I then, for kicks, put a CD into the CD player. Scannerz doesn't test optical drives but I figured surely that will work. No it won't. More spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago. I know a lot of people don't like MacBook's, but mine's been great, at least until now. It was working great even with Mountain Lion after the upgrade. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good, plus this type of MacBook still uses the FireWire 400 port and I use that for Time Machine backups. I've tried reseating the RAM, it didn't do anything. I shut the system down and put in the old RAM, booted to Snow Leopard, and the problems persist. Here are my questions: The Scannerz documentation somewhere said something about the Airport card not being seated properly, but when I go to iFixit, it's apparent, at least I think it's apparent, that this isn't a slot type Airport card that the user can easily install or remove. If the cables or connections to the Airport card are bad, could they be causing this problem. How about any other connections that can be intermittent, failing or erratic? Any type of resets that I could possibly do to get rid of this? For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any "gotcha's" I need to be aware of? As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel based MacBook's to me look like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soap box, but it's true, IMO!) Any input is appreciated.

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via a Frame Relay. Now I've been pretty unsuccessful at it, and it's been a while, since I've done this so please bare with me. The ISP provided me the following information: 1. IP address 2. Gateway address 3. Encapsulation Frame Relay 4. DLCI 100 5. BZ8 ESF (I think the bz8 was supposed to be b8zs) 6. Time Slot (1 al 24). And what I have configured up until now is the following: interface Serial0/0 ip address <ip address> 255.255.255.252 encapsulation frame-relay service-module t1 timeslots 1-24 frame-relay interface-dlci 100 sh service-module s0/0 (outputs): Module type is T1/fractional Hardware revision is 0.128, Software revision is 0.2, Image checksum is 0x73D70058, Protocol revision is 0.1 Receiver has no alarms. Framing is **ESF**, Line Code is **B8ZS**, Current clock source is line, Fraction has **24 timeslots** (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec. Last module self-test (done at startup): Passed Last clearing of alarm counters 00:17:17 loss of signal : 0, loss of frame : 0, AIS alarm : 0, Remote alarm : 2, last occurred 00:10:10 Module access errors : 0, Total Data (last 1 15 minute intervals): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs Data in current interval (138 seconds elapsed): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs sh int: FastEthernet0/0 is up, line protocol is up Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa) Internet address is 10.0.0.1/24 MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:20:00, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 0 packets input, 0 bytes Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog 0 input packets with dribble condition detected 191 packets output, 20676 bytes, 0 underruns 0 output errors, 0 collisions, 1 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Serial0/0 is up, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY, loopback not set Keepalive set (10 sec) LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0 LMI DLCI 1023 LMI type is CISCO frame relay DTE FR SVC disabled, LAPF state down Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0 Last input 00:24:51, output 00:00:05, output hang never Last clearing of "show interface" counters 00:27:20 Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: weighted fair Output queue: 0/1000/64/0 (size/max total/threshold/drops) Conversations 0/1/256 (active/max 
active/max total) Reserved Conversations 0/0 (allocated/max allocated) Available Bandwidth 1152 kilobits/sec 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 23 packets input, 302 bytes, 0 no buffer Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort 246 packets output, 3974 bytes, 0 underruns 0 output errors, 0 collisions, 48 interface resets 0 output buffer failures, 0 output buffers swapped out 4 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up Serial0/0.1 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never Serial0/0.100 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU Internet address is <ip address>/30 MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never And everything seems to be accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on interface up, line protocol down, so the T1 doesn't go up. Any ideas? Thank you,

    Read the article

  • PDF rendering crashes app Core Graphics

    - by Felixyz
    EDIT: The memory leaks turned out to be unrelated to the crashes. Leaks are fixed but crashes remain, still mysterious. My (iPhone) app does lots of PDF loading and rendering, some of it threaded. Sometime, it seems always after I flush a page cash after getting a memory warning, the app crashes with a bad access when trying to draw a pdf page stored in an NSData object. Here is one example trace: #0 0x3016d564 in CGPDFResourcesGetResource () #1 0x3016d58a in CGPDFResourcesGetResource () #2 0x3016d94e in CGPDFResourcesGetExtGState () #3 0x3015fac4 in CGPDFContentStreamGetExtGState () #4 0x301629a8 in op_gs () #5 0x3016df12 in handle_xname () #6 0x3016dd9e in read_objects () #7 0x3016de6c in CGPDFScannerScan () #8 0x30161e34 in CGPDFDrawingContextDraw () #9 0x3016a9dc in CGContextDrawPDFPage () But sometimes I get this instead: Program received signal: “EXC_BAD_ACCESS”. (gdb) bt #0 0x335625fa in objc_msgSend () #1 0x32c04eba in CFDictionaryGetValue () #2 0x3016d500 in get_value () #3 0x3016d5d6 in CGPDFResourcesGetFont () #4 0x3015fbb4 in CGPDFContentStreamGetFont () #5 0x30163480 in op_Tf () #6 0x3016df12 in handle_xname () #7 0x3016dd9e in read_objects () #8 0x3016de6c in CGPDFScannerScan () #9 0x30161e34 in CGPDFDrawingContextDraw () #10 0x3016a9dc in CGContextDrawPDFPage () Is this an indication that I've mistakenly deallocated an object? It's hard for me to decode what's happening here. This is how I create and retain the various objects involved: // Some data was just loaded from the network and is pointed to by "data" self.pdfData = data; _dataProviderRef = CGDataProviderCreateWithData( NULL, [_pdfData bytes], [_pdfData length], NULL ); _documentRef = CGPDFDocumentCreateWithProvider(_dataProviderRef); _pageRef = CGPDFDocumentGetPage(_documentRef, 1); CGPDFPageRetain(_pageRef); _pdfFrame = CGPDFPageGetBoxRect(_pageRef, kCGPDFArtBox); So the NSData object is retained, and I explicitly retain the page reference. The data provider and the document are already retained by the create-functions. And here is my dealloc method: -(void)dealloc { if (_pageRef) CGPDFPageRelease(_pageRef); if (_documentRef) CGPDFDocumentRelease(_documentRef); if (_dataProviderRef) CGDataProviderRelease(_dataProviderRef); self.pdfData = nil; [super dealloc]; } Am I doing anything wrong? Even an assurance that I'm not, with explanation, would be a help.

    Read the article

  • How to use Tor control protocol in C#?

    - by Ed
    I'm trying to send commands to the Tor control port programmatically to make it refresh the chain. I haven't been able to find any examples in C#, and my solution's not working. The request times out. I have the service running, and I can see it listening on the control port. public string Refresh() { TcpClient client = new TcpClient("localhost", 9051); string response = string.Empty; string authenticate = MakeTcpRequest("AUTHENTICATE", client); if (authenticate.Equals("250")) response = MakeTcpRequest("SIGNAL NEWNYM", client); client.Close(); return response; } public string MakeTcpRequest(string message, TcpClient client) { client.ReceiveTimeout = 20000; client.SendTimeout = 20000; string proxyResponse = string.Empty; try { // Send message StreamWriter streamWriter = new StreamWriter(client.GetStream()); streamWriter.Write(message); streamWriter.Flush(); // Read response StreamReader streamReader = new StreamReader(client.GetStream()); proxyResponse = streamReader.ReadToEnd(); } catch (Exception ex) { // Ignore } return proxyResponse; } Can anyone spot what I'm doing wrong?
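    Two things stand out in the snippet above: control-port commands must be terminated with CRLF, and ReadToEnd() blocks until the peer closes the connection, which would explain the timeout; reading the reply line by line (replies start with a status code such as "250 OK") avoids that. A minimal sketch of the same AUTHENTICATE / SIGNAL NEWNYM exchange, shown in Python purely to illustrate the wire protocol, assuming the default control port 9051 and no control-port authentication:

```python
# Illustration only (not C#): the Tor control protocol is line-oriented.
# Commands end with CRLF and replies are lines that start with a status
# code such as "250 OK".  Assumes Tor's ControlPort is 9051 and that no
# control-port authentication is configured (otherwise AUTHENTICATE needs
# a password or cookie argument).
import socket

def newnym(host="127.0.0.1", port=9051):
    sock = socket.create_connection((host, port), timeout=20)
    try:
        reader = sock.makefile("rb")

        def send(cmd):
            sock.sendall((cmd + "\r\n").encode("ascii"))
            reply = reader.readline().decode("ascii").strip()  # e.g. "250 OK"
            if not reply.startswith("250"):
                raise RuntimeError("unexpected reply: " + reply)
            return reply

        send('AUTHENTICATE ""')       # empty password; adjust for your setup
        return send("SIGNAL NEWNYM")  # ask Tor to build new circuits
    finally:
        sock.close()

if __name__ == "__main__":
    print(newnym())
```

    The same line-oriented exchange applies in C#: write the command followed by "\r\n", flush, and read the status reply with ReadLine() rather than ReadToEnd().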

    Read the article

  • Label in PyQt4 GUI not updating with every loop of FOR loop

    - by user297920
    I'm having a problem, where I wish to run several command line functions from a python program using a GUI. I don't know if my problem is specific to PyQt4 or if it has to do with my bad use of python code. What I wish to do is have a label on my GUI change its text value to inform the user which command is being executed. My problem however, arises when I run several commands using a for loop. I would like the label to update itself with every loop, however, the program is not updating the GUI label with every loop, instead, it only updates itself once the entire loop is completed, and displays only the last command that was executed. I am using PyQt4 for my GUI environment. And I have established that the text variable for the label is indeed being updated with every loop, but, it is not actually showing up visually in the GUI. Is there a way for me to force the label to update itself? I have tried the update() and repaint() methods within the loop, but they don't make any difference. I would really appreciate any help. Thank you. Ronny. Here is the code I am using: # -*- coding: utf-8 -*- import sys, os from PyQt4 import QtGui, QtCore Gui = QtGui Core = QtCore # ================================================== CREATE WINDOW OBJECT CLASS class Win(Gui.QWidget): def __init__(self, parent = None): Gui.QWidget.__init__(self, parent) # --------------------------------------------------- SETUP PLAY BUTTON self.but1 = Gui.QPushButton("Run Commands",self) self.but1.setGeometry(10,10, 200, 100) # -------------------------------------------------------- SETUP LABELS self.label1 = Gui.QLabel("No Commands running", self) self.label1.move(10, 120) # ------------------------------------------------------- SETUP ACTIONS self.connect(self.but1, Core.SIGNAL("clicked()"), runCommands) # ======================================================= RUN COMMAND FUNCTION def runCommands(): for i in commands: win.label1.setText(i) # Make label display the command being run print win.label1.text() # This shows that the value is actually # changing with every loop, but its just not # being reflected in the GUI label os.system(i) # ======================================================================== MAIN # ------------------------------------------------------ THE TERMINAL COMMANDS com1 = "espeak 'senntence 1'" com2 = "espeak 'senntence 2'" com3 = "espeak 'senntence 3'" com4 = "espeak 'senntence 4'" com5 = "espeak 'senntence 5'" commands = (com1, com2, com3, com4, com5) # --------------------------------------------------- SETUP THE GUI ENVIRONMENT app = Gui.QApplication(sys.argv) win = Win() win.show() sys.exit(app.exec_())
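    For what it's worth, the behaviour described above is expected: the for loop runs on the GUI thread, so Qt never gets a chance to repaint the label until the loop, and every blocking os.system() call inside it, has finished. Below is a minimal sketch of one workaround, assuming PyQt4 and the same espeak commands as in the excerpt; calling processEvents() after setText() lets the label repaint before each blocking call (moving the work into a QThread or QProcess would be the more robust design):

```python
# A minimal sketch, assuming PyQt4 and the espeak commands from the excerpt.
# QApplication.processEvents() gives Qt a chance to repaint the label before
# the blocking os.system() call runs; that is why the original label only
# changed after the whole loop had finished.
import os
import sys
from PyQt4 import QtGui

commands = ["espeak 'sentence %d'" % n for n in range(1, 6)]

class Win(QtGui.QWidget):
    def __init__(self):
        QtGui.QWidget.__init__(self)
        self.resize(230, 160)
        self.but1 = QtGui.QPushButton("Run Commands", self)
        self.but1.setGeometry(10, 10, 200, 100)
        self.label1 = QtGui.QLabel("No commands running", self)
        self.label1.move(10, 120)
        self.but1.clicked.connect(self.runCommands)

    def runCommands(self):
        for cmd in commands:
            self.label1.setText(cmd)
            self.label1.adjustSize()            # resize to fit the new text
            QtGui.QApplication.processEvents()  # let the GUI repaint now
            os.system(cmd)                      # blocks until the command exits

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    win = Win()
    win.show()
    sys.exit(app.exec_())
```

    processEvents() is a quick fix; for long-running commands, a worker thread that emits a signal back to the GUI keeps the interface responsive for the whole run.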

    Read the article

  • Objective C - displaying data in NSTextView

    - by Leo
    Hi, I'm having difficulties displaying data in a TextView in an iPhone app. I'm analyzing incoming audio data (from the microphone). In order to do that, I create an object "analyzer" from my SignalAnalyzer class which performs analysis of the incoming data. What I would like to do is display each new piece of incoming data in a TextView in real time. So when I push a button, I create the object "analyzer" which analyzes the incoming data. Each time there is new data, I need to display it on the screen in a TextView. My problem is that I'm getting an error because (I think) I'm trying to send a message to the parent class (the one taking care of displaying stuff in my TextView: it has a TextView instance variable linked in Interface Builder). What should I do in order to tell my parent class what it needs to display? Or how should I design my classes so that something is displayed automatically? Thank you for your help. PS: Here is my error: 2010-04-19 14:59:39.360 MyApp[1421:5003] void WebThreadLockFromAnyThread(), 0x14a890: Obtaining the web lock from a thread other than the main thread or the web thread. UIKit should not be called from a secondary thread. 2010-04-19 14:59:39.369 MyApp[1421:5003] bool _WebTryThreadLock(bool), 0x14a890: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... Program received signal: “EXC_BAD_ACCESS”.

    Read the article

  • iPhone crashing when presenting modal view controller

    - by Michael Waterfall
    Hi there, I'm trying to display a modal view straight after another view has been presented modally (the second is a loading view that appears). - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; // Show load LoadViewController *loader = [[LoadViewController alloc] init]; [self presentModalViewController: loader animated:NO]; [loader release]; } But when I do this I get a "Program received signal: "EXC_BAD_ACCESS"." error. The stack trace is: 0 0x30b43234 in -[UIWindowController transitionViewDidComplete:fromView:toView:] 1 0x3095828e in -[UITransitionView notifyDidCompleteTransition:] 2 0x3091af0d in -[UIViewAnimationState sendDelegateAnimationDidStop:finished:] 3 0x3091ad7c in -[UIViewAnimationState animationDidStop:finished:] 4 0x0051e331 in run_animation_callbacks 5 0x0051e109 in CA::timer_callback 6 0x302454a0 in CFRunLoopRunSpecific 7 0x30244628 in CFRunLoopRunInMode 8 0x32044c31 in GSEventRunModal 9 0x32044cf6 in GSEventRun 10 0x309021ee in UIApplicationMain 11 0x00002154 in main at main.m:14 Any ideas? I'm totally stumped! The loading view is empty so there's definitely nothing going on in there that's causing the error. Is it something to do with launching 2 views modally in the same event loop or something? Thanks, Mike Edit: Very strange... I have modified it slightly so that the loading view is shown after a tiny delay, and this works fine! So it appears to be something within the same event loop! - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; // Show load [self performSelector:@selector(doit) withObject:nil afterDelay:0.1]; } - (void)doit { [self presentModalViewController:loader animated:YES]; }

    Read the article

  • Segmentation fault on MPI, runs properly on OpenMP

    - by Bellman
    Hi, I am trying to run a program on a computer cluster. The structure of the program is the following: PROGRAM something ... CALL subroutine1(...) ... END PROGRAM SUBROUTINE subroutine1(...) ... DO i=1,n CALL subroutine2(...) ENDDO ... END SUBROUTINE SUBROUTINE subroutine2(...) ... CALL subroutine3(...) CALL subroutine4(...) ... END SUBROUTINE The idea is to parallelize the loop that calls subroutine2. The main program basically only makes the call to subroutine1, and only its arguments are declared there. I use two alternatives. On the one hand, I write OpenMP clauses around the loop. On the other hand, I add an IF conditional branch around the call and I use MPI to share the results. In the OpenMP case, I add CALL KMP_SET_STACKSIZE(402653184) at the beginning of the main program and I can run it with 8 threads on an 8-core machine. When I run it (on the same 8-core machine) with MPI (either using 8 or 1 processors) it crashes just when it makes the call to subroutine3, with a segmentation fault (signal 11) error. If I comment out subroutine4, then it doesn't crash (notice that it crashes when calling subroutine3, yet it works when subroutine4 is commented out). I compile with mpif90 using MPICH2 libraries and the following flags: -O3 -fpscomp logicals -openmp -threads -m64 -xS. The machine has EM64T architecture and I use a Debian Linux distribution. I set ulimit -s hard before running the program. Any ideas on what is going on? Does it have something to do with stack size? Thanks in advance

    Read the article

  • Sync Vs. Async Sockets Performance in C#

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues. First, the threadpool threads will have default priority, so it seems they will be strictly worse than my own thread which has Highest priority. Second, I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
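    The stream-reassembly point above is the crux: whichever pattern is used, something has to turn the TCP byte stream back into whole messages, in order, on one logical consumer. A language-neutral sketch of that single-reader approach, written in Python purely as an illustration (the 4-byte big-endian length prefix is an assumed wire format, not anything from the question):

```python
# Illustration only (not C#): a single blocking reader that reassembles
# length-prefixed messages from a TCP stream and hands each complete message
# to a callback, preserving arrival order.  The 4-byte big-endian length
# prefix is an assumed wire format.
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes the connection."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def reader_loop(sock, handle_message):
    while True:
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        handle_message(recv_exact(sock, length))  # whole messages, in order
```

    Because one loop does all the reads, ordering is preserved without locks, which is essentially the argument the question makes for keeping the dedicated high-priority receive thread.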

    Read the article

  • phonegap.js crashes android app

    - by peirix
    I'm having this weird problem, where including the phonegap.js file in my project causes the app to crash on both the android emulator and my phone. I got the latest file from GitHub, so I can't see why this isn't working. This happens even if I try to build the sample project that's included in the PhoneGap download... Console log: [2010-12-17 11:05:14 - sample] Android Launch! [2010-12-17 11:05:14 - sample] adb is running normally. [2010-12-17 11:05:14 - sample] Performing com.phonegap.sample.sample activity launch [2010-12-17 11:05:14 - sample] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'FirstDevice' [2010-12-17 11:05:16 - sample] Uploading sample.apk onto device 'emulator-5554' [2010-12-17 11:05:16 - sample] Installing sample.apk... [2010-12-17 11:05:21 - sample] Success! [2010-12-17 11:05:22 - sample] Starting activity com.phonegap.sample.sample on device emulator-5554 [2010-12-17 11:05:23 - sample] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=com.phonegap.sample/.sample } LogCat: 12-17 11:13:12.533: DEBUG/AndroidRuntime(373): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 12-17 11:13:12.533: DEBUG/AndroidRuntime(373): CheckJNI is ON 12-17 11:13:13.453: DEBUG/AndroidRuntime(373): Calling main entry com.android.commands.pm.Pm 12-17 11:13:13.503: DEBUG/AndroidRuntime(373): Shutting down VM 12-17 11:13:13.513: DEBUG/dalvikvm(373): GC_CONCURRENT freed 101K, 71% free 297K/1024K, external 0K/0K, paused 2ms+2ms 12-17 11:13:13.523: INFO/AndroidRuntime(373): NOTE: attach of thread 'Binder Thread #3' failed 12-17 11:13:13.523: DEBUG/dalvikvm(373): Debugger has detached; object registry had 1 entries 12-17 11:13:14.113: DEBUG/AndroidRuntime(383): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 12-17 11:13:14.113: DEBUG/AndroidRuntime(383): CheckJNI is ON 12-17 11:13:14.853: DEBUG/AndroidRuntime(383): Calling main entry com.android.commands.am.Am 12-17 11:13:14.894: INFO/ActivityManager(62): Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.phonegap.sample/.sample } from pid 383 12-17 11:13:14.973: INFO/ActivityManager(62): Start proc com.phonegap.sample for activity com.phonegap.sample/.sample: pid=391 uid=10031 gids={1006, 3003, 1015} 12-17 11:13:14.983: DEBUG/AndroidRuntime(383): Shutting down VM 12-17 11:13:15.053: DEBUG/dalvikvm(383): GC_CONCURRENT freed 102K, 69% free 319K/1024K, external 0K/0K, paused 2ms+2ms 12-17 11:13:15.093: INFO/AndroidRuntime(383): NOTE: attach of thread 'Binder Thread #3' failed 12-17 11:13:15.143: DEBUG/dalvikvm(383): Debugger has detached; object registry had 1 entries 12-17 11:13:15.523: DEBUG/dalvikvm(33): GC_EXPLICIT freed 11K, 54% free 2520K/5379K, external 716K/1038K, paused 467ms 12-17 11:13:15.663: DEBUG/dalvikvm(33): GC_EXPLICIT freed <1K, 54% free 2520K/5379K, external 716K/1038K, paused 132ms 12-17 11:13:15.772: DEBUG/dalvikvm(33): GC_EXPLICIT freed <1K, 54% free 2520K/5379K, external 716K/1038K, paused 113ms 12-17 11:13:16.333: INFO/ARMAssembler(62): generated scanline__00000177:03515104_00001002_00000000 [ 87 ipp] (110 ins) at [0x43aff6f0:0x43aff8a8] in 686000 ns 12-17 11:13:17.493: INFO/ActivityManager(62): Displayed com.phonegap.sample/.sample: +2s540ms 12-17 11:13:18.163: DEBUG/szipinf(391): Initializing inflate state 12-17 11:13:18.173: DEBUG/szipinf(391): Initializing zlib to inflate 12-17 11:13:18.573: WARN/dalvikvm(391): JNI 
WARNING: jarray 0x40567330 points to non-array object (Ljava/lang/String;) 12-17 11:13:18.593: INFO/dalvikvm(391): "WebViewCoreThread" prio=5 tid=9 NATIVE 12-17 11:13:18.603: INFO/dalvikvm(391): | group="main" sCount=0 dsCount=0 obj=0x4051b880 self=0x1af760 12-17 11:13:18.603: INFO/dalvikvm(391): | sysTid=400 nice=0 sched=0/0 cgrp=default handle=1778000 12-17 11:13:18.623: INFO/dalvikvm(391): | schedstat=( 851184092 892639082 140 ) 12-17 11:13:18.633: INFO/dalvikvm(391): at android.webkit.LoadListener.nativeFinished(Native Method) 12-17 11:13:18.633: INFO/dalvikvm(391): at android.webkit.LoadListener.nativeFinished(Native Method) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.tearDown(LoadListener.java:1200) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.handleEndData(LoadListener.java:721) 12-17 11:13:18.653: INFO/dalvikvm(391): at android.webkit.LoadListener.handleMessage(LoadListener.java:219) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.os.Handler.dispatchMessage(Handler.java:99) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.os.Looper.loop(Looper.java:123) 12-17 11:13:18.672: INFO/dalvikvm(391): at android.webkit.WebViewCore$WebCoreThread.run(WebViewCore.java:629) 12-17 11:13:18.672: INFO/dalvikvm(391): at java.lang.Thread.run(Thread.java:1019) 12-17 11:13:18.672: ERROR/dalvikvm(391): VM aborting 12-17 11:13:18.887: INFO/DEBUG(31): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-17 11:13:18.887: INFO/DEBUG(31): Build fingerprint: 'generic/sdk/generic:2.3/GRH55/79397:eng/test-keys' 12-17 11:13:18.893: INFO/DEBUG(31): pid: 391, tid: 400 >>> com.phonegap.sample <<< 12-17 11:13:18.893: INFO/DEBUG(31): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadd00d 12-17 11:13:18.893: INFO/DEBUG(31): r0 fffffebc r1 deadd00d r2 00000026 r3 00000000 12-17 11:13:18.893: INFO/DEBUG(31): r4 81da45c8 r5 40567330 r6 81d8592c r7 001b2a48 12-17 11:13:18.893: INFO/DEBUG(31): r8 43640b58 r9 42dd1ecc 10 42dd1eb4 fp 4168d82c 12-17 11:13:18.893: INFO/DEBUG(31): ip 81da4728 sp 43640410 lr afd19375 pc 81d45a02 cpsr 20000030 12-17 11:13:19.183: INFO/DEBUG(31): #00 pc 00045a02 /system/lib/libdvm.so 12-17 11:13:19.183: INFO/DEBUG(31): #01 pc 000376fc /system/lib/libdvm.so 12-17 11:13:19.183: INFO/DEBUG(31): #02 pc 000399c4 /system/lib/libdvm.so 12-17 11:13:19.193: INFO/DEBUG(31): #03 pc 0003a4a0 /system/lib/libdvm.so 12-17 11:13:19.203: INFO/DEBUG(31): #04 pc 0032b6d6 /system/lib/libwebcore.so 12-17 11:13:19.203: INFO/DEBUG(31): #05 pc 002a4da4 /system/lib/libwebcore.so 12-17 11:13:19.203: INFO/DEBUG(31): #06 pc 001a6136 /system/lib/libwebcore.so 12-17 11:13:19.213: INFO/DEBUG(31): #07 pc 002a5870 /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #08 pc 00359e36 /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #09 pc 0035d30e /system/lib/libwebcore.so 12-17 11:13:19.223: INFO/DEBUG(31): #10 pc 003638be /system/lib/libwebcore.so 12-17 11:13:19.233: INFO/DEBUG(31): #11 pc 0019f6fa /system/lib/libwebcore.so 12-17 11:13:19.233: INFO/DEBUG(31): #12 pc 0019f780 /system/lib/libwebcore.so 12-17 11:13:19.243: INFO/DEBUG(31): #13 pc 001a3d8a /system/lib/libwebcore.so 12-17 11:13:19.243: INFO/DEBUG(31): #14 pc 000d0dca /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #15 pc 000d0f28 /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #16 pc 000d106e /system/lib/libwebcore.so 12-17 11:13:19.253: INFO/DEBUG(31): #17 pc 000ddef0 /system/lib/libwebcore.so 12-17 11:13:19.263: INFO/DEBUG(31): #18 pc 
000ddf62 /system/lib/libwebcore.so 12-17 11:13:19.263: INFO/DEBUG(31): #19 pc 000f3ce2 /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #20 pc 002739ae /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #21 pc 000eac5e /system/lib/libwebcore.so 12-17 11:13:19.273: INFO/DEBUG(31): #22 pc 001b152c /system/lib/libwebcore.so 12-17 11:13:19.283: INFO/DEBUG(31): #23 pc 00017d34 /system/lib/libdvm.so 12-17 11:13:19.283: INFO/DEBUG(31): #24 pc 00048ec0 /system/lib/libdvm.so 12-17 11:13:19.283: INFO/DEBUG(31): #25 pc 00041a6a /system/lib/libdvm.so 12-17 11:13:19.293: INFO/DEBUG(31): #26 pc 0001cf94 /system/lib/libdvm.so 12-17 11:13:19.303: INFO/DEBUG(31): #27 pc 0002209c /system/lib/libdvm.so 12-17 11:13:19.303: INFO/DEBUG(31): #28 pc 00020f90 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #29 pc 0005f328 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #30 pc 0005f54e /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): #31 pc 00053b06 /system/lib/libdvm.so 12-17 11:13:19.313: INFO/DEBUG(31): code around pc: 12-17 11:13:19.313: INFO/DEBUG(31): 81d459e0 447a4479 ed0cf7d1 20004c09 ee34f7d1 12-17 11:13:19.323: INFO/DEBUG(31): 81d459f0 447c4808 6bdb5823 d0002b00 49064798 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a00 700a2226 eea0f7d1 0004355f 0004511d 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a10 0005ebd2 fffffebc deadd00d b510b40e 12-17 11:13:19.323: INFO/DEBUG(31): 81d45a20 4c0a4b09 447bb083 aa05591b 6b5bca02 12-17 11:13:19.323: INFO/DEBUG(31): code around lr: 12-17 11:13:19.333: INFO/DEBUG(31): afd19354 b0834a0d 589c447b 26009001 686768a5 12-17 11:13:19.333: INFO/DEBUG(31): afd19364 220ce008 2b005eab 1c28d003 47889901 12-17 11:13:19.333: INFO/DEBUG(31): afd19374 35544306 d5f43f01 2c006824 b003d1ee 12-17 11:13:19.333: INFO/DEBUG(31): afd19384 bdf01c30 000281a8 ffffff88 1c0fb5f0 12-17 11:13:19.333: INFO/DEBUG(31): afd19394 43551c3d a904b087 1c16ac01 604d9004 12-17 11:13:19.333: INFO/DEBUG(31): stack: 12-17 11:13:19.333: INFO/DEBUG(31): 436403d0 00000015 12-17 11:13:19.333: INFO/DEBUG(31): 436403d4 afd18407 /system/lib/libc.so 12-17 11:13:19.333: INFO/DEBUG(31): 436403d8 afd4270c /system/lib/libc.so 12-17 11:13:19.343: INFO/DEBUG(31): 436403dc afd426b8 /system/lib/libc.so 12-17 11:13:19.343: INFO/DEBUG(31): 436403e0 00000000 12-17 11:13:19.343: INFO/DEBUG(31): 436403e4 afd19375 /system/lib/libc.so 12-17 11:13:19.353: INFO/DEBUG(31): 436403e8 001af760 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403ec afd183d9 /system/lib/libc.so 12-17 11:13:19.353: INFO/DEBUG(31): 436403f0 001b2a48 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403f4 0005ebd2 [heap] 12-17 11:13:19.353: INFO/DEBUG(31): 436403f8 40567330 /dev/ashmem/dalvik-heap (deleted) 12-17 11:13:19.363: INFO/DEBUG(31): 436403fc 81d8592c /system/lib/libdvm.so 12-17 11:13:19.363: INFO/DEBUG(31): 43640400 001b2a48 [heap] 12-17 11:13:19.363: INFO/DEBUG(31): 43640404 afd18437 /system/lib/libc.so 12-17 11:13:19.363: INFO/DEBUG(31): 43640408 df002777 12-17 11:13:19.363: INFO/DEBUG(31): 4364040c e3a070ad 12-17 11:13:19.363: INFO/DEBUG(31): #00 43640410 00000001 12-17 11:13:19.363: INFO/DEBUG(31): 43640414 81d37701 /system/lib/libdvm.so 12-17 11:13:19.363: INFO/DEBUG(31): #01 43640418 00000001 12-17 11:13:19.363: INFO/DEBUG(31): 4364041c 81d399c9 /system/lib/libdvm.so 12-17 11:13:22.753: INFO/BootReceiver(62): Copying /data/tombstones/tombstone_09 to DropBox (SYSTEM_TOMBSTONE) 12-17 11:13:22.943: DEBUG/dalvikvm(62): GC_CONCURRENT freed 876K, 48% free 4240K/8135K, external 2269K/3469K, paused 9ms+10ms 12-17 
11:13:23.133: DEBUG/dalvikvm(62): GC_FOR_MALLOC freed 348K, 47% free 4318K/8135K, external 2269K/3469K, paused 147ms 12-17 11:13:23.243: DEBUG/Zygote(33): Process 391 terminated by signal (11) 12-17 11:13:23.253: ERROR/InputDispatcher(62): channel '406defc8 com.phonegap.sample/com.phonegap.sample.sample (server)' ~ Consumer closed input channel or an error occurred. events=0x8 12-17 11:13:23.253: ERROR/InputDispatcher(62): channel '406defc8 com.phonegap.sample/com.phonegap.sample.sample (server)' ~ Channel is unrecoverably broken and will be disposed! 12-17 11:13:23.323: DEBUG/dalvikvm(62): GC_FOR_MALLOC freed 134K, 47% free 4376K/8135K, external 2269K/3469K, paused 174ms 12-17 11:13:23.323: INFO/ActivityManager(62): Process com.phonegap.sample (pid 391) has died. 12-17 11:13:23.333: INFO/WindowManager(62): WIN DEATH: Window{406defc8 com.phonegap.sample/com.phonegap.sample.sample paused=false} 12-17 11:13:23.542: DEBUG/dalvikvm(124): GC_EXPLICIT freed 61K, 51% free 2836K/5767K, external 1973K/2288K, paused 907ms 12-17 11:13:23.693: WARN/InputManagerService(62): Got RemoteException sending setActive(false) notification to pid 391 uid 10031 Sorry about the gigantic log posts, but I don't know what is of importance here...

    Read the article

  • C: Running Unix configure file in Windows

    - by Shiftbit
    I would like to port a few applications that I use on Linux to Windows. In particular I have been working on wdiff, a program that compares two files word by word. Currently I have been able to successfully compile the program on Windows through Cygwin. However, I would like to run the program natively on Windows, similar to the UnixUtils project. How would I go about porting Unix utilities to a Windows environment? My best guess is to manually create the ./configure file so that I can create a proper makefile. Am I on the right track? Has anyone had experience porting GNU software to Windows? Update: I've compiled it in Code::Blocks and I get two errors: wdiff.c|226|error: `SIGPIPE' undeclared (first use in this function) readpipe.c:71: undefined reference to `_pipe' readpipe.c:74: undefined reference to `_fork This is a Linux signal that is not supported by Windows... is there an equivalent? wdiff.c|1198|error: `PRODUCT' undeclared (first use in this function)| This is in the configure.in file... hardcoding it would probably be the fastest solution... Outcome: MSYS took care of the configure problems, but MinGW couldn't solve the POSIX issues. I attempted to use pthreads as recommended by mrjoltcola, but after several hours I couldn't get it to compile or link using the provided libraries. I think if this had worked it would have been the solution I was after. Special mention to Michael Madsen for MSYS.

    Read the article

  • django+uploadify - not working

    - by Erico
    Hi, I'm trying to use an example posted on GitHub; the link is http://github.com/tstone/django-uploadify, and I'm having trouble getting it to work. Can you help me? I followed it step by step, but it does not work. Accessing the URL /upload/, the only thing it returns is "True". Part of settings.py: import os PROJECT_ROOT_PATH = os.path.dirname(os.path.abspath(__file__)) MEDIA_ROOT = os.path.join(PROJECT_ROOT_PATH, 'media') TEMPLATE_DIRS = ( os.path.join(PROJECT_ROOT_PATH, 'templates')) urls.py: from django.conf.urls.defaults import * from django.conf import settings from teste.uploadify.views import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^admin/', include(admin.site.urls)), url(r'upload/$', upload, name='uploadify_upload'), ) views.py: from django.http import HttpResponse import django.dispatch upload_received = django.dispatch.Signal(providing_args=['data']) def upload(request, *args, **kwargs): if request.method == 'POST': if request.FILES: upload_received.send(sender='uploadify', data=request.FILES['Filedata']) return HttpResponse('True') models.py: from django.db import models def upload_received_handler(sender, data, **kwargs): if file: new_media = Media.objects.create( file = data, new_upload = True, ) new_media.save() upload_received.connect(upload_received_handler, dispatch_uid='uploadify.media.upload_received') class Media(models.Model): file = models.FileField(upload_to='images/upload/', null=True, blank=True) new_upload = models.BooleanField() uploadify_tags.py: from django import template from teste import settings register = template.Library() @register.inclusion_tag('uploadify/multi_file_upload.html', takes_context=True) def multi_file_upload(context, upload_complete_url): """ * filesUploaded - The total number of files uploaded * errors - The total number of errors while uploading * allBytesLoaded - The total number of bytes uploaded * speed - The average speed of all uploaded files """ return { 'upload_complete_url' : upload_complete_url, 'uploadify_path' : settings.UPLOADIFY_PATH, # check this line 'upload_path' : settings.UPLOADIFY_UPLOAD_PATH, } template - uploadify/multi_file_upload.html: {% load uploadify_tags %}{% multi_file_upload '/media/images/upload/' %} <script type="text/javascript" src="{{ MEDIA_URL }}js/swfobject.js"></script> <script type="text/javascript" src="{{ MEDIA_URL }}js/jquery.uploadify.js"></script> <div id="uploadify" class="multi-file-upload"><input id="fileInput" name="fileInput" type="file" /></div> <script type="text/javascript">// <![CDATA[ $(document).ready(function() { $('#fileInput').uploadify({ 'uploader' : '/media/swf/uploadify.swf', 'script' : '{% url uploadify_upload %}', 'cancelImg' : '/media/images/uploadify-remove.png/', 'auto' : true, 'folder' : '/media/images/upload/', 'multi' : true, 'onAllComplete' : allComplete }); }); function allComplete(event, data) { $('#uploadify').load('{{ upload_complete_url }}', { 'filesUploaded' : data.filesUploaded, 'errorCount' : data.errors, 'allBytesLoaded' : data.allBytesLoaded, 'speed' : data.speed }); // raise custom event $('#uploadify') .trigger('allUploadsComplete', data); } // ]]></script>
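    One detail worth checking in the excerpt above: models.py calls upload_received.connect(...) but never imports the signal (it is defined in views.py), and the handler tests "if file:" (the Python 2 builtin) rather than the data it was handed, so the Media row may never be created even when the upload view fires the signal. A minimal sketch of how that wiring might look, with the import path teste.uploadify.views guessed from the excerpt:

```python
# models.py -- a sketch only.  It assumes the signal stays in
# teste/uploadify/views.py as in the excerpt; the import path is a guess
# and should be adjusted to the real project layout.
from django.db import models
from teste.uploadify.views import upload_received  # hypothetical import path

class Media(models.Model):
    file = models.FileField(upload_to='images/upload/', null=True, blank=True)
    new_upload = models.BooleanField()

def upload_received_handler(sender, data, **kwargs):
    if data:  # test the uploaded file that was sent, not the builtin `file`
        Media.objects.create(file=data, new_upload=True)

upload_received.connect(upload_received_handler,
                        dispatch_uid='uploadify.media.upload_received')
```

    Keeping the signal in its own module (for example uploadify/signals.py) and importing it from both views.py and models.py would avoid tying the model to the view module, but the sketch above stays as close as possible to the layout posted in the question.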

    Read the article

  • HttpUtility does not exist in the current context

    - by Shaihi
    I get this error when compiling a C# application. It looks like a trivial error, but I can't get around it. My setup is Windows 7 64-bit, Visual Studio 2010 C# Express B2Rel. I added a reference to System.Web.dll located at C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0, but it has a yellow exclamation symbol and I still get the above error. I also have the using System.Web declaration. What am I doing wrong? Update: After getting the prompt answer pointing me at the root cause, I searched a bit on Google for where it states that System.Web.dll is for the full framework. I did not find such a reference. For newbies like me, this blog summarizes the difference between the frameworks (client and full) nicely. I could not find a spot that says whether a certain DLL is supported in the client framework or not. I guess the exclamation mark in Visual Studio should be the first signal...

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >