Search Results

Search found 7211 results on 289 pages for 'enable'.

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup

    I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.

    Issue

    Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.

    Temperatures

    Whenever I've checked temperatures, the room temperature has been ~23°C. I haven't changed the placement of the computer, the hardware, or the cooling setup between BIOS versions.

    BIOS version 0501 — BIOS monitor: CPU ~50°C, chassis ~33°C. I haven't got any temperature measurements from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load it).

    BIOS version 0606 — BIOS monitor: CPU ~30°C, chassis ~33°C. lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +32.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +35.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:      +29.0°C  (high = +80.0°C, crit = +98.0°C)

        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:      +36.0°C  (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures were checked.

    BIOS versions 0706, 0801, 1101 and 3203 — I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.

    Information from Asus

    The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3, as indicated by sidran32, might affect temperatures):

        P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
        1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
        2. Improve DRAM compatibility
        3. Improve system stability
        4. Improve compatibility with some RAID card models
        5. Increase IGD share memory size to 512MB

    However, the following FAQ might give a hint:

        Q: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal?
        A: That is normal, as BIOS does not send the idle command to the CPU, making most of the power saving features useless. You should be getting a similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.
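
    The FAQ's explanation is easy to sanity-check from the OS side: an idle OS parks the cores in C-states (hence the low readings), while a pinned core should climb back toward BIOS-like numbers. A rough interactive sketch, assuming lm-sensors is already configured:

        # idle reading
        sensors | grep Core
        # keep one core busy for a minute, then read again; expect a clearly higher value
        yes > /dev/null &
        sleep 60
        sensors | grep Core
        kill %1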

  • How to start networking on a wired interface before logon in Ubuntu Desktop Edition

    - by Burly
    Problem

    Ubuntu 9.10 Desktop Edition (and possibly previous versions as well, I haven't tested them) has no network connections after boot until at least one user logs in. This means any services that require networking (e.g. openssh-server) are not available until someone logs in locally, either via gdm, kdm, or a TTY.

    Background

    Ubuntu 9.10 Desktop Edition uses the NetworkManager service to take commands from the nm-applet in Gnome (or its equivalent in KDE). As I understand it, while NetworkManager is running at boot, it is not issued any commands to connect until you log in for the first time, because nm-applet isn't running until you log in and your Gnome session starts (or similar for KDE). I'm not sure what prompts NetworkManager to connect to the network when you log in via a TTY. There are several relevant variables involved in starting up the network connections, including:

    - Wired vs wireless (and the resulting drivers, SSID, passwords, and priorities)
    - Static vs DHCP
    - Multiple interfaces

    Constraints

    - Support Ubuntu 9.10 Karmic Koala (bonus points for additional supported versions)
    - Support the wired eth0 interface
    - Receive an IP address via DHCP
    - Receive DNS information via DHCP (obviously the DHCP server must provide this information)
    - Enable networking at the proper time (e.g. some time after file systems are loaded but before network services like ssh start)
    - Switching distros or versions (e.g. to Server Edition) is not an acceptable solution
    - Switching to a static IP configuration is not an acceptable solution

    Question

    How do you start networking on a wired interface before logon in Ubuntu Desktop Edition?

    What I have tried

    Per this guide, adding the following entry to /etc/network/interfaces so that NetworkManager won't manage the eth0 interface (the stock ifupdown syntax is sketched at the end of this item):

        auth eth0
        iface inet dhcp

    After reboot, eth0 is down. Issuing ifconfig eth0 up brings the interface up, but it receives no IP address. Issuing dhclient eth0 instead does bring up the interface, and it does receive an IP address. I also tried completely removing the NetworkManager package in addition to the settings above. I'm a bit confused by the whole Upstart/SysVinit mangling that's going on in Ubuntu currently (I'm more familiar with the CentOS world). However, directly issuing

        sudo /etc/init.d/networking start

    or

        sudo start networking

    does not bring up the eth0 interface at all, much less get an IP address.

    See also

    How to force NetworkManager to make a connection before login?

    References

    - Ubuntu Desktop Edition
    - Ubuntu Networking Configuration Using Command Line
    - Automatic Network Configuration Via Command-Line
    - Start network connection before login
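
    For reference, a minimal ifupdown stanza for a DHCP-managed wired interface looks like the following sketch. Note that the keyword is "auto", not "auth", and the interface name is repeated on the iface line; the typo in the stanza quoted above would leave eth0 unconfigured at boot, which matches the symptom described.

        # /etc/network/interfaces — sketch of the classic ifupdown syntax
        auto eth0
        iface eth0 inet dhcp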

  • Why won't Ruby recognize Haml under 64-bit Ubuntu while using the Jekyll static blog generator?

    - by oldmanjoyce
    I have been trying, quite unsuccessfully, to run henrik's fork of the Jekyll static blog generator on 64-bit Ubuntu. I just can't seem to figure this out, and I've tried a bunch of different things. Originally I posted this over at Stack Overflow, but this is probably the better spot for it.

    The base stats of my machine: Ubuntu 9.04, 64-bit, ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux], rubygems 1.3.1.

    When I attempt to build the site, this is what happens:

        $ jekyll --pygments
        Configuration from ./_config.yml
        Using Sass for CSS generation
        You must have the haml gem installed first
        Using rdiscount for Markdown
        Building site: . -> ./_site
        /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/core_ext.rb:27:in `method_missing': undefined method 'header' for #, page=# ..... cut ..... (NoMethodError)
            from (haml):9:in `render'
            from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in `render'
            from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in `instance_eval'
            from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in `render'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:72:in `render_haml_in_context'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:105:in `do_layout'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/post.rb:226:in `render'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:172:in `read_posts'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in `each'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in `read_posts'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:210:in `transform_pages'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:126:in `process'
            from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/jekyll:135
            from /home/chris/.gem/bin/jekyll:19:in `load'
            from /home/chris/.gem/bin/jekyll:19

    I added spaces to the left of the ClosedStruct to enable better visibility, and I cut out some middle text that is just data — sorry that my inline html/formatting isn't perfect.

        $ gem list

        *** LOCAL GEMS ***

        actionmailer (2.3.4)
        actionpack (2.3.4)
        activerecord (2.3.4)
        activeresource (2.3.4)
        activesupport (2.3.4)
        classifier (1.3.1)
        directory_watcher (1.2.0)
        haml (2.2.3)
        haml-edge (2.3.27)
        henrik-jekyll (0.5.2)
        liquid (2.0.0)
        maruku (0.6.0)
        open4 (0.9.6)
        rack (1.0.0)
        rails (2.3.4)
        rake (0.8.7)
        rdiscount (1.3.5)
        RedCloth (4.2.2)
        stemmer (1.0.1)
        syntax (1.0.0)

    Some output for path verification:

        $ echo $PATH
        /home/chris/.gem/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
        $ which haml
        /home/chris/.gem/bin/haml
        $ which jekyll
        /home/chris/.gem/bin/jekyll
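
    Since haml 2.2.3 clearly is installed, one thing worth checking is whether the Ruby interpreter that jekyll spawns can actually load the gem — GEM_HOME/GEM_PATH matter here, not just $PATH, and the "You must have the haml gem installed first" message suggests a failed require. A quick probe (a sketch, assuming RubyGems 1.3 behaviour):

        # can this interpreter load haml at all?
        ruby -rubygems -e "require 'haml'; puts 'haml loaded'"
        # does the gem environment include /home/chris/.gem?
        gem env gempath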

  • [Wireless LAN] hostapd gives an error when running on the target board

    - by Renjith G
    Hi, I got the following error when I tried to run the hostapd command on my target board. Any idea about this?

        /etc # hostapd -dd hostapd.conf
        Configuration file: hostapd.conf
        madwifi_set_iface_flags: dev_up=0
        madwifi_set_privacy: enabled=0
        BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits)
        Flushing old station entries
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3)
        Could not connect to kernel driver.
        Deauthenticate all stations
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2)
        madwifi_set_privacy: enabled=0
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=0
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=1
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=2
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=3
        Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid '"RG_WLAN Testing Renjith G"'
        SSID - hexdump_ascii(len=27):
            22 52 47 5f 57 4c 41 4e 20 54 65 73 74 69 6e 67   "RG_WLAN Testing
            20 52 65 6e 6a 69 74 68 20 47 22                   Renjith G"
        PSK (ASCII passphrase) - hexdump_ascii(len=12):
            6d 79 70 61 73 73 70 68 72 61 73 65               mypassphrase
        PSK (from passphrase) - hexdump(len=32): 70 6f a6 92 da 9c a8 3b ff 36 85 76 f3 11 9c 5e 5d 4a 4b 79 f4 4e 18 f6 b1 b8 09 af 6c 9c 6c 21
        madwifi_set_ieee8021x: enabled=1
        madwifi_configure_wpa: group key cipher=1
        madwifi_configure_wpa: pairwise key ciphers=0xa
        madwifi_configure_wpa: key management algorithms=0x2
        madwifi_configure_wpa: rsn capabilities=0x0
        madwifi_configure_wpa: enable WPA=0x1
        WPA: group state machine entering state GTK_INIT (VLAN-ID 0)
        GMK - hexdump(len=32): [REMOVED]
        GTK - hexdump(len=32): [REMOVED]
        WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0)
        madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1
        madwifi_set_privacy: enabled=1
        madwifi_set_iface_flags: dev_up=1
        ath0: Setup of interface done.
        l2_packet_receive - recvfrom: Network is down
        Wireless event: cmd=0x8b1a len=40
        Register Fail
        Register Fail
        WPA: group state machine entering state SETKEYS (VLAN-ID 0)
        GMK - hexdump(len=32): [REMOVED]
        GTK - hexdump(len=32): [REMOVED]
        wpa_group_setkeys: GKeyDoneStations=0
        WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0)
        madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=2
        Signal 2 received - terminating
        Flushing old station entries
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3)
        Could not connect to kernel driver.
        Deauthenticate all stations
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2)
        madwifi_set_privacy: enabled=0
        madwifi_set_ieee8021x: enabled=0
        madwifi_set_iface_flags: dev_up=0
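
    One detail visible in the log: the SSID hexdump is 27 bytes and begins and ends with 0x22 — hostapd does not strip quote characters from hostapd.conf values, so the quotes have become part of the SSID itself. For comparison, a minimal madwifi configuration reconstructed from the values in the debug output would look roughly like this (a sketch, not the poster's actual file):

        # hostapd.conf sketch — values taken from the log above
        interface=ath0
        driver=madwifi
        # no quotes: hostapd takes the value literally
        ssid=RG_WLAN Testing Renjith G
        wpa=1
        wpa_passphrase=mypassphrase
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP CCMP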

  • Creating a dynamic LACP trunk from an HP ProCurve 5412zl to a ProLiant DL380 G7

    - by Maalobs
    I'm configuring an IEEE 802.3ad (LACP) dynamic trunk from an HP ProCurve 5412zl switch (firmware version K.15.07) to an HP ProLiant DL380 G7 server. The DL380 has 4 NICs and is running Win2008 R2, so I'm teaming the NICs together and leaving everything on the recommended "automatic" setting in the HP NIC configuration tool. The server is one of two; they'll be connected on interfaces F17-F20 and F21-F24 respectively on the switch.

    I need the servers in a separate VLAN. Here is the configuration for the VLAN:

        vlan 10
           name "Lab_Mgmt"
           untagged B2,F17-F24
           ip address 172.22.71.3 255.255.255.0
           tagged B21
           exit

    There is a DHCP relay into VLAN 10 from another device beyond interface B21.

    The Advanced Traffic Management Guide says that in order to run a dynamic LACP trunk on another VLAN besides DEFAULT_VLAN, you need to first enable GVRP and then use "forbid" to stop the interfaces from automatically joining DEFAULT_VLAN when the dynamic trunk is created. GVRP brings some other stuff with it that I don't want or need, so I disable it with "unknown-vlans disable" on all other interfaces. Here is how I do it:

        procurve-5412zl-1(config)# gvrp
        procurve-5412zl-1(config)# interface A1-A24,B1-B24,C1-C24,D1-D10,D13-D24,E1-E24,F1-F16,K1,K2 unknown-vlans disable
        procurve-5412zl-1(config)# vlan 1 forbid F17-F24
        procurve-5412zl-1(config)# interface F17-F20 lacp active

    The result afterwards looks all successful:

        procurve-5412zl-1(config)# show trunks
        Load Balancing Method: L3-based (Default), L2-based if non-IP traffic

          Port | Name                             Type      | Group  Type
          ---- + -------------------------------- --------- + ------ --------
          F17  | XYZTEAM3_NIC1                    100/1000T | Dyn2   LACP
          F18  | XYZTEAM3_NIC2                    100/1000T | Dyn2   LACP
          F19  | XYZTEAM3_NIC3                    100/1000T | Dyn2   LACP
          F20  | XYZTEAM3_NIC4                    100/1000T | Dyn2   LACP

        procurve-5412zl-1(config)# vlan 10
        procurve-5412zl-1(vlan-10)# show lacp
                                   LACP

          Port   LACP      Trunk   Port      LACP      LACP      Admin   Oper
                 Enabled   Group   Status    Partner   Status    Key     Key
          ----   -------   -----   -------   -------   -------   -----   ----
          F17    Active    Dyn2    Up        Yes       Success   0       0
          F18    Active    Dyn2    Up        Yes       Success   0       0
          F19    Active    Dyn2    Up        Yes       Success   0       0
          F20    Active    Dyn2    Up        Yes       Success   0       0

    On the ProLiant server, the NIC configuration tool is also indicating that an 802.3ad dynamic trunk has been established. Everything should be good, but it isn't. The server is not getting an IP address from DHCP, which it does if I don't enable LACP. If I configure the server with a static IP address on the VLAN 10 subnet, it can't even ping the switch IP address, much less anything outside the VLAN. The switch can't ping the server either.

    I made another attempt with F17-F20 tagged, and checked the box "Default Native Tag (VLAN 10)" in the NIC configuration tool on the server, but there was no difference. Does anyone have any idea what I might have missed?
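
    One alternative worth noting: on ProCurve switches, a statically defined trunk interface can be made a member of a VLAN directly, which sidesteps the GVRP/forbid dance required for dynamic trunks entirely. A sketch of that alternative, using the same CLI conventions as above (whether static LACP suits the HP teaming mode in use is an assumption):

        procurve-5412zl-1(config)# trunk F17-F20 trk1 lacp
        procurve-5412zl-1(config)# vlan 10 untagged trk1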

  • Can't seem to install imagick

    - by PolishHurricane
    I'm trying to install the PHP PECL extension "imagick" (ImageMagick), but failing horribly. I seem to keep needing to install packages to progress, but this one has me stumped: it fails all the way at the bottom. Please note: I'm using Arch Linux, so apt-get doesn't save me.

        [root@Crux tmp]# pecl install imagick
        downloading imagick-3.0.1.tgz ...
        Starting to download imagick-3.0.1.tgz (93,920 bytes)
        .....................done: 93,920 bytes
        13 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20100412
        Zend Module Api No:      20100525
        Zend Extension Api No:   220100525
        Please provide the prefix of Imagemagick installation [autodetect] :
        building in /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1
        running: /tmp/pear/temp/imagick/configure --with-imagick
        [... routine configure checks cut: compiler, headers, and libtool checks all succeed ...]
        checking whether to enable the imagick extension... yes, shared
        checking whether to enable the imagick GraphicsMagick backend... no
        checking ImageMagick MagickWand API configuration program... found in /usr/bin/MagickWand-config
        checking if ImageMagick version is at least 6.2.4... found version 6.7.8 Q16
        checking for MagickWand.h header file... found in /usr/include/ImageMagick/wand/MagickWand.h
        checking PHP version is at least 5.1.3... yes. found 5.4.6
        [... more successful checks cut ...]
        creating libtool
        appending configuration tag "CXX" to libtool
        configure: creating ./config.status
        config.status: creating config.h
        running: make
        /bin/sh /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/libtool --mode=compile cc ... -c /tmp/pear/temp/imagick/imagick_class.c -o imagick_class.lo
        /tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimagematte':
        /tmp/pear/temp/imagick/imagick_class.c:268:2: warning: 'MagickGetImageMatte' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:82) [-Wdeprecated-declarations]
        /tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getsizeoffset':
        /tmp/pear/temp/imagick/imagick_class.c:406:2: warning: passing argument 2 of 'MagickGetSizeOffset' from incompatible pointer type [enabled by default]
        /usr/include/ImageMagick/wand/magick-property.h:87:3: note: expected 'ssize_t *' but argument is of type 'long int *'
        [... dozens of similar -Wdeprecated-declarations and incompatible-pointer-type warnings cut ...]
        /tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_setfont':
        /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: 'struct _php_core_globals' has no member named 'safe_mode'
        /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: 'CHECKUID_CHECK_FILE_AND_DIR' undeclared (first use in this function)
        /tmp/pear/temp/imagick/imagick_class.c:1442:3: note: each undeclared identifier is reported only once for each function it appears in
        /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: 'CHECKUID_NO_ERRORS' undeclared (first use in this function)
        [... further deprecation and pointer-type warnings cut ...]
        /tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_setimageprogressmonitor':
        /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'struct _php_core_globals' has no member named 'safe_mode'
        /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'CHECKUID_CHECK_FILE_AND_DIR' undeclared (first use in this function)
        /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'CHECKUID_NO_ERRORS' undeclared (first use in this function)
        make: *** [imagick_class.lo] Error 1
        ERROR: `make' failed
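
    The two hard errors (at imagick_class.c:1442 and :9534) are telling: safe_mode and the CHECKUID macros were removed from the PHP core in PHP 5.4, and the configure log shows PHP 5.4.6, so imagick 3.0.1 simply predates this PHP release — everything else in the output is warnings. Under that assumption, pulling a release that knows about PHP 5.4 may get past this (whether the beta channel carried such a release at the time is a guess):

        # confirm the PHP version the build saw
        php -v
        # try the beta imagick release instead of stable 3.0.1
        pecl install imagick-beta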

  • Setting up MSSQL replication with a peer-to-peer topology: problem enabling conflict detection

    - by Roel
    Hi, I'm setting up a SQL replication strategy using MSSQL 2008 with peer-to-peer publications (2 servers, each one subscribing to the other). I followed this HOWTO from MSDN, and the setup seems to be working fine: add a record to one table on server A, and a query on server B shows the new record. So far, so good.

    So far I only have one table, 'Templates':

        Id       PK (calculated field)
        NodeId   int, default 1/2 (Server A = 1, Server B = 2)
        LocalId  int, autoid
        Name     nvarchar(100)

    Now I would like to enable 'conflict detection', which should be enabled by default. But every time I try to save the 'Conflict Detection' feature in the Publication Properties, I get the following error:

        Cannot save Peer conflict detection properties.
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

        Program Location:
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand)
           at Microsoft.SqlServer.Replication.ReplicationObject.ExecCommand(String commandIn)
           at Microsoft.SqlServer.Replication.TransPublication.SetPeerConflictDetection(Boolean enablePeerConflictDetection, Int32 peerOriginatorID)
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveP2PConflictDetection()
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveProperties(ExecutionMode& executionResult)

        Column name 'Id' does not exist in the target table or view.
        Changed database context to 'TestDB'. (.Net SqlClient Data Provider)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=1911&LinkId=20476
        Server Name: SERVER_A
        Error Number: 1911
        Severity: 16
        State: 1
        Line Number: 2

        Program Location:
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
           at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
           at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)

    Now, I googled the hell out of this error and nothing shows up. I also can't seem to find out which table the "Column name 'Id' does not exist..." error is actually about. Has anyone ever done this successfully? Am I missing something? Having this setup without conflict detection feels pretty useless...

    EDIT: OK, so after some more research and setting up with different databases etc., I found out that the calculated 'Id' column of the Templates table is the culprit. I don't know why, but the replication doesn't seem to allow calculated columns (which are also the primary key). It works now too, without the 'Id' column, using NodeId and LocalId as a combined PK. So now the question is: why isn't it allowed to have a calculated column as PK for replication with conflict detection?
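
    For concreteness, here is a sketch of the schema shape that ended up working according to the EDIT above — column names are taken from the post, but the exact types, defaults, and constraint names are assumptions:

        -- computed 'Id' PK replaced by a composite key of the two real columns
        CREATE TABLE dbo.Templates (
            NodeId  int NOT NULL CONSTRAINT DF_Templates_NodeId DEFAULT (1),  -- default 2 on server B
            LocalId int IDENTITY(1,1) NOT NULL,
            Name    nvarchar(100) NULL,
            CONSTRAINT PK_Templates PRIMARY KEY (NodeId, LocalId)
        );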

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation.

    Platform: Win2003 Server with NTFS.
    Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.

    We have tried multiple things to try and solve this, gleaned from a few websites, e.g.:

    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - The Windows File/Printer Sharing memory setting (Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications, under System - Advanced - Performance - Advanced)
    - NtfsDisableLastAccessUpdate — use fsutil behavior set disablelastaccess 1
    - Disabling 8.3 name generation — use "fsutil behavior set disable8dot3 1" + restart
    - Enabling a large file system cache
    - Disabling paging of the kernel code
    - IO Page Lock Limit
    - Turning off (or on) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (note: you must have at least 1GB free space; sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated.

    Richard

    ProcMon result (link)
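
    For reference, the two fsutil tweaks from the list above as they would be run from an elevated command prompt — a sketch only, and note that the post itself flags a restart for the 8.3 change, which in any case only affects newly created files:

        fsutil behavior set disablelastaccess 1
        fsutil behavior set disable8dot3 1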

  • Laptop LCD sometimes stops working on reboot. Please help.

    - by J Ringle
    I have a Gateway P-6831FX laptop with Vista Ultimate. The laptop LCD will sometimes not come on after I reboot the computer. I don't even close the lid, and it happens. It isn't dim — it doesn't come on at all. No POST screen from the BIOS, nothing. Please note: this happens sometimes, not every time. Frustrating!

    When plugged into an external monitor, which works fine, Vista display properties can't even "sense" the laptop LCD. I try to enable the laptop LCD for dual display, turning on the laptop LCD, and it does nothing. It's like the laptop LCD is not even there. Manually holding a magnet to the laptop lid sensing switch (the sensor that turns off the display/enters sleep mode when you close the lid) sometimes causes the LCD backlight to "turn on" but not display any images. By "turn on" I mean I can see the screen backlight turn on to a dark gray screen instead of pitch black. On a subsequent reboot, the laptop display is not working again!

    Here are the facts:

    - It only happens at random, and only after a reboot. Waking from sleep mode isn't a problem.
    - Pressing the F4 function key for dual display does nothing when this happens.
    - Closing the lid doesn't seem to be related (unless it is, only after a reboot).
    - Using an external magnet on the laptop screen sensor sometimes triggers the backlight to turn on, but on reboot it's back to square one with no LCD display.
    - An external display always works fine.
    - I have taken apart the LCD and checked all wires and ribbons for loose connections or damage.
    - I have replaced the inverter.
    - It doesn't seem to be heat-related, as I can put it in sleep mode and resume fine when very hot (the external monitor works fine too).
    - Sometimes the screen works fine, as if there is no problem at all, even after a reboot... this is random.

    Any ideas out there? If it is a bad part, which one? The LCD seems to be fine. What are the odds of two bad inverters? The backlight is fine. The LCD wires/ribbons seem to be fine. I am at a loss. No warranty left, and Gateway tech support is clueless. Thanks for any feedback that might help.

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera's Hadoop distribution (CDH3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node, running the namenode and jobtracker; the others are slave nodes.

    Normally I use this cluster from the local network for testing. However, when I try to access it from another network, I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I am able to reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed.

    What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely?

    Here is my mapred-site.xml file:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>master:8021</value>
          </property>
          <!-- Enable Hue plugins -->
          <property>
            <name>mapred.jobtracker.plugins</name>
            <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
            <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
          </property>
          <property>
            <name>jobtracker.thrift.address</name>
            <value>0.0.0.0:9290</value>
          </property>
        </configuration>

    And this is in the /etc/hosts file:

        192.168.1.15 master
        192.168.1.14 slave1
        192.168.1.13 slave2
        192.168.1.9  slave3
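
    One thing that stands out from the two files above: the jobtracker is published as master:8021, and "master" resolves to the private address 192.168.1.15, so a client on another network has no way to reach it even if port 8021 is forwarded — the web UIs work because the browser never asks the cluster for an address. A client-side sketch of what the remote machine's mapred-site.xml would need (the hostname is a placeholder for whatever public name resolves to the cluster host):

        <property>
          <name>mapred.job.tracker</name>
          <value>cluster.example.com:8021</value>
        </property>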

  • Graphics driver for Ubuntu on a Dell Latitude XT

    - by marc.riera
    Hi, we have a laptop (Dell Latitude XT) at our company, and we would like to install Ubuntu on it. Windows 7 works fine out of the box, so the hardware is fine. Since this laptop has a touchscreen, we installed Ubuntu 10.10 Netbook Edition 32-bit. But we cannot manage to enable the touchscreen, nor the VGA graphics drivers.

    This is the output from lspci, if anybody cares:

        00:00.0 Host bridge: ATI Technologies Inc Radeon Xpress 7930 Host Bridge
        00:01.0 PCI bridge: ATI Technologies Inc RS7932 PCI Bridge
        00:04.0 PCI bridge: ATI Technologies Inc Device 7934
        00:06.0 PCI bridge: ATI Technologies Inc RS7936 PCI Bridge
        00:07.0 PCI bridge: ATI Technologies Inc Device 7937
        00:13.0 USB Controller: ATI Technologies Inc SB600 USB (OHCI0)
        00:13.1 USB Controller: ATI Technologies Inc SB600 USB (OHCI1)
        00:13.2 USB Controller: ATI Technologies Inc SB600 USB (OHCI2)
        00:13.3 USB Controller: ATI Technologies Inc SB600 USB (OHCI3)
        00:13.4 USB Controller: ATI Technologies Inc SB600 USB (OHCI4)
        00:13.5 USB Controller: ATI Technologies Inc SB600 USB Controller (EHCI)
        00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 14)
        00:14.1 IDE interface: ATI Technologies Inc SB600 IDE
        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
        00:14.3 ISA bridge: ATI Technologies Inc SB600 PCI to LPC Bridge
        00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
        01:05.0 VGA compatible controller: ATI Technologies Inc Radeon Xpress 1250
        03:01.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller
        03:01.1 FireWire (IEEE 1394): Texas Instruments PCIxx12 OHCI Compliant IEEE 1394 Host Controller
        03:01.3 SD Host controller: Texas Instruments PCIxx12 SDA Standard Compliant SD Host Controller
        09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express
        0b:00.0 Network controller: Broadcom Corporation BCM4321 802.11a/b/g/n (rev 03)

    I've tried to install the ATI 9.3 drivers, which I downloaded and installed, unpacked and installed, built and installed — but nothing worked. It looks like that latest release only officially supports Jaunty 9.04, so the drivers are kind of old. What else can I do? Thanks. Marc

    Information added — lsusb and lspci -n | grep 01:05.0:

        sysop@wl083517:~$ lspci -n |grep 01:05.0
        01:05.0 0300: 1002:7942
        sysop@wl083517:~$ lsusb
        Bus 006 Device 002: ID 413c:8138 Dell Computer Corp. Wireless 5520 Voda I Mobile Broadband (3G HSDPA) Minicard EAP-SIM Port
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 002: ID 413c:8140 Dell Computer Corp. Wireless 360 Bluetooth
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 002: ID 1b96:0001 N-Trig Duosense Transparent Electromagnetic Digitizer
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 03f0:1807 Hewlett-Packard
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
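
    On the touchscreen side, the lsusb output above already shows the digitizer hardware (ID 1b96:0001 N-Trig Duosense). A quick first check of whether X registers it at all — a sketch, assuming the xinput utility is installed:

        # is the N-Trig digitizer registered as an X input device?
        xinput list | grep -i -e n-trig -e duosense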

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I've been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then I create a Vagrant box from it (see here, here). Finally I run Puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4; the Vagrant box itself is CentOS 6.2.

    This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my Puppet run, I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for that project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. epel, are defined in Veewee's postinstall.sh right at the top of the file. Finally, this is what I get when I run '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course HOME was set whenever I logged into the box: for user vagrant it was '/home/vagrant' and for root it was '/root'.

    As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
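
    Worth noting: Puppet runs exec resources without a login environment, so HOME is typically unset there even though it is set in interactive shells — which would match the symptom exactly. A sketch of the same resource with the environment pinned (Puppet's exec type supports an environment parameter; the chosen HOME value is an assumption):

        exec{"rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          command     => "rabbitmq-plugins enable rabbitmq_management",
          environment => ["HOME=/root"],  # erlexec needs HOME; non-login execs don't get one
          require     => Package["rabbitmq-server"]
        }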

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One, in my office, has an MD1220 attached. The other, in the datacenter of my hosting vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 in my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

    - Seq. writes on the MD1220 in my office: 1.1 GB/s (bonnie++), 1.3 GB/s (dd)
    - Seq. writes on the MD3200 at my vendor's: 240 MB/s (bonnie++), 310 MB/s (dd)

    Unfortunately I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me if I'm missing something here and my expectations are too high.

    To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per couple of drives in RAID10 on the MD3200? Is there any trick to enabling such performance in the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter?

    More details about the configurations.

    The good one in my office:

    - Dell R710, 2 CPUs (X5650 @ 2.67GHz, 12 cores), 96GB DDR3
    - OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
    - 20x300GB 2.5" SAS 10K drives in a single RAID10, 1MB chunk size, on an MD1220 + Dell H800 I/O controller with 1GB cache in the host

    The not-so-good one at my vendor's:

    - Dell R710, 2 CPUs (L5520 @ 2.27GHz, 8 cores), 144GB DDR3
    - OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
    - 20x146GB 2.5" SAS 15K drives in a single RAID10, 512KB chunk size, on a Dell MD3200 with 2 I/O controllers in the array, 1GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but with the storage being two RAIDs of 14x146GB 15K RPM drives in RAID10 on an MD3000+MD1000, striped together at the OS level. The performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6i), I got about 128 MB/s seq. writes — just two internal drives gave me about half of 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

    Thank you for looking into it. Regards, Igor
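
    For anyone reproducing the comparison, a dd probe of the kind quoted above might look like the following sketch (the file path and size are placeholders; oflag=direct bypasses the page cache so the array, not host RAM, is what gets measured):

        # 16 GiB sequential write in 1 MiB blocks, bypassing the page cache
        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=16384 oflag=direct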

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a Java app and ES. Both are on the same server and communicate with each other. The problem I am having is that my firewall configuration prevents Java from connecting to ES. Not sure why, really. I have tried a lot of stuff, like opening the port range 9200:9400 to the server IP, without any luck; from what I know, all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this Java app; ES uses the port range 9200:9400. This is my iptables script:
    echo -e Deleting rules for INPUT chain
    iptables -F INPUT
    echo -e Deleting rules for OUTPUT chain
    iptables -F OUTPUT
    echo -e Deleting rules for FORWARD chain
    iptables -F FORWARD
    echo -e Setting by default the drop policy on each chain
    iptables -P INPUT DROP
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD DROP
    echo -e Open all ports from/to localhost
    iptables -A INPUT -i lo -j ACCEPT
    echo -e Open SSH port 22 with brute force security
    iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
    iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    echo -e Open NGINX port 80
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    echo -e Open NGINX SSL port 443
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    echo -e Enable DNS
    iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
    iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
    And I get this in the Java app when this config is in place:
    org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)
    Do any of you see any problem with this configuration and ES? Thanks in advance
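
    One gap worth checking: the -i lo rule only covers traffic that actually uses the loopback interface, but Elasticsearch usually binds to (and discovers cluster peers on) the machine's primary address, and the "no master" error suggests discovery traffic is being dropped. Older ES versions also default to zen multicast discovery. The rules below are a hedged sketch to test, not a confirmed fix - SERVER_IP is a placeholder for the host's own IP, and the multicast group/port are the documented pre-1.x defaults:
    # allow the ES port range between the host and itself on its real address
    iptables -A INPUT -p tcp -s SERVER_IP -d SERVER_IP --dport 9200:9400 -j ACCEPT
    # allow zen multicast discovery (default group 224.2.2.4, UDP port 54328)
    iptables -A INPUT -p udp -d 224.2.2.4 --dport 54328 -j ACCEPT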

    Read the article

  • setting up a basic mod_proxy virtual host

    - by SevenProxies
    I'm trying to set up a basic virtual host to proxy all requests to test.local to a WEBrick server I have running on 127.0.0.1:8080, while keeping all requests to localhost going to my static files in /var/www. I'm running Ubuntu 10.04. I have libapache2-mod-proxy-html installed and I have the module enabled with a2enmod proxy. I also have my virtual host enabled. However, whenever I go to test.local I always get a cryptic 500 server error, and all my logs tell me is:
    [Thu Mar 03 01:43:10 2011] [warn] proxy: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
    Here's my virtual host:
    <VirtualHost test.local:80>
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        ServerAdmin webmaster@localhost
        ServerName test.local
        ProxyPreserveHost On
        # prevents this folder from being proxied
        ProxyPass /static !
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
    and here are my settings for mod_proxy:
    <IfModule mod_proxy.c>
        #turning ProxyRequests on and allowing proxying from all may allow
        #spammers to use your proxy to send email.
        ProxyRequests Off
        <Proxy *>
            # default settings
            #AddDefaultCharset off
            #Order deny,allow
            #Deny from all
            ##Allow from .example.com
            AddDefaultCharset off
            Order allow,deny
            Allow from all
        </Proxy>
        # Enable/disable the handling of HTTP/1.1 "Via:" headers.
        # ("Full" adds the server version; "Block" removes all outgoing Via: headers)
        # Set to one of: Off | On | Full | Block
        ProxyVia On
    </IfModule>
    Does anybody know what I'm doing wrong? Thanks
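
    That warning is usually literal: mod_proxy itself is loaded, but the protocol handler for http:// URLs lives in the separate mod_proxy_http submodule, which a2enmod proxy does not enable. A likely fix on Ubuntu 10.04 (module names assumed from the stock Apache packaging):
    sudo a2enmod proxy proxy_http
    sudo /etc/init.d/apache2 restart
    With that in place, the LoadModule line inside the VirtualHost can be removed, since module loading belongs in the server-wide config anyway.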

    Read the article

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces, it gets stuck on startup at the starting SeaBIOS page with this message:
    Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)
    So far I've verified this with two VMs, an Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces?
    Edit: Additionally, the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging:
    /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta \
      -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults \
      -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline \
      -rtc base=utc -boot c \
      -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw \
      -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
      -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
      -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
      -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 \
      -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 \
      -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 \
      -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 \
      -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus \
      -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):
    14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed
    I've also tried disabling AppArmor, but that doesn't seem to make a difference.
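
    One low-risk experiment, offered as an assumption to test rather than a confirmed fix: SeaBIOS scans each NIC's option ROM at boot, and the emulated rtl8139 model is a common suspect when adding NICs causes boot hangs. Switching the model (virsh edit vyatta, then change each interface's model element while leaving its existing source untouched) would look like:
    <interface type='bridge'>
      <source bridge='br0'/>   <!-- keep whatever source each interface already has -->
      <model type='virtio'/>
    </interface>
    Vyatta ships virtio drivers, so the guest should still see all four interfaces if this works.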

    Read the article

  • How to change mouse pointer icon in Xfce Debian 7 Wheezy?

    - by kadaj
    I copied the cursor theme (oxy-neon or Oxygen Neon) to /usr/share/icons, and from Applications Menu - Settings - Mouse I am able to see the new theme. I clicked on it, but the pointer doesn't change. However, the text typing icon ('I'), busy icon, hand icon, and resize window icons did change. The pointer icon remains the same, the black Adwaita. I removed the Adwaita folder from the icons folder, and still the mouse pointer doesn't change. Is the pointer theme specified elsewhere? I have no cursor setting under my home directory. I tried logging out, restarting, and restarting xfwm4, but nothing works. I just found that the pointer icon changes when the pointer is inside Firefox, but it's not consistent: it keeps changing when I click menu items. Very weird. Any idea how to fix this? This is the output of running gsettings list-recursively org.gnome.desktop.interface:
    ~$ gsettings list-recursively org.gnome.desktop.interface
    org.gnome.desktop.interface automatic-mnemonics true
    org.gnome.desktop.interface buttons-have-icons false
    org.gnome.desktop.interface can-change-accels false
    org.gnome.desktop.interface clock-format '24h'
    org.gnome.desktop.interface clock-show-date false
    org.gnome.desktop.interface clock-show-seconds false
    org.gnome.desktop.interface cursor-blink true
    org.gnome.desktop.interface cursor-blink-time 1200
    org.gnome.desktop.interface cursor-blink-timeout 10
    org.gnome.desktop.interface cursor-size 24
    org.gnome.desktop.interface cursor-theme 'Adwaita'
    org.gnome.desktop.interface document-font-name 'Sans 11'
    org.gnome.desktop.interface enable-animations true
    org.gnome.desktop.interface font-name 'Cantarell 11'
    org.gnome.desktop.interface gtk-color-palette 'black:white:gray50:red:purple:blue:light blue:green:yellow:orange:lavender:brown:goldenrod4:dodger blue:pink:light green:gray10:gray30:gray75:gray90'
    org.gnome.desktop.interface gtk-color-scheme ''
    org.gnome.desktop.interface gtk-im-module ''
    org.gnome.desktop.interface gtk-im-preedit-style 'callback'
    org.gnome.desktop.interface gtk-im-status-style 'callback'
    org.gnome.desktop.interface gtk-key-theme 'Default'
    org.gnome.desktop.interface gtk-theme 'Adwaita'
    org.gnome.desktop.interface gtk-timeout-initial 200
    org.gnome.desktop.interface gtk-timeout-repeat 20
    org.gnome.desktop.interface icon-theme 'gnome'
    org.gnome.desktop.interface menubar-accel 'F10'
    org.gnome.desktop.interface menubar-detachable false
    org.gnome.desktop.interface menus-have-icons false
    org.gnome.desktop.interface menus-have-tearoff false
    org.gnome.desktop.interface monospace-font-name 'Monospace 11'
    org.gnome.desktop.interface show-input-method-menu true
    org.gnome.desktop.interface show-unicode-menu true
    org.gnome.desktop.interface text-scaling-factor 1.0
    org.gnome.desktop.interface toolbar-detachable false
    org.gnome.desktop.interface toolbar-icons-size 'large'
    org.gnome.desktop.interface toolbar-style 'both-horiz'
    org.gnome.desktop.interface toolkit-accessibility false
    ~$
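
    Applications resolve the pointer through the X cursor theme named "default", so a theme picked only in the Xfce dialog can still be overridden per toolkit - which would also explain Firefox behaving differently. A common fix is an index.theme file that makes "default" inherit the new theme; a minimal sketch, assuming the theme directory really is named oxy-neon:
    # ~/.icons/default/index.theme  (or /usr/share/icons/default/index.theme)
    [Icon Theme]
    Inherits=oxy-neon
    A fresh login is typically needed afterwards for running applications to pick it up.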

    Read the article

  • libvirt qemu/kvm migration problem

    - by Panda
    I am using KVM and libvirt on my Dell server. Now I am trying to migrate one virtual machine from one physical server to another. However, it fails every time. In virsh on physicalServer1, I typed:
    virsh # migrate virtualmachine1 qemu+ssh://username@physicalServer2/system
    error: operation failed: migration to 'tcp:physicalServer2:49163' failed: migration failed
    Then I searched the FAQ section on libvirt.org. It says:
    error: operation failed: migration to '...' failed: migration failed
    This is an error often encountered when trying to migrate with QEMU/KVM. This typically happens with plain migration, when the source VM cannot connect to the destination host. You will want to make sure your hosts are properly configured for migration (see the migration section of this FAQ)
    I managed to ssh to physicalServer2 from a shell on virtualmachine1, so the highlighted part above does not explain my failure. I also opened ports on physicalServer2; iptables -L shows the following:
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
    ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
    ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
    ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:49152:49215
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
    ACCEPT     all  --  192.168.122.0/24     anywhere
    ACCEPT     all  --  anywhere             anywhere
    REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
    REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
    ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
    ACCEPT     all  --  192.168.122.0/24     anywhere
    ACCEPT     all  --  anywhere             anywhere
    REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
    REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer2:
    2011-05-06 13:37:30.708: starting up
    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 2048 \
      -smp 1,sockets=1,cores=1,threads=1 -name openjudge-test -uuid a8c704bc-a4f9-90db-3e57-40e60b00aac1 -nodefconfig -nodefaults \
      -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/virtualmachine1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline \
      -rtc base=utc -boot c -drive file=/media/nfs/virtualmachine1.img,if=none,id=drive-ide0-0-0,format=raw \
      -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
      -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 \
      -device rtl8139,netdev=hostnet0,id=net0,mac=00:16:36:8a:22:a0,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 \
      -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:2 -vga cirrus -incoming tcp:0.0.0.0:49163 \
      -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
    char device redirected to /dev/pts/0
    2011-05-06 13:37:30.915: shutting down
    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer1 is empty. Both physical servers are running Ubuntu 11.04. libvirt and KVM were installed with apt-get. The libvirt version is 0.8.8.
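
    Since the destination log shows qemu starting with -incoming tcp:0.0.0.0:49163 and shutting down 200 ms later, it helps to confirm that the source host can open that port directly - the migration stream is a separate TCP connection to whatever address 'physicalServer2' resolves to on the source, not part of the SSH channel. Two hedged checks (names as in the question):
    # from physicalServer1, while a migration attempt is in flight:
    telnet physicalServer2 49163
    # and retry the migration with verbose output for more detail:
    virsh migrate --verbose virtualmachine1 qemu+ssh://username@physicalServer2/system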

    Read the article

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side, I have a Linux-based firewall running Debian with Openswan installed. I am having an issue getting to Phase 2 of the VPN negotiation. Here is the Cisco information I was sent ({my_public_ip} = left side of the connection):
    tunnel-group {my_public_ip} type ipsec-l2l
    tunnel-group {my_public_ip} ipsec-attributes
    pre-shared-key fakefake
    crypto map vpn1 1 match add customer-ipsec
    crypto map vpn1 1 set peer {my_public_ip}
    crypto map vpn1 1 set transform-set aes-256-sha
    crypto map vpn1 interface outside
    static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255
    crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    crypto map vpn1 1 match address customer-ipsec
    crypto map vpn1 1 set peer {my_public_ip}
    crypto map vpn1 1 set transform-set aes-256-sha
    crypto map vpn1 interface outside
    crypto isakmp enable outside
    crypto isakmp policy 1
    authentication pre-share
    encryption aes-256
    hash sha
    group 2
    lifetime 86400
    My side's ipsec.conf:
    config setup
        klipsdebug=none
        plutodebug=none
        protostack=netkey
        #nat_traversal=yes
    conn cisco
        #name of VPN connection
        type=tunnel
        authby=secret
        #left side (my side)
        left={myPublicIP}
        leftsubnet=172.16.250.0/24    #net subnet on left side to assign to right side
        leftnexthop=%defaultroute
        #right security gateway (ASA side)
        right={CiscoASA_publicIP}    #cisco ASA
        rightsubnet=10.2.1.0/24
        rightnexthop=%defaultroute
        #crypto stuff
        keyexchange=ike
        ikelifetime=86400s
        auth=esp
        pfs=no
        compress=no
        auto=start
    ipsec.secrets file:
    {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake"
    When I start ipsec from the left side (my side) I don't receive any errors; however, when I run the ipsec auto --status command I get:
    000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0
    000 "cisco":     myip=unset; hisip=unset;
    000 "cisco":   ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
    000 "cisco":   policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0;
    000 "cisco":   newest ISAKMP SA: #0; newest IPsec SA: #0;
    000
    000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate
    000 #2: pending Phase 2 for "cisco" replacing #0
    Now, I'm new to setting up a site-to-site IPSEC tunnel, so I'm unsure what the status information means. All I know is that it sits at this "pending Phase 2" and I can't ping the other side. Another question I have: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Debian firewall on my side? I appreciate your time looking at this.
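
    The line "STATE_MAIN_I1 (sent MI1, expecting MR1)" means the ASA never answered the first Phase 1 packet, so "pending Phase 2" is just a downstream symptom. Since the ASA policy is AES-256/SHA/group 2, one hedged step is to state those proposals explicitly in the conn instead of relying on Openswan's defaults (the values below mirror the ASA commands above, but verify them against the actual ASA config):
    conn cisco
        ike=aes256-sha1;modp1024
        phase2alg=aes256-sha1
        keylife=28800s
    As for route -n: with protostack=netkey there is no ipsec0 interface to create and no tunnel route to see - the kernel handles the tunnel through XFRM policies - so nothing showing up there is expected.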

    Read the article

  • Configure Postfix to Port other than 25

    - by bwheeler96
    I've done quite a bit of googling on how to reconfigure Postfix to work on a different port, but I still can't find the line(s) people keep talking about in my master.cf. I'm using OS X Mountain Lion, and my ISP blocks traffic both ways on port 25. People have said to look for a line that says:
    smtp      inet  n       -       n       -       -       smtpd
    I can't find it. This is (what I believe to be) unmodified:
    # ==== Begin auto-generated section ========================================
    # This section of the master.cf file is auto-generated by the Server Admin
    # Mail backend plugin whenever mails settings are modified.
    smtp      inet  n       -       n       -       1       postscreen
    smtpd     pass  -       -       n       -       -       smtpd
    dnsblog   unix  -       -       n       -       0       dnsblog
    tlsproxy  unix  -       -       n       -       0       tlsproxy
    submission inet n       -       n       -       -       smtpd
      -o smtpd_tls_security_level=encrypt
    smtp      unix  -       -       n       -       -       smtp
    # === End auto-generated section ===========================================
    # Modern SMTP clients communicate securely over port 25 using the STARTTLS command.
    # Some older clients, such as Outlook 2000 and its predecessors, do not properly
    # support this command and instead assume a preconfigured secure connection
    # on port 465. This was sometimes called "smtps", but such usage was never
    # approved by the IANA and therefore conflicts with another, legitimate assignment.
    # For more details about managing secure SMTP connections with postfix, please see:
    # http://www.postfix.org/TLS_README.html
    # To read more about configuring secure connections with Outlook 2000, please read:
    # http://support.microsoft.com/default.aspx?scid=kb;en-us;Q307772
    # Apple does not support the use of port 465 for this purpose.
    # After determining that connecting clients do require this behavior, you may choose
    # to manually enable support for these older clients by uncommenting the following
    # four lines.
    #465      inet  n       -       n       -       -       smtpd
    #  -o smtpd_tls_wrappermode=yes
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    #  -o milter_macro_daemon_name=ORIGINATING
    #628      inet  n       -       n       -       -       smtp
    pickup    fifo  n       -       n       60      1       pickup
    cleanup   unix  n       -       n       -       0       cleanup
    qmgr      fifo  n       -       n       300     1       qmgr
    #qmgr     fifo  n       -       n       300     1       oqmgr
    tlsmgr    unix  -       -       n       1000?   1       tlsmgr
    rewrite   unix  -       -       n       -       -       trivial-rewrite
    bounce    unix  -       -       n       -       0       bounce
    defer     unix  -       -       n       -       0       bounce
    trace     unix  -       -       n       -       0       bounce
    verify    unix  -       -       n       -       1       verify
    sacl-cache unix -       -       n       -       1       sacl-cache
    flush     unix  n       -       n       1000?   0       flush
    proxymap  unix  -       -       n       -       -       proxymap
    proxywrite unix -       -       n       -       1       proxymap
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay     unix  -       -       n       -       -       smtp
      -o smtp_fallback_relay=
    #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq     unix  n       -       n       -       -       showq
    error     unix  -       -       n       -       -       error
    retry     unix  -       -       n       -       -       error
    discard   unix  -       -       n       -       -       discard
    local     unix  -       n       n       -       -       local
    virtual   unix  -       n       n       -       -       virtual
    lmtp      unix  -       -       n       -       -       lmtp
    anvil     unix  -       -       n       -       1       anvil
    scache    unix  -       -       n       -       1       scache
    #
    # ====================================================================
    # Interfaces to non-Postfix software. Be sure to examine the manual
    # pages of the non-Postfix software to find out what options it wants.
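
    For what it's worth, the line people describe is present here, just in a different shape: on Mountain Lion, port 25 is handed to postscreen first (the "smtp inet ... postscreen" line at the top of the auto-generated section). To listen on an additional port, master.cf also accepts a bare port number in the first column; a minimal sketch, assuming 2525 is the desired port:
    # added to master.cf: accept SMTP on port 2525 as well
    2525      inet  n       -       n       -       -       smtpd
    Then reload Postfix:
    sudo postfix reload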

    Read the article

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...
    Specs:
    Asus M4N68T-M LE V2 (BIOS 0702, most recent)
    OCZ Vertex 2 SSD 60 GB
    AMD Athlon II X4 640
    Patriot PSD34G13332 4GB DDR3 RAM (two banks)
    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).
    EDIT2: OK, so that doesn't work. The install detects the SSD; however, it cannot format it.
    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.
    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.
    EDIT5: Parted Magic did recognize the drive, and hdparm -I could even tell me the drive was in a frozen state. I power-cycled it (just pulled the plug from the SSD and plugged it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to Ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying HDIO_DRIVE_CMD(identify) failed: Invalid exchange.
    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously) Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(
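
    The frozen state that Parted Magic reported can be checked from any live environment, which makes it easy to see whether the Ubuntu installer is hitting the drive in the same condition (device name assumed to be /dev/sda):
    sudo hdparm -I /dev/sda | grep -i frozen
    # "not frozen" is what you want to see before formatting or flashing firmware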

    Read the article

  • SSH closing by itself - root works fine

    - by Antti
    I'm trying to connect to a server, but if I use any user other than root, the connection closes itself after a successful login:
    XXXXXXX:~ user$ ssh -v [email protected]
    OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
    debug1: Reading configuration data /etc/ssh_config
    debug1: Connecting to XXXXXXX.XXXXXX.XXX [xxx.xxx.xxx.xxx] port 22.
    debug1: Connection established.
    debug1: identity file /Users/user/.ssh/identity type -1
    debug1: identity file /Users/user/.ssh/id_rsa type -1
    debug1: identity file /Users/user/.ssh/id_dsa type 2
    debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
    debug1: match: OpenSSH_4.3 pat OpenSSH_4*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.2
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host 'XXXXXXX.XXXXXX.XXX' is known and matches the RSA host key.
    debug1: Found key in /Users/user/.ssh/known_hosts:12
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,gssapi-with-mic,password
    debug1: Next authentication method: publickey
    debug1: Offering public key: /Users/user/.ssh/woo_openssh
    debug1: Authentications that can continue: publickey,gssapi-with-mic,password
    debug1: Offering public key: /Users/user/.ssh/sidlee.dsa
    debug1: Authentications that can continue: publickey,gssapi-with-mic,password
    debug1: Trying private key: /Users/user/.ssh/identity
    debug1: Trying private key: /Users/user/.ssh/id_rsa
    debug1: Offering public key: /Users/user/.ssh/id_dsa
    debug1: Authentications that can continue: publickey,gssapi-with-mic,password
    debug1: Next authentication method: password
    [email protected]'s password:
    debug1: Authentication succeeded (password).
    debug1: channel 0: new [client-session]
    debug1: Entering interactive session.
    Last login: Mon Mar 29 01:41:51 2010 from 193.67.179.2
    debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
    debug1: channel 0: free: client-session, nchannels 1
    Connection to XXXXXXX.XXXXXX.XXX closed.
    Transferred: sent 2976, received 2136 bytes, in 0.5 seconds
    Bytes per second: sent 5892.2, received 4229.1
    debug1: Exit status 1
    If I log in as root the exact same way, it works as expected. I've added the users I want to log in with to a group (sshusers) and added that group to /etc/sshd_config:
    AllowGroups sshusers
    I'm not sure what to try next, as I don't get a clear error anywhere. I would like to enable specific accounts to log in so that I can disable root. This is a GridServer/Media Temple (CentOS).
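
    The trace shows authentication succeeding ("Authentication succeeded" and "Last login" both appear) and the session then exiting with status 1, which points at something server-side ending the shell right after login rather than at sshd itself. Some hedged checks to run on the server for the affected account (username is a placeholder):
    getent passwd username        # does the shell field point at a real, executable shell?
    sudo ls -l /etc/nologin       # if this file exists, non-root logins are refused
    sudo tail /var/log/secure     # CentOS's auth log may name the actual culprit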

    Read the article

  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about retrying requests in Nginx. I have Nginx running in front, which sends requests to HAProxy, which then passes them on to the web servers where they are processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped while I reload HAProxy, so I wanted a way to just retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I have now put it in twice, and fewer requests are getting dropped (only when I have, say, the same server twice in the upstream; if I have the same server 3-4 times, drops increase).
    So, firstly, I want to know: when a request cannot establish a connection from Nginx to HAProxy (while it is reloading), is that connection treated as an error, so that the request is dropped straight away? How can I specify either how long after a failure Nginx should retry the request against the upstream, or how long before Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - as well as max_fails and fail_timeout, and also putting the same upstream server in twice; that gave the best results so far.)
    Nginx conf file:
    upstream gae_sleep {
        server 128.111.55.219:10000;
    }
    server {
        listen 8080;
        server_name 128.111.55.219;
        root /var/apps/sleep/app;
        # Uncomment these lines to enable logging, and comment out the following two
        #access_log /var/log/nginx/sleep.access.log upstream;
        error_log /var/log/nginx/sleep.error.log;
        access_log off;
        #error_log /dev/null crit;
        rewrite_log off;
        error_page 404 = /404.html;
        set $cache_dir /var/apps/sleep/cache;
        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://gae_sleep;
            client_max_body_size 2G;
            proxy_connect_timeout 30;
            client_body_timeout 30;
            proxy_read_timeout 30;
        }
        location /404.html {
            root /var/apps/sleep;
        }
        location /reserved-channel-appscale-path {
            proxy_buffering off;
            tcp_nodelay on;
            keepalive_timeout 55;
            proxy_pass http://128.111.55.219:5280/http-bind;
        }
    }
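
    Since duplicating the upstream entry helps, the mechanism at work is proxy_next_upstream: on a connection error, nginx moves on to the next server in the upstream list, and a single-entry list leaves it nowhere to go. A hedged refinement of the config above (values are illustrative):
    upstream gae_sleep {
        server 128.111.55.219:10000 max_fails=0;   # never mark the backend as "down"
        server 128.111.55.219:10000 max_fails=0;   # second entry gives nginx a retry target
    }
    and, inside location /:
    proxy_next_upstream error timeout;   # retry the next entry on connect errors and timeouts
    Note that nginx retries immediately rather than after a delay, so if the HAProxy reload window lasts more than an instant, some in-flight requests can still fail.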

    Read the article

  • Cliq Wireless questions

    - by Nathan Adams
    Here's the deal: I am by no means a Linux expert, even less so when it comes to the Android OS, but let's see if we can't solve this problem. The problem I am having is that the Cliq has a Broadcom chip, and in order to use the wireless card you must first insert the module into the kernel. Fine:
    # insmod /system/lib/dhd.ko
    # lsmod
    dhd 164936 0 - Live 0xbf000000
    #
    BUT netcfg (or ifconfig in busybox) does not recognize that there is a wireless adapter there:
    # netcfg
    lo       UP   127.0.0.1       255.0.0.0       0x00000049
    dummy0   DOWN 0.0.0.0         0.0.0.0         0x00000082
    rmnet0   UP   14.67.164.2     255.255.255.252 0x00001043
    rmnet1   DOWN 0.0.0.0         0.0.0.0         0x00001002
    rmnet2   DOWN 0.0.0.0         0.0.0.0         0x00001002
    usb0     DOWN 0.0.0.0         0.0.0.0         0x00001002
    # busybox ifconfig
    lo       Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:282 errors:0 dropped:0 overruns:0 frame:0
             TX packets:282 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:18754 (18.3 KiB)  TX bytes:18754 (18.3 KiB)
    rmnet0   Link encap:Ethernet  HWaddr EE:83:E8:B4:4A:ED
             inet addr:14.x.x.x  Bcast:14.67.164.3  Mask:255.255.255.252
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:7148 errors:0 dropped:0 overruns:0 frame:0
             TX packets:7659 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:2609236 (2.4 MiB)  TX bytes:908575 (887.2 KiB)
    #
    For giggles, if we attempt to launch wpa_supplicant anyway, we get this:
    # wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf
    ioctl[SIOCSIWPMKSA]: No such device
    ioctl[SIOCSIWMODE]: No such device
    Could not configure driver to use managed mode
    ioctl[SIOCGIFFLAGS]: No such device
    Could not set interface 'eth0' UP
    ioctl[SIOCGIWRANGE]: No such device
    ioctl[SIOCGIFINDEX]: No such device
    CTRL-EVENT-STATE-CHANGE id=-1 state=0
    ioctl[SIOCSIWENCODEEXT]: No such device
    ioctl[SIOCSIWENCODE]: No such device
    ioctl[SIOCSIWENCODEEXT]: No such device
    ioctl[SIOCSIWENCODE]: No such device
    ioctl[SIOCSIWENCODEEXT]: No such device
    ioctl[SIOCSIWENCODE]: No such device
    ioctl[SIOCSIWENCODEEXT]: No such device
    ioctl[SIOCSIWENCODE]: No such device
    ioctl[SIOCSIWAUTH]: No such device
    WEXT auth param 7 value 0x0 - Failed to disable WPA in the driver.
    ioctl[SIOCSIWAUTH]: No such device
    WEXT auth param 5 value 0x0 -
    ioctl[SIOCSIWAUTH]: No such device
    WEXT auth param 4 value 0x0 -
    ioctl[SIOCSIWAP]: No such device
    ioctl[SIOCGIFFLAGS]: No such device
    #
    In dmesg we get:
    <4>[18300.494065] dhd_oob_enable_intr : enable
    <4>[18305.019976] dhd_net_start failed bus is not ready
    <4>[18305.020278] dhdsdio_probe: dhd_net_start failed!
    Do I need to specify the firmware with insmod? Why are we trying to control the interface manually instead of through the Android API? The Android API doesn't support ad-hoc connections as far as I can tell. The card, I am sure, most certainly can.
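
    On the firmware question: on many Android Broadcom builds, the dhd module does take firmware_path and nvram_path parameters, and "dhd_net_start failed bus is not ready" is consistent with the chip never receiving firmware. The exact paths below are assumptions - check /system/etc/wifi/ and the wifi service stanza in init.rc for the real ones on the Cliq:
    insmod /system/lib/dhd.ko "firmware_path=/system/etc/wifi/fw.bin nvram_path=/system/etc/wifi/nvram.txt"
    With the firmware loaded, the wireless interface (often eth0 or wlan0) should appear in netcfg before wpa_supplicant is started.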

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically, he wants to set up a Django app (made on Django 1.3, but it will be moving to Django 1.4, so it should not really matter which of these two works, I hope) on uWSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080 for example), and Apache also works correctly. The only problem is with nginx, and it looks like something is wrong in the nginx/uWSGI or Django configuration. His description is as follows:
    The server has been configured according to many tutorials, but unfortunately nginx and uWSGI still do not work with the application.
    ProjectName.py:
    import os, sys, wsgi
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")
    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()
    I run uWSGI with the command:
    uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml
    XML file:
    <uwsgi>
        <chdir>/home/projectname</chdir>
        <pythonpath>/usr/local/lib/python2.7</pythonpath>
        <socket>127.0.0.1:8001</socket>
        <daemonize>/var/log/uwsgi/proJectname.log</daemonize>
        <processes>1</processes>
        <uid>33</uid>
        <gid>33</gid>
        <enable-threads/>
        <master/>
        <vacuum/>
        <harakiri>120</harakiri>
        <max-requests>5000</max-requests>
        <vhost/>
    </uwsgi>
    In the logs from uWSGI:
    *** no app loaded. going in full dynamic mode ***
    In the logs from nginx:
    XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0)
    added /usr/lib/python2.7/ to pythonpath.
    Traceback (most recent call last):
      File "./ProjectName.py", line 26, in <module>
        from django.core.wsgi import get_wsgi_application
    ImportError: No module named wsgi
    unable to load app
    SCRIPT_NAME=XXX.com|
    Example tutorials that were used:
    http://projects.unbit.it/uwsgi/wiki/RunOnNginx
    https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/
    Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
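
    The traceback narrows this down: django.core.wsgi only exists from Django 1.4 onwards, so if the interpreter/pythonpath uWSGI uses carries a 1.3 install, the import fails exactly like this even though manage.py runserver (running in a different environment) works. A hedged, 1.3-compatible variant of the WSGI module:
    import os
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")
    # django.core.wsgi appeared in Django 1.4; this handler also works on 1.3
    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()
    Separately, "*** no app loaded. going in full dynamic mode ***" says uWSGI started without mounting any application; pointing it at the module explicitly (for example <module>ProjectName</module> in the XML, with <vhost/> removed) is worth trying so the app is loaded at startup rather than resolved per request.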

    Read the article
