Search Results

Search found 16001 results on 641 pages for 'convention over configuration'.

  • wso2 ESB: server configuration CRITICAL

    - by nuvio
    My scenario: I have server_1 (192.168.10.1) with WSO2 ESB and server_2 (192.168.10.2) with GlassFish v3 + web services. Problem: I am trying to create a proxy in the ESB for the Java web services, but the created proxy does not respond properly. The log says "Unable to sendViaPost"; using http or https does not change the result. I think I should configure axis2.xml, but I am having trouble and don't know what to do. What is the configuration for my scenario? Please help me! EDIT: To be clear, I can consume the web service on the GlassFish server directly and it works normally; both port and URL are accessible. Only when I create a "Pass Through Proxy" in the ESB does it not work. I don't think it is a matter of proxy configuration... I never had problems while deployed locally; the problems started once I uploaded the ESB to a remote server. I really need someone to point me to the correct procedure for installing the ESB on a remote host: configuration of axis2.xml and carbon.xml, ports, transport receivers, etc. P.S. I had a look at the official WSO2 ESB and Carbon guides with no luck, but I am missing something... Endpoint of the Java web service: http://192.168.10.2:8080/HelloWorld/Hello?wsdl ESB proxy endpoint: http://192.168.10.1:8280/services/HelloProxy The following is my axis2.xml configuration, please check it: <transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener"> <parameter name="port" locked="false">8280</parameter> <parameter name="non-blocking" locked="false">true</parameter> <parameter name="bind-address" locked="false">192.168.10.1</parameter> <parameter name="WSDLEPRPrefix" locked="false">https//192.168.10.1:8280</parameter> <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>--> </transportReceiver> <!-- the non blocking https transport based on HttpCore + SSL-NIO extensions --> <transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener"> <parameter name="port" locked="false">8243</parameter> <parameter name="non-blocking" locked="false">true</parameter> <parameter name="bind-address" locked="false">192.168.10.1</parameter> <parameter name="WSDLEPRPrefix" locked="false">https://192.168.10.1:8243</parameter> <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>--> <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter> <parameter name="keystore" locked="false"> <KeyStore> <Location>repository/resources/security/wso2carbon.jks</Location> <Type>JKS</Type> <Password>wso2carbon</Password> <KeyPassword>wso2carbon</KeyPassword> </KeyStore> </parameter> <parameter name="truststore" locked="false"> <TrustStore> <Location>repository/resources/security/client-truststore.jks</Location> <Type>JKS</Type> <Password>wso2carbon</Password> </TrustStore> </parameter> <!--<parameter name="SSLVerifyClient">require</parameter> supports optional|require or defaults to none --> </transportReceiver>
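
    One thing stands out in the quoted config: the http transport's WSDLEPRPrefix reads "https//192.168.10.1:8280", which is missing the colon and uses the https scheme on the plain-http listener. A malformed prefix like that breaks the endpoint addresses the ESB advertises for its proxies. A hedged sketch of what the http receiver would normally look like (same IP and port as above; adjust if the host differs):

        <transportReceiver name="http"
                           class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
            <!-- bind to the ESB host and advertise a well-formed http:// prefix -->
            <parameter name="port" locked="false">8280</parameter>
            <parameter name="non-blocking" locked="false">true</parameter>
            <parameter name="bind-address" locked="false">192.168.10.1</parameter>
            <parameter name="WSDLEPRPrefix" locked="false">http://192.168.10.1:8280</parameter>
        </transportReceiver>

    The https receiver's prefix (https://192.168.10.1:8243) already matches its own port, so only the http one appears to need attention.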

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this. Hardware config #1: Dual monitors work for Windows XP Pro like this: First display -> external VGA port Second display -> DVI input on gfx card Hardware config #2: Dual monitors work for Ubuntu 10.04.1 like this: First display -> VGA port on gfx card Second display -> DVI input on gfx card I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to login, as the login box was not visible. I unplugged the VGA lead from gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1, this morning which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using: $ sudo X :2 -configure X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010 List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa (++) Using config file: "/home/michael/xorg.conf.new" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting. Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol. Xorg has configured a multihead system, please check your config. Your xorg.conf file is /home/michael/xorg.conf.new To test the server, run 'X -config /home/michael/xorg.conf.new' ddxSigGiveUp: Closing log $ sudo X -config /home/michael/xorg.conf.new Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. Please consult the The X.Org Foundation support at http://wiki.x.org for help. ddxSigGiveUp: Closing log I then booted Ubuntu in failsafe mode, dropped into root shell, and executed $ X -config /home/michael/xorg.conf.new again. 
The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike
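
    For reference, a dual-head layout under Config #1 would normally be expressed in xorg.conf as one Device section per output, with a ServerLayout tying the screens together. The sketch below is illustrative only: the BusID values, identifiers and the use of the radeon driver on both devices are assumptions that need to be checked against lspci output on the actual machine.

        Section "Device"
            Identifier "Radeon-DVI"
            Driver     "radeon"
            BusID      "PCI:1:0:0"    # card driving the DVI head (assumption - confirm with lspci)
        EndSection

        Section "Device"
            Identifier "Radeon-VGA"
            Driver     "radeon"
            BusID      "PCI:2:0:0"    # device behind the external VGA port (assumption)
        EndSection

        Section "Monitor"
            Identifier "DVI-Monitor"
        EndSection

        Section "Monitor"
            Identifier "VGA-Monitor"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Radeon-DVI"
            Monitor    "DVI-Monitor"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Radeon-VGA"
            Monitor    "VGA-Monitor"
        EndSection

        Section "ServerLayout"
            Identifier "DualHead"
            Screen 0 "Screen0"
            Screen 1 "Screen1" RightOf "Screen0"
            Option "Xinerama" "on"
        EndSection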

  • Log4r : logger inheritance, yaml configuration, alternatives ?

    - by devlearn
    Hello, I'm pretty new to ruby environments and I was looking for a nice logging framework to use in my ruby and rails applications. In my previous experiences I have successfully used log4j and log4p (the perl port) and was expecting the same level of usability (and maturity) with log4r. However I must say that there are a number of things that are not clear at all in the log4r framework. 1 Logger inheritance: The logger inheritance does not seem to be managed at all! If I declare a logger named 'myapp' and then try to get a logger named 'myapp::engine', the lookup ends with a NameError. I would expect the framework to fall back through the naming scheme and use the 'myapp' logger. Q1: Of course I can work around this and manage the names myself with a lookup method, but is there a cleaner way to do this without any extra coding? 2 YAML configuration: The second thing that confuses me is the YAML configuration. On the log4r site there is literally no information about this system; the doc links forward to missing pages, so all the info I can find is contained in the examples directory of the gem. I was pretty confused by the fact that the YAML configuration must contain the pre_config section, and that I need to define my own levels. If I remove the pre_config section, or replace all the "custom" levels by the standard ones (debug, info, warn, fatal), the example throws the following error: log4r/yamlconfigurator.rb:68:in `decode_yaml': Log level must be in 0..7 (ArgumentError) So there seems to be no way of using a simple file where we only declare the loggers and appenders for the framework. Q2: I really think that I missed something and that there must be a way of providing a simple YAML conf file. Do you have any examples of such usage? 3 Variable substitution in XML files: Q3: The YAML configuration system seems to provide such a feature, however I was unable to find a similar feature with XML files. Any ideas? 4 Alternatives? I must say that I'm very disappointed by the feature level and the maturity of log4r compared to log4j and the other log4j ports. I came to this framework with a solid background in logging APIs in other languages and find myself working around all kinds of things just to get 'basic things' running in a "real world application". By that I mean a complex application composed of several gems, console/scripting apps, and a rails web front end, where the configuration must be shared and where we make intensive use of namespaces and inheritance. I've run several searches in order to find something more suitable or mature, but did not find anything similar. Q4: Do you guys know any (serious) alternatives to the log4r framework that could be used in an enterprise-class app? Thanks for reading all of this! I'd really appreciate any pointers. Kind Regards,
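
    For Q1, until the framework does hierarchical lookups itself, a small wrapper that walks the '::'-separated name back toward the root is usually enough. A rough sketch; the fallback-to-root behaviour and the 'myapp' names are illustrative assumptions, not guarantees about the Log4r API:

        require 'log4r'

        # Return the nearest configured ancestor of +name+, falling back to the root logger.
        def lookup_logger(name)
          parts = name.split('::')
          until parts.empty?
            candidate = Log4r::Logger[parts.join('::')]   # assumed to be nil when not defined
            return candidate if candidate
            parts.pop
          end
          Log4r::Logger.root
        end

        log = lookup_logger('myapp::engine')   # resolves to the 'myapp' logger if only that is defined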

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL Client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, setup with gitolite on https://my.server/git/) we need an exception since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows: SSLCACertificateFile ssl-certs/client-ca-certs.crt <Location /> SSLVerifyClient require SSLVerifyDepth 2 </Location> # this works <Location /foo> SSLVerifyClient none </Location> # this does not <Location /git> SSLVerifyClient none </Location> I have also tried an alternative solution, with the same results: # require authentication everywhere except /git and /foo <LocationMatch "^/(?!git|foo)"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> In both these cases, a user without client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works ok. The ScriptAlias problem Gitolite is setup using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias: # Gitolite ScriptAlias /git/ /path/to/gitolite-shell/ ScriptAlias /gitmob/ /path/to/gitolite-shell/ # My test ScriptAlias /test/ /path/to/test/script/ Note that /path/to/test/script is a file, not a directory, the same goes for /path/to/gitolite-shell/ My test script simply prints out the environment, super simple: #!/usr/bin/perl print "Content-type:text/plain\n\n"; print "TEST\n"; @keys = sort(keys %ENV); foreach (@keys) { print "$_ => $ENV{$_}\n"; } It seems that if I go to https://my.server/test/someLocation, that any SSLVerifyClient directives are being applied which are in Location blocks that match /test/someLocation or just /someLocation. If I have the following config: <LocationMatch "^/f"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> Then, the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo Note that this only seems to apply for SSL configuration. The following has no effect whatsoever on https://my.server/test/foo: <LocationMatch "^/f"> Order allow,deny Deny from all </LocationMatch> However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require a SSL client certificate. Since the /git/project URL also gets matched agains /project Location blocks, such a configuration seems impossible given my current findings. Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL Client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.
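
    One small addition that can make the /test/ environment dump more useful while debugging this: export the SSL variables into the CGI environment for that path, so each request shows which verification settings were actually applied. A sketch (assumes mod_ssl is active for the vhost; SSL_CLIENT_VERIFY and friends then appear in the dump):

        <Location /test>
            # Make SSL_CLIENT_VERIFY, SSL_CLIENT_S_DN, etc. visible to the test script
            SSLOptions +StdEnvVars +ExportCertData
        </Location>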

  • Problem with Android emulator

    - by benasio
    Projects do not run, on screen emulator only "ANDROID" WinXP pro SP3/Eclipse Galileo java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing) My actions: 1.Start the emulator(Platform:2.1 API Level:7), wait until the window DDMS status will change to ONLINE 2.Launches helloandroid from examples - Run as Android Application Console: Android Launch! [2010-05-03 21:44:34 - HelloAndroid] adb is running normally. [2010-05-03 21:44:34 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch [2010-05-03 21:44:34 - HelloAndroid] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'my_vm' [2010-05-03 21:44:34 - HelloAndroid] WARNING: Application does not specify an API level requirement! [2010-05-03 21:44:34 - HelloAndroid] Device API version is 7 (Android 2.1) [2010-05-03 21:44:34 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554' [2010-05-03 21:44:35 - HelloAndroid] Installing HelloAndroid.apk... [2010-05-03 21:45:07 - HelloAndroid] Success! [2010-05-03 21:45:08 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 52454151: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 48454c4f: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 46454154: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 4d505251: no handler defined [2010-05-03 21:45:52 - HelloAndroid] Device not ready. Waiting 3 seconds before next attempt. [2010-05-03 21:45:52 - HelloAndroid] ActivityManager: android.util.AndroidException: Can't connect to activity manager; is the system running? [2010-05-03 21:45:55 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device [2010-05-03 21:46:11 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout ...... DDMS console (only errors and warnings) 05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory) 05-03 17:48:34.036: WARN/Zygote(29): Preloaded drawable resource #0x1080093 (res/drawable-mdpi/sym_def_app_icon.png) that varies with configuration!! 05-03 17:48:34.406: WARN/Zygote(29): Preloaded drawable resource #0x1080002 (res/drawable-mdpi/arrow_down_float.png) that varies with configuration!! 05-03 17:48:35.836: WARN/Zygote(29): Preloaded drawable resource #0x10800b4 (res/drawable/btn_check.xml) that varies with configuration!! 05-03 17:48:36.076: WARN/Zygote(29): Preloaded drawable resource #0x10800b7 (res/drawable-mdpi/btn_check_label_background.9.png) that varies with configuration!! 05-03 17:48:36.106: WARN/Zygote(29): Preloaded drawable resource #0x10800b8 (res/drawable-mdpi/btn_check_off.png) that varies with configuration!! 
05-03 17:48:36.147: WARN/Zygote(29): Preloaded drawable resource #0x10800bd (res/drawable-mdpi/btn_check_on.png) that varies with configuration!! 05-03 17:48:36.437: WARN/Zygote(29): Preloaded drawable resource #0x1080004 (res/drawable/btn_default.xml) that varies with configuration!! 05-03 17:48:36.716: WARN/Zygote(29): Preloaded drawable resource #0x1080005 (res/drawable/btn_default_small.xml) that varies with configuration!! 05-03 17:48:36.966: WARN/Zygote(29): Preloaded drawable resource #0x1080006 (res/drawable/btn_dropdown.xml) that varies with configuration!! 05-03 17:48:37.326: WARN/Zygote(29): Preloaded drawable resource #0x1080008 (res/drawable/btn_plus.xml) that varies with configuration!! 05-03 17:48:37.707: WARN/Zygote(29): Preloaded drawable resource #0x1080007 (res/drawable/btn_minus.xml) that varies with configuration!! 05-03 17:48:38.057: WARN/Zygote(29): Preloaded drawable resource #0x1080009 (res/drawable/btn_radio.xml) that varies with configuration!! 05-03 17:48:38.776: WARN/Zygote(29): Preloaded drawable resource #0x108000a (res/drawable/btn_star.xml) that varies with configuration!! 05-03 17:48:39.327: WARN/Zygote(29): Preloaded drawable resource #0x1080125 (res/drawable/btn_toggle.xml) that varies with configuration!! 05-03 17:48:39.416: WARN/Zygote(29): Preloaded drawable resource #0x1080187 (res/drawable-mdpi/ic_emergency.png) that varies with configuration!! 05-03 17:48:39.506: WARN/Zygote(29): Preloaded drawable resource #0x1080012 (res/drawable-mdpi/divider_horizontal_bright.9.png) that varies with configuration!! 05-03 17:48:39.576: WARN/Zygote(29): Preloaded drawable resource #0x1080014 (res/drawable-mdpi/divider_horizontal_dark.9.png) that varies with configuration!! 05-03 17:48:40.126: WARN/Zygote(29): Preloaded drawable resource #0x1080016 (res/drawable/edit_text.xml) that varies with configuration!! 05-03 17:48:40.507: WARN/Zygote(29): Preloaded drawable resource #0x1080161 (res/drawable/expander_group.xml) that varies with configuration!! 05-03 17:48:41.036: WARN/Zygote(29): Preloaded drawable resource #0x1080062 (res/drawable/list_selector_background.xml) that varies with configuration!! 05-03 17:48:41.177: WARN/Zygote(29): Preloaded drawable resource #0x1080217 (res/drawable-mdpi/menu_background.9.png) that varies with configuration!! 05-03 17:48:41.256: WARN/Zygote(29): Preloaded drawable resource #0x1080218 (res/drawable-mdpi/menu_background_fill_parent_width.9.png) that varies with configuration!! 05-03 17:48:41.567: WARN/Zygote(29): Preloaded drawable resource #0x1080219 (res/drawable/menu_selector.xml) that varies with configuration!! 05-03 17:48:41.706: WARN/Zygote(29): Preloaded drawable resource #0x1080224 (res/drawable-mdpi/panel_background.9.png) that varies with configuration!! 05-03 17:48:41.849: WARN/Zygote(29): Preloaded drawable resource #0x108022e (res/drawable-mdpi/popup_bottom_bright.9.png) that varies with configuration!! 05-03 17:48:42.026: WARN/Zygote(29): Preloaded drawable resource #0x108022f (res/drawable-mdpi/popup_bottom_dark.9.png) that varies with configuration!! 05-03 17:48:42.156: WARN/Zygote(29): Preloaded drawable resource #0x1080230 (res/drawable-mdpi/popup_bottom_medium.9.png) that varies with configuration!! 05-03 17:48:42.276: WARN/Zygote(29): Preloaded drawable resource #0x1080231 (res/drawable-mdpi/popup_center_bright.9.png) that varies with configuration!! 05-03 17:48:42.376: WARN/Zygote(29): Preloaded drawable resource #0x1080232 (res/drawable-mdpi/popup_center_dark.9.png) that varies with configuration!! 
05-03 17:48:42.507: WARN/Zygote(29): Preloaded drawable resource #0x1080235 (res/drawable-mdpi/popup_full_dark.9.png) that varies with configuration!! 05-03 17:48:42.606: WARN/Zygote(29): Preloaded drawable resource #0x1080238 (res/drawable-mdpi/popup_top_bright.9.png) that varies with configuration!! 05-03 17:48:42.696: WARN/Zygote(29): Preloaded drawable resource #0x1080239 (res/drawable-mdpi/popup_top_dark.9.png) that varies with configuration!! 05-03 17:48:42.946: WARN/Zygote(29): Preloaded drawable resource #0x108006d (res/drawable/progress_indeterminate_horizontal.xml) that varies with configuration!! 05-03 17:48:43.076: WARN/Zygote(29): Preloaded drawable resource #0x108023f (res/drawable/progress_small.xml) that varies with configuration!! 05-03 17:48:43.456: WARN/Zygote(29): Preloaded drawable resource #0x1080240 (res/drawable/progress_small_titlebar.xml) that varies with configuration!! 05-03 17:48:43.957: WARN/Zygote(29): Preloaded drawable resource #0x1080262 (res/drawable-mdpi/scrollbar_handle_horizontal.9.png) that varies with configuration!! 05-03 17:48:44.036: WARN/Zygote(29): Preloaded drawable resource #0x1080263 (res/drawable-mdpi/scrollbar_handle_vertical.9.png) that varies with configuration!! 05-03 17:48:44.176: WARN/Zygote(29): Preloaded drawable resource #0x1080071 (res/drawable/spinner_dropdown_background.xml) that varies with configuration!! 05-03 17:48:44.317: WARN/Zygote(29): Preloaded drawable resource #0x1080326 (res/drawable-mdpi/title_bar_shadow.9.png) that varies with configuration!! 05-03 17:48:44.496: WARN/Zygote(29): Preloaded drawable resource #0x10801c6 (res/drawable-mdpi/indicator_code_lock_drag_direction_green_up.png) that varies with configuration!! 05-03 17:48:44.607: WARN/Zygote(29): Preloaded drawable resource #0x10801c7 (res/drawable-mdpi/indicator_code_lock_drag_direction_red_up.png) that varies with configuration!! 05-03 17:48:45.956: WARN/Zygote(29): Preloaded drawable resource #0x10801c8 (res/drawable-mdpi/indicator_code_lock_point_area_default.png) that varies with configuration!! 05-03 17:48:46.407: WARN/Zygote(29): Preloaded drawable resource #0x10801c9 (res/drawable-mdpi/indicator_code_lock_point_area_green.png) that varies with configuration!! 05-03 17:48:46.696: WARN/Zygote(29): Preloaded drawable resource #0x10801ca (res/drawable-mdpi/indicator_code_lock_point_area_red.png) that varies with configuration!! 05-03 17:48:56.307: ERROR/BatteryService(170): usbOnlinePath not found 05-03 17:48:56.336: ERROR/BatteryService(170): batteryVoltagePath not found 05-03 17:48:56.350: ERROR/BatteryService(170): batteryTemperaturePath not found 05-03 17:48:56.696: ERROR/SurfaceFlinger(170): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 05-03 17:48:57.847: WARN/SurfaceFlinger(170): ro.sf.lcd_density not defined, using 160 dpi by default. 05-03 17:49:02.116: WARN/UsageStats(170): Usage stats version changed; dropping 05-03 17:49:05.036: WARN/zipro(182): Unable to open zip '/data/local/bootanimation.zip': No such file or directory 05-03 17:49:06.297: WARN/zipro(182): Unable to open zip '/system/media/bootanimation.zip': No such file or directory 05-03 17:49:50.637: WARN/PackageManager(170): Running ENG build: no pre-dexopt! 
05-03 17:53:59.196: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.WRITE_GMAIL in package com.android.settings 05-03 17:53:59.238: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.READ_GMAIL in package com.android.settings 05-03 17:53:59.286: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.settings 05-03 17:53:59.517: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.providers.contacts 05-03 17:53:59.656: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.cp in package com.android.providers.contacts 05-03 17:53:59.717: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.mail in package com.android.contacts 05-03 17:53:59.796: WARN/PackageManager(170): Unknown permission android.permission.ADD_SYSTEM_SERVICE in package com.android.phone 05-03 17:54:00.126: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.development 05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.ALL_SERVICES in package com.android.development 05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.YouTubeUser in package com.android.development 05-03 17:54:00.237: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD in package com.android.development 05-03 17:54:00.258: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.browser 05-03 17:54:25.456: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f0700e5 05-03 17:54:25.486: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020031 05-03 17:54:25.536: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020030 05-03 17:54:25.576: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f050000 05-03 17:54:38.708: WARN/SharedBufferStack(182): waitForCondition(LockCondition) timed out (identity=0, status=0). CPU may be pegged. trying again.
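
    The repeated "Device not ready" and "DDM dispatch reg wait timeout" lines usually just mean the emulator is still booting when ADT tries to deploy. One way to take Eclipse out of the loop is to drive the install from the command line once the emulator is fully up. A sketch, assuming the AVD is called my_vm as in the log and the SDK tools/platform-tools directories are on PATH:

        rem console 1: boot the emulator and leave it running
        emulator -avd my_vm -no-boot-anim

        rem console 2: wait for the device, then install and launch manually
        adb wait-for-device
        adb install -r HelloAndroid.apk
        adb shell am start -n com.example.helloandroid/.HelloAndroid

    If the manual launch works, the problem is only the deployment timeout, not the project itself.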

  • Batch file script to enable and disable the "use automatic Configuration Script" setting

    - by Tijo Joy
    My intention is to create a .bat file that toggles the check box of "use automatic Configuration Script" in Internet Settings. The following is my script: @echo OFF setlocal ENABLEEXTENSIONS set KEY_NAME="HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" set VALUE_NAME=AutoConfigURL FOR /F "usebackq skip=1 tokens=1-3" %%A IN (`REG QUERY %KEY_NAME% /v %VALUE_NAME% 2^>nul`) DO ( set ValueName=%%A set ValueType=%%B set ValueValue=%%C ) @echo Value Name = %ValueName% @echo Value Type = %ValueType% @echo Value Value = %ValueValue% IF NOT %ValueValue%==yyyy ( reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "yyyy" /f echo Proxy Enabled ) else ( echo Hai reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "" /f echo Proxy Disabled ) The output I'm getting for the Proxy Enabled part is: Value Name = AutoConfigURL Value Type = REG_SZ **Value Value =yyyy** Hai The operation completed successfully. Proxy Disabled But the Proxy Enable part isn't working; the output I get is: Value Name = AutoConfigURL Value Type = REG_SZ **Value Value =** ( was unexpected at this time. The ValueValue variable is not getting set when we try to enable the proxy.
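
    The "( was unexpected at this time." message comes from the unquoted comparison: when AutoConfigURL is empty, %ValueValue% expands to nothing and the IF line becomes "IF NOT ==yyyy (", which is a syntax error. Quoting both sides of the comparison (and clearing the variable before the REG QUERY so a stale value is never reused) avoids it. A sketch of the relevant part, reusing the KEY_NAME and VALUE_NAME variables defined earlier in the script:

        set "ValueValue="
        FOR /F "usebackq skip=1 tokens=1-3" %%A IN (`REG QUERY %KEY_NAME% /v %VALUE_NAME% 2^>nul`) DO (
            set ValueName=%%A
            set ValueType=%%B
            set ValueValue=%%C
        )

        IF NOT "%ValueValue%"=="yyyy" (
            reg add %KEY_NAME% /v %VALUE_NAME% /t REG_SZ /d "yyyy" /f
            echo Proxy Enabled
        ) ELSE (
            reg add %KEY_NAME% /v %VALUE_NAME% /t REG_SZ /d "" /f
            echo Proxy Disabled
        )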

  • Getting "The WebResource.axd handler must be registered in the configuration to process this request

    - by rwponu
    I'm getting this error while running my ASP.NET app on IIS7. I've tried doing what it says to do but it doesn't help. The WebResource.axd handler must be registered in the configuration to process this request. <!-- Web.Config Configuration File --> <configuration> <system.web> <httpHandlers> <add path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" validate="True" /> </httpHandlers> </system.web> </configuration> I'm using a little bit of AJAX which is what I think is causing the issue. Has anyone encountered this before?
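
    On IIS7 this usually comes down to the application pool's pipeline mode: the <httpHandlers> block above only applies in classic mode, while in integrated mode the handler has to be registered under <system.webServer> instead. A sketch, assuming the default integrated pipeline:

        <configuration>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false" />
            <handlers>
              <add name="WebResourceAxd" path="WebResource.axd" verb="GET"
                   type="System.Web.Handlers.AssemblyResourceLoader" preCondition="integratedMode" />
            </handlers>
          </system.webServer>
        </configuration>

    If the handler is still reported as unregistered, it is also worth checking that the application pool runs the same ASP.NET version the site was built against.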

  • Castle Windsor and its own configuration file

    - by Coppermill
    I am using Castle Windsor for IoC, and have the configuration held in the web.config/app.config, using the following factory: public static TYPE Factory(string component) { var windsorContainer = new WindsorContainer(new XmlInterpreter()); var service = windsorContainer.Resolve<TYPE>(component); if (service == null) throw new ArgumentNullException(string.Format("Unable to find container {0}", component)); return service; } and my web.config looking like this: <configuration> <configSections> <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor"/> </configSections> <castle> <components> <component id="Data" service="Data.IData, Data" type="Data.DataService, Data"/> </components> </castle> <appSettings>....... Which works fine, but I'd like to place the configuration for Castle Windsor in a file called castle.config. How do I do this?
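
    XmlInterpreter also accepts a file name, so the container can be pointed at a stand-alone file instead of the app/web.config section. A sketch, assuming castle.config is copied to the application's base directory and holds the same component markup (typically under a root configuration element rather than the configSections/castle wrapper used in web.config):

        // a sketch of the factory reading castle.config instead of web.config
        public static TYPE Factory(string component)
        {
            var windsorContainer = new WindsorContainer(new XmlInterpreter("castle.config"));
            var service = windsorContainer.Resolve<TYPE>(component);
            if (service == null)
                throw new ArgumentNullException(string.Format("Unable to find container {0}", component));
            return service;
        }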

  • SQLite NHibernate configuration with .Net 4.0 and vs 2010

    - by Berryl
    I am having way too much trouble getting my environment straight after switching to 2010 and .net 4.0, so I'd like to break the whole process down once and for all. 1) Which SQLite dll?? I think it is SQLite-1.0.65.1-vs2010rc-net4-setup.zip. Yes? 2) I ran the installer so the dll is in the GAC but I usually find there are less problems if I can just reference the dll stand alone. Is there any reason it needs to be in the GAC, and if not, what's the best way to uninstall it from the GAC (I can get to the GAC folder but it says I need permission to delete the files; should I leave the SQLite Designer dll's there?)? 3) x64. There is an x64 dll in the download. I had problems with SQLite in the past though that I could only resolve by compiling to x86. Can I safely reference the x64 dll and compile to Any CPU now? 4) what is the right NHib config? I have been using the one below, but since the error I get says "Could not create the driver from NHibernate.Driver.SQLite20Driver." that configuration is guilty until proven innocent too? 5) could FNH be a problem too? I don't use the pre-configured fluent SQLite method but FNH has to provide a reference to it for that to work, no? TIA & Cheers, Berryl <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> ... <property name="dialect">NHibernate.Dialect.SQLiteDialect</property> <property name="connection.driver_class">NHibernate.Driver.SQLite20Driver</property> <property name="connection.connection_string">Data Source=:memory:;Version=3;New=True;</property> .... </hibernate-configuration>
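
    On the "Could not create the driver" point: with .NET 4.0 the usual culprits at the time were a 32/64-bit mismatch (compile x86 if the x86 System.Data.SQLite.dll is referenced) and the mixed-mode System.Data.SQLite assembly refusing to load under the 4.0 runtime. The latter has a well-known app.config switch; a sketch:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
          </startup>
        </configuration>

    With that in place the NHibernate settings quoted above (SQLiteDialect plus SQLite20Driver) are normally sufficient.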

  • Windows could not update the computer's boot configuration.

    - by Luke Puplett
    Hello, I am trying to install Windows Web Server 2008 x64 (have tried 32-bit, too) onto an IBM eServer 326m and am getting the following error message some time after the unpacking files section: Windows could not update the computer's boot configuration. Installation cannot proceed. I can repair the boot information using the Repair option in the WinPE bit of the setup, and it reboots into the Windows installation, so it has good boot data on the drive. Just to complicate things, the server does not have a DVD-ROM so I'm installing from HDD to HDD, both SATA, one an SSD. I've tried each drive in isolation, e.g. removing the SSD and installing from spindle onto itself, same error every time. Flashed IBM BIOS, too. Thanks for your help. Luke
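
    If it is not the hardware, this error is often just setup being unable to decide which disk should hold the boot files. One hedged thing to try from the WinPE command prompt (Shift+F10 during setup) is to wipe and re-initialise the target disk so only one disk has an active partition. Note this destroys everything on the selected disk, and "disk 0" is an assumption to verify against the "list disk" output:

        diskpart
        list disk
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit

    Temporarily unplugging the source drive for the final reboots can also help setup pick the right boot disk.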

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my OpenBSD client (OpenBSD 4.9) auto mount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount nfs manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per the man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can see the mount, but curiously enough, it is listed under the amd process instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little differently than on FreeBSD. Still, I tried to see if it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search on Google doesn't help much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount it permanently, but I tend to be a bit anal on convention, so no for now. :) Some direction would be appreciated. (And oh, in case you are wondering, I tried the FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD and OpenBSD implement it.) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over udp: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options of forcing it to try to mount as nfsv3, such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Still nothing. Curiously enough, OpenBSD's mount_nfs defaults to version 3, so I am not sure why amd would start with version 2. What would be the correct options to pass?

  • Mercurial hgwebdir configuration URL

    - by Jonathan Sternberg
    I'm setting up an hgwebdir configuration for the first time with Mercurial on apache2. I can see the three repositories I've set up in the first page, and I've figured out how to modify their names so they don't resemble the directory path. But when I click to go to one of the repositories, the URL becomes http://localhost/hg/hgweb.cgi/path/to/repos. I would like the URL to be http://localhost/hg/name instead, as that is easier to remember for people who want to clone the repository. Is there any way to do that with hgwebdir?
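
    The hgweb.cgi part of the URL can be hidden at the Apache level: one commonly used approach is a ScriptAliasMatch that maps everything under /hg/ onto the CGI, so the script name never shows up in clone URLs. A sketch, assuming the CGI lives at /var/www/hg/hgweb.cgi (adjust the path to the actual install):

        # /hg/name is handled by hgweb.cgi/name, so published URLs stay short
        ScriptAliasMatch ^/hg(.*) /var/www/hg/hgweb.cgi$1

    The existing <Directory> permissions for the hgweb.cgi location still apply; only the visible URL changes.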

  • Dedicated server configuration - some tips -Xenserver/Debian

    - by Sanjay S
    I am migrating from VPS-based hosting to dedicated hosting (8GB RAM/1TB HD). I need to run multiple Drupal and Ruby based applications; what would be the recommended configuration? I was thinking of two options. 1) Install multiple Debian OS instances on top of Xen (like a VPS), each with maybe 2GB of memory, and run Drupal, Ruby and MySQL on separate partitions. 2) Install one instance of Debian, and install Drupal (Apache, PHP), Ruby (lighttpd, ruby) and MySQL all in the same partition. I was a little worried that option 2 could lead to some performance issues later.

  • apache2 mod_proxy configuration for single threaded servers

    - by The Doctor What
    I have multiple instances of thin running behind apache 2.2's mod_proxy. The problem I have is that a couple of pages, by design, take a while to run. If I just configure apache the obvious way (adding the thin URLs as BalancerMember lines and no other configuration), then when someone hits a long-running page and enough web requests come in while it is running, someone eventually gets routed to that same thin server and has to wait. Does anyone have some best practices or suggested configuration for mod_proxy and thin? Ciao!
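
    With single-threaded backends the useful knobs are per-worker connection limits and a balancing method that prefers idle workers, so a slow request ties up only one thin instance. A sketch with ports and the URL path as assumptions; note that with the prefork MPM the max= limit is per Apache child process rather than a global cap, and lbmethod=bybusyness needs a reasonably recent 2.2 (fall back to byrequests otherwise):

        <Proxy balancer://thincluster>
            # max=1: at most one connection per worker per process; short acquire so
            # requests fail over to another member instead of queueing behind a busy one
            BalancerMember http://127.0.0.1:3000 max=1 acquire=1000 timeout=300 retry=5
            BalancerMember http://127.0.0.1:3001 max=1 acquire=1000 timeout=300 retry=5
            ProxySet lbmethod=bybusyness
        </Proxy>
        ProxyPass / balancer://thincluster/
        ProxyPassReverse / balancer://thincluster/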

  • hMailServer Email + MX Records Configuration

    - by asn187
    Trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account, and I have created an MX record with the mail server being mail.domain.com and a priority of 20. 1) But the question is: how do I now link this MX record for the domain to my mail server / mail server IP address? 2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain? 3) In Settings > SMTP > Delivery of email: what should my configuration here look like?
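
    For 1): the MX record never points at an IP itself; it names a host, and that host needs an A record resolving to the mail server's public address. Roughly, the zone should contain entries like these (203.0.113.25 is a placeholder for the mail server's real IP, and the priority of 20 is kept from the question):

        mail.domain.com.    IN  A       203.0.113.25
        domain.com.         IN  MX  20  mail.domain.com.

    Inbound port 25 on that IP also has to be reachable from the Internet, otherwise the MX record alone will not get mail delivered.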

  • Enabling Session Directory under Terminal Server Configuration Tool and Server Settings

    - by LPE
    Yello, I'm trying to add a Terminal Server Session Directory client to an already fully functional Session Directory cluster, which today runs two clients as well as the server. I've been reading up on Google and Microsoft KBs, as well as old documentation from an earlier employee, but to no avail. The step I'm stuck at is when I open the Terminal Server Configuration tool (tscc.msc) and choose Server Settings. I know there should be an option saying "Session Directory" on the right-hand side along with Active Desktop, Licensing and whatnot, but it's not there. I've logged on to both of the other already functional clients and checked the same list, and there the Session Directory option is both visible and working fine with the specified information. This picture is the same view that I'm looking at at the moment, but mine is missing the bottom option that says "Session Directory": http://www.inetnj.com/doc/images/TerminalServerConfiguration.jpg Any help would be greatly appreciated. Regards LPE

  • Proper network configuration for a KVM guest to be on the same networks as the host

    - by Steve Madsen
    I am running a Debian Linux server on Lenny. Within it, I am running another Lenny instance using KVM. Both servers are externally available, with public IPs, as well as a second interface with private IPs for the LAN. Everything works fine, except the VM sees all network traffic as originating from the host server. I suspect this might have something to do with the iptables-based firewall I'm running on the host. What I'd like to figure out is: how to I properly configure the host's networking such that all of these requirements are met? Both host and VMs have 2 network interfaces (public and private). Both host and VMs can be independently firewalled. Ideally, VM traffic does not have to traverse the host firewall. VMs see real remote IP addresses, not the host's. Currently, the host's network interfaces are configured as bridges. eth0 and eth1 do not have IP addresses assigned to them, but br0 and br1 do. /etc/network/interfaces on the host: # The primary network interface auto br1 iface br1 inet static address 24.123.138.34 netmask 255.255.255.248 network 24.123.138.32 broadcast 24.123.138.39 gateway 24.123.138.33 bridge_ports eth1 bridge_stp off auto br1:0 iface br1:0 inet static address 24.123.138.36 netmask 255.255.255.248 network 24.123.138.32 broadcast 24.123.138.39 # Internal network auto br0 iface br0 inet static address 192.168.1.1 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 bridge_ports eth0 bridge_stp off This is the libvirt/qemu configuration file for the VM: <domain type='kvm'> <name>apps</name> <uuid>636b6620-0949-bc88-3197-37153b88772e</uuid> <memory>393216</memory> <currentMemory>393216</currentMemory> <vcpu>1</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='cdrom'> <target dev='hdc' bus='ide'/> <readonly/> </disk> <disk type='file' device='disk'> <source file='/raid/kvm-images/apps.qcow2'/> <target dev='vda' bus='virtio'/> </disk> <interface type='bridge'> <mac address='54:52:00:27:5e:02'/> <source bridge='br0'/> <model type='virtio'/> </interface> <interface type='bridge'> <mac address='54:52:00:40:cc:7f'/> <source bridge='br1'/> <model type='virtio'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> </devices> </domain> Along with the rest of my firewall rules, the firewalling script includes this command to pass packets destined for a KVM guest: # Allow bridged packets to pass (for KVM guests). iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT (Not applicable to this question, but a side-effect of my bridging configuration appears to be that I can't ever shut down cleanly. The kernel eventually tells me "unregister_netdevice: waiting for br1 to become free" and I have to hard reset the system. Maybe a sign I've done something dumb?)
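
    For requirement 3 (bridged guest traffic not traversing the host firewall), the relevant switch is the bridge-netfilter sysctl: when it is enabled, frames crossing br0/br1 are fed through the host's iptables FORWARD chain, which is why the physdev ACCEPT rule is needed at all. A sketch of turning it off (persist the values in /etc/sysctl.conf; the keys exist once the bridge module is loaded):

        sysctl -w net.bridge.bridge-nf-call-iptables=0
        sysctl -w net.bridge.bridge-nf-call-ip6tables=0
        sysctl -w net.bridge.bridge-nf-call-arptables=0

    This does not by itself explain the guest seeing the host as the traffic source, though; with plain bridging, source addresses should be preserved, so that symptom is worth re-checking with tcpdump inside the guest.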

  • fedora 11 server won't boot from SATA disk, won't boot from CD, BIOS configuration problems

    - by Tom
    Hi all, Yesterday our fc11 file/print server didn't boot, and had stopped on the BIOS page with a configuration problem. (with a distinct lack of foresight) I reset the BIOS settings to default without recording the message and booted the server. The server ran until it was to be booted this morning, and it was failing to mount the root partition from the SATA disk. It also failed to boot from a known good diagnostics CD. After a few more tries, it now fails part way through the Phoenix - AwardBIOS screen where it is listing the SATA/IDE devices, and it is showing garbage for the identity of one of the disks, which should actually be "none" It looks like the motherboard has gone kaput. The motherboard is an EVGA NF790i, are there any diagnostic tools that I can use to determine this? (as I would prefer to not send the motherboard back, only to discover that it is the RAM or the CPU) ps I can't get it to boot from the memTest disk, so I can't run that diagnostic. Thanks!

  • Virtualmin Configuration

    - by Allen
    I am trying to get Virtualmin set up and have reached a point where my noobish sysadmin skills aren't getting the job done. This is the message I get now when I try to refresh the configuration of Virtualmin: "BIND DNS server is installed, and the system is configured to use it. However, the default master DNS server XXXXXX is not a fully qualified domain name. Sendmail is only accepting SMTP connections on the following ports: 127.0.0.1 port smtp. Email from other systems on the Internet will not be accepted. This can be changed in the Sendmail Mail Server module." Please advise what I need to do to get Sendmail configured properly. Thanks!
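
    The Sendmail warning means the MTA daemon is bound to the loopback address only, which is how the stock packages ship. The usual fix is to relax the DAEMON_OPTIONS line in sendmail.mc and rebuild the config; a sketch, assuming the standard /etc/mail layout (the exact Name= tag may differ on your distribution):

        # /etc/mail/sendmail.mc - listen on all interfaces instead of only 127.0.0.1
        DAEMON_OPTIONS(`Family=inet, Name=MTA-v4, Port=smtp')dnl

        # rebuild sendmail.cf and restart
        cd /etc/mail && make
        /etc/init.d/sendmail restart

    The BIND warning is separate: it goes away once the system has a fully qualified hostname (set in /etc/hostname and /etc/hosts) so Virtualmin can use an FQDN as the default master DNS server.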

  • FortiGate firewall configuration with /30 and /28 networks

    - by slyderc
    I have fiber coming in from a new ISP which is being handed off via Ethernet on a single physical port. I'm having doubts about how to approach the configuration on my FortiGate 200A firewall because I've been given a /30 containing the ISP's gateway and another /28 for external IPs I can use: x.y.76.12/30 (.13 is the GW) x.y.76.64/28 (public IP space) How do I configure the FG200A's WAN1 interface to be aware of the two networks? As I only have one physical ISP port, will I need to plug it into a switch to break-out two cables and use a DMZ port on the FG200A for setting up the /28? Thanks in advance for your insight!
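
    How the /28 gets configured depends on how the ISP routes it. If they route x.y.76.64/28 toward your /30 address, WAN1 only needs the /30 (x.y.76.14/30 with gateway .13) and the /28 addresses are used behind the firewall as VIPs, IP pools, or on a DMZ interface, so no switch is needed. If instead the /28 is presented on the same wire, FortiOS can hold it as a secondary address on WAN1. A rough, unverified sketch of that CLI (addresses are assumptions taken from the question):

        config system interface
            edit wan1
                set ip x.y.76.14 255.255.255.252
                set secondary-IP enable
                config secondaryip
                    edit 1
                        set ip x.y.76.65 255.255.255.240
                    next
                end
            next
        end

    Asking the ISP which of the two arrangements they use is the first step; it determines whether the /28 ever touches WAN1 at all.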

  • Oracle 11g network configuration

    - by Kylo
    I installed Oracle 11g Enterprise Edition on my Windows 7 Pro machine. My problem is that I cannot log into the database from another host on the local network. When I connect to the database using Oracle SQL Developer, everything is OK as long as I specify 'localhost' in the connection configuration. However, when I change it to '192.168.0.190', which is my host's IP address, I get 'The Network Adapter could not establish the connection'. I get the same error when logging in from another host on the local network. What is the problem?
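
    This is typically the listener being bound to localhost. If listener.ora was generated with HOST = localhost, the listener only accepts connections on the loopback interface, which matches these symptoms exactly. A sketch of listener.ora pointing at the LAN address instead (path and listener name assume a default single-listener install):

        # %ORACLE_HOME%\network\admin\listener.ora
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.190)(PORT = 1521))
            )
          )

    After editing, restart the listener (lsnrctl stop, then lsnrctl start) and make sure Windows Firewall allows inbound TCP 1521 from the LAN.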

  • Postfix sendmail -f configuration

    - by William
    I have Postfix installed on two servers. One of them writes e-mail (satellite) and the other one delivers the e-mails (smarthost). When I write e-mail from the satellite server I'm using the sendmail command. My problem is that when e-mail arrives, the Return-Path is set to user@hostname, where user is the user that is running sendmail and hostname is the server's hostname. If I use the -f parameter with sendmail I can change that, but I'm hoping there is a way to do it in a configuration file for Postfix. Is this possible, or should I just deal with having to configure all my software to add the -f argument? Thanks in advance.
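
    Postfix can rewrite the envelope sender itself, so the satellite does not need -f everywhere; sender_canonical_maps is the usual mechanism and it affects the Return-Path (and matching headers). A sketch on the satellite, with the user and addresses as placeholders:

        # /etc/postfix/main.cf
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical - map the local user@host to the address you want
        www-data@satellite.example.com    noreply@example.com

        # activate
        postmap /etc/postfix/sender_canonical
        postfix reload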

  • MX record configuration for hosted email?

    - by Paul Sanwald
    I am helping a friend with his website, and am having a problem with his webmail configuration, which I suspect is due to a misconfigured MX record. His domain is registered and hosted by Hostmonster, and they have a webmail option. The current records are: an A record, webmail -> 12.345.789.101 (TTL 14400); a CNAME, mail -> webmail.d.com (TTL 14400); and an MX record, @ -> mail.d.com, priority 0 (TTL 14400). I've created an email account on Hostmonster, [email protected]; however, when I send an email to this account, it appears to be routed to /dev/null. I know that it isn't literally, but I am unsure of the steps I can take to track this down. I've tried using dig, but am unsure where to start. How can I track down where this email is being routed to?
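
    One likely culprit in those records: the MX target mail.d.com is a CNAME (pointing at webmail.d.com), and MX targets are supposed to be hostnames with plain A records; many mail servers refuse or misroute mail when the MX resolves to a CNAME. A sketch of corrected zone entries (the IP is a placeholder for the webmail host's real address, which can be reused for mail):

        mail     IN  A       203.0.113.40
        @        IN  MX  0   mail.d.com.

    To trace delivery, "dig MX d.com" shows what the outside world sees, and "dig A mail.d.com" confirms the target resolves directly to an address rather than through a CNAME.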
