Search Results

Search found 15400 results on 616 pages for 'log4net configuration'.

Page 78/616 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • BIND9 Forwarding by view

    - by Triztian
    Hi, I think this is a simple issue: I'd like to allow forwarding only for certain IPs on the LAN. For example, I have two ACLs:

      acl "office1" {
          192.168.1.15;    // with Internet access
      };
      acl "production" {
          192.168.1.101;   // no Internet access
      };

    I know there are probably more efficient ways to restrict Internet access, but at the moment this is what I'd like to try. Here's what I've tried in named.conf.local:

      // Include my acl definitions
      include "/etc/bind/acls.conf";

      view "no-internet" {
          match-clients { production; };
          include "/etc/bind/named.conf.default-zones";
          zone "localdomain.com" {
              type master;
              file "/etc/bind/db.localdomain.com";
          };
          zone "1.168.192.in-addr.arpa" {
              type master;
              file "/etc/bind/db.192.168.1";
          };
      };

      view "internet" {
          match-clients { office1; };
          include "/etc/bind/named.conf.default-zones";
          forwarders {
              201.56.59.14;   // made up
              201.56.59.15;   // made up
          };
          zone "localdomain.com" {
              type master;
              file "/etc/bind/db.localdomain.com";
          };
          zone "1.168.192.in-addr.arpa" {
              type master;
              file "/etc/bind/db.192.168.1";
          };
      };

    As you can see, I want localdomain.com defined for every computer on my network, and Internet lookups forwarded for the office computers but not for the ones on the production floor. I've modified my conf file, yet the IP in the "no-internet" ACL can still resolve external domains, even though I've rebooted the computer, flushed the DNS with ipconfig /flushdns, and set my DNS server as the only one. Why is this still happening? Thanks in advance.
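    One hedged hypothesis, with a sketch (not from the original question): forwarders only change how recursion is performed, not whether it is allowed, so the "no-internet" view can still resolve external names through the root hints pulled in by the default zones. Disabling recursion in that view keeps the local zones answering while blocking external lookups:

      view "no-internet" {
          match-clients { production; };
          recursion no;    // local master zones still answer authoritatively,
                           // but external names will no longer resolve
          include "/etc/bind/named.conf.default-zones";
          // ... local zones as above ...
      };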

    Read the article

  • Why does the compiz window decorator crash Unity?

    - by user32509
    I used to run Unity 2D on my work machine. Now I am trying to use Unity. Everything looks fine except for the missing window decoration. I have installed the Compiz settings manager, and every time I activate the "Window Decoration" plugin, Unity "crashes": the launcher bar disappears along with the top panel. unity --reset or unity --replace does nothing. The command configured for the window decorator is /usr/bin/compiz-decorator. Maybe I need to specify another window decorator?
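    A hedged troubleshooting sketch (assuming the stock Ubuntu packages, where /usr/bin/compiz-decorator dispatches to gtk-window-decorator on GNOME): run the decorator by hand and watch its output before re-enabling the plugin:

      gtk-window-decorator --replace &
      # errors print to the terminal; if the decorator stays up, borders should
      # appear, and if it dies, its output is the actual crash cause to chase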

    Read the article

  • Which DNS settings are used when setting up a server

    - by Saif Bechan
    I have a server and want to run my own nameserver service. I have set it up already and it works now, but I do not know where the exact settings are stored. On my server I use Plesk; when I edit DNS settings there, I think they are stored in named.conf (named/BIND is installed on the server). I also have a panel from my registrar, which is separate from my server. In both places I can add the normal MX, A, CNAME, etc. records. Now, where is the best place to keep these settings? Currently I have the same records in both places, on the server and in the registrar panel. Am I correct to just add all the records in the registrar panel, remove everything from Plesk, and simply not run DNS on my server, since it is already handled in the registrar panel? Or should I add the records in both places?
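    A hedged way to see who is actually answering (dig is standard BIND tooling; yourdomain.com is a placeholder): compare the registrar's published delegation with your own server's answers:

      dig NS yourdomain.com                  # which nameservers does the world use?
      dig @localhost yourdomain.com ANY      # what does the BIND on this server say?

    If the published NS records point only at the registrar's nameservers, the records kept in Plesk are never consulted by the outside world.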

    Read the article

  • Nginx config with try_files and rewrite: precedence?

    - by Penegal
    Good morning, everybody. Firstly, this question may have been asked already, but I searched Server Fault for about 15 minutes without finding it, so if it was, please accept my apologies. I'm trying to rationalize my Nginx server config, but there is a rather basic question I couldn't answer, even with extensive web searching (unless I totally botched my search). Here is the question: is try_files evaluated before or after rewrite? Asked differently: do I have to put try_files after all rewrite directives, or is the Nginx config parser smart enough to evaluate try_files after all relevant rewrite directives? The link with the config rationalization is that the answer will change the organisation of the config: if the file order of try_files and rewrite changes the behaviour, it will force me to disperse my includes, some of them containing try_files and others containing rewrite, because I also have rewrite directives directly in nginx.conf. Hoping you can help me, regards.
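    For what it's worth, a hedged sketch of nginx's documented phase order: rewrite directives run in the rewrite phase, and try_files is only consulted later, in the content phase, so within one location the textual order of the two does not matter:

      location / {
          # runs first (rewrite phase); "last" restarts location matching
          # with the rewritten URI
          rewrite ^/old/(.*)$ /new/$1 last;
          # consulted afterwards, only for requests no rewrite sent elsewhere
          try_files $uri $uri/ /index.html;
      }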

    Read the article

  • Different user group cannot upload files on the server

    - by Dallal
    I have a CentOS server running in Thailand, and I'm in Canada. The guy at the computer center who set up the server for me doesn't really understand much about Linux and left me an issue to solve myself. I just moved from a Mac server to a Linux server, and the first problem I'm facing is:

      `file name` has failed to upload due to an error
      The uploaded file could not be moved to `location name`

    From experience I know these problems are all about permissions. So I went ahead and checked my whole folder tree and found that everything in it is owned by myusername:mygroupname, while the httpd processes on the server run as the default apache:apache. My question: how can I put my user in the same group as Apache, so that I no longer have problems uploading or changing files, but without affecting other users on the same server? I hold an administrator account, not the root account, though I can change things under the server root without problems. When I was with godaddy.com there was never any problem with permissions, and I wish I knew how they configure that :(
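    A hedged sketch of the usual fix (the upload path and user name are placeholders): put the upload directory in Apache's group and make it group-writable, which touches nothing owned by other users:

      usermod -a -G apache myusername             # let your user work with apache's files
      chgrp -R apache /var/www/html/uploads       # hand the upload dir to apache's group
      chmod -R g+w /var/www/html/uploads          # and allow group writes
      # alternatively, give apache ownership of just that directory:
      # chown -R apache:apache /var/www/html/uploads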

    Read the article

  • Upgrade PHP v5.3.3 to v5.3.4

    - by Ty01
    I currently have PHP v5.3.3 installed and configured. Everything is working perfectly, but I would like to keep PHP up to date and upgrade to v5.3.4. Can someone please describe the usual upgrade process for PHP when compiling manually? For example: is it just as easy as downloading the newest source, uncompressing it, compiling it using the same (or comparable) options that were used on the last version? Is there anything that has to be removed or changed from the previous version installed? I'm clueless. Please help!
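    In broad strokes, yes. A hedged sketch of a typical point-release rebuild (paths and configure options are illustrative, not from the original setup):

      php -i | grep "Configure Command"     # recover the options used for 5.3.3
      tar xjf php-5.3.4.tar.bz2 && cd php-5.3.4
      ./configure --prefix=/usr/local/php --with-apxs2=/usr/bin/apxs   # paste the old options here
      make && make test
      make install                          # installs over the old tree; restart Apache after

    Nothing normally has to be removed first; your existing php.ini is left alone, though it is worth diffing it against the new php.ini-production for new directives.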

    Read the article

  • Apache routing vhosts to /var/www

    - by FHannes
    One user at my site has reported that he reaches the content at /var/www when browsing to any of the vhosts on my server. As far as I'm aware, my Apache config does not contain a document root that references this folder. On top of that, this user seems to be the only one experiencing the issue. According to his ISP, the issue isn't caused by them, yet on his mobile connection he can access the site fine. When browsing to my server's IP, he also receives the correct content from the default vhost. What are the possible causes of this issue, and how can I make it stop? I've explored pretty much every option I could think of.
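    A hedged first diagnostic (standard Apache and cURL tooling; the host name is a placeholder): dump the parsed vhost map and replay the affected user's request yourself:

      apachectl -S                                        # shows the default vhost and which names map where
      curl -H "Host: www.example.com" http://SERVER_IP/   # simulate a request that should hit a named vhost

    If the curl request returns the right content, the server side is fine, and the user's resolver (hosts file, ISP DNS cache, or an intercepting proxy) is likely handing him a wrong name-to-address mapping.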

    Read the article

  • Looking for a product configurator

    - by Netsrac
    I am looking for a product configurator for products with high complexity. The main goal is to allow a salesperson to configure the product in a correct, working manner. The product is a combination of hardware and software options. The options have dependencies (option A needs B and C) and can also exclude each other, and the performance requirements the software places on the hardware need to be considered, so some rules need to be definable. Does anybody know a tool (preferably open source) that does this job? Thanks for your help.

    Read the article

  • Problem accessing a PHP page

    - by EquinoX
    So I have an info.php page in the folder /var/www/nginx-default, but when I go to my-ip-address/info.php it always redirects me to this site: http://www.iana.org/domains/example/ Is this because I have a virtual host that I called example? Here is my config for the example website:

      server {
          listen 80;
          server_name www.example.com;
          rewrite ^/(.*) http://example.com/$1 permanent;
      }

      server {
          listen 80;
          server_name example.com;
          access_log /var/www/example.com/logs/access.log;
          error_log /var/www/example.com/logs/error.log;
          location / {
              root /var/www/example.com/public/;
              index index.html;
          }
      }

    The way I access this site is by changing /etc/hosts on my MacBook so that example.com maps to my server's IP address. However, now when I request xxx.xxx.xxx.xxx/info.php, it redirects me to the site mentioned above.
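    What likely happens (a hedged reading of the config above): a request for the bare IP matches neither server_name, so nginx falls back to the first server block, whose rewrite redirects the browser to http://example.com/, and on the public Internet example.com is IANA's reserved example domain. A sketch of a catch-all block for such requests (standard nginx directives, but untested here):

      server {
          listen 80 default_server;     # handles requests matching no server_name
          server_name _;
          root /var/www/nginx-default;
          index index.php index.html;
      }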

    Read the article

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

      rooturl/railsapp
      rooturl/nodejsapp
      rooturl/phpshop
      rooturl/phpblog

    I am unsure of the ideal strategy. Some approaches I have seen or thought about:

    1. Multiple location rules: this seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    2. Isolating apps by backend internal port: is this possible? Each port routing to its own config, so config is isolated and can be tailored to the app's requirements?
    3. A reverse proxy: I am a little ignorant of how this works; is this what I need to research, and is it actually option 2 above? Online help always seems to proxy to another server, e.g. Apache. (A sketch of this approach follows below.)

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
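    A hedged sketch of approach 3, which is essentially approach 2 with nginx itself as the front end (ports, paths, and names invented for illustration): each app listens on a loopback port with its own bespoke config, and one public server block maps a sub-URI prefix to each:

      server {
          listen 443 ssl;
          server_name apps.example.com;
          ssl_certificate     /etc/nginx/ssl/apps.crt;   # one certificate for everything
          ssl_certificate_key /etc/nginx/ssl/apps.key;

          # trailing slashes make proxy_pass strip the prefix before forwarding
          location /railsapp/  { proxy_pass http://127.0.0.1:3000/; }
          location /nodejsapp/ { proxy_pass http://127.0.0.1:8080/; }
          location /phpshop/   { proxy_pass http://127.0.0.1:8081/; }
          location /phpblog/   { proxy_pass http://127.0.0.1:8082/; }
      }

    Each backend keeps its rewrite and access rules in its own server block (or its own app server), so the requirements never collide in one location tree. The usual caveat: the apps must generate links that work under a sub-URI prefix.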

    Read the article

  • Internet connection fails in Ubuntu on VirtualBox when virtual machine is created from "Import appliance"

    - by Sanoj
    I have installed Ubuntu Server 9.10 in a virtual machine in VirtualBox, then made a copy/clone and exported it with "Export appliance" so I can create many cloned virtual machines. But when I import an appliance, everything seems to be fine with the Ubuntu guest except that it can't connect to the Internet and doesn't get an IP address. The machine is used in bridged mode, and changing to NAT mode doesn't help either. The machine that I cloned seems to work fine and gets an IP address. How do I fix this? What am I doing wrong?
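    A common cause on cloned Ubuntu guests (a hedged guess, not from the original thread): the export/import gives the clone a fresh MAC address, but udev still has the old MAC pinned to eth0 in its persistent-net rules, so the new interface comes up as an unconfigured eth1. Clearing the cached rule and rebooting lets udev regenerate it:

      sudo rm /etc/udev/rules.d/70-persistent-net.rules   # regenerated on next boot
      sudo reboot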

    Read the article

  • Emacs doesn't use ~/.ssh/config when accessing files on a remote machine

    - by Yotam
    I have a fresh install of Arch Linux. I've installed Emacs from the repos, and my home directory is mounted from a separate partition. I have old settings in my ~/.ssh/config, along with authentication keys I've used regularly before. Now, when I try to connect to a remote machine using Emacs, Emacs asks for my password and uses the wrong username. Clearly, Emacs doesn't read my config file. When I ssh or scp directly to the machine, things work fine. What do I need to update?
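    If the remote files are opened through TRAMP, a hedged suggestion: ~/.ssh/config is only honoured when TRAMP's method shells out to the real ssh/scp binaries, so explicitly forcing such a method in the init file may be all that's needed:

      ;; hedged sketch for ~/.emacs: make TRAMP invoke the system ssh,
      ;; which in turn reads ~/.ssh/config for usernames and keys
      (setq tramp-default-method "ssh")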

    Read the article

  • What is `/etc/hostname` used/required for?

    - by static
    I found my IP address in /etc/hostname, so I deleted it; now each time I use sudo I get a message and a system email saying sudo: unable to resolve host (none), or, if myhostname is saved in /etc/hostname, sudo: unable to resolve host (myhostname). I know the file is used to set the system's hostname via /etc/init.d/hostname.sh during the boot process, but what is this setting actually required for (programs, services, daemons, ...)? What if I set it to localhost, so the sudo: unable to resolve host message no longer appears; is that still OK? UPD1: I found some information at http://jblevins.org/log/hostname, but it is all about how to use it, not about why it is required.
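    The sudo complaint is really a name-resolution check: sudo resolves the name found in /etc/hostname, so that name must also appear in /etc/hosts. A hedged sketch of the conventional Debian/Ubuntu pairing (myhostname is a placeholder):

      # /etc/hostname
      myhostname

      # /etc/hosts
      127.0.0.1   localhost
      127.0.1.1   myhostname

    Setting the hostname to localhost also silences the error, but daemons that log or identify themselves by hostname (mail, syslog, cluster software) would then all report the same unhelpful name.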

    Read the article

  • Should Production Windows Web Servers (IIS & SQL) be in a domain?

    - by tlianza
    We have a few web servers and a few database servers. To date, they've been standalone machines that are not part of a domain; the web servers don't talk to each other, and the web servers talk to the database servers via SQL auth. My concerns with putting the machines in a domain together were added complexity (it's one more "thing" running, and doing "things" that could go wrong) and risk (if a domain controller fails, am I now putting other machines at risk?). However, in certain scenarios it does seem convenient for them to be on a domain, sharing credentials. For example, if I want to give the "services" account on one machine access to another machine (because Remote Desktop craps out), I need to go in and assign privileges on multiple machines, something that I believe Active Directory and domain accounts are meant to simplify. My question: I'm sure there are things I'm not considering here. Is there a best practice?

    Read the article

  • How do I configure Apache to accept a client SSL certificate (if present) or authenticate via LDAP (if the cert is absent)?

    - by jmwood051
    I have an Apache server that serves Mercurial repositories, and it currently authenticates using LDAP credentials. I want to permit a single user (to start with) to use an SSL client certificate, with all remaining users still able to use the LDAP credential authentication method. I have looked through Stack Overflow and run wider (Google) searches, but cannot find information or guidance on how to set this up.
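    A hedged sketch for Apache 2.4 (the location, realm, and LDAP URL are invented; untested): make the client certificate optional at the TLS layer, then accept either a verified certificate or valid LDAP credentials:

      <Location /hg>
          SSLVerifyClient optional
          AuthType Basic
          AuthName "Mercurial repositories"
          AuthBasicProvider ldap
          AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
          <RequireAny>
              # a certificate verified against SSLCACertificateFile suffices on its own...
              Require expr %{SSL_CLIENT_VERIFY} == 'SUCCESS'
              # ...otherwise the browser is prompted for LDAP credentials
              Require valid-user
          </RequireAny>
      </Location>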

    Read the article

  • WCF configuration for WebHttpBinding (RESTful) supporting both HTTP and HTTPS

    - by KSS
    We are using AJAX cascading-dropdown and autocomplete functionality, with RESTful WCF services providing the data. With one (non-secured) endpoint everything was working fine, until we tried the same web page over HTTPS. Our web application needs to support both. Out of the very few articles/blogs on this issue, I found two that apply to my requirements:

      1. http://blog.abstractlabs.net/2009/02/ajax-wcf-services-and-httphttps.html
      2. _http://www.mydotnetworld.com/post/2008/10/18/Use-a-WCF-Service-with-HTTP-and-HTTPS-in-C.aspx

    I followed the same pattern and added two endpoints, assuming WCF would pick the appropriate endpoint by looking at the HTTP or HTTPS protocol. It worked like a charm on my dev machine (XP, IIS 5) and on one Server 2003 R2 (IIS 6), but it did not work on the production Server 2003 (IIS 6). The website in IIS is exactly the same (including permissions, etc.). The error it throws:

      Error 500 (Could not find a base address that matches scheme https for the
      endpoint with binding WebHttpBinding. Registered base address schemes are [http].)

    Here's the sample configuration (ignore typos):

      <system.serviceModel>
        <bindings>
          <webHttpBinding>
            <binding name="SecureBinding">
              <security mode="Transport"/>
            </binding>
          </webHttpBinding>
        </bindings>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SearchServiceAspNetAjaxBehavior">
              <enableWebScript />
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        <services>
          <service name="SearchService">
            <endpoint address="" behaviorConfiguration="SearchServiceAspNetAjaxBehavior"
                      binding="webHttpBinding" contract="SearchServiceContract" />
            <endpoint address="" behaviorConfiguration="SearchServiceAspNetAjaxBehavior"
                      binding="webHttpBinding" bindingConfiguration="SecureBinding"
                      contract="SearchServiceContract" />
          </service>
        </services>
      </system.serviceModel>

    Any help on this is highly appreciated. Thanks, KSS

    Read the article

  • Including configuration files while compiling a Flex application with MXMLC

    - by Daniel
    Hello there. I'm using:

      - Flex SDK 3.5.0
      - Parsley 2.2.2
      - Flash Builder 4

    Down in my src folder (which is configured as part of the source path in Flash Builder), I have a logging.xml which I configure via Parsley:

      FlexLoggingXmlSupport.initialize();
      XmlContextBuilder.build("com/company/product/util/log/logging.xml");

    When I run my application through Flash Builder, the XmlContextBuilder locates logging.xml (the implementation is a regular URLLoader one). When I compile my application using MXMLC (whether via Ant or the command line) and then run the SWF, I get the following error:

      Cause(0): Error loading com/company/product/util/log/logging.xml:
      Error in URLLoader - cause: Error #2032: Stream Error.
      URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml

    Here is the MXMLC task in Ant:

      <mxmlc file="${product.src.dir}/com/company/product/view/Main.mxml"
             output="${product.target.dir}/${product.release.filename}"
             keep-generated-actionscript="false">
        <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" />
        <!-- source paths -->
        <source-path path-element="${FLEX_HOME}/frameworks" />
        <compiler.source-path path-element="${product.src.dir}" />
        <compiler.source-path path-element="${product.locale.dir}/{locale}" />
        <compiler.library-path dir="${product.basedir}" append="true">
          <include name="libs" />
        </compiler.library-path>
        <warnings>false</warnings>
        <debug>false</debug>
      </mxmlc>

    And here is the command line:

      mxmlc.exe -output "C:\temp\Rap.swf"
        -load-config "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks\flex-config.xml"
        -source-path "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks" C:\workspace\folder01\product\src C:\workspace\folder01\product\locale\en_US
        -library-path+=C:\workspace\folder01\product\libs
        -file-specs C:\workspace\folder01\product\src\com\company\product\view\main.mxml

    Now, perhaps I don't get this correctly, but as far as I understand, the SWF should be compiled with all of the resources in the paths I give MXMLC as source paths. For some reason the XML file is not compiled into the SWF, hence the relative path the XmlContextBuilder uses isn't located successfully. I could not find any argument to MXMLC that might solve this. I tried using the -dump-config option with Flash Builder's compiler and giving that configuration to MXMLC, but it didn't work either. I tried providing the XmlContextBuilder with an absolute path; that worked fine when I compiled with MXMLC via Ant, but still didn't work when I used MXMLC on the command line. I'd be happy to be enlightened here regarding all subjects: using MXMLC, accessing resources with relative paths, configuring logging in Parsley, etc. Many thanks in advance, Daniel
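    One hedged observation: mxmlc only bakes in assets that are explicitly embedded; XmlContextBuilder's URLLoader fetches logging.xml over a URL at runtime, so the file has to exist next to the SWF in the output tree. A sketch of an Ant copy step reusing the build's own properties:

      <!-- hedged sketch: ship the runtime-loaded XML alongside the SWF -->
      <copy file="${product.src.dir}/com/company/product/util/log/logging.xml"
            todir="${product.target.dir}/com/company/product/util/log" />

    Flash Builder appears to do the equivalent automatically by copying non-embedded source assets into its output folder, which would explain why it only works there.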

    Read the article

  • Server/configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi. (First of all, thanks for your attention, and sorry for my bad English.) Also, this is probably not a programming error; I think it is an error in some configuration of the server or something else, but I don't know what. I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts about 10,000 rows into a MySQL database, getting the data from an XML file. At first it ran on a shared server (Hostgator is our hosting provider) and worked fine, with no trouble, but five months later an error appeared: the process just dies for no reason. The script had only sent and inserted about 700 rows into the table, the process didn't show any warning or error, nothing appears in the error logs, and I didn't make any change to the script. Hostgator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but on the dedicated server the problem just got worse: the script dies after sending and inserting only 40 to 50 rows.

    Some information about this error:

      - The shared server runs Red Hat 4.1.2-46; the dedicated server runs CentOS 5.4.
      - I have commented out the line that sends the SMS, and the problem remains. On the shared server the script now dies after inserting about 2,500 rows (better, but we didn't change anything there); on the dedicated server it still dies after about 40 rows.
      - Before it dies, the script turns into a zombie process, and we don't know why.
      - Memory usage appears to be 0.3%, and CPU usage 0.7% to 1%.
      - I have raised PHP's memory limit to 128 MB, and even to -1 (no limit), but the problem remains.
      - We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem.
      - I am using mysqli to connect from PHP to MySQL.
      - Hostgator reports that they haven't made any change or update to the servers.

    What could the problem be? What should I do, and what should I look for? Is there something in the logic I am missing? What steps should I follow when managing and debugging problems with processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me. Thanks!!! Bye!!! :)
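    Since the question asks for a general procedure, a hedged diagnostic sketch (standard Linux/PHP tooling; the MySQL log path is a guess for CentOS):

      dmesg | tail -50                       # OOM-killer or segfault messages?
      strace -f -p <pid> -o trace.log        # attach to the running script; the tail of
                                             # trace.log shows its last system calls
      php -d display_errors=1 -d error_reporting=E_ALL script.php   # rerun with full error output
      tail -f /var/log/mysqld.log            # anything aborting on the MySQL side?

    A process that shows as a zombie has already exited and is merely waiting for its parent to reap it, so the interesting question is what made PHP exit; the strace tail usually says.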

    Read the article

  • Installing a package with XML-based configuration in Python

    - by saminny
    Hi, I am planning to write a generic Python module for installing a package. The script should retrieve the package from a remote machine or locally and install it for a given host and user. However, changes need to be made to the package files based on the host, user, and given environment. My approach is to use XML to describe the changes to be made per environment: the script first extracts the package to the user directory and then, driven by an XML configuration file, replaces variable values in files under the package directory. The XML would look something like this:

      <package version="1.3.3">
        <environment type="prod">
          <file dir="d1/d2" name="f1">
            <var id="RECV_HOST" value="santo" />
            <var id="RECV_PORT" value="RECV_PORT_SERVICE" type="service" />
            <var id="JEPL_SERVICE_NAME" value="val_omgact" />
          </file>
          <file dir="d4/d3/s2" name="f2">
            <var id="PRECISION" value="true" />
            <var id="SEND_STATUS_CODE" value="323" />
            <var id="JEPL_SERVICE_NAME" value="val_omgact" />
          </file>
        </environment>
        <environment type="qa">
          <file dir="d1/d2" name="f1">
            <var id="RECV_HOST" value="test" />
            <var id="RECV_PORT" value="1444" />
            <var id="JEPL_SERVICE_NAME" value="val_tsdd" />
          </file>
          <file dir="d4/d3/s2" name="f2">
            <var id="PRECISION" value="false" />
            <var id="SEND_STATUS_CODE" value="323" />
            <var id="JEPL_SERVICE_NAME" value="val_dsd" />
          </file>
        </environment>
      </package>

    What are your thoughts on this approach? Is there an existing Python module, package, or script I could use for this purpose? It seems fairly generic and could be used for any installation. Thanks! Sam
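    On the "existing module" question: nothing in the standard library does the whole job, but the parsing half is a few lines of ElementTree. A hedged sketch (element and attribute names taken from the XML above; the function name is invented):

      import xml.etree.ElementTree as ET

      def substitutions(xml_path, env_type):
          """Map (dir, filename) -> {var_id: value} for one environment."""
          root = ET.parse(xml_path).getroot()
          out = {}
          for env in root.iter("environment"):
              if env.get("type") != env_type:
                  continue
              for f in env.findall("file"):
                  key = (f.get("dir"), f.get("name"))
                  out[key] = {v.get("id"): v.get("value") for v in f.findall("var")}
          return out

      # usage: walk the extracted package and replace each var id with its value
      # print(substitutions("package.xml", "qa"))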

    Read the article

  • PHP, PEAR, and oci8 configuration

    - by zack_falcon
    I'll make this quick. I installed Oracle 11g (with the appropriate database, users, etc.), Apache 2.4.6, and PHP 5.5.4 on a Fedora 19 system, and I want to connect PHP to Oracle. What I really wanted was to download MDB2_Driver_oci8, which I thought would be easy, but before I can do that, PHP needs the oci8 extension enabled. So here's what I did:

      1. Tried to install oci8 via pecl install oci8.
      2. When that didn't exactly work the first few times, I figured out that, for some reason, I needed "Development Tools", via yum groupinstall "Development Tools".
      3. Then I figured out that stock PHP isn't enough to build oci8; it needs php-devel, so I had to install that too, via yum install php-devel.
      4. Then I finally got to install oci8. It asked for the Oracle directory, and that was that. But it said the following:

            Configuration option 'php_ini' is not set to php.ini location
            You should add 'extension=oci8.so' to php.ini

    Next:

      1. I did a locate oci8.so and found it in /usr/lib64/php/modules/.
      2. I added the line above to php.ini.
      3. I checked the usual phpinfo() test page: no mention of OCI8. Uh-oh.
      4. Running both php -i and php -m listed oci8 as one of the modules. Weird.

    In desperation, I went ahead and downloaded MDB2_Driver_oci8; maybe that would fix things. Nope. When I loaded my PHP web page, it returned:

      Error message: extension oci8 is not compiled into PHP

    as well as:

      MDB2 error: not found

    Strange. And then I decided to check the error logs:

      PHP Startup - unable to load dynamic library '/usr/lib64/php/modules/oci8.so' -
      libclntsh.so.11.1: cannot open shared object file: No such file or directory in Unknown on line 0

    And now I'm stuck. I tried editing php.ini and found that extension_dir was commented out; I put it back in, which only seemed to break things. Things of note:

      - I followed this guide (link) on how to configure PHP and install oci8.
      - ./configure --with-oci8 doesn't work; Fedora says no such directory.
      - As both the web page files and the actual server reside on the same PC, I did not install the Oracle client files.
      - extension_dir is commented out by default in php.ini.

    This is just one of my problems in a long line of problems concerning the replication of an already existing and working, but dying, setup. It seems that whenever I want to solve a problem, I have to do X first, and by doing X, I uncover another problem, which I have to solve by doing Y, which has its own problems, etc. Any help would be much appreciated. Thanks.
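    The last log line is the real clue (a hedged sketch follows; the Oracle home path is a guess, so adjust it to yours): oci8.so itself loads, but the dynamic linker cannot find Oracle's libclntsh.so.11.1. Registering Oracle's lib directory with the linker and restarting Apache usually clears it:

      echo /u01/app/oracle/product/11.2.0/dbhome_1/lib > /etc/ld.so.conf.d/oracle.conf
      ldconfig                      # rebuild the linker cache
      systemctl restart httpd       # Fedora 19 is systemd-based

    This also explains why php -m works while Apache fails: the shell environment can carry LD_LIBRARY_PATH from the Oracle setup, while the Apache service does not.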

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >