Search Results

Search found 15439 results on 618 pages for 'wls configuration'.

Page 56 of 618

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when the continuous integration server executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
        issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
        subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
        err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed . . . message not sent.

    I must be missing some configuration, any ideas?
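
    Error 20 ("unable to get local issuer certificate") usually means nail cannot find a CA bundle when it runs under the CI server's environment, which often differs from an interactive shell (different HOME, PATH or library paths). Below is a minimal sketch of a wrapper that pins the environment and points nail at a CA bundle; the bundle path, the account owner, and the ssl-ca-file/ssl-verify variables are assumptions to check against the nail/mailx build actually installed:

        #!/bin/bash
        # Hypothetical wrapper for the CI job; paths are examples, not taken from the question.
        export HOME=/home/buildbot                      # account whose ~/.mailrc defines the "myisp" account
        CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt    # assumed location of the system CA bundle

        echo "Build Worked!" \
          | nail -A myisp \
                 -S ssl-ca-file="$CA_BUNDLE" \
                 -S ssl-verify=warn \
                 -s 'Build Success' [email protected]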

    Read the article

  • How to disable auto-generated WCF configuration

    - by user351025
    Every time my program runs, Visual Studio adds the default configuration to my app.config file. On that run it works fine, but on the next run it actually tries to read the config. The problem is that the default configuration has errors: it adds the attribute "Address", but attributes are not allowed to have capitals, so it throws an exception. This means I have to remove the bad section on every run! I've tried to configure the .config by hand but it gives errors. Here is the code that I use to host the server:

        private static System.Threading.AutoResetEvent stopFlag = new System.Threading.AutoResetEvent(false);

        ServiceHost host = new ServiceHost(typeof(Service), new Uri("http://localhost:8000"));
        host.AddServiceEndpoint(typeof(IService), new BasicHttpBinding(), "ChessServer");
        host.Open();
        stopFlag.WaitOne();
        host.Close();

    Here is the client code that calls the server:

        ChannelFactory<IChessServer> scf;
        scf = new ChannelFactory<IService>(new BasicHttpBinding(), "http://localhost:8000");
        IService service = scf.CreateChannel();

    Thanks for any help.
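
    If the goal is a hand-written <system.serviceModel> section that matches the endpoint added in code, a minimal sketch looks like the following. The type and namespace names (ChessApp.Service, ChessApp.IService) are placeholders, not taken from the question, and this is not guaranteed to stop Visual Studio from regenerating its own section:

        <system.serviceModel>
          <services>
            <service name="ChessApp.Service">
              <host>
                <baseAddresses>
                  <add baseAddress="http://localhost:8000" />
                </baseAddresses>
              </host>
              <!-- relative address "ChessServer" under the base address -->
              <endpoint address="ChessServer"
                        binding="basicHttpBinding"
                        contract="ChessApp.IService" />
            </service>
          </services>
        </system.serviceModel>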

    Read the article

  • SCCM 2012 clients no longer being detected

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away.

    A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on.

    The last couple of days I have noticed that any newly deployed machine, or any machine installing the client, shows in the SCCM console as "No" client. The client machines show connected to a site, but Software Center shows "IT Organization" instead of our site like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Uninstalling, reinstalling, and repairing does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs except for the ClientMessaging.log, which continuously repeats this line:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks.

    Read the article

  • Why apache throws 403 on index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from source using the following commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
          --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
          --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
          --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
          --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to a request for http://localhost:8060/index.html (assuming that port 8060 is used). These are the relevant directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that Apache lives on a mounted partition (the default auto-mount configured while installing Ubuntu).

    Log files - access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
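
    The "(13)Permission denied" errors are the operating system refusing access, not Apache's Order/Allow rules: the user Apache runs as needs execute permission on every directory from the filesystem root down to htdocs, which is easy to lose on a separately mounted partition. A quick way to check and fix this is sketched below; the paths are the ones from the question, and the commands assume a standard Linux userland:

        # Show the permissions of every path component leading to the DocumentRoot
        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs

        # Give "other" users (including the apache/daemon user) traverse rights on each component
        chmod o+x /mnt /mnt/workspace /mnt/workspace/servers /mnt/workspace/servers/web \
                  /mnt/workspace/servers/web/apache-2.2.17
        chmod -R o+rX /mnt/workspace/servers/web/apache-2.2.17/htdocs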

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable JAR so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as HttpClient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types until the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration?

    EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, and a few that are specific to each of YouTube and Wrzuta. On top of this I have a basic GUI common to both scrapers, but a few Wrzuta- and YouTube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the GUI for each respective website. Can Eclipse look at each of these to determine which types are referenced?

    Read the article

  • Creating a custom Configuration

    - by Rob
    I created a custom configuration class, Report, and then another class called ReportsCollection. When I call ConfigurationManager.GetSection(), it doesn't populate my collection variable. Can anyone see any mistakes in my code? Here is the collection class:

        public class ReportsCollection : ConfigurationElementCollection
        {
            public ReportsCollection() { }

            protected override ConfigurationElement CreateNewElement()
            {
                throw new NotImplementedException();
            }

            protected override ConfigurationElement CreateNewElement(string elementName)
            {
                return base.CreateNewElement(elementName);
            }

            protected override object GetElementKey(ConfigurationElement element)
            {
                throw new NotImplementedException();
            }

            public Report this[int index]
            {
                get { return (Report)BaseGet(index); }
            }
        }

    Here is the Report class:

        public class Report : ConfigurationSection
        {
            [ConfigurationProperty("reportName", IsRequired = true)]
            public string ReportName
            {
                get { return (string)this["reportName"]; }
                //set { this["reportName"] = value; }
            }

            [ConfigurationProperty("storedProcedures", IsRequired = true)]
            public StoredProceduresCollection StoredProcedures
            {
                get { return (StoredProceduresCollection)this["storedProcedures"]; }
            }

            [ConfigurationProperty("parameters", IsRequired = false)]
            public ParametersCollection Parameters
            {
                get { return (ParametersCollection)this["parameters"]; }
            }

            [ConfigurationProperty("saveLocation", IsRequired = true)]
            public string SaveLocation
            {
                get { return (string)this["saveLocation"]; }
            }

            [ConfigurationProperty("recipients", IsRequired = true)]
            public RecipientsCollection Recipients
            {
                get { return (RecipientsCollection)this["recipients"]; }
            }
        }

        public class StoredProcedure : ConfigurationElement
        {
            [ConfigurationProperty("storedProcedureName", IsRequired = true)]
            public string StoredProcedureName
            {
                get { return (string)this["storedProcedureName"]; }
            }
        }

        public class StoredProceduresCollection : ConfigurationElementCollection
        {
            protected override ConfigurationElement CreateNewElement()
            {
                throw new NotImplementedException();
            }

            protected override ConfigurationElement CreateNewElement(string elementName)
            {
                return base.CreateNewElement(elementName);
            }

            protected override object GetElementKey(ConfigurationElement element)
            {
                throw new NotImplementedException();
            }

            public StoredProcedure this[int index]
            {
                get { return (StoredProcedure)base.BaseGet(index); }
            }
        }

    And here is the very straightforward code that reads the section:

        ReportsCollection reportsCollection =
            (ReportsCollection)System.Configuration.ConfigurationManager.GetSection("ReportGroup");
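
    For comparison, here is a minimal sketch of the pattern that usually works with ConfigurationManager.GetSection: the section class (not the element) derives from ConfigurationSection and exposes the collection as a property, the collection implements CreateNewElement/GetElementKey instead of throwing, and GetSection returns the section rather than the collection. Names such as ReportGroupSection are placeholders, and the element is trimmed to one property to keep the sketch short:

        using System.Configuration;

        public class ReportElement : ConfigurationElement
        {
            [ConfigurationProperty("reportName", IsRequired = true)]
            public string ReportName
            {
                get { return (string)this["reportName"]; }
            }
        }

        public class ReportsCollection : ConfigurationElementCollection
        {
            // Called by the runtime when it deserializes each <report> element
            protected override ConfigurationElement CreateNewElement()
            {
                return new ReportElement();
            }

            protected override object GetElementKey(ConfigurationElement element)
            {
                return ((ReportElement)element).ReportName;
            }
        }

        public class ReportGroupSection : ConfigurationSection
        {
            // Maps the <reports> element inside <ReportGroup> to the collection
            [ConfigurationProperty("reports")]
            [ConfigurationCollection(typeof(ReportsCollection), AddItemName = "report")]
            public ReportsCollection Reports
            {
                get { return (ReportsCollection)this["reports"]; }
            }
        }

        // app.config (placeholder assembly name):
        //   <configSections>
        //     <section name="ReportGroup" type="MyApp.ReportGroupSection, MyApp" />
        //   </configSections>
        //   <ReportGroup>
        //     <reports>
        //       <report reportName="Sales" />
        //     </reports>
        //   </ReportGroup>

        // Usage:
        //   var section = (ReportGroupSection)ConfigurationManager.GetSection("ReportGroup");
        //   foreach (ReportElement r in section.Reports) { /* ... */ }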

    Read the article

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them via aliases defined in httpd-vhosts.conf:

        #httpd-vhosts.conf
        <VirtualHost _default_:443>
            ServerName localhost:443
            Alias /develop/cpanel    "C:/webapps/develop/mil_catele_cp/public"
            Alias /develop/docs/tech "C:/webapps/develop/mil_catele_tech_docs/public"
            Alias /develop/docs      "C:/webapps/develop/mil_catele_docs/public"
            Alias /develop/auth      "C:/webapps/develop/mil_catele_auth/public"
            Alias /develop           "C:/webapps/develop/mil_web_dicom_viewer/public"
            DocumentRoot "C:/webapps/mil_catele_homepage"
        </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. The sites are available at, for example, localhost/develop/cpanel; the framework handles further routing.

    In Nginx I was only able to make one site available, by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others do, and that site was at localhost/. My next attempt:

        root C:/webapps;

        location /develop/auth {
            root C:/webapps/develop/mil_catele_auth/public;
            try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
        }

    Now when I enter localhost/develop/cpanel it gets to the correct index.php but can't find any resources (CSS, JS files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as they were on Apache. The root directive seems not to be working. Nginx handles PHP using FastCGI:

        location ~ \.(php|phtml)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param APPLICATION_ENV production;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?

    Read the article

  • configs for several sites in apache with ssl

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. One of them (which is served natively by Apache) already runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running and not even registered. When I try to access it, I get an error (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    The configuration I wish for is the following:

        192.168.1.20        ->> unsecured local path to the website
        192.168.1.20/cloud/ ->> secured local document path for the cloud
        192.168.1.20/erp/   ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is it possible at all? Or would cloud.192.168.1.20 and erp.192.168.1.20 be better? Thank you.
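
    Since both vhosts share the same IP, port 443 and ServerName, one possible direction is to drop the second vhost entirely and serve both paths from a single SSL vhost, proxying only /erp/. A rough, untested sketch along those lines - the paths are taken from the question, the rest is an assumption about the intended layout, and mod_proxy, mod_proxy_http and mod_headers need to be enabled:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key

            # /cloud/ maps to /var/www/cloud under this DocumentRoot
            DocumentRoot /var/www

            # /erp/ is reverse-proxied to the application on port 8069
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        /erp/ http://127.0.0.1:8069/
            ProxyPassReverse /erp/ http://127.0.0.1:8069/
            RequestHeader set X-Forwarded-Proto "https"
        </VirtualHost>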

    Read the article

  • Puppet: How to override / redefine outside a child class (use case and detailed example)

    - by alex8657
    The use case I'm trying to illustrate is declaring some item (e.g. the mysqld service) with a default configuration that can be included on every node (class stripdown in the example, included by basenode), while still being able to override this same item in some specific class (e.g. mysql::server) that is included by specific nodes (e.g. myserver.local).

    In the example below I want to disable the mysql service on all nodes, but activate it on one specific node. Of course, Puppet parsing fails because Service["mysql"] is declared twice, and of course class mysql::server has no business being a child of class stripdown. Is there a way to override Service["mysql"], or mark it as the main one, or whatever? I was thinking about virtual resources and the realize function, but that only permits applying an item multiple times, not redefining or overriding it.

        # In stripdown.pp:
        class stripdown {
            service { "mysql":
                enable => "false",
                ensure => "stopped",
            }
        }

        # In mysql.pp:
        class mysql::server {
            service { mysqld:
                enable     => true,
                ensure     => running,
                hasrestart => true,
                hasstatus  => true,
                path       => "/etc/init.d/mysql",
                require    => Package["mysql-server"],
            }
        }

        # Then nodes in nodes.pp:
        node basenode {
            include stripdown
        }

        node myserver.local inherits basenode {
            include mysql::server   # BOOM, fails here because of Service["mysql"] redefinition
        }
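
    One pattern that fits this shape (a sketch only, not tested against this manifest): declare the service once as a virtual resource with the stripped-down defaults, realize it everywhere, and let the specific class amend attributes through a resource collector. The class and node names reuse the ones from the question; whether the override semantics behave this way in the Puppet version in use is an assumption to verify.

        # stripdown.pp -- default: mysql disabled everywhere
        class stripdown {
            @service { "mysql":
                enable => false,
                ensure => stopped,
            }
            # realize the virtual resource so the default applies on plain nodes
            Service <| title == "mysql" |>
        }

        # mysql.pp -- specific nodes flip the same resource on via a collector override
        class mysql::server {
            Service <| title == "mysql" |> {
                enable  => true,
                ensure  => running,
                require => Package["mysql-server"],
            }
        }

        # nodes.pp
        node basenode { include stripdown }
        node myserver.local inherits basenode { include mysql::server }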

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I need help with the decision-making process for setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in a datacenter in the US, but we are going to be using that for all serious bandwidth- and resource-intensive websites; through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's Cluster Modules.

    Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:

        Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04
        AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
        P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB RAID configuration (one disk in use for the host operating system)

    The first machine is currently IN USE and is serving virtual hosts off a sub-domain. My question is this: how can I integrate the secondary node (which will be the primary node, so to speak, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for resource sharing, redundancy (HA?), and NFS with the two RAID disks, without having to FORMAT the secondary node, start fresh moving all my services onto a DRBD network drive or something similar, and then restore all of Virtualmin's active virtual hosts?

    The idea is that I want minimal downtime for the people currently being served from server2.mywebsite.com, and from what I understand all services need to be on NFS so that they can be mounted on demand and accessed by the other machine taking over (i.e. a Heartbeat + DRBD configuration). My issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, do it with minimal downtime, and not break Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!

    Read the article

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld?

    I've been able to resolve this, but I still don't understand why. In my original configs I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site or when travelling. The question is how to best synchronize the two so I can switch between them more readily.

    I've solved a few obvious things by using online services: email is hosted on IMAP, working files are in Dropbox, and source code is managed in git. However, the following are things I always miss when jumping on the laptop:

        Installed applications (current versions)
        Installed libraries & utilities (/usr/local)
        Apache VirtualHosts & other configurations (/etc)
        Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install and configure the same software on both), or for synchronization (so I can bring the MacBook up to date nightly, over our home network)?
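
    For the synchronization half, a nightly one-way rsync over the home network that skips the heaviest churn (VM images, caches, cached mail) is one low-effort option. A sketch only, assuming SSH access to the MacBook and that the excluded paths match where the big files actually live:

        #!/bin/bash
        # Nightly sync desktop -> laptop; the hostname and exclude paths are examples, not verified.
        rsync -avh --delete \
          --exclude 'Library/Caches/' \
          --exclude 'Library/Mail/' \
          --exclude 'Virtual Machines.localized/' \
          /Users/me/ me@macbook.local:/Users/me/

        # /usr/local and /etc can be picked up the same way when needed:
        # rsync -avh /usr/local/ me@macbook.local:/usr/local/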

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run Nginx as the webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct or am I missing something here?

    UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, it can be cached for long periods of time. That said, what matters more to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ Nginx is actually faster at serving your static content, so it makes more sense to just let those requests pass; Nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself; it is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want your static content stored.
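
    For reference, the opposite policy the question argues for would look roughly like this in the same VCL style as the quoted snippet (a sketch only, extension list copied from the question):

        # Let the backend (nginx or a CDN) serve static assets directly,
        # and keep Varnish's cache for the dynamic, expensive pages.
        sub vcl_recv {
            if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
                return (pass);
            }
        }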

    Read the article

  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people. Thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, with an installation deadline of "as soon as possible"; but I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). Also, I've configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as the new updates are processed by the ADRs, they get automatically installed on all systems by Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving like it should?

    Read the article

  • One monitor getting spilled over into the other: how to do a 100% reset of the GNOME graphics configuration

    - by Paul Nathan
    I had to kill a VMware process, and afterwards my monitor configuration is buggy. I have 2 monitors in a side-by-side configuration; the right-hand monitor is the secondary monitor. On its right-hand side there are about 50 pixels showing from the left side of the left-hand monitor (i.e. as if the display wrapped around). Further, my mouse clicks register about 50 pixels sideways from where they should be. It's as if those 50 pixels between the monitors got gobbled.

    What have I done? I've reset the screen configuration in multiple ways, using xrandr, the multiple monitors app, etc. This persists across different side-by-side configurations, and also persists with another user. It does not occur with XFCE. Resetting the window manager with the Compiz reset WM app does not fix it. I've concluded the burn-to-the-ground approach is likely the best, and would like to do a 100% reset of my graphics settings. It's an Intel integrated chipset. Removing ~/.config/monitors.xml did not work.

    Also, interestingly, the mouse can mouse over the 50 errant pixels on the right-hand side of the right-hand monitor. I hypothesize that it's a compositing problem occurring at the layer where the background, selection, and clicks are caught. Inverting the right-hand monitor removes the issue, but renders the screen unusable. Even more datapoints:

        This happens in KDE as well.
        Sometimes logging into GNOME and running xrandr --output DVI1 --auto resets it, but the issue immediately reappears when I press alt-tab.
        With the Compiz Application Switch plugin turned on, the workspace is 'pushed back' a bit, and the slice on the RHS follows it as well.

    I'm wondering if it's a flaw in the Compiz workspace compositing configuration; I suspect the error is in the compositing configuration. I installed 11.10.

    Read the article

  • Debian Wheezy IPv6 isn't configured with ifup post-up hook

    - by aef
    We recently set up a server on Debian Wheezy Beta 3 (x86_64) which has a native IPv6 connection. We configured the eth0 interface to get its IPv6 configuration through some post-up hook commands in /etc/network/interfaces. The result is that after booting the system up, only IPv4 and an auto-configured link-local IPv6 address are configured on the interface, as if the commands had never been executed. When we additionally place the commands after the call to ifup -a inside the /etc/init.d/networking init script, everything works as expected and we have a fully configured interface after booting up. This is quite an ugly way to configure the interface. What are we doing wrong with the ifup post-up hooks? Or is this a bug?

    The section from /etc/network/interfaces looks like this (IP addresses changed):

        allow-hotplug eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld
            post-up ip -6 addr add 2001:db8:100:3022::2 dev eth0
            post-up ip -6 route add fe80::1 dev eth0
            post-up ip -6 route add default via fe80::1 dev eth0

    I also tried it this alternative way:

        auto eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld

        iface eth0 inet6 static
            address 2001:db8:100:3022::2
            netmask 64
            gateway fe80::1

    What we added to /etc/init.d/networking:

        …
        case "$1" in
        start)
                process_options
                check_ifstate
                if [ "$CONFIGURE_INTERFACES" = no ]
                then
                    log_action_msg "Not configuring network interfaces, see /etc/default/networking"
                    exit 0
                fi
                set -f
                exclusions=$(process_exclusions)
                log_action_begin_msg "Configuring network interfaces"
                if ifup -a $exclusions $verbose && ifup_hotplug $exclusions $verbose
                # Our additions
                ip -6 addr add 2001:db8:100:3022::2 dev eth0
                ip -6 route add fe80::1 dev eth0
                ip -6 route add default via fe80::1 dev eth0
                then
                    log_action_end_msg $?
                else
                    log_action_end_msg $?
                fi
                ;;
        …
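
    One variant worth trying (a sketch, untested on this exact Wheezy build): keep the inet6 stanza together with "auto eth0", and add the link-local gateway through post-up hooks of the inet6 stanza instead of relying on the "gateway fe80::1" line, which, depending on the ifupdown version, may not install a route via a link-local address on its own.

        auto eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld

        iface eth0 inet6 static
            address 2001:db8:100:3022::2
            netmask 64
            # assumed workaround: add the link-local default route by hand
            post-up ip -6 route add fe80::1 dev eth0
            post-up ip -6 route add default via fe80::1 dev eth0
            pre-down ip -6 route del default via fe80::1 dev eth0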

    Read the article

  • Configuring Vmware virtual machines to run under different IPs and PC specs

    - by Alex
    Right now I'm using a simple VMware virtual machine with Windows 7 preinstalled. The IP is assigned automatically (it's the same as the main OS IP). Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean:

    Specs: Certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example:

        Processor - is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)?
        RAM - is it Corsair 4 GB 1333 MHz, Kingston 2 x 2 GB 866 MHz, or something else?
        HDD - is it a Seagate Barracuda 80 GB 5400 RPM, a Samsung 500 GB 7200 RPM, or some random SSD?

    Programs that run inside a virtual machine shouldn't have a clue whether it's VMware or not.

    IPs: Every program launched under the main OS uses the real IP: 93.56.xx.xx. All programs launched under virtual machine A use IP 1: 74.78.xx.xx. All programs launched under virtual machine B use IP 2: 84.159.xx.xx. I believe you have to use either a VPN or a proxy to solve this problem.

    To sum up: the idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs that run under a certain virtual machine shouldn't have a clue whether it's VMware or the real PC. Any ideas/tips or experience regarding configuration will be appreciated!

    Read the article

  • Core-Plot crash only in Release Configuration

    - by denbec
    Hey everyone, I really don't know a solution, or even have an idea how to get around, the following failure. It only happens in the Release configuration on the device - the Simulator and the Debug configuration work fine. It also only appears on the second launch. So if I have the phone connected to my Mac, build the application and run it, everything works fine. If I then close the app and restart it, it crashes. After a long search, it seems that the error comes from the following line:

        x.majorIntervalLength = CPDecimalFromFloat(2.0f);

    The code before:

        CPLayerHostingView *chartView = [[CPLayerHostingView alloc] initWithFrame:CGRectMake(0, 0, 320, 160)];
        [self addSubview:chartView];

        // create an CPXYGraph and host it inside the view
        CPTheme *theme = [CPTheme themeNamed:kCPPlainWhiteTheme];
        CPXYGraph *graph = (CPXYGraph *)[theme newGraph];
        chartView.hostedLayer = graph;

        graph.paddingLeft = 20.0;
        graph.paddingTop = 10.0;
        graph.paddingRight = 10.0;
        graph.paddingBottom = 20.0;

        CPXYPlotSpace *plotSpace = (CPXYPlotSpace *)graph.defaultPlotSpace;
        plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(100)];
        plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(10)];

        CPXYAxisSet *axisSet = (CPXYAxisSet *)graph.axisSet;
        CPXYAxis *x = axisSet.xAxis;
        x.majorIntervalLength = CPDecimalFromFloat(2.0f);

    If I comment out the last line, everything works fine (of course the interval length is not correct then). I would appreciate any help! Thanks in advance!

    Read the article

  • Setting the OpenCV configuration in an OpenGL project produces several errors

    - by GolSa
    I have a Win32 solution which is set up for OpenGL; it works well. Now I want to write a function which uses functions from OpenCV, so I set up the OpenCV configuration for both x86 and x64. I commented out the OpenCV function and, just to test the correctness of the configuration, ran the project. It runs in the Debug Win32 configuration, but when I try to build and run it for x64 I get the errors below:

        Error 1  error C2065: 'GWL_HINSTANCE' : undeclared identifier
                 D:\matrix\matrixProjection\src\ControllerMain.cpp  35  1  matrixProjection
        Error 2  error C2664: 'CreateDialogParamW' : cannot convert parameter 4 from
                 'BOOL (__cdecl *)(HWND,UINT,WPARAM,LPARAM)' to 'DLGPROC'
                 D:\matrix\matrixProjection\src\DialogWindow.cpp  47  1  matrixProjection

    Error 2 points to this line of code:

        HWND DialogWindow::create()
        {
            /*-->this line*/ handle = ::CreateDialogParam(instance, MAKEINTRESOURCE(id), parentHandle, Win::dialogProcedure, (LPARAM)controller);
            return handle;
        }

    I used opengl32 in my project; could that be part of the cause? Is there an x64 version of OpenGL? I know there is something required in x64 mode which my solution cannot handle; I have googled a lot about it but did not find a solution. How can I solve this?
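
    Both errors look like typical 64-bit portability issues in Win32 code rather than anything OpenCV- or OpenGL-specific: GWL_HINSTANCE only exists for Get/SetWindowLong, which is replaced by Get/SetWindowLongPtr (with GWLP_HINSTANCE) on x64, and a dialog procedure must return INT_PTR with the CALLBACK convention to match DLGPROC. A sketch of the usual fixes, with the function names treated as placeholders rather than the project's real ones:

        #include <windows.h>

        // Dialog procedure with the signature DLGPROC expects on both x86 and x64.
        INT_PTR CALLBACK dialogProcedure(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg)
            {
            case WM_INITDIALOG:
                return TRUE;   // let the system set the focus
            }
            return FALSE;      // message not handled
        }

        // Portable way to read the owning module handle of a window.
        HINSTANCE GetOwningInstance(HWND hwnd)
        {
            return reinterpret_cast<HINSTANCE>(GetWindowLongPtr(hwnd, GWLP_HINSTANCE));
        }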

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server, which slows it down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

        Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
        Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
        If using MySQL, set up the connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. PS: notes about monitoring for DDoS would also be welcome - perhaps with Nagios? ;)
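
    As an illustration of the first point, a per-source-IP connection rate limit with iptables could look like the sketch below; the thresholds are made-up examples that need tuning, and on EC2 the security group still has to allow port 80 in the first place.

        # Track new connections to port 80 per source IP...
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP

        # ...and drop a source that opens more than 30 new connections within 60 seconds.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
                 -m recent --update --seconds 60 --hitcount 30 --name HTTP -j DROP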

    Read the article

  • Taking wrong database configuration array

    - by user309076
    Hi, I have one application in which I use 2 database connections. In my database config file I have defined two arrays, as below:

        $active_group = 'default';
        $active_record = TRUE;

        // FIRST ARRAY
        $db['default']['hostname'] = 'hostname';
        ..............
        ..........

        // SECOND ARRAY
        $db['another_db']['hostname'] = 'hostname';
        ..............
        ..........

    This works fine. Now, I copied the entire CI folder to develop another application, in which only one database connection is needed. So, in that database config file, I deleted the second configuration array. But the db class keeps picking up the first application's second array, i.e. "another_db", and gives the error below:

        You have specified an invalid database connection group.

    When I change the name of the (now only) array to "another_db" in the configuration file, it works fine. I can't understand where it is taking the group name "another_db" from. My application autoloads the database library. I have debugged the _ci_autoloader in the Loader.php class, where it calls $this->database() with no parameters. But in function database($params, $, $) {}, if I echo $params it shows "another_db".
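
    For reference, a minimal single-connection database.php for CodeIgniter 2.x looks roughly like the sketch below (placeholder credentials). If the new application still reports "another_db", it may also be worth grep-ing the copied folder for that string, since a leftover $this->load->database('another_db') call or a stale config copy could produce this symptom.

        <?php
        // application/config/database.php -- one connection group only (example values)
        $active_group = 'default';
        $active_record = TRUE;

        $db['default']['hostname'] = 'localhost';
        $db['default']['username'] = 'app_user';
        $db['default']['password'] = 'secret';
        $db['default']['database'] = 'app_db';
        $db['default']['dbdriver'] = 'mysql';
        $db['default']['dbprefix'] = '';
        $db['default']['pconnect'] = TRUE;
        $db['default']['db_debug'] = TRUE;
        $db['default']['char_set'] = 'utf8';
        $db['default']['dbcollat'] = 'utf8_general_ci';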

    Read the article

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
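
    A common registry tweak for this, possibly the one the linked article describes, changes the defaults that installers read from HKLM. A sketch of the values involved is below; back up the registry first, and note that many installers ignore these defaults anyway:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
        "ProgramFilesDir"="D:\\Program Files"
        "ProgramFilesDir (x86)"="D:\\Program Files (x86)"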

    Read the article

  • Nagios escalation debugging

    - by Oesor
    I'm having some issues with escalations happening properly, and I'm not sure if it's because of my config or because the nagios binary is nonstandard and something may be broken. I've got little experience with Nagios and just want to make sure this is set up appropriately. Should the following config allow the escalations to take over and increment the notification interval as expected? Is there somewhere else in the config files I should be looking to figure out what's going on? I've enabled debug 32 in the config, and it simply spits out 'Host notification will NOT be escalated.' for each notification. The configuration does pass the pre-flight check with no issues, and reports that it's parsing the three host escalations in the config.

        # test host definition
        define host {
            host_name               test
            alias                   test
            address                 10.0.0.10
            hostgroups              test
            check_interval          0
            retry_interval          1
            max_check_attempts      2
            flap_detection_enabled  0
            icon_image              windows.png
            icon_image_alt          LOGO - Windows
            vrml_image              windows.png
            statusmap_image         windows.png
            action_url              /info/host/275
            check_period            24x7
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
            check_command           check_host_15!-H $HOSTADDRESS$ -t 3 -w 500.0,80% -c 1000.0,100%
            parents                 nagios
            notifications_enabled   1
            notification_interval   3
            notification_period     24x7
            notification_options    u,d,r
            use                     host-global
        }

        define hostescalation{
            host_name               test
            first_notification      3
            last_notification       4
            notification_interval   10
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation{
            host_name               test
            first_notification      4
            last_notification       5
            notification_interval   30
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

        define hostescalation{
            host_name               test
            first_notification      5
            last_notification       0
            notification_interval   240
            contact_groups          hostgroup15_servicegroup1,hostgroup15_servicegroup10,hostgroup15_servicegroup13,hostgroup15_servicegroup14,hostgroup15_servicegroup2,hostgroup15_servicegroup3,hostgroup15_servicegroup4,hostgroup15_servicegroup42,hostgroup15_servicegroup45,hostgroup15_servicegroup46,hostgroup15_servicegroup47,hostgroup15_servicegroup5,hostgroup15_servicegroup8,hostgroup15_servicegroup9,ov_monitored_by_master
        }

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes:

        Data nodes, which will run sharded instances of MongoDB.
        Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application.
        Processing nodes, which will run jobs requested by the application nodes.

    All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology - packages, resources, attributes, and so on - and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want answers to the question "Should I use Chef or Puppet?")

    Read the article

  • One Apache server, multiple clients - best practices for config files?

    - by OttaSean
    First-time user; please be gentle. :-) (And if you don't like my question I'd be grateful for a comment as to why...)

    I am doing a contract at a government server shop that provides web services for multiple client groups in other areas of the government. My employer has asked me to look into how other shops in similar situations handle configuration files, and whether there are any best practices on the subject. I'm pretty sure there are lots of installations out there running multiple VirtualHosts out of one Apache installation, but surprisingly I couldn't find anything online about how people handle config file layout, so I was hoping some of you wise folks on ServerFault might have some thoughts or pointers for me.

    The current setup - which seems logical to me - is that each client site has its own directory off the root:

        /client/tps-reports/
        /client/silly-walks/
        /client/ministry-of-magic/

    and so on, and each of those directories has an /htdocs, /cgi-bin, and /conf (among others). The main /etc/apache/httpd.conf only contains Include statements (and lots of comments), the last of which is:

        Include /etc/apache/vhosts/*.conf

    The vhosts directory contains symlinks:

        tpsrept.conf -> /client/tps-reports/conf/tpsrept.conf
        sillywk.conf -> /client/silly-walks/conf/sillywk.conf
        mom.conf     -> /client/ministry-of-magic/mom.conf

    Each of those .conf files contains the actual NameVirtualHost definition and a gigantic <VirtualHost 192.168.12.34> stanza, which contains all the stuff about the specific site. The idea is that clients have access to what's in their own /client/xx directory, so they can change the parts of the config that are relevant to them. As I mentioned above, that seems fairly logical to me, but I'm wondering if any of you are aware of potential gotchas with this sort of layout, or have any other thoughts on why it is or isn't a good idea. In particular, how do other places do it? Is there a "best practice" for this sort of thing? Many thanks in advance for your time and any thoughts you all might have.

    Read the article
