Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx
    With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration (which for EC2 you need to do with the command line) and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.
    To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.
    1. Create a launch configuration. This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:
        as-create-launch-config sc-xyz-launcher   # name of the launch config
          --image-id ami-7b9e9f12                 # id of the AMI you extracted from your VM
          --region eu-west-1                      # which region the new instance gets created in
          --instance-type t1.micro                # size of the instance to create
          --group quicklaunch-1                   # security group for the new instance
    2. Create an auto-scaling group. The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:
        as-create-auto-scaling-group sc-xyz-asg   # auto-scaling group name
          --region eu-west-1                      # region to create in
          --launch-configuration sc-xyz-launcher  # name of the launch config to invoke for new instances
          --min-size 1                            # minimum number of nodes in the group
          --max-size 5                            # maximum number of nodes in the group
          --default-cooldown 300                  # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
          --availability-zones eu-west-1a eu-west-1b eu-west-1c  # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them
    3. Create a scale-up policy. The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached. It links to the auto-scaling group; this sample results in one additional node being spun up:
        as-put-scaling-policy scale-up-policy     # policy name
          -g sc-psod-woker-asg                    # auto-scaling group the policy works with
          --adjustment 1                          # size of the adjustment
          --region eu-west-1                      # region
          --type ChangeInCapacity                 # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%
    4. Create a scale-down policy. The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:
        as-put-scaling-policy scale-down-policy
          -g sc-psod-woker-asg
          "--adjustment=-1"                       # in Windows, use double-quotes to surround a negative adjustment value
          --type ChangeInCapacity
          --region eu-west-1
    5. Create a “high” CloudWatch alarm. We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute. Then link the alarm to the scale-up policy in your group.
    6. Create a “low” CloudWatch alarm. The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.
    And that’s it. You don’t need your original VM as the auto-scale group has a minimum number of nodes connected. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
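    If you would rather script this part than type the commands by hand, the same four steps can also be driven from Python. A rough sketch using the classic boto library (boto itself is an assumption, it is not mentioned in the article, and parameter spellings can differ slightly between boto releases, so check the docs for your installed version):
        # Illustrative sketch: create the same launch config, auto-scaling group and
        # policies from Python with the classic boto library (not boto3). The names
        # and IDs are the ones used in the walkthrough above.
        import boto.ec2.autoscale
        from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup, ScalingPolicy

        conn = boto.ec2.autoscale.connect_to_region('eu-west-1')

        # 1. Launch configuration: what to start when the group scales out
        lc = LaunchConfiguration(name='sc-xyz-launcher',
                                 image_id='ami-7b9e9f12',
                                 instance_type='t1.micro',
                                 security_groups=['quicklaunch-1'])
        conn.create_launch_configuration(lc)

        # 2. Auto-scaling group: how many instances, where, and how fast to react
        asg = AutoScalingGroup(group_name='sc-xyz-asg',
                               launch_config=lc,
                               min_size=1, max_size=5,
                               default_cooldown=300,
                               availability_zones=['eu-west-1a', 'eu-west-1b', 'eu-west-1c'])
        conn.create_auto_scaling_group(asg)

        # 3 & 4. Scale-up / scale-down policies (ChangeInCapacity = fixed node count)
        for name, adjustment in [('scale-up-policy', 1), ('scale-down-policy', -1)]:
            policy = ScalingPolicy(name=name,
                                   as_name='sc-xyz-asg',
                                   adjustment_type='ChangeInCapacity',
                                   scaling_adjustment=adjustment,
                                   cooldown=300)
            conn.create_scaling_policy(policy)
    The CloudWatch alarms in steps 5 and 6 would then point at these two policies, exactly as in the Web Console walkthrough.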

    Read the article

  • Out of disk space - /boot at 100%

    - by uvasal
    My /boot is at 100%. When I run aptitude search ~ilinux-image I'm getting loads of unused images. When I try to delete one of them (after checking which one is currently in use by doing uname -r), e.g. apt-get autoremove linux-image-3.2.0-44-generic, I get:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         linux-generic : Depends: linux-headers-generic (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed
         linux-server : Depends: linux-headers-server (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    And running apt-get -f install throws No space left on device. I've also tried doing apt-get purge but I am getting the same thing. Output of df -h and dpkg -l linux-*:
        root@hb2088:/srv/www# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3       9.4G  3.0G  6.0G  34% /
        udev            301M  4.0K  301M   1% /dev
        tmpfs           124M  228K  124M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            309M     0  309M   0% /run/shm
        /dev/sda1        92M   91M     0 100% /boot
        root@hb2088:/srv/www# dpkg -l linux-*
        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
        ||/ Name                             Version        Description
        +++-================================-==============-========================================================
        un  linux-doc-3.2.0                  <none>         (no description available)
        ii  linux-firmware                   1.79.6         Firmware for Linux kernel drivers
        iU  linux-generic                    3.2.0.51.61    Complete Generic Linux kernel
        un  linux-headers                    <none>         (no description available)
        un  linux-headers-3                  <none>         (no description available)
        un  linux-headers-3.0                <none>         (no description available)
        ii  linux-headers-3.2.0-44           3.2.0-44.69    Header files related to Linux kernel version 3.2.0
        ii  linux-headers-3.2.0-44-generic   3.2.0-44.69    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        ii  linux-headers-3.2.0-45           3.2.0-45.70    Header files related to Linux kernel version 3.2.0
        ii  linux-headers-3.2.0-45-generic   3.2.0-45.70    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        ii  linux-headers-3.2.0-48           3.2.0-48.74    Header files related to Linux kernel version 3.2.0
        ii  linux-headers-3.2.0-48-generic   3.2.0-48.74    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        ii  linux-headers-3.2.0-51           3.2.0-51.77    Header files related to Linux kernel version 3.2.0
        ii  linux-headers-3.2.0-51-generic   3.2.0-51.77    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        ii  linux-headers-3.2.0-52           3.2.0-52.78    Header files related to Linux kernel version 3.2.0
        ii  linux-headers-3.2.0-52-generic   3.2.0-52.78    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        iU  linux-headers-3.2.0-54           3.2.0-54.82    Header files related to Linux kernel version 3.2.0
        iU  linux-headers-3.2.0-54-generic   3.2.0-54.82    Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
        iU  linux-headers-generic            3.2.0.54.64    Generic Linux kernel headers
        iU  linux-headers-server             3.2.0.54.64    Linux kernel headers on Server Equipment.
        un  linux-image                      <none>         (no description available)
        un  linux-image-3.0                  <none>         (no description available)
        ii  linux-image-3.2.0-44-generic     3.2.0-44.69    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
        ii  linux-image-3.2.0-45-generic     3.2.0-45.70    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
        ii  linux-image-3.2.0-48-generic     3.2.0-48.74    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
        iF  linux-image-3.2.0-51-generic     3.2.0-51.77    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
        iF  linux-image-3.2.0-52-generic     3.2.0-52.78    Linux kernel image for version 3.2.0 on 64 bit x86 SMP
        in  linux-image-3.2.0-54-generic     <none>         (no description available)
        iU  linux-image-generic              3.2.0.51.61    Generic Linux kernel image
        iU  linux-image-server               3.2.0.51.61    Linux kernel image on Server Equipment.
        un  linux-initramfs-tool             <none>         (no description available)
        un  linux-kernel-headers             <none>         (no description available)
        un  linux-kernel-log-daemon          <none>         (no description available)
        ii  linux-libc-dev                   3.2.0-52.78    Linux Kernel Headers for development
        un  linux-restricted-common          <none>         (no description available)
        iU  linux-server                     3.2.0.51.61    Complete Linux kernel on Server Equipment.
        un  linux-source-3.2.0               <none>         (no description available)
        un  linux-tools                      <none>         (no description available)
    Output of du -sh /boot/*:
        root@hb2088:~# du -sh /boot/*
        781K  /boot/abi-3.2.0-44-generic
        781K  /boot/abi-3.2.0-45-generic
        781K  /boot/abi-3.2.0-48-generic
        781K  /boot/abi-3.2.0-51-generic
        781K  /boot/abi-3.2.0-52-generic
        139K  /boot/config-3.2.0-44-generic
        139K  /boot/config-3.2.0-45-generic
        139K  /boot/config-3.2.0-48-generic
        139K  /boot/config-3.2.0-51-generic
        139K  /boot/config-3.2.0-52-generic
        1.6M  /boot/grub
        14M   /boot/initrd.img-3.2.0-44-generic
        14M   /boot/initrd.img-3.2.0-45-generic
        14M   /boot/initrd.img-3.2.0-48-generic
        12K   /boot/lost+found
        174K  /boot/memtest86+.bin
        176K  /boot/memtest86+_multiboot.bin
        2.8M  /boot/System.map-3.2.0-44-generic
        2.8M  /boot/System.map-3.2.0-45-generic
        2.8M  /boot/System.map-3.2.0-48-generic
        2.8M  /boot/System.map-3.2.0-51-generic
        2.8M  /boot/System.map-3.2.0-52-generic
        4.8M  /boot/vmlinuz-3.2.0-44-generic
        4.8M  /boot/vmlinuz-3.2.0-45-generic
        4.8M  /boot/vmlinuz-3.2.0-48-generic
        4.8M  /boot/vmlinuz-3.2.0-51-generic
        4.8M  /boot/vmlinuz-3.2.0-52-generic
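    The usual way out of this is to free space in /boot first (apt-get cannot repair itself while the partition is full), typically by purging kernel image packages other than the one uname -r reports. A small Python sketch that only prints suggested dpkg --purge commands for review, it does not execute anything; the name filters are assumptions, so check the list against dpkg -l before running the output:
        # Sketch: list installed linux-image packages other than the running kernel
        # and print the dpkg commands that would purge them.
        import subprocess

        running = subprocess.check_output(['uname', '-r']).decode().strip()   # e.g. 3.2.0-52-generic
        packages = subprocess.check_output(
            ['dpkg-query', '-W', '-f=${Package} ${Status}\n', 'linux-image-*']).decode()

        for line in packages.splitlines():
            parts = line.split()
            name, status = parts[0], ' '.join(parts[1:])
            if 'not-installed' in status or 'config-files' in status:
                continue                      # nothing on disk for these, skip them
            if running in name or name in ('linux-image-generic', 'linux-image-server'):
                continue                      # never purge the running kernel or the meta packages
            print('dpkg --purge %s' % name)   # frees /boot so apt-get -f install can run again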

    Read the article

  • How to install Sweetcron on XAMPP

    - by Sushaantu
    This tutorial will take you through the installation steps required to install Sweetcron on XAMPP. I am taking the liberty of assuming that you have already installed XAMPP. First of all, download Sweetcron and copy the extracted "sweetcron" folder inside the htdocs folder in the XAMPP directory. You have to get a few things in place before installing Sweetcron:
    1. Create a sweetcron database using MySQL. You can just use the name "sweetcron" and leave all the other settings, such as the MySQL connection collation, as they are. Now that you have the database ready, you can configure a few settings before making Sweetcron work on XAMPP.
    2. Open the config-sample PHP file with your text editor (something like Notepad++ or Komodo Edit is recommended), which is located in Sweetcron/system/application/config. You have to make a few changes in it.
       a. In the first setting you have to edit the value of $config['base_url']. The default value is "http://www.your-site.com"; and you have to change that into "http://localhost/sweetcron/";
       b. You also have to change the default setting of $config['uri_protocol']. The value that you will see is "REQUEST_URI"; but you need to change that into "AUTO";
       c. Now that we have made all the changes, you can rename the file from config-sample to just config.
    3. Open database-sample.php in the text editor. You need to make the edits regarding the database in here.
       a. The values of the database at the moment are like this:
          $db['default']['hostname'] = "localhost";
          $db['default']['username'] = "";
          $db['default']['password'] = "";
          $db['default']['database'] = "";
          $db['default']['dbdriver'] = "mysql";
          $db['default']['dbprefix'] = "";
          $db['default']['pconnect'] = TRUE;
          $db['default']['db_debug'] = TRUE;
          $db['default']['cache_on'] = FALSE;
          $db['default']['cachedir'] = "";
          $db['default']['char_set'] = "utf8";
          $db['default']['dbcollat'] = "utf8_general_ci";
       You have to change that into:
          $db['default']['hostname'] = "localhost";
          $db['default']['username'] = "root";
          $db['default']['password'] = "";
          $db['default']['database'] = "sweetcron";
          $db['default']['dbdriver'] = "mysql";
          $db['default']['dbprefix'] = "";
          $db['default']['pconnect'] = TRUE;
          $db['default']['db_debug'] = TRUE;
          $db['default']['cache_on'] = FALSE;
          $db['default']['cachedir'] = "";
          $db['default']['char_set'] = "utf8";
          $db['default']['dbcollat'] = "utf8_general_ci";
       We have written the username as root along with the name of the database (sweetcron in my case). Since I was not using any password in XAMPP for the sweetcron database, I have left the password option empty. You can make suitable changes according to your system. Write down your password in the third line if you are using one in XAMPP.
       b. Now that we have made all the edits, we can change the file name from database-sample to just database.
    4. That leaves us with only one setting, and that is editing values in the .htaccess file with our text editor. The default values you will have in the .htaccess file are:
          Options +FollowSymLinks
          RewriteEngine On
          RewriteBase /
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?/$1 [L]
       and you just have to add "sweetcron" after RewriteBase in the third line:
          Options +FollowSymLinks
          RewriteEngine On
          RewriteBase /sweetcron
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?/$1 [L]
    5. Now you are all set and done. You can access Sweetcron on localhost by going to http://localhost/sweetcron/. You will see some text at the top which prompts you to click on one script. Click that script and behold, your Sweetcron installation on XAMPP is ready. Further, it will ask you to add details such as Lifestream name, username and email address. Fill in those details and you will reach the admin panel.
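    Since the edits in steps 2 and 3 are plain string substitutions, they can also be scripted. A minimal Python sketch, assuming a default XAMPP layout on Windows; the path, the sample file names and the exact search strings are taken from the tutorial above and should be checked against your actual files before running it:
        # Sketch: script the sample-file edits from step 2 above.
        import os, shutil

        config_dir = r'C:\xampp\htdocs\sweetcron\system\application\config'   # assumed XAMPP path

        def patch(sample, target, replacements):
            """Copy a *-sample.php file to its real name and apply string substitutions."""
            src = os.path.join(config_dir, sample)
            dst = os.path.join(config_dir, target)
            shutil.copyfile(src, dst)
            with open(dst) as f:
                text = f.read()
            for old, new in replacements:
                text = text.replace(old, new)
            with open(dst, 'w') as f:
                f.write(text)

        patch('config-sample.php', 'config.php', [
            ('http://www.your-site.com', 'http://localhost/sweetcron/'),  # base_url
            ('REQUEST_URI', 'AUTO'),                                      # uri_protocol
        ])
        # database-sample.php -> database.php can be handled the same way
        # (username -> root, database -> sweetcron), per step 3 above.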

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-and-all approach to setting up a web role on Azure. In this post I’ll describe how to add a SQL Azure database to the project. This will be described with as little code and as few screen dumps as possible. All questions are welcome in the comments area. Please don’t email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here). Steps:
    1. Create a Comments entity, then use scaffolding to set up the controller and view, and add a connection string to web.config.
    2. Create a SQL Azure database in the Management Portal and link the new database.
    3. Test it online!
    1. Right click the Models folder, choose Add, choose “Class…”. Name the class Comment.
    1.1 Replace the code in the class with the following:
        using System.Data.Entity;
        namespace MvcWebRole1.Models
        {
            public class Comment
            {
                public int CommentId { get; set; }
                public string Name { get; set; }
                public string Content { get; set; }
            }
            public class CommentsDb : DbContext
            {
                public DbSet<Comment> CommentEntries { get; set; }
            }
        }
    Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.
    1.2 Right click the Controllers folder, choose Add, choose “Class…”. Name the class CommentController and fill out the values as in the example below.
    1.3 Click Add. Visual Studio now creates default Views for CRUD operations and a Controller adhering to these, and opens them.
    1.4 Open Web.config and add the following connection string in the <connectionStrings> node:
        <add name="CommentsDb" connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
    1.5 Save All and press F5 to start the application.
    1.6 Go to http://127.0.0.1:81/Comments, which will redirect you through CommentsController to the Index view. Click Create new. In the Create view, add a name and content and press Create.
        //
        // POST: /Comments/Create
        [HttpPost]
        public ActionResult Create(Comment comment)
        {
            if (ModelState.IsValid)
            {
                db.CommentEntries.Add(comment);
                db.SaveChanges();
                return RedirectToAction("Index");
            }
            return View(comment);
        }
    The default View() is Index, so that is the view you will come to:
        //
        // GET: /Comments/
        public ActionResult Index()
        {
            return View(db.CommentEntries.ToList());
        }
    Resulting in the following screen dump (success!).
    2. Now, go to the Management Portal and create a new db.
    2.1 With the new database created, click the DB icon in the left-most menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally, click Connection strings in the right menu to get the connection string we need to add to our Web.Debug.config file.
    2.2 Now, take a copy of the connection string added earlier to Web.config and paste it into Web.Debug.config in the connectionStrings node. Replace everything within “ ” in the copied connection string with what you got from SQL Azure. You will have something like this.
    2.3 Rebuild the application, right click the cloud project and choose “Package…” (if you haven’t set up a publishing profile, which we will do in our next blog post). Remember to choose the right config file: use debug for staging and release for production so your databases won’t collide. You should see something like this.
    2.4 Go to the Management Portal, click the Web Services menu, choose your service and click Update in the bottom menu.
    2.5 Link the newly created database to your application. Click LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this.
    3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!
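    The paste in step 2.2 can also be automated if you script your builds. A rough Python sketch, with the caveat that a real Web.Debug.config is an XDT transform file and that the file name, element layout and placeholder value here are assumptions, so adapt it to your project:
        # Rough sketch: swap the connectionString attribute from a script instead of
        # hand-editing it (step 2.2 above). Treat this as illustrative only.
        import xml.etree.ElementTree as ET

        config_path = 'Web.Debug.config'                      # assumed path
        azure_connection_string = 'PASTE-THE-STRING-FROM-THE-PORTAL-HERE'

        tree = ET.parse(config_path)
        root = tree.getroot()

        # Find the <add name="CommentsDb" .../> entry under <connectionStrings>
        for add in root.iter('add'):
            if add.get('name') == 'CommentsDb':
                add.set('connectionString', azure_connection_string)

        tree.write(config_path, xml_declaration=True, encoding='utf-8')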

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
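    NRPE reports "Unable to read output" when the command it ran produced nothing on stdout; with a sudo-prefixed command that is commonly because sudo wants a password or a TTY when invoked by the daemon, even though the same command works from an interactive shell. A small Python sketch that reproduces the exact call NRPE makes, non-interactively, so the failure shows up on stderr instead of silently disappearing (run it as root on the monitored host; the paths are the ones from the config above):
        # Sketch: mimic NRPE's command_prefix + command invocation as the nagios user.
        # 'sudo -n' makes sudo fail loudly rather than hang waiting for a password.
        import subprocess

        cmd = ['sudo', '-n', '-u', 'nagios',                  # become nagios without prompting
               '/usr/bin/sudo', '-n',                         # the command_prefix from nrpe.cfg
               '/usr/lib/nagios/plugins/http-restart']        # the restart_http command

        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()

        print('return code:', proc.returncode)
        print('stdout:', out.decode() or '(empty - this is what NRPE cannot read)')
        print('stderr:', err.decode())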

    Read the article

  • Rails + Nginx + Unicorn multiple apps

    - by Mikhail Nikalyukin
    I get the server where is currently installed two apps and i need to add another one, here is my configs. nginx.conf user www-data www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Disable unknown domains ## server { listen 80 default; server_name _; return 444; } ## # Virtual Host Configs ## include /home/ruby/apps/*/shared/config/nginx.conf; } unicorn.rb deploy_to = "/home/ruby/apps/staging.domain.com" rails_root = "#{deploy_to}/current" pid_file = "#{deploy_to}/shared/pids/unicorn.pid" socket_file= "#{deploy_to}/shared/sockets/.sock" log_file = "#{rails_root}/log/unicorn.log" err_log = "#{rails_root}/log/unicorn_error.log" old_pid = pid_file + '.oldbin' timeout 30 worker_processes 10 # ????? ???? ? ??????????? ?? ????????, ???????? ??????? ? ??????? ???? ???? listen socket_file, :backlog => 1024 pid pid_file stderr_path err_log stdout_path log_file preload_app true GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=) before_exec do |server| ENV["BUNDLE_GEMFILE"] = "#{rails_root}/Gemfile" end before_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH end end end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection end Also im added capistrano to the project deploy.rb # encoding: utf-8 require 'capistrano/ext/multistage' require 'rvm/capistrano' require 'bundler/capistrano' set :stages, %w(staging production) set :default_stage, "staging" default_run_options[:pty] = true ssh_options[:paranoid] = false ssh_options[:forward_agent] = true set :scm, "git" set :user, "ruby" set :runner, "ruby" set :use_sudo, false set :deploy_via, :remote_cache set :rvm_ruby_string, '1.9.2' # Create uploads directory and link task :configure, :roles => :app do run "cp #{shared_path}/config/database.yml #{release_path}/config/database.yml" # run "ln -s #{shared_path}/db/sphinx #{release_path}/db/sphinx" # run "ln -s #{shared_path}/config/unicorn.rb #{release_path}/config/unicorn.rb" end namespace :deploy do task :restart do run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -s USR2 `cat #{unicorn_pid}`; else cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D; fi" end task :start do run "cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D" end task :stop do run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -QUIT `cat #{unicorn_pid}`; fi" end end before 'deploy:finalize_update', 'configure' after "deploy:update", "deploy:migrate", "deploy:cleanup" require './config/boot' nginx.conf in app shared path upstream staging_whotracker { server unix:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock; } server { listen 209.105.242.45; server_name beta.whotracker.com; rewrite ^/(.*) http://www.beta.whotracker.com/$1 permanent; } server { listen 209.105.242.45; server_name www.beta.hotracker.com; root /home/ruby/apps/staging.whotracker.com/current/public; location ~ ^/sitemaps/ { root 
/home/ruby/apps/staging.whotracker.com/current/system; if (!-f $request_filename) { break; } if (-f $request_filename) { expires -1; break; } } # cache static files :P location ~ ^/(images|javascripts|stylesheets)/ { root /home/ruby/apps/staging.whotracker.com/current/public; if ($query_string ~* "^[0-9a-zA-Z]{40}$") { expires max; break; } if (!-f $request_filename) { break; } } if ( -f /home/ruby/apps/staging.whotracker.com/shared/offline ) { return 503; } location /blog { index index.php index.html index.htm; try_files $uri $uri/ /blog/index.php?q=$uri; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } location / { proxy_set_header HTTP_REFERER $http_referer; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_max_temp_file_size 0; # If the file exists as a static file serve it directly without # running all the other rewite tests on it if (-f $request_filename) { break; } if (!-f $request_filename) { proxy_pass http://staging_whotracker; break; } } error_page 502 =503 @maintenance; error_page 500 504 /500.html; error_page 503 @maintenance; location @maintenance { rewrite ^(.*)$ /503.html break; } } unicorn.log executing ["/home/ruby/apps/staging.whotracker.com/shared/bundle/ruby/1.9.1/bin/unicorn_rails", "-c", "/home/ruby/apps/staging.whotracker.com/current/config/unicorn.rb", "-E", "staging", "-D", {5=>#<Kgio::UNIXServer:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock>}] (in /home/ruby/apps/staging.whotracker.com/releases/20120517114413) I, [2012-05-17T06:43:48.111717 #14636] INFO -- : inherited addr=/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock fd=5 I, [2012-05-17T06:43:48.111938 #14636] INFO -- : Refreshing Gem list worker=0 ready ... master process ready ... reaped #<Process::Status: pid 2590 exit 0> worker=6 ... master complete Deploy goes successfully, but when i try to access beta.whotracker.com or ip-address i get SERVER NOT FOUND error, while others app works great. Nothing shows up in error logs. Can you please point me where is my fault?
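    A browser "SERVER NOT FOUND" error is normally raised before the request ever reaches nginx, which points at DNS rather than the nginx or unicorn setup (it is also worth double-checking that every server_name in the config matches the hostname you type, character for character). A small Python sketch to confirm what the names resolve to, using the listen IP from the config above:
        # Sketch: check whether the hostnames resolve, and whether they point at the
        # IP nginx is listening on (209.105.242.45 in the config above).
        import socket

        listen_ip = '209.105.242.45'
        names = ['beta.whotracker.com', 'www.beta.whotracker.com']

        for name in names:
            try:
                resolved = socket.gethostbyname(name)
            except socket.gaierror as exc:
                print('%s: does not resolve (%s)' % (name, exc))
                continue
            status = 'OK' if resolved == listen_ip else 'points elsewhere'
            print('%s -> %s (%s)' % (name, resolved, status))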

    Read the article

  • Problem with room/screen/menu controller in python game: old rooms are not removed from memory

    - by Jordan Magnuson
    I'm literally banging my head against a wall here (as in, yes, physically, at my current location, I am damaging my cranium). Basically, I've got a Python/Pygame game with some typical game "rooms", or "screens." EG title screen, high scores screen, and the actual game room. Something bad is happening when I switch between rooms: the old room (and its various items) are not removed from memory, or from my event listener. Not only that, but every time I go back to a certain room, my number of event listeners increases, as well as the RAM being consumed! (So if I go back and forth between the title screen and the "game room", for instance, the number of event listeners and the memory usage just keep going up and up. The main issue is that all the event listeners start to add up and really drain the CPU. I'm new to Python, and don't know if I'm doing something obviously wrong here, or what. I will love you so much if you can help me with this! Below is the relevant source code. Complete source code at http://www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip MAIN.PY class RoomController(object): """Controls which room is currently active (eg Title Screen)""" def __init__(self, screen, ev_manager): self.room = None self.screen = screen self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.room = self.set_room(config.room) def set_room(self, room_const): #Unregister old room from ev_manager if self.room: self.room.ev_manager.unregister_listener(self.room) self.room = None #Set new room based on const if room_const == config.TITLE_SCREEN: return rooms.TitleScreen(self.screen, self.ev_manager) elif room_const == config.GAME_MODE_ROOM: return rooms.GameModeRoom(self.screen, self.ev_manager) elif room_const == config.GAME_ROOM: return rooms.GameRoom(self.screen, self.ev_manager) elif room_const == config.HIGH_SCORES_ROOM: return rooms.HighScoresRoom(self.screen, self.ev_manager) def notify(self, event): if isinstance(event, ChangeRoomRequest): if event.game_mode: config.game_mode = event.game_mode self.room = self.set_room(event.new_room) #Run game def main(): pygame.init() screen = pygame.display.set_mode(config.screen_size) ev_manager = EventManager() spinner = CPUSpinnerController(ev_manager) room_controller = RoomController(screen, ev_manager) pygame_event_controller = PyGameEventController(ev_manager) spinner.run() EVENT_MANAGER.PY class EventManager: #This object is responsible for coordinating most communication #between the Model, View, and Controller. def __init__(self): from weakref import WeakKeyDictionary self.last_listeners = {} self.listeners = WeakKeyDictionary() self.eventQueue= [] self.gui_app = None #---------------------------------------------------------------------- def register_listener(self, listener): self.listeners[listener] = 1 #---------------------------------------------------------------------- def unregister_listener(self, listener): if listener in self.listeners: del self.listeners[listener] #---------------------------------------------------------------------- def clear(self): del self.listeners[:] #---------------------------------------------------------------------- def post(self, event): # if isinstance(event, MouseButtonLeftEvent): # debug(event.name) #NOTE: copying the list like this before iterating over it, EVERY tick, is highly inefficient, #but currently has to be done because of how new listeners are added to the queue while it is running #(eg when popping cards from a deck). Should be changed. 
See: http://dr0id.homepage.bluewin.ch/pygame_tutorial08.html #and search for "Watch the iteration" print 'Number of listeners: ' + str(len(self.listeners)) for listener in list(self.listeners): #NOTE: If the weakref has died, it will be #automatically removed, so we don't have #to worry about it. listener.notify(event) def notify(self, event): pass #------------------------------------------------------------------------------ class PyGameEventController: """...""" def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.input_freeze = False #---------------------------------------------------------------------- def notify(self, incoming_event): if isinstance(incoming_event, UserInputFreeze): self.input_freeze = True elif isinstance(incoming_event, UserInputUnFreeze): self.input_freeze = False elif isinstance(incoming_event, TickEvent) or isinstance(incoming_event, BoardCreationTick): #Share some time with other processes, so we don't hog the cpu pygame.time.wait(5) #Handle Pygame Events for event in pygame.event.get(): #If this event manager has an associated PGU GUI app, notify it of the event if self.ev_manager.gui_app: self.ev_manager.gui_app.event(event) #Standard event handling for everything else ev = None if event.type == QUIT: ev = QuitEvent() elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 1: #Button 1 pos = pygame.mouse.get_pos() ev = MouseButtonLeftEvent(pos) elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 2: #Button 2 pos = pygame.mouse.get_pos() ev = MouseButtonRightEvent(pos) elif event.type == pygame.MOUSEBUTTONUP and not self.input_freeze: if event.button == 2: #Button 2 Release pos = pygame.mouse.get_pos() ev = MouseButtonRightReleaseEvent(pos) elif event.type == pygame.MOUSEMOTION: pos = pygame.mouse.get_pos() ev = MouseMoveEvent(pos) #Post event to event manager if ev: self.ev_manager.post(ev) # elif isinstance(event, BoardCreationTick): # #Share some time with other processes, so we don't hog the cpu # pygame.time.wait(5) # # #If this event manager has an associated PGU GUI app, notify it of the event # if self.ev_manager.gui_app: # self.ev_manager.gui_app.event(event) #------------------------------------------------------------------------------ class CPUSpinnerController: def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.clock = pygame.time.Clock() self.cumu_time = 0 self.keep_going = True #---------------------------------------------------------------------- def run(self): if not self.keep_going: raise Exception('dead spinner') while self.keep_going: time_passed = self.clock.tick() fps = self.clock.get_fps() self.cumu_time += time_passed self.ev_manager.post(TickEvent(time_passed, fps)) if self.cumu_time >= 1000: self.cumu_time = 0 self.ev_manager.post(SecondEvent(fps=fps)) pygame.quit() #---------------------------------------------------------------------- def notify(self, event): if isinstance(event, QuitEvent): #this will stop the while loop from running self.keep_going = False EXAMPLE CLASS USING EVENT MANAGER class Timer(object): def __init__(self, ev_manager, time_left): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time_left = time_left self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): 
self.pause() #Unpause Event elif isinstance(event, Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time_left -= 1
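    One pattern that makes the cleanup deterministic: have each room keep track of every listener it registers (its own notify handler, timers, widgets and so on) and tear them all down before the controller swaps rooms, instead of relying on the WeakKeyDictionary to notice they are gone. A minimal sketch of that idea, building on the classes above; the Room base class and its destroy() method are additions, not part of the original code:
        # Sketch: explicit listener teardown per room. TitleScreen, GameRoom, etc.
        # would inherit from Room and create children via self.register(...).
        class Room(object):
            def __init__(self, ev_manager):
                self.ev_manager = ev_manager
                self._listeners = []
                self.register(self)              # the room itself is a listener too

            def register(self, listener):
                """Register a listener and remember it for teardown."""
                self.ev_manager.register_listener(listener)
                self._listeners.append(listener)
                return listener

            def destroy(self):
                """Unregister everything this room ever registered."""
                for listener in self._listeners:
                    self.ev_manager.unregister_listener(listener)
                del self._listeners[:]

        # Inside RoomController.set_room, the old room would then be torn down with:
        #     if self.room:
        #         self.room.destroy()            # removes timers, widgets, etc. as well
        # and child objects such as Timer would be created via
        #     self.register(Timer(self.ev_manager, 60))
        # so the room, not the EventManager, owns their lifetime.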

    Read the article

  • PHP submit problem

    - by TaG
    I'm trying to check if the username is available and display it for the user to see when they check there account settings, which I have done. BUT when the user tries to fill out another field I get the Your username is unavailable! which should not pop up because its the users username already. I want to know how can I fix this problem using PHP so that the users name is displayed every time the user views their account settings and it wont cause problems when a user submits additional info? Here is the PHP code. if (isset($_POST['submitted'])) { require_once '../htmlpurifier/library/HTMLPurifier.auto.php'; $config = HTMLPurifier_Config::createDefault(); $config->set('Core.Encoding', 'UTF-8'); $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); $config->set('HTML.TidyLevel', 'heavy'); $config->set('HTML.SafeObject', true); $config->set('HTML.SafeEmbed', true); $purifier = new HTMLPurifier($config); $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"SELECT users.* FROM users WHERE user_id=3"); $first_name = mysqli_real_escape_string($mysqli, $purifier->purify(htmlentities(strip_tags($_POST['first_name'])))); $username = mysqli_real_escape_string($mysqli, $purifier->purify(htmlentities(strip_tags($_POST['username'])))); if($_POST['username']) { $u = "SELECT user_id FROM users WHERE username = '$username'"; $r = mysqli_query ($mysqli, $u) or trigger_error("Query: $q\n<br />MySQL Error: " . mysqli_error($mysqli)); if (mysqli_num_rows($r) == TRUE) { $username = NULL; echo '<p class="error">Your username is unavailable!</p>'; } else if(mysqli_num_rows($r) == 0) { $username = mysqli_real_escape_string($mysqli, $purifier->purify(htmlentities(strip_tags($_POST['username'])))); if ($_POST['password1'] == $_POST['password2']) { $sha512 = hash('sha512', $_POST['password1']); $password = mysqli_real_escape_string($mysqli, $purifier->purify(strip_tags($sha512))); } else { $password = NULL; } if($password == NULL) { echo '<p class="error">Your password did not match the confirmed password!</p>'; } else { if (mysqli_num_rows($dbc) == 0) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"INSERT INTO users (user_id, first_name, username, password) VALUES ('$user_id', '$first_name', '$username', '$password')"); } if ($dbc == TRUE) { $dbc = mysqli_query($mysqli,"UPDATE users SET first_name = '$first_name', username = '$username', password = '$password' WHERE user_id = '$user_id'"); echo '<p class="changes-saved">Your changes have been saved!</p>'; } if (!$dbc) { print mysqli_error($mysqli); return; } } } } } Here is the html form. 
<form method="post" action="index.php"> <fieldset> <ul> <li><label for="first_name">First Name: </label><input type="text" name="first_name" id="first_name" size="25" class="input-size" value="<?php if (isset($_POST['first_name'])) { echo stripslashes(htmlentities(strip_tags($_POST['first_name']))); } else if(!empty($first_name)) { echo stripslashes(htmlentities(strip_tags($first_name))); } ?>" /></li> <li><label for="username">UserName: </label><input type="text" name="username" id="username" size="25" class="input-size" value="<?php if (isset($_POST['username'])) { echo stripslashes(htmlentities(strip_tags($_POST['username']))); } else if(!empty($username)) { echo stripslashes(htmlentities(strip_tags($username))); } ?>" /><br /><span>(ex: CSSKing, butterball)</span></li> <li><label for="password1">Password: </label><input type="password" name="password1" id="password1" size="25" class="input-size" value="<?php if (isset($_POST['password1'])) { echo stripslashes(htmlentities(strip_tags($_POST['password1']))); } ?>" /></li> <li><label for="password2">Confirm Password: </label><input type="password" name="password2" id="password2" size="25" class="input-size" value="<?php if (isset($_POST['password2'])) { echo stripslashes(htmlentities(strip_tags($_POST['password2']))); } ?>" /></li> <li><input type="submit" name="submit" value="Save Changes" class="save-button" /> <input type="hidden" name="submitted" value="true" /> <input type="submit" name="submit" value="Preview Changes" class="preview-changes-button" /></li> </ul> </fieldset> </form>
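    The check fires because the query only asks whether the username exists at all; when an existing user edits their own settings, their own row has to be excluded, so the WHERE clause needs the current user_id as well. A small Python/sqlite3 sketch of that logic (the table layout and sample rows are illustrative, but the query shape is the part the PHP above needs):
        # Sketch: a username is "taken" only if some OTHER user already has it,
        # so the current user's own row is excluded from the check.
        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT)')
        conn.executemany('INSERT INTO users (user_id, username) VALUES (?, ?)',
                         [(3, 'TaG'), (7, 'butterball')])

        def username_available(candidate, current_user_id):
            row = conn.execute(
                'SELECT 1 FROM users WHERE username = ? AND user_id <> ? LIMIT 1',
                (candidate, current_user_id)).fetchone()
            return row is None

        print(username_available('TaG', 3))         # True  - it is this user's own name
        print(username_available('butterball', 3))  # False - it belongs to someone else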

    Read the article

  • Can't create admin user on Heroku

    - by Nick5a1
    I am new to rails and I have gone through Kevin Skoglund's Ruby on Rails 3 Essential Training course on Lynda.com. Through the course you set up a simple cms, which I did. It doesn't cover Git or deployment but I've pushed my simple cms to github (https://github.com/nick5a1/Simple_CMS) and deployed to Heroku (http://nkarrasch.herokuapp.com/). In order to deploy to Heroku I followed the Heroku setup guide (https://devcenter.heroku.com/articles/rails3) and switched my database from MySQL to PostgreSQL. As instructed I changed gen'mysql2' to gen 'sqlite3' in my Gemfile and ran bundle install before pushing. I then ran heroku run rake db:migrate. I'm running into 2 problems. When I try to log in (http://nkarrasch.herokuapp.com/access) I get an error "We're sorry, but something went wrong". I should instead be getting a flash message with invalid username/password combination. This is what I'm getting on my test environment on my local machine. Secondly, when I log into the Heroku console to create and create an admin user, when I try to save that user I get the following error: irb(main):004:0> user.save (1.2ms) BEGIN AdminUser Exists (1.9ms) SELECT 1 AS one FROM "admin_users" WHERE "admin_users"."username" = 'Nick5a1' LIMIT 1 (1.7ms) ROLLBACK => false Any advice on how to troubleshoot would be greatly appreciated :). Thanks very much, Nick EDIT: Here are my Heroku logs: 2012-06-27T20:36:44+00:00 heroku[slugc]: Slug compilation started 2012-06-27T20:37:34+00:00 heroku[api]: Add shared-database:5mb add-on by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v2 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Add RAILS_ENV, LANG, PATH, RACK_ENV, GEM_PATH config by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v3 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v4 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Deploy 1d82839 by [email protected] 2012-06-27T20:37:35+00:00 heroku[slugc]: Slug compilation finished 2012-06-27T20:37:36+00:00 heroku[web.1]: Starting process with command `bundle exec rails server -p 45450` 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. 
(called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:44+00:00 app[web.1]: => Rails 3.2.6 application starting in production on http://0.0.0.0:45450 2012-06-27T20:37:44+00:00 app[web.1]: => Call with -d to detach 2012-06-27T20:37:44+00:00 app[web.1]: => Booting WEBrick 2012-06-27T20:37:44+00:00 app[web.1]: Connecting to database specified by DATABASE_URL 2012-06-27T20:37:44+00:00 app[web.1]: => Ctrl-C to shutdown server 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick 1.3.1 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux] 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick::HTTPServer#start: pid=2 port=45450 2012-06-27T20:37:45+00:00 heroku[web.1]: State changed from starting to up 2012-06-27T20:39:44+00:00 heroku[run.1]: Awaiting client 2012-06-27T20:39:44+00:00 heroku[run.1]: Starting process with command `bundle exec rake db:migrate` 2012-06-27T20:39:44+00:00 heroku[run.1]: State changed from starting to up 2012-06-27T20:39:51+00:00 heroku[run.1]: Process exited with status 0 2012-06-27T20:39:51+00:00 heroku[run.1]: State changed from up to complete 2012-06-27T20:41:05+00:00 heroku[run.1]: Awaiting client 2012-06-27T20:41:05+00:00 heroku[run.1]: Starting process with command `bundle exec rails console` 2012-06-27T20:41:05+00:00 heroku[run.1]: State changed from starting to up 2012-06-27T20:46:09+00:00 heroku[run.1]: Process exited with status 0 2012-06-27T20:46:09+00:00 heroku[run.1]: State changed from up to complete

    Read the article

  • JSF tags not being rendered as HTML

    - by Toto
    I'm following the Java EE firstcup tutorial using Netbeans and Glassfish. When I execute the JSF web tier I've been instructed to code, the browser gets the same JSF markup coded in the .xhtml file, and the tags are not rendered as HTML tags. I know this by using the view source code in my browser. For example, for this code: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html"> <h:head> <title>Page title here</title> </h:head> <h:body> <h2> <h:outputText value="#{bundle.WelcomeMessage}" /> </h2> </h:body> </html> The browser should get something like: <html ...> <head> <title>Page title here</title> </head> <body> <h2> the welcome message goes here </h2> </body> </html> Right? Well, my browser is getting jsf code (the first piece of code above) and not the html code (the second piece of code above). It seems to be a configuration problem in netbeans or glassfish but don't know what. Any ideas? This is my web.xml file: <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <context-param> <param-name>javax.faces.PROJECT_STAGE</param-name> <param-value>Development</param-value> </context-param> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/firstcup/*</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <welcome-file-list> <welcome-file>greetings.xhtml</welcome-file> </welcome-file-list> </web-app> This is my faces-config.xml file: <?xml version='1.0' encoding='UTF-8'?> <!-- =========== FULL CONFIGURATION FILE ================================== --> <faces-config version="2.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd"> <application> <resource-bundle> <base-name>firstcup.web.WebMessages</base-name> <var>bundle</var> </resource-bundle> <locale-config> <default-locale>en</default-locale> <supported-locale>es</supported-locale> </locale-config> </application> <navigation-rule> <from-view-id>/greetings.xhtml</from-view-id> <navigation-case> <from-outcome>success</from-outcome> <to-view-id>/response.xhtml</to-view-id> </navigation-case> </navigation-rule> </faces-config> Moreover: The url I'm entering in the browser is http://localhost:8081/firstcup/ but I've also tried: http://localhost:8081/firstcup/greetings.xhtml I've checked Glassfish logs and there's no information about not being able to load FacesServlet

    Read the article

  • Error when saving document of custom type in Alfresco Share

    - by ht0ma
    I got this exception when trying to save a new document of custom type: org.alfresco.service.cmr.repository.MalformedNodeRefException: 06010026 Invalid node ref - does not contain forward slash: {node.nodeRef} Here is how the definition of the custom type looks like: <?xml version="1.0" encoding="UTF-8"?> <!-- Definition of new Model --> <model name="ht:channelmodel" xmlns="http://www.alfresco.org/model/dictionary/1.0"> <!-- Imports are required to allow references to definitions in other models --> <imports> <!-- Import Alfresco Dictionary Definitions --> <import uri="http://www.alfresco.org/model/dictionary/1.0" prefix="d" /> <!-- Import Alfresco Content Domain Model Definitions --> <import uri="http://www.alfresco.org/model/content/1.0" prefix="cm" /> </imports> <!-- Introduction of new namespaces defined by this model --> <namespaces> <namespace uri="http://www.someco.com/model/content/1.0" prefix="ht" /> </namespaces> <types> <!-- Here comes my type --> <type name="ht:doc"> <title>Custom Document</title> <parent>cm:content</parent> <mandatory-aspects> <aspect>cm:generalclassifiable</aspect> </mandatory-aspects> </type> </types> <aspects> <aspect name="ht:channel"> <title>Content Channel</title> <properties> <property name="ht:isWeb"> <type>d:boolean</type> </property> </properties> </aspect> </aspects> </model> and here is how I set the forms for displaying the creation of a new document of my custom type (inside share-config-custom.xml) <alfresco-config> <config evaluator="string-compare" condition="DocumentLibrary"> <create-content> <content id="plain-text" mimetype="text/plain" label="Prompt" itemid="ht:doc" /> </create-content> <aspects> <visible> <aspect name="ht:channel" /> </visible> <addable> </addable> <removeable> </removeable> </aspects> <types> <type name="cm:content"> <subtype name="ht:doc" /> </type> </types> </config> <config evaluator="model-type" condition="ht:doc"> <forms> <form> <field-visibility> <show id="cm:title" force="true" /> <show id="ht:isWeb" force="true" /> </field-visibility> <appearance> <field id="cm:title"> <control template="/org/alfresco/components/form/controls/textfield.ftl" /> </field> </appearance> </form> </forms> </config> </alfresco-config> Is is something wrong with the formatting or am I missing some fields in the type definition? Thanks

    Read the article

  • Pagination, next page doesn`t display

    - by user1738013
    If I click page 2 I get this error: "Not Found. The requested URL /rank/GetAll/30 was not found on this server." The link is http://localhost/rank/GetAll/30.

    Model: Rank_Model

        <?php
        Class Rank_Model extends CI_Model {
            public function __construct() {
                parent::__construct();
            }
            public function record_count() {
                return $this->db->count_all("ranking");
            }
            public function fifa_rank($limit, $start) {
                $this->db->limit($limit, $start);
                $query = $this->db->get("ranking");
                if ($query->num_rows() > 0) {
                    foreach ($query->result() as $row) {
                        $data[] = $row;
                    }
                    return $data;
                }
                return false;
            }
        }
        ?>

    Controller: Rank

        <?php
        if ( ! defined('BASEPATH')) exit('No direct script access allowed');
        class rank extends CI_Controller {
            function __construct() {
                parent::__construct();
                $this->load->helper("url");
                $this->load->helper(array('form', 'url'));
                $this->load->model('Rank_Model','',TRUE);
                $this->load->library("pagination");
            }
            function GetAll() {
                $config = array();
                $config["base_url"] = base_url() . "rank/GetAll";
                $config["total_rows"] = $this->Rank_Model->record_count();
                $config["per_page"] = 30;
                $config["uri_segment"] = 3;
                $this->pagination->initialize($config);
                $page = ($this->uri->segment(3)) ? $this->uri->segment(3) : 0;
                $data["results"] = $this->Rank_Model->fifa_rank($config["per_page"], $page);
                $data['errors_login'] = array();
                $data["links"] = $this->pagination->create_links();
                $this->load->view('left_column/open_fifa_rank',$data);
            }
        }

    View Open: open_fifa_rank

        <?php
        $this->load->view('mains/header');
        $this->load->view('login/loggin');
        $this->load->view('mains/menu');
        $this->load->view('left_column/left_column_before');
        $this->load->view('left_column/menu_left');
        $this->load->view('left_column/left_column');
        $this->load->view('center/center_column_before');
        $this->load->view('left_column/fifa_rank');
        $this->load->view('center/center_column');
        $this->load->view('right_column/right_column_before');
        $this->load->view('login/zaloguj');
        $this->load->view('right_column/right_column');
        $this->load->view('mains/footer');
        ?>

    and View: fifa_rank

        <table>
            <thead>
                <tr>
                    <td>Pozycja</td>
                    <td>Kraj</td>
                    <td>Punkty</td>
                    <td>Zmiana</td>
                </tr>
            </thead>
            <?php foreach($results as $data) { ?>
            <tbody>
                <tr>
                    <td><?php print $data->pozycja;?></td>
                    <td><?php print $data->kraj;?></td>
                    <td><?php print $data->punkty;?></td>
                    <td><?php print $data->zmiana;?></td>
                </tr>
            <?php } ?>
            </tbody>
        </table>
        <p><?php echo $links; ?></p>

    Maybe you know where my problem is? Now I know where it is: on the first page the link is http://localhost/index.php/rank/GetAll, but on the next page it is http://localhost/rank/GetAll/30. The second link is missing index.php. How can I fix it?
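    A sketch of one way to line the two URLs up, assuming the stock CodeIgniter setup with no mod_rewrite rule stripping index.php: build the pagination base URL with site_url(), which honours $config['index_page'] and therefore keeps index.php in the generated links.

        // in rank::GetAll() -- a sketch, not the poster's confirmed fix
        $config["base_url"]    = site_url("rank/GetAll");   // yields http://localhost/index.php/rank/GetAll
        $config["uri_segment"] = 3;                         // the page offset is still the 3rd URI segment

    The alternative is to keep base_url() and add an Apache rewrite rule that maps /rank/... onto /index.php/rank/..., but that is a change to the web server rather than to this controller.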

    Read the article

  • Likewise DomainJoin hangs on Finishing krb5.conf configuration

    - by dreay
    Hello, I have a problem when joining a CentOS release 5.4 (Final) x64 machine to the domain after running domainjoin-cli --loglevel info --log . join domain.local password I obtain the following, which seems to hang on "20100428112821:INFO:Finishing krb5.conf configuration" 20100428112817:INFO:Domainjoin invoked with the join command (remaining arguments will be printed later): 20100428112817:INFO: [/opt/likewise/bin/domainjoin-cli] 20100428112817:INFO: [--loglevel] 20100428112817:INFO: [info] 20100428112817:INFO: [--log] 20100428112817:INFO: [/tmp/join_1.log] 20100428112817:INFO: [join] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwsmd] 20100428112817:INFO:Daemon [/etc/init.d/lwsmd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwsmd] 20100428112817:INFO:Daemon [/etc/init.d/lwsmd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwregd] 20100428112817:INFO:Daemon [/etc/init.d/lwregd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwregd] 20100428112817:INFO:Daemon [/etc/init.d/lwregd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/netlogond] 20100428112817:INFO:Daemon [/etc/init.d/netlogond]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/netlogond] 20100428112817:INFO:Daemon [/etc/init.d/netlogond]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwiod] 20100428112817:INFO:Daemon [/etc/init.d/lwiod]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwiod] 20100428112817:INFO:Daemon [/etc/init.d/lwiod]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/dcerpcd] 20100428112817:INFO:Daemon [/etc/init.d/dcerpcd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/dcerpcd] 20100428112817:INFO:Daemon [/etc/init.d/dcerpcd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/eventlogd] 20100428112817:INFO:Daemon [/etc/init.d/eventlogd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/eventlogd] 20100428112817:INFO:Daemon [/etc/init.d/eventlogd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lsassd] 20100428112817:INFO:Daemon [/etc/init.d/lsassd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lsassd] 20100428112817:INFO:Daemon [/etc/init.d/lsassd]: status [0] 20100428112817:INFO:Domainjoin invoked with 2 arg(s) to the join command: 20100428112817:INFO: [domain.local] 20100428112817:INFO: [default.user] 20100428112817:INFO:Adding ops (fqdn ops.domain.local) to /etc/hosts ip 192.168.246.5, removing ops, ops.domain.local, ops, ops.domain.local 20100428112817:INFO:Reading krb5 file /tmp/likewisetmpPkpAn5/etc/krb5.conf 20100428112817:INFO:Reading krb5 file /tmp/likewisetmpb6dkNX/etc/krb5.conf 20100428112817:INFO:Reading nsswitch file /etc/nsswitch.conf 20100428112817:INFO:Reading pam configuration 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/config-util.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/config-util 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/runuser-l 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sshd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/other 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp.postfix 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/su-l 20100428112817:INFO:Reading pam file 
/tmp/likewisetmptrO2dQ/etc/pam.d/system-switch-mail-nox 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/kshell 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/authconfig 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ekshell 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/run_init 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/screen 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/eject 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-config-network-cmd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth-ac 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/kbdrate 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp.sendmail 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/chsh 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/setup 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-switch-mail 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ksu 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/login 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sudo-i 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/runuser 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/chfn 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ppp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/gssftp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/remote 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/reboot 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/newrole 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-powersave 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/halt 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/other.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/atd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/passwd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/authconfig-tui 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-hibernate 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/su 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-config-network 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/neat 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-suspend-hybrid 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/crond 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sudo 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-suspend 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/poweroff 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.conf 20100428112817:INFO:File /tmp/likewisetmptrO2dQ/etc/pam.conf does not exist 20100428112817:INFO:Found config file /etc/ssh/sshd_config 20100428112817:INFO:Found 
binary /usr/sbin/sshd 20100428112817:INFO:Reading ssh file /etc/ssh/sshd_config 20100428112817:INFO:Found open sshd version 4.3.-1p2 20100428112817:INFO:Testing option ChallengeResponseAuthentication 20100428112817:INFO:Option ChallengeResponseAuthentication supported 20100428112817:INFO:Testing option UsePAM 20100428112817:INFO:Option UsePAM supported 20100428112817:INFO:Testing option PAMAuthenticationViaKBDInt 20100428112817:INFO:Option PAMAuthenticationViaKBDInt not supported 20100428112817:INFO:Testing option KbdInteractiveAuthentication 20100428112817:INFO:Option KbdInteractiveAuthentication supported 20100428112817:INFO:Testing option GSSAPIAuthentication 20100428112817:INFO:Option GSSAPIAuthentication supported 20100428112817:INFO:Testing option GSSAPICleanupCredentials 20100428112817:INFO:Option GSSAPICleanupCredentials supported 20100428112817:INFO:Found config file /etc/ssh/ssh_config 20100428112817:INFO:Found binary /usr/bin/ssh 20100428112817:INFO:Reading ssh file /etc/ssh/ssh_config 20100428112817:INFO:Testing option GSSAPIAuthentication 20100428112817:INFO:Option GSSAPIAuthentication supported 20100428112817:INFO:Testing option GSSAPIDelegateCredentials 20100428112817:INFO:Option GSSAPIDelegateCredentials supported 20100428112821:INFO:Running module join 20100428112821:INFO:Starting krb5.conf configuration (enabling) 20100428112821:INFO:Reading krb5 file /tmp/likewisetmpvgqQmT/etc/krb5.conf 20100428112821:WARNING:Short domain name not specified. Defaulting to 'betgenius' 20100428112821:INFO:Failed to run lwinet ads trusts. This is expected if not yet joined to the domain 20100428112821:INFO:Failed to run lwiinfo --details -m. This is expected if the auth daemon is not running 20100428112821:INFO:Writing krb5 file /tmp/likewisetmpvgqQmT/etc/krb5.conf 20100428112821:INFO:File /tmp/likewisetmpvgqQmT/etc/krb5.conf modified 20100428112821:INFO:Finishing krb5.conf configuration Has anyone seen this error before? and know of the fix?
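    For what it is worth, domainjoin-cli stalling right after "Finishing krb5.conf configuration" is often a name-resolution problem rather than a Kerberos one, since the next steps need to locate a domain controller. That is an assumption, not something visible in this log, but it is cheap to check from the CentOS box:

        # sketch of a quick sanity check (names taken from the log above)
        dig +short -t SRV _ldap._tcp.domain.local      # should list the domain controllers
        dig +short -t SRV _kerberos._udp.domain.local  # should list the KDCs
        ntpdate -q ops.domain.local                    # Kerberos also needs clocks within ~5 minutes

    If either SRV query comes back empty, pointing /etc/resolv.conf at the AD DNS server before rerunning domainjoin-cli would be the first thing to try.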

    Read the article

  • Disable static content caching in IIS 7

    - by Lee Richardson
    I'm a developer having what should be a relatively simple problem in IIS 7 on Windows Server 2008 R2. The problem is that IIS 7 is overzealously caching all static content on the server: it caches all .html and .js content and does not notice when the content changes on disk unless I run iisreset. I've tried the following:

    - Deleting the local cache in my browser (I'm 99% positive this is a server-side caching issue)
    - In IIS Admin, under Output Caching, adding an .html extension and unchecking both "User-mode caching" and "Kernel-mode caching"
    - In IIS Admin, under Output Caching, adding an .html extension, checking "User-mode caching" and selecting the "Prevent all caching" radio button
    - In IIS Admin, editing the Output Cache feature settings and unchecking "Enable cache" and "Enable kernel cache"
    - Running "C:\Windows\System32\inetsrv\config\appcmd set config "SharePoint - 80" -section: system.webServer/caching -enabled:false"
    - Looking through applicationHost.config and disabling anything related to caching that I could find

    Nothing seems to work and I'm getting very frustrated. Can anyone please help?
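    In case it helps anyone with the same symptom: the output cache is only one of the layers involved; static files are also served from the HTTP.sys kernel cache and depend on file-change notifications. A sketch like the one below (an assumption, not the poster's confirmed fix) disables both user- and kernel-mode caching for a single site from its own web.config:

        <!-- web.config for the affected site: turn off output caching entirely -->
        <configuration>
          <system.webServer>
            <caching enabled="false" enableKernelCache="false" />
          </system.webServer>
        </configuration>

    If the content lives on a UNC share or a virtual directory pointing at a network path, stale files are usually a change-notification problem rather than an output-caching one, which would explain why the settings above appear to have no effect.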

    Read the article

  • MOSS Upload Size?

    - by littlegeek
    Hi. In MOSS we have done all of the following to increase the file upload size and reset IIS, but it still doesn't want to play. Anyone have any advice?

    UPDATE: Just seen the Scott Gu article - http://weblogs.asp.net/pscott/archive/2009/02/26/404-errors-with-fileupload-with-iis7.aspx - and the JS example http://msmvps.com/blogs/cgross/archive/2009/02/25/large-files-in-sbs-2008-s-companyweb.aspx - so I need to say this is IIS 6 on Windows Server 2003.

    UPDATE 2: Still not working; I'm at a loss. Can anyone help?

    What we have done so far:

    - In SharePoint 3.0 Central Administration, on the Application Management tab, under Web application general settings, configure the Maximum upload size (up to a maximum of 2047 MB). We set ours to 250 MB.
    - In Internet Information Services, on the properties of the virtual server, increase the Connection Timeout beyond the default 120 seconds, depending on how long large uploads take in your environment (for example, 360 seconds). We set ours to 600 seconds.
    - Configure the _layouts web.config: on the SharePoint server, change C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS\web.config with an executionTimeout appropriate for the file size you are uploading. The values we used were executionTimeout="999999" maxRequestLength="2097151" (attributes of the httpRuntime element).
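    Since this is IIS 6 on Windows Server 2003, one more knob that is sometimes needed for MOSS/WSS 3.0 large uploads (an assumption on my part; it is not mentioned in the post) is the large-file-chunk-size web service property, set with stsadm and followed by an IIS reset:

        REM sketch: raise the chunk size to 100 MB, then restart IIS
        stsadm -o setproperty -pn large-file-chunk-size -pv 104857600
        iisreset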

    Read the article

  • Generic HTTP 500 Error Message On Hosted Sites (like GoDaddy)

    - by Jimbo
    I decided to post this because I battled to find out how to do it and couldn't see anything on Stack Overflow about it. Often when you host with a provider like GoDaddy, they have "Custom Error Messages" set to ON. What I didn't realise was that the web.config settings don't just apply to ASP.NET; they apply to all applications on YOUR IIS site, and hence will sort this problem out for Classic ASP as well (very few GoDaddy support people even know this). All you need to do is add the following to your web.config OR, for those using Classic ASP, just create a web.config file in your ROOT with this code in it:

        <configuration>
            <system.webServer>
                <asp scriptErrorSentToBrowser="true"/>
                <httpErrors errorMode="Detailed"/>
            </system.webServer>
        </configuration>
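    One caveat worth adding (my note, not part of the original tip): errorMode="Detailed" sends full error details to every visitor, which can leak paths and stack traces. Once the underlying fault is diagnosed, a sketch like this keeps detailed errors for local requests only:

        <configuration>
            <system.webServer>
                <httpErrors errorMode="DetailedLocalOnly"/>
            </system.webServer>
        </configuration>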

    Read the article

  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set it up in IIS. The error:

        HTTP error 500.19, error code 0x80070005, Cannot read configuration file due to insufficient permissions,
        config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config

    If I set the AppPool to use 'administrator' I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error. Things I have tried:

    - The identity for the application pool named 'test' is 'NetworkService'
    - Set full access privileges for wwwroot and all child files/folders
    - Verified effective permissions: NETWORK SERVICE has full access
    - Authentication on my site is set to Anonymous, running under the Application Pool Identity
    - I do not have any physical path credentials set on the website
    - Confirmed the website is set to run under the application pool named 'test'

    Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event:

        EVENT TAB
        Class: File System
        Operation: CreateFile
        Result: Access Denied
        Path: ..\web.config
        Desired Access: Generic Read
        Disposition: Open
        Options: Synchronous IO Non-Alert, Non-Directory File
        Attributes: N
        ShareMode: Read
        AllocationSize: n/a

        PROCESS TAB
        ...lots of stuff that seems irrelevant
        User: NT AUTHORITY\NETWORK SERVICE
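    A sketch of the checks I would run next (assumptions only; the paths come from the error message, everything else is generic): make sure the whole chain of directories grants read access to the worker-process accounts, not just the leaf folder, and look for an explicit Deny entry on web.config itself.

        REM show the effective ACL on the config file
        icacls "C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config"
        REM re-grant read access down the tree to the worker-process group and NETWORK SERVICE
        icacls "C:\inetpub\wwwroot\BudgetManagerMain" /grant "IIS_IUSRS:(OI)(CI)R" /t
        icacls "C:\inetpub\wwwroot\BudgetManagerMain" /grant "NETWORK SERVICE:(OI)(CI)R" /t

    A web.config encrypted with EFS under another account, or a Deny ACE higher up the tree, shows up exactly like this even when the Allow entries look correct.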

    Read the article

  • Dynamips Virtual Router not pinging external host after bridging

    - by maiky
    I have set up a virtual router image with Dynamips and set up bridging between the tap0 interface of the virtual router and eth0 and br0 with these commands:

        [root@cisco_host]# brctl addbr br0
        [root@cisco_host]# ifconfig br0 up
        [root@cisco_host]# ifconfig eth1 0.0.0.0
        [root@cisco_host]# brctl addif br0 eth1
        [root@cisco_host]# ifconfig br0 192.168.0.100 netmask 255.255.255.0 up

    In the dynagen configuration I have:

        f0/0 = NIO_tap:tap0

    So:

        [root@cisco_host]# brctl addif br0 tap0
        [root@cisco_host]# ifconfig tap0 up
        router(config)#int fa0/0
        router(config-if)#ip address 192.168.0.101 255.255.255.0
        router(config-if)#no shutdown

    After the configuration above I was expecting that from inside the router I could ping an external machine with IP 192.168.0.1. In fact, from the host I can ping the external host 192.168.0.1 as well as 192.168.0.100 and 192.168.0.101. So what am I missing here? tap0 is bridged with br0 and in turn with eth1, so why is the router not pingable from 192.168.0.1, and vice versa?
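    A few things I would check, offered as a sketch only (none of this is confirmed by the post): ordering matters because the tap interface only exists and carries traffic while the router instance is running, and bridged frames can be silently dropped by netfilter.

        # confirm both ports are really attached and up while the router is running
        brctl show br0            # should list eth1 AND tap0 as bridge ports
        ip link show tap0         # state must be UP
        # stop iptables from filtering bridged traffic (or add explicit ACCEPT rules)
        sysctl -w net.bridge.bridge-nf-call-iptables=0

    If tap0 is missing from the brctl output after the router boots, re-running "brctl addif br0 tap0" and "ifconfig tap0 up" once the instance is up is the usual remedy.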

    Read the article

  • How to disallow use of disable_functions in?

    - by Gaia
    I'm obviously not the first one to have this problem, but I cannot find an answer to this situation. I want to lock down PHP a bit, more specifically the use of disable_functions. The environment is CentOS 6.2 / PHP 5.3.3 (fcgid) / Apache 2.2.15.

    1. What's the proper Apache config (AllowOverride, etc.) to prevent any PHP setting from being changed via .htaccess? All other overrides are OK (the current setting is AllowOverride All).
    2. What's the proper config to forbid effective use of disable_functions anywhere but the master php.ini (as in, forbid use of disable_functions in /home/myvhost/etc/php5/php.ini or any directory within that vhost's public_html; another way to say this: the only effective disable_functions comes from the master php.ini)?
    3. If #2 is not possible, at least what's the proper config to prevent a vhost owner from effectively using any php.ini but the vhost's main one (/home/myvhost/etc/php5/php.ini)?

    Thanks
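    A couple of points that may help frame this, plus a sketch (the wrapper path and php.ini location below are assumptions, not taken from the post): disable_functions is a PHP_INI_SYSTEM directive, so it can never be changed from .htaccess at all; under mod_php the directives to restrict would be php_value/php_admin_value, but under fcgid the real exposure is question 3 - whichever php.ini the CGI binary reads wins. Pinning that down is usually done in the fcgid wrapper script that Apache is configured to call:

        #!/bin/sh
        # fcgid wrapper sketch: force php-cgi to read only the admin-controlled ini
        PHPRC=/etc/php-admin/php.ini
        export PHPRC
        exec /usr/bin/php-cgi -c /etc/php-admin/php.ini

    As long as the vhost owner cannot edit the wrapper (root-owned, outside their home directory), their local /home/myvhost/etc/php5/php.ini is never consulted, which addresses 2 and 3 together.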

    Read the article

  • Configure apache to reverse proxy for specific name

    - by Phrogz
    I have a working intranet server that:

    1. Properly serves some content from http://hqmktgwb01/
    2. Is currently properly configured to reverse proxy from http://hqmktgwb01/dashstats to a round-robin of localhost:3000 - localhost:3003
    3. Also has the DNS name dashstats (pointing to the same IP)

    The current working configuration file can be found here: http://pastie.org/1426082. I would like to modify the configuration so that:

    4. http://dashstats/ performs the same reverse proxying as http://hqmktgwb01/dashstats

    I (naively) modified the config like this: http://pastie.org/1426047 (added lines 90-98), but this is not a valid Apache config. Please help me to modify the original config file to accomplish 1-4 above.
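    Without the pastie to hand I can only offer a sketch, so the port numbers and balancer name below are assumptions: the usual shape of the fix is to keep the existing config as the default host and add a name-based virtual host for dashstats that proxies everything at the root to the same backend pool.

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName dashstats
            ProxyRequests Off
            ProxyPass        / balancer://dashstats/
            ProxyPassReverse / balancer://dashstats/
            <Proxy balancer://dashstats>
                BalancerMember http://localhost:3000
                BalancerMember http://localhost:3001
                BalancerMember http://localhost:3002
                BalancerMember http://localhost:3003
            </Proxy>
        </VirtualHost>

    On Apache 2.2, once NameVirtualHost is enabled the existing hqmktgwb01 configuration also needs to live inside its own <VirtualHost *:80> block (listed first so it remains the default), otherwise requirement 1 breaks.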

    Read the article

  • Running evrouter at boot with init.d, or after xserver starts

    - by J V
    I'm using evrouter to set up mouse button binds, and init.d to start it. My init.d file:

        #!/bin/bash
        # Simple init.d script to run evrouter
        ### BEGIN INIT INFO
        # Provides:          evrouter
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Set evrouter bindings
        # Description:       Set evrouter bindings at boot time.
        ### END INIT INFO

        config="/opt/hacks/evrouterrc"

        case "$1" in
            start|restart|reload|force-reload)
                evrouter -c "$config" /dev/input/event*
                ;;
            stop)
                echo "Evrouter is not a daemon, change settings file at '$config' and restart"
                ;;
            *)
                echo "Usage: $0 start" >&2
                exit 3
                ;;
        esac

    evrouter however complains that:

        evrouter: could not open display "".

    If evrouter requires the X server to be up, how do I get init to wait until after the X server starts before running this script? If the X server restarts, will this script run automatically? Running this with "sudo services evrouter start" still results in this error; can init.d scripts not tell where my display is? (Not exactly familiar with init, runlevels, etc.)
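    The underlying issue, offered as a sketch rather than a verified recipe: init scripts run outside any X session, so there is no DISPLAY or Xauthority for evrouter to use, regardless of runlevel ordering. Either export them explicitly (fragile; the display and home path below are assumptions) or start evrouter from the session itself, e.g. from ~/.xprofile or a desktop autostart entry, which also restarts it whenever X comes back.

        # option 1: inside the init script, point evrouter at an existing session
        export DISPLAY=:0
        export XAUTHORITY=/home/jv/.Xauthority
        evrouter -c /opt/hacks/evrouterrc /dev/input/event*

        # option 2 (usually simpler): one line in ~/.xprofile, no init script at all
        evrouter -c /opt/hacks/evrouterrc /dev/input/event* &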

    Read the article

  • use of [!NOTFOUND=return] in nsswitch.conf

    - by Chris Phillips
    Has anyone come across the use of this setting for the passwd and group entries in nsswitch.conf? Where I'm working I've been told it has been shown to help in situations where a group exists both locally and in LDAP, which was causing issues for group memberships and so on. However, this configuration seems to totally mess up nscd, which is aware of the groups and all their members but will not flip the data around to say that a user is a member of all of its remote groups. Initially it seems, given a fully available environment, to be exactly the same as [FOUND=return], which is an implicit default between stages anyway. However, apparently a lengthy ticket with Red Hat resulted in the recommendation to use this configuration.
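    For reference, a sketch of what the configuration looks like and how it differs from the defaults (my reading of the nsswitch.conf action syntax, not something stated in the post): the status keyword is negatable with "!", so !NOTFOUND matches SUCCESS, UNAVAIL and TRYAGAIN.

        # continue to ldap only when files returns a clean "not found";
        # a SUCCESS, UNAVAIL or TRYAGAIN from files stops the lookup here
        passwd: files [!NOTFOUND=return] ldap
        group:  files [!NOTFOUND=return] ldap

    Compared with the implicit defaults (return on SUCCESS, continue otherwise), the visible difference is that an unavailable or flaky first source no longer falls through to LDAP, which is presumably the behaviour the Red Hat ticket was after.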

    Read the article

  • Tablet pen loses pressure sensitivity after screensaver

    - by Rohit Nair
    Tablet: Genius MousePen 8x6 System: Windows 7 64-bit The question says it all. When I first boot up my machine, the tablet works fine, with pressure sensitivity functionality. However, if I leave the machine long enough for the screensaver to start up, when I return, the tablet works just like a mouse, and does not respond to changes in pen pressure. Unplugging and replugging the device does not fix the problem. Disabling and enabling the device in device manager does not fix the problem. When I try to use the tablet config tool -before- screensaver, the pressure config works fine. It gives you a little area where you can draw to test out the pressure. When I try to use the same tool -after- screensaver, the moment I click on that drawing area, the config tool freezes.

    Read the article

  • Nagios configuration management

    - by HannesFostie
    I am going to implement Nagios (most likely, anyway; it could turn out to be another tool as well) and I was wondering if anyone would like to share their best practices for creating, managing and maintaining the config files with scalability and manageability in mind, as I find it could quickly become a real mess. Any tips, examples or even full configurations would be most welcome and I'd happily look them over. Tools would be welcome as well. I've tried out NConf so far, but the generated config files don't seem to do what was promised (not including the parent information, for one, and they are just a PITA to get working - they generate a ton of errors when checked with the verification script supplied by Nagios). Thanks
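    A sketch of the layout that tends to stay manageable (generic advice under my own assumptions, not tied to any answer in the thread): keep nagios.cfg thin, pull object definitions from a version-controlled directory tree via cfg_dir, push shared settings into templates, and validate with "nagios -v /etc/nagios/nagios.cfg" before every reload - that check catches the kind of errors NConf was generating.

        # nagios.cfg: one line per directory, the tree lives in version control
        cfg_dir=/etc/nagios/objects/templates
        cfg_dir=/etc/nagios/objects/hosts
        cfg_dir=/etc/nagios/objects/services

        # objects/templates/linux-host.cfg: defined once, never registered
        define host {
            name                  generic-linux-host
            register              0
            check_command         check-host-alive
            max_check_attempts    3
            check_period          24x7
            notification_interval 60
            notification_period   24x7
            contact_groups        admins
        }

        # objects/hosts/web01.cfg: real hosts stay tiny
        define host {
            use        generic-linux-host
            host_name  web01
            alias      Web server 01
            address    192.0.2.10
        }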

    Read the article

  • apache tomcat loadbalancing clustering on ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
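    Two mismatches in the posted snippets stand out and may be the whole problem, though I can only flag them as things to verify rather than a confirmed fix: the jk.conf mounts point at a worker called "worker1", which is never defined in workers.properties (the defined workers are tomcatA, tomcatB and loadbalancer), and the AJP port quoted for tomcatB (8209) does not match the AJP connector shown in its server.xml dump (8109). A sketch of a jk.conf that matches the posted workers.properties:

        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/jk.log
        JkLogLevel    info
        JkMount /ecommerce/*  loadbalancer
        JkMount /images/*     loadbalancer
        JkMount /content/*    loadbalancer

    After aligning the AJP ports (worker.tomcatA.port and worker.tomcatB.port must equal the port attribute of each Tomcat's AJP/1.3 connector) and restarting Apache, /var/log/apache2/jk.log will show whether mod_jk can reach both instances.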

    Read the article
