Search Results

Search found 20846 results on 834 pages for 'current dir'.


  • Can I get RSpec to generate specs with the expect syntax?

    - by papirtiger
    When generating specs with:

        rails g controller Home index

    a spec is generated with the older object.should syntax:

        require 'spec_helper'

        describe HomeController do
          describe "GET 'index'" do
            it "returns http success" do
              get 'index'
              response.should be_success
            end
          end
        end

    Is it possible to configure the generator to use the expect syntax instead? Desired output:

        require 'spec_helper'

        describe HomeController do
          describe "GET 'index'" do
            it "returns http success" do
              get 'index'
              expect(response).to be_success
            end
          end
        end

    My current generator configuration in config/application.rb:

        config.generators do |g|
          g.test_framework :rspec, fixture: true
          g.fixture_replacement :factory_girl, dir: 'spec/factories'
          g.view_specs false
          g.stylesheets = false
          g.javascripts = false
        end
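    Not covered in the excerpt, but for context: Rails generators look for overriding templates under lib/templates, so one commonly suggested approach (hedged; the exact path below follows that convention and is an assumption, not something from the question) is to copy the controller spec template out of the rspec-rails gem and edit the assertion style yourself:

        # Assumed override location, following the Rails generator-template convention:
        #   lib/templates/rspec/controller/controller_spec.rb
        #
        # Copy the original template from the rspec-rails gem into that path,
        # then change the generated assertion from:
        #   response.should be_success
        # to:
        #   expect(response).to be_success
        #
        # Subsequent `rails g controller ...` runs should pick up the local template.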

    Read the article

  • Rewriting URLs in ASP.NET (simple question)

    - by Tom Gullen
    On my master page I have:

        <link rel="stylesheet" href="css/default.css" />

    I also have a page blog.aspx in the root directory, and the rewrite rule:

        <rewrite url="~/blog/blog.aspx" to="~/blog.aspx" />

    My questions are:

    1. Am I now meant to make all the links in my site point to blog/blog.aspx instead of blog.aspx, where the page is physically located?
    2. What is the best way to cope with the stylesheet paths being broken, now that the rewritten URL appears to live one directory up?
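    For the second question, a hedged sketch of one common fix: use an app-root-relative path resolved on the server, so the link works regardless of which folder the rewritten URL appears to live in. ResolveUrl is standard ASP.NET; the css/default.css layout is taken from the question:

        <link rel="stylesheet" href="<%= ResolveUrl("~/css/default.css") %>" />

    Because the href is emitted as an absolute application path, it is unaffected by how deep the rewritten URL nests.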

    Read the article

  • Change all the label icons in a panel

    - by Josh
    I have a panel with several JLabels, and I would like to change all of their icons:

        String path = System.getProperty("user.dir");
        for (int x = 0; x < 21; x++) {
            javax.swing.JLabel lab = boardPanel.getComponent(x);
            lab.setIcon(new ImageIcon(path + "\\image\\blank.jpg"));
        }

    It gives me an "incompatible types" error, even though everything inside boardPanel is a JLabel. I'm using NetBeans 6.8.
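    For context, a hedged sketch of the usual fix: Container.getComponent() is declared to return java.awt.Component, so the assignment needs an explicit cast (the instanceof guard and getComponentCount() bound are extra precautions, not from the question):

        import java.awt.Component;
        import javax.swing.ImageIcon;
        import javax.swing.JLabel;

        String path = System.getProperty("user.dir");
        for (int x = 0; x < boardPanel.getComponentCount(); x++) {
            Component c = boardPanel.getComponent(x);
            if (c instanceof JLabel) {   // skip anything that isn't a label
                ((JLabel) c).setIcon(new ImageIcon(path + "\\image\\blank.jpg"));
            }
        }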

    Read the article

  • PHP shell command

    - by DGT
    Can anyone please tell me how to move a file (or files) into a directory in PHP? I tried the following and it doesn't work:

        exec("temp/$file ../public/");

    I would appreciate any help.
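    A hedged sketch of two common fixes (paths taken from the question; escapeshellarg added as a precaution): the exec() call above is missing the command itself, and PHP's built-in rename() moves a file without shelling out at all:

        <?php
        // Option 1: actually invoke mv, quoting the filename defensively
        exec("mv temp/" . escapeshellarg($file) . " ../public/");

        // Option 2 (simpler and portable): rename() moves files across directories
        rename("temp/$file", "../public/$file");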

    Read the article

  • How to find files older than N days from a given timestamp

    - by JGeZau
    I want to find files older than N days from a given timestamp in the format YYYYMMDDHH.

    I can find files older than 2 days with the command below, but it measures from the present time:

        find /path/to/dir -mtime +2 -type f -ls

    Let's say I give the input timeStamp=2011093009; I then want to find files older than 2 days counted from 2011093009. I've been doing my research but can't seem to figure it out.

    Update: found the solution -- see below for my answer. Thanks.
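    Since the answer itself isn't included in this excerpt, here is a hedged sketch of one standard approach (assumes bash and GNU date; the variable and file names are invented for illustration): turn the timestamp minus N days into a reference file with touch -t, then let find compare against it:

        timeStamp=2011093009   # YYYYMMDDHH, as in the question
        N=2                    # days before the timestamp

        # Compute (timestamp - N days) in touch's [CC]YYMMDDhhmm format
        ref=$(date -d "${timeStamp:0:8} ${timeStamp:8:2}:00 ${N} days ago" +%Y%m%d%H%M)

        touch -t "$ref" /tmp/find_ref                      # reference file carrying that mtime
        find /path/to/dir -type f ! -newer /tmp/find_ref -ls
        rm -f /tmp/find_ref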

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don't use it in my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it's about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn't find any that would explain how we can also have AJAX, optimistic concurrency control, and validation, using Entity Framework Code First, so I set out to write one! I won't go into explaining what MVC, Code First, optimistic concurrency control, or AJAX are; I assume you are all familiar with these concepts by now.

    Let's consider a hypothetical use case: products. For simplicity, we only want to be able to either view a single product or edit that product. First, we need our model:

        public class Product
        {
            public Product()
            {
                this.Details = new HashSet<OrderDetail>();
            }

            [Required]
            [StringLength(50)]
            public String Name { get; set; }

            [Key]
            [ScaffoldColumn(false)]
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public Int32 ProductId { get; set; }

            [Required]
            [Range(1, 100)]
            public Decimal Price { get; set; }

            public virtual ISet<OrderDetail> Details { get; protected set; }

            [Timestamp]
            [ScaffoldColumn(false)]
            public Byte[] RowVersion { get; set; }
        }

    Keep in mind that this is a simple scenario. Let's see what we have:

    - A class Product that maps to a product record in the database;
    - A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute);
    - The product's Price must be a decimal value between 1 and 100 (RangeAttribute);
    - It contains a set of order details, one for each time it has been ordered, which we will not talk about (Details);
    - The record's primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute);
    - The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control, mapped to property RowVersion (TimestampAttribute).

    Then we will need a controller for viewing product details, which will be located in folder ~/Controllers under the name ProductController:

        public class ProductController : Controller
        {
            [HttpGet]
            public ViewResult Get(Int32 id = 0)
            {
                if (id != 0)
                {
                    using (ProductContext ctx = new ProductContext())
                    {
                        return (this.View("Single", ctx.Products.Find(id) ?? new Product()));
                    }
                }
                else
                {
                    return (this.View("Single", new Product()));
                }
            }
        }

    If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product's details; more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as:

        public class ProductContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

    Like I said before, I'll keep it simple for now; only the aggregate root Product is available.

    The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template:

        routes.MapRoute(
            "Default",                    // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        );

    Next, we need a view for displaying the product details; let's call it Single and have it located under ~/Views/Product:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
        <!DOCTYPE html>

        <html>
        <head runat="server">
            <title>Product</title>
            <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
            <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
            <script type="text/javascript">
                function onFailure(error)
                {
                }

                function onSuccess(ctx)
                {
                }
            </script>
        </head>
        <body>
            <div>
                <%: this.Html.ValidationSummary(false) %>
                <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
                    <%: this.Html.EditorForModel() %>
                    <input type="submit" name="submit" value="Submit" />
                <% } %>
            </div>
        </body>
        </html>

    Yes… I am using ASPX syntax… sorry about that!

    I implemented an editor template for the Product class, which must be located in the ~/Views/Shared/EditorTemplates folder as file Product.ascx:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %>
        <div>
            <%: this.Html.HiddenFor(model => model.ProductId) %>
            <%: this.Html.HiddenFor(model => model.RowVersion) %>
            <fieldset>
                <legend>Product</legend>
                <div class="editor-label">
                    <%: this.Html.LabelFor(model => model.Name) %>
                </div>
                <div class="editor-field">
                    <%: this.Html.TextBoxFor(model => model.Name) %>
                    <%: this.Html.ValidationMessageFor(model => model.Name) %>
                </div>
                <div class="editor-label">
                    <%= this.Html.LabelFor(model => model.Price) %>
                </div>
                <div class="editor-field">
                    <%= this.Html.TextBoxFor(model => model.Price) %>
                    <%: this.Html.ValidationMessageFor(model => model.Price) %>
                </div>
            </fieldset>
        </div>

    One thing you'll notice is that I am including both the ProductId and the RowVersion properties as hidden fields; they will come in handy later, so that we know which product and which version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI, and the unobtrusive validations. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript IntelliSense for the jQuery functions.

    OK, so at this moment I want to add support for AJAX and optimistic concurrency control, so I write a controller method like this:

        [HttpPost]
        [AjaxOnly]
        [Authorize]
        public JsonResult Edit(Product product)
        {
            if (this.TryValidateModel(product) == true)
            {
                using (ProductContext ctx = new ProductContext())
                {
                    Boolean success = false;

                    ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                    try
                    {
                        success = (ctx.SaveChanges() == 1);
                    }
                    catch (DbUpdateConcurrencyException)
                    {
                        ctx.Entry(product).Reload();
                    }

                    return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
                }
            }
            else
            {
                return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty }));
            }
        }

    So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties:

    - Success: a boolean flag;
    - RowVersion: the current version of the ROWVERSION column, as a Base-64 string;
    - ProductId: the inserted product id, as coming from the database.

    The possible outcomes are:

    - If the product is new, it will be inserted into the database, and its primary key will be returned in the ProductId property; Success will be set to true;
    - If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value in the database, so the record must have been modified between the time the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false;
    - Otherwise, the record will be updated, and Success, ProductId, and RowVersion will all have their values set accordingly.

    So let's see how we can react to these situations on the client side. Specifically, we want to deal with these situations:

    - The user is not logged in when the update/create request is made, perhaps because the cookie expired;
    - The optimistic concurrency check failed;
    - All went well.

    So, let's change our view:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
        <%@ Import Namespace="System.Web.Security" %>

        <!DOCTYPE html>

        <html>
        <head runat="server">
            <title>Product</title>
            <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
            <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
            <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
            <script type="text/javascript">
                function onFailure(error)
                {
                    window.alert('An error occurred: ' + error);
                }

                function onSuccess(ctx)
                {
                    if (typeof (ctx.Success) != 'undefined')
                    {
                        $('input#ProductId').val(ctx.ProductId);
                        $('input#RowVersion').val(ctx.RowVersion);

                        if (ctx.Success == false)
                        {
                            window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                        }
                        else
                        {
                            window.alert('Saved successfully');
                        }
                    }
                    else
                    {
                        if (window.confirm('Not logged in. Login now?') == true)
                        {
                            document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                        }
                    }
                }
            </script>
        </head>
        <body>
            <div>
                <%: this.Html.ValidationSummary(false) %>
                <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
                    <%: this.Html.EditorForModel() %>
                    <input type="submit" name="submit" value="Submit" />
                <% } %>
            </div>
        </body>
        </html>

    The implementation of the onSuccess function first checks whether the response contains a Success property; if not, the most likely cause is that the request was redirected to the login page (using Forms Authentication) because it wasn't authenticated, so we navigate there as well, keeping a reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used to determine whether the request is for adding a new product or for updating an existing one.

    The only thing missing is the ability to insert a new product after inserting/editing an existing one, which can easily be achieved using this snippet:

        <input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');" />

    And that's it.
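    For reference, a hedged sketch of the JSON payload the Edit action above produces on a successful insert (the property names come from the controller code; the values are made up for illustration):

        { "Success": true, "ProductId": 42, "RowVersion": "AAAAAAAAB9E=" }

    The Base-64 RowVersion is what the onSuccess handler writes back into the hidden field, so the next save carries the fresh concurrency token.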

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,

    Offloading the backup operation to another host using disk cloning can really improve performance on highly busy databases (24x7, zero downtime, and all that stuff...). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies.

    Assumptions:

    ASM diskgroup names:

        +data_${db_name} : ASM data diskgroup
        +fra_${db_name}  : ASM FRA diskgroup

    EMC TimeFinder sync group names:

        rac_${DB_NAME}_data_tf : data group
        rac_${DB_NAME}_fra_tf  : FRA group

    There are two scripts: one located on the production box (bck_database.sh) and the other on the backup server node (bck_database_mirror.sh). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database.sh
        ###
        ###    DESCRIPTION
        ###      Database backup on third mirror
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED        (DD/MM/YY)
        ###    Oracle          28/01/10    - Creacion
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log
        exec 4>&1
        tee ${V_FICH_LOG} >&4 |&
        exec 1>&p 2>&1

        ADMIN_DIR=`dirname $0`
        # This script should set the instance vars: Oracle Home, SID, db_name, ...
        . ${ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the environment."
          exit 1
        fi

        echo "${V_DATE} ####################################################"
        echo "Executing database backup: ${DB_NAME}"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm data diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when syncing asm data diskgroups"
          exit 2
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify
        if [ $? -ne 0 ]
        then
          echo "Error when verifying asm data diskgroups"
          exit 3
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm fra diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when syncing asm fra diskgroups"
          exit 4
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify
        if [ $? -ne 0 ]
        then
          echo "Error when verifying asm fra diskgroups"
          exit 5
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "ASM sync successfully completed!"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to BEGIN BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter system archive log current;
          alter database ${DB_NAME} begin backup;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when updating database status to BEGIN backup"
          exit 6
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when splitting asm data disks"
          exit 7
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to END BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter database ${DB_NAME} end backup;
          alter system archive log current;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when updating database status to END backup"
          exit 8
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Generating controlfile copies ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        run {
          allocate channel ch1 type DISK;
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl';
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error generating controlfile copies"
          exit 9
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Resync RMAN catalog ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        resync catalog;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when resyncing RMAN catalog"
          exit 10
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when splitting asm fra disks"
          exit 11
        fi

        echo "WARNING!: Calling bck_database_mirror.sh on host ${NODE_BCK_SERVER} ..."
        ssh ${NODE_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh
        if [ $? -ne 0 ]
        then
          echo "Error when remotely executing the backup"
          exit 12
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Cleaning the archived redo logs already copied to tape ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          resync catalog;
          delete noprompt archivelog all backed up 1 times to device type sbt;
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error when cleaning the archived redo logs"
          exit 13
        fi

        echo "${V_DATE} ####################################################"
        echo "Backup successfully executed!!"
        echo "####################################################################"
        exit 0

    And on the backup server node:

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database_mirror.sh
        ###
        ###    DESCRIPTION
        ###      Backup @ backup server
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED        (DD/MM/YY)
        ###    Oracle          28/01/10    - Creacion
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ASM instance ..."
        echo "####################################################################"
        # This script is supposed to start the ASM instance on the backup server
        ${V_ADMIN_DIR}/start_asm.sh
        if [ $? -ne 0 ]
        then
          echo "Error when trying to start the ASM instance."
          exit 1
        fi

        # This script is supposed to set the env. variables of the ASM instance
        . ${V_ADMIN_DIR}/setenv_asm.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the ASM environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "The asm diskgroups/disks detected are the following ..."
        echo "####################################################################"
        sqlplus /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          SET LINES 200
          COL PATH FORMAT A25
          SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB
            FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP
           WHERE DISK.GROUP_NUMBER = DISK_GROUP.GROUP_NUMBER;
        !

        V_ADMIN_DIR=`dirname $0`
        # This script is supposed to set the env. variables of the database instance
        . ${V_ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the database instance environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ${DB_NAME} in MOUNT mode..."
        echo "####################################################################"
        # This script is supposed to do a startup mount
        ${V_ADMIN_DIR}/start_instance_mount.sh
        if [ $? -ne 0 ]
        then
          echo "Error starting ${DB_NAME} in MOUNT mode"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Executing RMAN backup..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          # TDPO Media Library
          allocate channel ch1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
          crosscheck archivelog all;
          backup tag BCK_CONTROLFILE_ST_${DB_NAME} format 'ctl_%d_%s__%p_%t' controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
          backup tag BCK_DATAFILE_ST_${DB_NAME} full format 'db_%d_%s_%p_%t' database;
          backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;
          release channel ch1;
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error executing the RMAN backup"
          exit 1
        fi

        # These scripts are supposed to do a shutdown immediate of the database
        # instance and of the ASM instance, respectively
        ${V_ADMIN_DIR}/stop_instance_immediate.sh
        ${ADMIN_DIR}/stop_asm_immediate.sh
        exit 0

    Hope it helps someone! --L

    Read the article

  • Metro: Understanding CSS Media Queries

    - by Stephen.Walther
    If you are building a Metro style application then your application needs to look great when used on a wide variety of devices. Your application needs to work on tiny little phones, slates, desktop monitors, and the super high resolution displays of the future. Your application also must support portable devices used with different orientations. If someone tilts their phone from portrait to landscape mode then your application must still be usable. Finally, your Metro style application must look great in different states. For example, your Metro application can be in a "snapped state" when it is shrunk so it can share screen real estate with another application.

    In this blog post, you learn how to use Cascading Style Sheet media queries to support different devices, different device orientations, and different application states. First, you are provided with an overview of the W3C Media Query recommendation and you learn how to detect standard media features. Next, you learn about the Microsoft extensions to media queries which are supported in Metro style applications. For example, you learn how to use the -ms-view-state feature to detect whether an application is in a "snapped state" or "fill state". Finally, you learn how to programmatically detect the features of a device and the state of an application, using the msMatchMedia() method to execute a media query with JavaScript.

    Using CSS Media Queries

    Media queries enable you to apply different styles depending on the features of a device. Media queries are not only supported by Metro style applications; most modern web browsers now support media queries, including Google Chrome 4+, Mozilla Firefox 3.5+, Apple Safari 4+, and Microsoft Internet Explorer 9+.

    Loading Different Style Sheets with Media Queries

    Imagine, for example, that you want to display different content depending on the horizontal resolution of a device. In that case, you can load different style sheets optimized for different sized devices. Consider the following HTML page:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>U.S. Robotics and Mechanical Men</title>
            <link href="main.css" rel="stylesheet" type="text/css" />
            <!-- Less than 1100px -->
            <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" />
            <!-- Less than 800px -->
            <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" />
        </head>
        <body>
            <div id="header">
                <h1>U.S. Robotics and Mechanical Men</h1>
            </div>
            <!-- Advertisement Column -->
            <div id="leftColumn">
                <img src="advertisement1.gif" alt="advertisement" />
                <img src="advertisement2.jpg" alt="advertisement" />
            </div>
            <!-- Product Search Form -->
            <div id="mainContentColumn">
                <label>Search Products</label>
                <input id="search" /><button>Search</button>
            </div>
            <!-- Deal of the Day Column -->
            <div id="rightColumn">
                <h1>Deal of the Day!</h1>
                <p>
                    Buy two cameras and get a third camera for free!
                    Offer is good for today only.
                </p>
            </div>
        </body>
        </html>

    The HTML page above contains three columns: a leftColumn, mainContentColumn, and rightColumn. When the page is displayed on a low resolution device, such as a phone, only the mainContentColumn appears. When the page is displayed on a medium resolution device, such as a slate, both the leftColumn and the mainContentColumn are displayed. Finally, when the page is displayed on a high-resolution device, such as a computer monitor, all three columns are displayed.

    Different content is displayed with the help of media queries. The page above contains three style sheet links. Two of the links include a media attribute:

        <link href="main.css" rel="stylesheet" type="text/css" />
        <!-- Less than 1100px -->
        <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" />
        <!-- Less than 800px -->
        <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" />

    The main.css style sheet contains default styles for the elements in the page. The medium.css style sheet is applied when the page width is less than 1100px. This style sheet hides the rightColumn and changes the page background color to lime:

        html { background-color: lime; }
        #rightColumn { display: none; }

    Finally, the small.css style sheet is loaded when the page width is less than 800px. This style sheet hides the leftColumn and changes the page background color to red:

        html { background-color: red; }
        #leftColumn { display: none; }

    The different style sheets are applied as you stretch and contract your browser window. You don't need to refresh the page after changing its size for a media query to be applied.

    Using the @media Rule

    You don't need to divide your styles into separate files to take advantage of media queries. You can group styles by using the @media rule. For example, the following HTML page contains one set of styles which are applied when a device's orientation is portrait and another set of styles when a device's orientation is landscape:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>Application1</title>
            <style type="text/css">
                html {
                    font-family: 'Segoe UI Semilight';
                    font-size: xx-large;
                }

                @media screen and (orientation:landscape) {
                    html { background-color: lime; }
                    p.content { width: 50%; margin: auto; }
                }

                @media screen and (orientation:portrait) {
                    html { background-color: red; }
                    p.content { width: 90%; margin: auto; }
                }
            </style>
        </head>
        <body>
            <p class="content">
                Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor
                congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus
                malesuada libero, sit amet commodo magna eros quis urna.
            </p>
        </body>
        </html>

    When a device has a landscape orientation, the background color is set to lime and the text only takes up 50% of the available horizontal space. When the device has a portrait orientation, the background color is red and the text takes up 90% of the available horizontal space.

    Using Standard CSS Media Features

    The official list of standard media features is contained in the W3C CSS Media Query recommendation located here: http://www.w3.org/TR/css3-mediaqueries/

    Here is the official list of the 13 media features described in the standard:

    · width – The current width of the viewport
    · height – The current height of the viewport
    · device-width – The width of the device
    · device-height – The height of the device
    · orientation – The value portrait or landscape
    · aspect-ratio – The ratio of width to height
    · device-aspect-ratio – The ratio of device width to device height
    · color – The number of bits per color supported by the device
    · color-index – The number of colors in the color lookup table of the device
    · monochrome – The number of bits in the monochrome frame buffer
    · resolution – The density of the pixels supported by the device
    · scan – The values progressive or interlace (used for TVs)
    · grid – The values 0 or 1, which indicate whether the device supports a grid or a bitmap

    Many of the media features in the list above support the min- and max- prefixes. For example, you can test for the min-width using a query like this:

        (min-width:800px)

    You can use the logical and operator with media queries when you need to check whether a device supports more than one feature. For example, the following query returns true only when the width of the device is between 800 and 1,200 pixels:

        (min-width:800px) and (max-width:1200px)

    Finally, you can use the different media types – all, braille, embossed, handheld, print, projection, screen, speech, tty, tv – with a media query. For example, the following media query only applies when a page is being printed in color:

        print and (color)

    If you don't specify a media type then the media type all is assumed.

    Using Metro Style Media Features

    Microsoft has extended the standard list of media features with two custom media features which you can include in a media query:

    · -ms-high-contrast – The values any, black-white, white-black
    · -ms-view-state – The values full-screen, fill, snapped, device-portrait

    You can take advantage of the -ms-high-contrast media feature to make your web application more accessible to individuals with disabilities. In high contrast mode, you should make your application easier to use for individuals with vision disabilities.

    The -ms-view-state media feature enables you to detect the state of an application. For example, when an application is snapped, the application only occupies part of the available screen real estate. The snapped application appears on the left or right side of the screen, and the rest of the screen real estate is dominated by the fill application (Metro style applications can only be snapped on devices with a horizontal resolution greater than 1,366 pixels). Here is a page which contains style rules for an application in both the snapped and fill application states:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>MyWinWebApp</title>
            <style type="text/css">
                html {
                    font-family: 'Segoe UI Semilight';
                    font-size: xx-large;
                }

                @media screen and (-ms-view-state:snapped) {
                    html { background-color: lime; }
                }

                @media screen and (-ms-view-state:fill) {
                    html { background-color: red; }
                }
            </style>
        </head>
        <body>
            <p class="content">
                Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor
                congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus
                malesuada libero, sit amet commodo magna eros quis urna.
            </p>
        </body>
        </html>

    When the application is snapped, the application appears with a lime background color. When the application state is fill, the background color changes to red. When the application takes up the entire screen real estate – it is not in the snapped or fill state – then no special style rules apply and the application appears with a white background color.

    Querying Media Features with JavaScript

    You can perform media queries using JavaScript by taking advantage of the window.msMatchMedia() method. This method returns an MSMediaQueryList, which has a matches property that represents success or failure. For example, the following code checks whether the current device is in portrait mode:

        if (window.msMatchMedia("(orientation:portrait)").matches) {
            console.log("portrait");
        }
        else {
            console.log("landscape");
        }

    If the matches property returns true, then the device is in portrait mode and the message "portrait" is written to the Visual Studio JavaScript Console window. Otherwise, the message "landscape" is written to the JavaScript Console window.

    You can create an event listener which triggers code whenever the result of a media query changes. For example, the following code writes a message to the JavaScript Console whenever the current device is switched into or out of portrait mode:

        window.msMatchMedia("(orientation:portrait)").addListener(function (mql) {
            if (mql.matches) {
                console.log("Switched to portrait");
            }
        });

    Be aware that the event listener is triggered whenever the result of the media query changes. So the event listener is triggered both when you switch from landscape to portrait and when you switch from portrait to landscape. For this reason, you need to verify that the matches property has the value true before writing the message.

    Summary

    The goal of this blog entry was to explain how CSS media queries work in the context of a Metro style application written with JavaScript. First, you were provided with an overview of the W3C CSS Media Query recommendation. You learned about the standard media features which you can query, such as width and orientation. Next, we focused on the Microsoft extensions to media queries. You learned how to use -ms-view-state to detect whether a Metro style application is in the "snapped" or "fill" state. You also learned how to use the msMatchMedia() method to perform a media query from JavaScript.
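    As an aside (not from the original post): outside the Metro/Internet Explorer world, the unprefixed standard API is window.matchMedia(), with the same matches/addListener shape, so the snippets above port over almost character for character:

        // Standard equivalent of the msMatchMedia() examples above
        var mql = window.matchMedia("(orientation: portrait)");

        if (mql.matches) {
            console.log("portrait");
        }

        mql.addListener(function (m) {   // fires on every change of the query result
            if (m.matches) {
                console.log("Switched to portrait");
            }
        });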

    Read the article

  • Minecraft-style chunk building problem

    - by David Torrey
    I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total of ~5 seconds per chunk to fill each face's list.

    I have a check to see if each face of a block is visible, and if it is not visible, it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memory-wise to just not have an entry if there is no block. I've tracked my problem down to testing whether there is an entry, and accessing an entry if it does exist. If I remove the tests that check whether there is an entry in the dictionary for an adjacent block, or whether the block type itself is see-through, it runs within about 2-4 milliseconds.

    So here's my question: is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4ms if I remove BOTH checks. Here is my code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;
        using System.Runtime.InteropServices;
        using OpenTK;
        using OpenTK.Graphics.OpenGL;
        using System.Drawing;

        namespace Anabelle_Lee
        {
            public enum BlockEnum
            {
                air = 0,
                dirt = 1,
            }

            [StructLayout(LayoutKind.Sequential, Pack = 1)]
            public struct Coordinates<T1>
            {
                public T1 x;
                public T1 y;
                public T1 z;

                public override string ToString()
                {
                    return "(" + x + "," + y + "," + z + ") : " + typeof(T1);
                }
            }

            public struct Sides<T1>
            {
                public T1 left;
                public T1 right;
                public T1 top;
                public T1 bottom;
                public T1 front;
                public T1 back;
            }

            public class Block
            {
                public int blockType;

                public bool SeeThrough()
                {
                    switch (blockType)
                    {
                        case 0:
                            return true;
                    }
                    return false;
                }

                public override string ToString()
                {
                    return ((BlockEnum)(blockType)).ToString();
                }
            }

            class Chunk
            {
                private Dictionary<Coordinates<byte>, Block> mChunkData; // stores the block data
                private Sides<List<Coordinates<byte>>> mVBOVertexBuffer;
                private Sides<int> mVBOHandle;
                //private bool mIsChanged;
                private const byte mCHUNKSIZE = 16;

                public Chunk()
                {
                }

                public void InitializeChunk()
                {
                    // create VBO references
        #if DEBUG
                    Console.WriteLine("Initializing Chunk");
        #endif
                    mChunkData = new Dictionary<Coordinates<byte>, Block>();
                    //mIsChanged = true;
                    GL.GenBuffers(1, out mVBOHandle.left);
                    GL.GenBuffers(1, out mVBOHandle.right);
                    GL.GenBuffers(1, out mVBOHandle.top);
                    GL.GenBuffers(1, out mVBOHandle.bottom);
                    GL.GenBuffers(1, out mVBOHandle.front);
                    GL.GenBuffers(1, out mVBOHandle.back);

                    // make a new list of vertices for each face
                    mVBOVertexBuffer.top = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.bottom = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.left = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.right = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.front = new List<Coordinates<byte>>();
                    mVBOVertexBuffer.back = new List<Coordinates<byte>>();
        #if DEBUG
                    Console.WriteLine("Chunk Initialized");
        #endif
                }

                public void GenerateChunk()
                {
        #if DEBUG
                    Console.WriteLine("Generating Chunk");
        #endif
                    for (byte i = 0; i < mCHUNKSIZE; i++)
                    {
                        for (byte j = 0; j < mCHUNKSIZE; j++)
                        {
                            for (byte k = 0; k < mCHUNKSIZE; k++)
                            {
                                Random blockLoc = new Random();
                                Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k };
                                mChunkData.Add(randChunk, new Block());
                                mChunkData[randChunk].blockType = blockLoc.Next(0, 1);
                            }
                        }
                    }
        #if DEBUG
                    Console.WriteLine("Chunk Generated");
        #endif
                }

                public void DeleteChunk()
                {
                    // delete VBO references
        #if DEBUG
                    Console.WriteLine("Deleting Chunk");
        #endif
                    GL.DeleteBuffers(1, ref mVBOHandle.left);
                    GL.DeleteBuffers(1, ref mVBOHandle.right);
                    GL.DeleteBuffers(1, ref mVBOHandle.top);
                    GL.DeleteBuffers(1, ref mVBOHandle.bottom);
                    GL.DeleteBuffers(1, ref mVBOHandle.front);
                    GL.DeleteBuffers(1, ref mVBOHandle.back);

                    // clear all vertex buffers
                    ClearPolyLists();
        #if DEBUG
                    Console.WriteLine("Chunk Deleted");
        #endif
                }

                public void UpdateChunk()
                {
        #if DEBUG
                    Console.WriteLine("Updating Chunk");
        #endif
                    ClearPolyLists(); // prepare buffers

                    // for every entry in the mChunkData map
                    foreach (KeyValuePair<Coordinates<byte>, Block> feBlockData in mChunkData)
                    {
                        Coordinates<byte> checkBlock = new Coordinates<byte>
                        {
                            x = feBlockData.Key.x,
                            y = feBlockData.Key.y,
                            z = feBlockData.Key.z
                        };

                        // check for polygons on the left side of the cube
                        if (checkBlock.x > 0)
                        {
                            // check to see if there is a key for current x - 1; if not, add the vector
                            if (!IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left);
                            }
                        }
                        else
                        {
                            // polygon is far left and should be added
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left);
                        }

                        // check for polygons on the right side of the cube
                        if (checkBlock.x < mCHUNKSIZE - 1)
                        {
                            if (!IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right);
                            }
                        }
                        else
                        {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right);
                        }

                        // bottom
                        if (checkBlock.y > 0)
                        {
                            if (!IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom);
                            }
                        }
                        else
                        {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom);
                        }

                        // top
                        if (checkBlock.y < mCHUNKSIZE - 1)
                        {
                            if (!IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top);
                            }
                        }
                        else
                        {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top);
                        }

                        // back
                        if (checkBlock.z > 0)
                        {
                            if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
                            }
                        }
                        else
                        {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
                        }

                        // front
                        if (checkBlock.z < mCHUNKSIZE - 1)
                        {
                            if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1))
                            {
                                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
                            }
                        }
                        else
                        {
                            AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
                        }
                    }

                    BuildBuffers();
        #if DEBUG
                    Console.WriteLine("Chunk Updated");
        #endif
                }

                public void RenderChunk()
                {
                }

                public void LoadChunk()
                {
        #if DEBUG
                    Console.WriteLine("Loading Chunk");
        #endif
        #if DEBUG
                    Console.WriteLine("Chunk Deleted");
        #endif
                }

                public void SaveChunk()
                {
        #if DEBUG
                    Console.WriteLine("Saving Chunk");
        #endif
        #if DEBUG
                    Console.WriteLine("Chunk Saved");
        #endif
                }

                private bool IsVisible(int pX, int pY, int pZ)
                {
                    Block testBlock;
                    Coordinates<byte> checkBlock = new Coordinates<byte>
                    {
                        x = Convert.ToByte(pX),
                        y = Convert.ToByte(pY),
                        z = Convert.ToByte(pZ)
                    };

                    if (mChunkData.TryGetValue(checkBlock, out testBlock)) // if data exists
                    {
                        if (testBlock.SeeThrough() == true) // if existing data is not seethrough
                        {
                            return true;
                        }
                    }
                    return true;
                }

                private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide)
                {
                    GL.BindBuffer(BufferTarget.ArrayBuffer, BufferSide);
                    if (BufferSide == mVBOHandle.front)
                    {
                        // front face
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                    }
                    else if (BufferSide == mVBOHandle.right)
                    {
                        // back face
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                    }
                    else if (BufferSide == mVBOHandle.top)
                    {
                        // left face
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                    }
                    else if (BufferSide == mVBOHandle.bottom)
                    {
                        // right face
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                    }
                    else if (BufferSide == mVBOHandle.front)
                    {
                        // top face
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                        mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                    }
                    else if (BufferSide == mVBOHandle.back)
                    {
                        // bottom face
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                        mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY), z = (byte)(pZ + 1) });
                    }
                }

                private void BuildBuffers()
                {
        #if DEBUG
                    Console.WriteLine("Building Chunk Buffers");
        #endif
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count), mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count), mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count), mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count), mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count), mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom);
                    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count), mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw);
                    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        #if DEBUG
                    Console.WriteLine("Chunk Buffers Built");
        #endif
                }

                private void ClearPolyLists()
                {
        #if DEBUG
                    Console.WriteLine("Clearing Polygon Lists");
        #endif
                    mVBOVertexBuffer.top.Clear();
                    mVBOVertexBuffer.bottom.Clear();
                    mVBOVertexBuffer.left.Clear();
                    mVBOVertexBuffer.right.Clear();
                    mVBOVertexBuffer.front.Clear();
                    mVBOVertexBuffer.back.Clear();
        #if DEBUG
                    Console.WriteLine("Polygon Lists Cleared");
        #endif
                }
            } // END CLASS
        } // END NAMESPACE
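    Not part of the question, but for context: since a 16x16x16 chunk is dense, one commonly suggested alternative (a hedged sketch; the member and method names below are invented here, not from the code above) is to drop the dictionary entirely and index a flat array, which turns every neighbor probe into pure arithmetic:

        // Hypothetical replacement for mChunkData; a null slot means "no block" (air).
        private Block[] mBlocks = new Block[mCHUNKSIZE * mCHUNKSIZE * mCHUNKSIZE];

        private Block GetBlock(int pX, int pY, int pZ)
        {
            // Out-of-range neighbors (chunk edges) simply report "no block".
            if (pX < 0 || pY < 0 || pZ < 0 ||
                pX >= mCHUNKSIZE || pY >= mCHUNKSIZE || pZ >= mCHUNKSIZE)
            {
                return null;
            }
            return mBlocks[(pX * mCHUNKSIZE + pY) * mCHUNKSIZE + pZ];
        }

    Within the existing design, two smaller levers are often suggested: rely on a single TryGetValue() as the only probe (never ContainsKey() followed by the indexer, which hashes twice), and override Equals()/GetHashCode() on the Coordinates<byte> key struct, since the default ValueType implementations are comparatively slow and tend to dominate in a loop like this.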

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
    We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day. Why Do I Want to Do This? There’s two aspects to this tutorial, running your own Minecraft server and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so little resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball. What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free. 1 Raspberry Pi (preferably a 512MB model) 1 4GB+ SD card This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed. Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time) running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and required a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Rasbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is the Raspi-Config), if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MhHz”. 
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch”  and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options’. There are two critical changes we need to make in here and one option change. First, the critical changes. Select A3 “Memory Split”: Change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to ruin in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard, by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names. That’s it for the Raspbian configuration tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your terminal, or continue working from the keyboard hooked up to your Pi (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here. Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation: sudo mkdir /java/ Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command: sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz Once the download has finished successfully, enter the following command: sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/ Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world. 
After the file has finished extracting, enter:

sudo /opt/jdk1.8.0/bin/java -version

This command will return the version number of your new Java installation like so:

java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b111)
Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode)

If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself:

sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz

At this point Java is installed and we’re ready to move on to installing our Minecraft server!

Installing and Configuring the Minecraft Server

Now that we have a foundation for our Minecraft server, it’s time to install the part that matters. We’ll be using SpigotMC, a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the code with the following command:

sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar

This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command:

sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui

Note: if you’re running the command on a 256MB Pi, change the 256 and 496 in the above command to 128 and 256, respectively.

Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds.

Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shut down the server and let you restart and troubleshoot it.

After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server. If your world doesn’t pop up immediately during the network scan, hit the Add button and manually enter the address of your Pi.

Once you connect to the server, you’ll see the status change in the server status window. According to the server, we’re in game. According to the actual Minecraft app, we’re also in game, but it’s the middle of the night in survival mode. Boo! Spawning in the dead of night, weaponless and without shelter, is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections.

Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down.
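One practical aside before we edit the configuration file: launched as above, the server dies the moment your SSH session drops. A common workaround (not part of the original steps, so treat this as an optional sketch) is to run the server inside a screen session so it survives disconnects:

sudo apt-get install screen
screen -S minecraft
sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui

Press Ctrl+A and then D to detach while leaving the server running; reattach to the console later with “screen -r minecraft”.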
When you’re returned to the command prompt, enter the following command:

sudo nano server.properties

When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp):

#Minecraft server properties
#Thu Oct 17 22:53:51 UTC 2013
generator-settings=
#Default is true, toggle to false
allow-nether=false
level-name=world
enable-query=false
allow-flight=false
server-port=25565
level-type=DEFAULT
enable-rcon=false
force-gamemode=false
level-seed=
server-ip=
max-build-height=256
spawn-npcs=true
white-list=false
spawn-animals=true
texture-pack=
snooper-enabled=true
hardcore=false
online-mode=true
pvp=true
difficulty=1
player-idle-timeout=0
gamemode=0
#Default 20; you only need to lower this if you're running
#a public server and worried about loads.
max-players=20
spawn-monsters=true
#Default is 10, 3-5 ideal for Pi
view-distance=5
generate-structures=true
spawn-protection=16
motd=A Minecraft Server

In the server status window, seen through your SSH connection to the Pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window):

op [your minecraft nickname]

At this point things are looking better, but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins.

The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console; if your server is still active, shut it down), enter the following commands:

cd /home/pi/plugins
sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar

Next, visit the ClearLag plugin page and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt:

sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar

Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads: the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.)

Restart the server:

sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui

Be prepared for a slightly longer startup time (closer to the 3-6 minutes, and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console:

plugins

This lists all the plugins currently active on the server. If the plugins aren’t loaded, you may need to stop and restart the server.

After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active.

At this point Java is installed, the server is installed, and we’ve tweaked our settings for the Pi. It’s time to start building with friends!
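One last optional tweak, sketched here rather than taken from the guide above: if you want the server to come back on its own after a reboot or power cut, you can launch it at boot. The following assumes the screen utility from the earlier aside and the file locations used throughout; it adds one line to /etc/rc.local (which runs as root at boot), above the final “exit 0”:

sudo nano /etc/rc.local

# add above "exit 0": start the server detached in a screen session named "minecraft"
(cd /home/pi && screen -dmS minecraft /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui)

After the next boot, SSH in and run “sudo screen -r minecraft” whenever you need the server console (sudo because the boot-time session belongs to root).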

    Read the article

  • CodePlex Daily Summary for Friday, June 24, 2011

    CodePlex Daily Summary for Friday, June 24, 2011Popular ReleasesTerraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5TerrariViewer: TerrariViewer v3.2 [BETA]: This is a quick release to allow people to use this program with characters that have been used in the current version of Terraria. You will not be able to add the new items to your inventory. I am currently working on that. Use at your own risk.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9b: Changes: - fix critical issue 262334 (AccessViolationException while using events in a COMAddin) - remove x64 Assemblies (not necessary) Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...CuttingEdge.Conditions: CuttingEdge.Conditions v1.2: CuttingEdge.Conditions is a library that helps developers to write pre- and postcondition validations in their C# 3.0 and VB.NET 9 code base. Writing these validations is easy and it improves the readability and maintainability of code. This release adds IsNullOrWhiteSpace and IsNotNullOrWhiteSpace extension methods for string arguments and a adds a WithExceptionOnFailure<TException>() method on the Condition class which allows users to specify the type of exception that will be thrown. Fo...MiniTwitter: 1.70: MiniTwitter 1.70 ???? ?? ????? xAuth ?? OAuth ??????? 1.70 ??????????????????????????。 ???????????????? Twitter ? Web ??????????、PIN ????????????????????。??????????????????、???????????????????????????。Total Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.7b: Total Commander SkyDrive File System Plugin version 0.8.7b. Bug fixes: - BROKEN PLUGIN by upgrading SkyDriveServiceClient version 2.0.1b. Please do not forget to express your opinion of the plugin by rating it! Donate (EUR)Mini SQL Query: Mini SQL Query v1.0.0.59794: This release includes the following enhancements: Added a Most Recently Used file list Added Row counts to the query (per tab) and table view windows Added the Command Timeout option, only valid for MSSQL for now - see options If you have no idea what this thing is make sure you check out http://pksoftware.net/MiniSqlQuery/Help/MiniSqlQueryQuickStart.docx for an introduction. PK :-]HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.2.591 Beta Release: 1.2.591 Beta Releasepatterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...DotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. 
This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsESRI ArcGIS Silverlight Toolkit: June 2011 - v2.2: ESRI ArcGIS Silverlight Toolkit v2.2 New controls added: Attribution Control ScaleLine Control GpsLayer (WinPhone only)AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta7: ??AcDown???????????????,?????????????????????。????????????????????,??Acfun、Bilibili、???、???、?????,???????????、???????。 AcDown???????????????????????????,???,???????????????????。 AcDown???????C#??,?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ??v3.0 Beta7 ????????????? ???? ?? ????????????????? "??????"?????"?...BlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...IronPython: 2.7.1 Beta 1: This is the first beta release of IronPython 2.7. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. The highlights of this release are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython Tools for Visual Studio are disabled by default. See http://pytools.codeplex.com for the next generation of Python Visual Studio support. See...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.EffectControls-Silverlight controls with animation effects: EffectControls: EffectControlsMedia Companion: MC 3.408b weekly: Some minor fixes..... Fixed messagebox coming up during batch scrape Added <originaltitle></originaltitle> to movie nfo's - when a new movie is scraped, the original title will be set as the returned title. The end user can of course change the title in MC, however the original title will not change. 
The original title can be seen by hovering over the movie title in the right pane of the main movie tab. To update all of your current nfo's to add the original title the same as your current ...NLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesNew ProjectsAboutApprovalTests: About the open source testing library Approval Tests - pptx, sample demos and moreBBImageHandler - An image generator for DotNetNuke and ASP.NET: The BBImageHandler for DotNetNuke and ASP.NET gives users and module programmers the opportunity to use a handler that displays resized images, adds watermarks to images, creatse thumbnails of web pages or displays images stored in a database table without storing them to harddisk. Additionally it could be used to create a counter display. It's developed in C#.CheckBoxList: An ASP.net server control that databinds a list of items into a list of checkboxes. Supports AutoPostBack, WebControl styles, etc.CSLA Recursos .NET: Clases básicas de .NET Framework y desarrollo de aplicaciones Windows, para la Comunidad Hispana de CSLA.NET. Aquí encontrarás código de proyectos básicos de .NET Framework hasta los conceptos de CSLA.NET 3.8.x a 4.0. Siéntete libre de compartir código con nosotros. Visita la página web oficial de la Comunidad Hispana de CSLA.NET en http://www.cslanet.org/portalcsla/index.php DNN Task Manager: An open source Task Manager module for DotNetNuke, created during a series of free videos in the DotNetNuke Video Library.DTW Projects: DuckTapeWorks SW ProjectsDuAn1: Du an 1Element Overlay ASP.NET Extender Control: The Element Overlay ASP.NET extender control extends the Dynamic Populate Extender (from the Ajax Control Toolkit) to display dynamic content superimposed over a specified HTML page element with a wash-out effect similar in the Modal Popup extender.FWT: Fleet Tracking software for STM32Get TPB: .Hayabusa: Localisation app developp with my custom framework ITSP Event Receiver Config Utility: This SharePoint utility allows a SharePoint Admin to easily manage list and content type event receivers. The utility allows event receivers to be added/removed and listed for list and content type event receivers. The utility was created after trying other solutions which either did not work or only provided part of a solution. Its written in C#Joopk.com - Simple Asp.Net User Friendly CMS - Built in SEO and Url Rewriting: Joopk.com is a simple user friendly CMS for the end user. Too many CMS's are developed for the developer, not the end user. You need a book to understand how to use them. Joopk.com CMS is designed to allow the average internet user to create a url rewritten blog or website.Kinect Earth Move: KinectEarthMove, which use color image and skeleton from Kinect, needs Kinect for Windows SDK beta and was demonstrated on Kinect for Windows SDK beta launch at Channel 9 on June 17 2011. User in front of Kinect can rotate, translate and scale the earth between his hands.KinectNUI: Natural User Interface for Windows using the Microsoft Kinect SDKLightInject: An ultra lightweight zero config Inversion of Control Container Lottery: Lottery projectMotash - Monitoring Task Scheduler: Simple Windows service to monitor the results of tasks run by the Windows Task Scheduler. 
Sends an email notification if a task executed with an unexpected result.OjanPW: NET Framework 3.5Scout Training Manager: El objetivo de esta aplicación es ayudar a la gestión de los cursos que Scout de Argentina AC ofrece y a sus cursantes.Screenshot7: Windows Phone 7 screenshot tool for marketplace screenshots. Makes it easy to take emulator screenshots for your apps. It is developed in C# using WPF. Simple Recruitment solution for SharePoint 2010: Our Recruitment solution for SharePoint 2010 helps track open positions, candidates, and automate recruitment processes. Users can easily customize recruitment workflows right from the app without any advanced skills.Social.Facebook SDK: This Project was the result of a study made to the facebook API and the OAuth 2.0 Flow, resulting in a SDK to develop applications to Facebook platform and applications based on OAuth 2.0 Protocol Tibco Message Admin: Tibco Message Admin allows you to purge a queue, list messages in a queue, save a message from a queue to a file, edit the message in memory and send to another (or same) queue, load message from a file and send it to a queuetp - Asynchronous Pluggable Protocol test: tp - Asynchronous Pluggable Protocol testWCFRiaServicesBlogSeries: Varios post que estaré escribiendo referente al tema de WCF Ria Services v1.0 para Silvelight 4. Si bien el tema ha sido abordado en muchas páginas y blogs, la mayoría de ellos están en inglés, así que aquí una propuesta en español. Además de que trataré de hacerlo de manera...XMind API for C#: XMind API is a tool set for generating XMind workbook files from C#.

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something.

Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem

With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

• Specific details of security vulnerabilities
• Including exploit information or technical details of the vulnerability
• Whether or not there is any mitigation available (as in a patch)

PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
By Way of Explanation…

Unrelated to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

From our book jacket:

Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer.

Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • CodePlex Daily Summary for Wednesday, November 09, 2011

    CodePlex Daily Summary for Wednesday, November 09, 2011Popular ReleasesMapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What’s New in 4.8.6 (Final release)A few minor issues have been fixed What’s New in 4.8.5 (Beta release)Assign projection tool. (Sergei Leschinsky) Projection dialects. (Sergei Leschinsky) Projections database converted to SQLite format. (Sergei Leschinsky) Basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface). (Sergei Leschinsky) 'Export shapefile to database' tool. (Sergei Leschinsky) Made the GEOS library static. geos.dl...NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. 
Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeKinect Paint: Kinect Paint 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details. Webdeploy packages sha1 checksum: e6bb913e591543ab292a753d1a16cdb779488c10?????????? - ????????: All-In-One Code Framework ??? 
2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Microsoft Media Platform: Player Framework: MMP Player Framework 2.6 (Silverlight and WP7): Additional DownloadsSMFv2.6 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...New ProjectsBBCode Orchard module: Makes the usage of various BBCodes available by adding a new text flavor (bbcode)Binary to Bytes: Covert a binary file to a .byte declaration for assembler systems.BrainfuckOS: -- English -- An operating system developed with Cosmos in C# and runnning special brainfuck-code. -- Deutsch -- Ein mit Cosmos in C# entwickeltes Betriebssystem, welches spezialisierten Brainfuck-Code ausführt.Cardiff WPUG: This is a Windows Phone 7 app, written using C# and using the MVVMLight framework. This app is intended as a training app for anyone interested in programming for Windows Phone and looking for a place to start. The app is used to show upcoming user group meetings, showcase members apps and show the latest news from the Windows Phone User Group in the UK. I'll be keeping this app up to date with the latest framework and will also be submitting versions to the marketplace (the current ve...Chapters2Markers: A simple utility to convert from the chapters format used by mkvextract to the marker format used by Expression Encoder.Confirm Leave Orchard Module: Add a content part that when attached to types, displays a confirm message when the user wants to leave the page despite having modified content in the editor.cp2011project: cp2011 projectCP3046 Sample Project in Beijing: This is a sample for CP3046 ICT Project in Beijing.Current Cost plug-in for HouseBot: This plug-in will interface with cc128 current cost module and display in HouseBot automation software. Please note that it requires the c# wrapper to work found here: http://www.housebot.com/forums/viewtopic.php?f=4&t=856395Delete Inactive TS Ports: The presence of Inactive TS Ports causes systems to hang or become sluggish and unresponsive. 
Printer redirection also suffers when there are a lot of Inactive TS Ports presents in the machine.Demo Team Exploer: demoDNN Quick Form: DNN Quick Form is a module that makes creating simple contact forms extremely easy. This is designed for those that extra flexibility when creating a simple contact form. Build your own form in the environment of your choice. Use all the standard ASP.NET Form controls to build the perfect form. The initial release only supports emailing the form to designated users. REQUIRES DotNetNuke 6.1Duet Enterprise: Create external list item workflow activity for SPD: This workflow activity for SharePoint designer creates item in exteral list and set list item property values from activity properties. Initially I develop this workflow activity to use it with Duet Enterprise. It creates new SAP entity. This activity utilize BCS object model.EntGP: Este es proyecto es para gestionar proyectosFly2Cloud: fly2cloud use for upload all file in folder to cloud blob storage with '/' delimiters. Usage 1. open _Fly2Cloud.exe.config_ for edit _ACCOUNTNAME_ & _ACCESSKEY_ 2. excute -> fly2cloud x:\xxx containerFolders size WPF: Simple WPF application to graphical show the occupied space by files and folders.Jogo: Jogo das coresLast Light: Last LightmEdit: simple text editor with notes organized with help of tree-view control.M-Files Autotask Connector: Connector project to read data from Autotask to M-Files using the connections to external databases. Using the connector you can e.g. replicate accounts, contacts and projects from Autotask to M-Files.My KB: Just my KBmysoccerdata: Allow owner to collect soccer schedule as matches going on around the worldNGS: Sistema ERP para agronegócio.NTerm: An open source .NET based terminal emulator.PC Trabalho 1 .Net: PC Trabalho 1 .NetPega Topeira: Jogo desenvolvido como projeto final do curso de C# da UFSCar Sorocaba (2011).Sequence Quality Control Studio (SeQCoS): Sequence Quality Control Studio (SeQCoS) is an open source .NET software suite designed to perform quality control (QC) of massively parallel sequencing reads. It includes tools for evaluating sequence and base quality of reads, as well as a set of basic post-QC sequence manipulation tools. SeQCoS was written in C# and integrates functionality provided by .NET Bio and Sho libraries.series operation system: series operation system. contact:694643393 ??SharePoint MUI Resource File Helper: This simple Windows Application allows you to easily compare SharePoint Resource files, allowing you to easily replicate keys across various files.SSIS Business Rules Component: A SSIS Custom Component that allows for the development and application of complex and hierarchical business rules within the SSIS data flow.TestBox: testbox

    Read the article

  • Creating a new instance, C#

    - by Dave Voyles
This sounds like a very n00b question, but bear with me here: I'm trying to access the position of my bat (paddle) in my pong game and use it in my ball class. I'm doing this because I want a particle effect to go off at the point of contact where the ball hits the bat. Each time the ball hits the bat, I receive an error stating that I haven't created an instance of the bat. I understand that I have to (or can use a static class), but I'm not sure of how to do so in this example. I've included both my Ball and Bat classes.

namespace Pong
{
    #region Using Statements
    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;
    #endregion

    public class Ball
    {
        #region Fields

        private readonly Random rand;
        private readonly Texture2D texture;
        private readonly SoundEffect warp;
        private double direction;
        private bool isVisible;
        private float moveSpeed;
        private Vector2 position;
        private Vector2 resetPos;
        private Rectangle size;
        private float speed;
        private bool isResetting;
        private bool collided;
        private Vector2 oldPos;
        private ParticleEngine particleEngine;
        private ContentManager contentManager;
        private SpriteBatch spriteBatch;
        private bool hasHitBat;
        private AIBat aiBat;
        private Bat bat;

        #endregion

        #region Constructors and Destructors

        /// <summary>Constructor for the ball</summary>
        public Ball(ContentManager contentManager, Vector2 ScreenSize)
        {
            moveSpeed = 15f;
            speed = 0;
            texture = contentManager.Load<Texture2D>(@"gfx/balls/redBall");
            direction = 0;
            size = new Rectangle(0, 0, texture.Width, texture.Height);
            resetPos = new Vector2(ScreenSize.X / 2, ScreenSize.Y / 2);
            position = resetPos;
            rand = new Random();
            isVisible = true;
            hasHitBat = false;

            // Everything to do with particles
            List<Texture2D> textures = new List<Texture2D>();
            textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/circle"));
            textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/star"));
            textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/diamond"));
            particleEngine = new ParticleEngine(textures, new Vector2());
        }

        #endregion

        #region Public Methods and Operators

        /// <summary>
        /// Checks for the collision between the bat and the ball.
        /// Sends the ball in the appropriate direction.
        /// </summary>
        public void BatHit(int block)
        {
            if (direction > Math.PI * 1.5f || direction < Math.PI * 0.5f)
            {
                hasHitBat = true;
                particleEngine.EmitterLocation = new Vector2(aiBat.Position.X, aiBat.Position.Y);
                switch (block)
                {
                    case 1: direction = MathHelper.ToRadians(200); break;
                    case 2: direction = MathHelper.ToRadians(195); break;
                    case 3: direction = MathHelper.ToRadians(180); break;
                    case 4: direction = MathHelper.ToRadians(180); break;
                    case 5: direction = MathHelper.ToRadians(165); break;
                }
            }
            else
            {
                hasHitBat = true;
                particleEngine.EmitterLocation = new Vector2(bat.Position.X, bat.Position.Y);
                switch (block)
                {
                    case 1: direction = MathHelper.ToRadians(310); break;
                    case 2: direction = MathHelper.ToRadians(345); break;
                    case 3: direction = MathHelper.ToRadians(0); break;
                    case 4: direction = MathHelper.ToRadians(15); break;
                    case 5: direction = MathHelper.ToRadians(50); break;
                }
            }

            if (rand.Next(2) == 0)
                direction += MathHelper.ToRadians(rand.Next(3));
            else
                direction -= MathHelper.ToRadians(rand.Next(3));

            AudioManager.Instance.PlaySoundEffect("hit");
        }

        /// <summary>JEP - added method to slow down ball after powerup deactivates</summary>
        public void DecreaseSpeed() { moveSpeed -= 0.6f; }

        /// <summary>Draws the ball on the screen</summary>
        public void Draw(SpriteBatch spriteBatch)
        {
            if (isVisible)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(texture, size, Color.White);
                spriteBatch.End();

                // Draws sprites for particles when contact is made
                particleEngine.Draw(spriteBatch);
            }
        }

        /// <summary>Checks for the current direction of the ball</summary>
        public double GetDirection() { return direction; }

        /// <summary>Checks for the current position of the ball</summary>
        public Vector2 GetPosition() { return position; }

        /// <summary>Checks for the current size of the ball (for the powerups)</summary>
        public Rectangle GetSize() { return size; }

        /// <summary>Grows the size of the ball when the GrowBall powerup is used</summary>
        public void GrowBall() { size = new Rectangle(0, 0, texture.Width * 2, texture.Height * 2); }

        /// <summary>
        /// Was used to increase the speed of the ball after each point is scored.
        /// No longer used, but am considering implementing again.
        /// </summary>
        public void IncreaseSpeed() { moveSpeed += 0.6f; }

        /// <summary>Returns the ball to normal size after the powerup has expired</summary>
        public void NormalBallSize() { size = new Rectangle(0, 0, texture.Width, texture.Height); }

        /// <summary>Returns the ball to normal speed after the powerup has expired</summary>
        public void NormalSpeed() { moveSpeed += 15f; }

        /// <summary>Checks to see if ball went out of bounds, and triggers warp sfx</summary>
        public void OutOfBounds()
        {
            // Checks if the player is still alive or not
            if (isResetting)
            {
                AudioManager.Instance.PlaySoundEffect("warp");

                // Used to stop the issue where the ball hit sfx kept going off when detecting collision
                isResetting = false;
                AudioManager.Instance.Dispose();
            }
        }

        /// <summary>Speed for the ball when Speedball powerup is activated</summary>
        public void PowerupSpeed() { moveSpeed += 20.0f; }

        /// <summary>Checks where to reset the ball after each point is scored</summary>
        public void Reset(bool left)
        {
            if (left)
                direction = 0;
            else
                direction = Math.PI;

            // Used to stop the issue where the ball hit sfx kept going off when detecting collision
            isResetting = true;
            position = resetPos; // Resets the ball to the center of the screen
            isVisible = true;
            speed = 15f;         // Returns the ball back to the default speed, in case the speedBall was active

            if (rand.Next(2) == 0)
                direction += MathHelper.ToRadians(rand.Next(30));
            else
                direction -= MathHelper.ToRadians(rand.Next(30));
        }

        /// <summary>Shrinks the ball when the ShrinkBall powerup is activated</summary>
        public void ShrinkBall() { size = new Rectangle(0, 0, texture.Width / 2, texture.Height / 2); }

        /// <summary>Stops the ball each time it is reset. Ex: between points / rounds</summary>
        public void Stop()
        {
            isVisible = true;
            speed = 0;
        }

        /// <summary>Updates position of the ball</summary>
        public void UpdatePosition()
        {
            size.X = (int)position.X;
            size.Y = (int)position.Y;
            oldPos.X = position.X;
            oldPos.Y = position.Y;
            position.X += speed * (float)Math.Cos(direction);
            position.Y += speed * (float)Math.Sin(direction);

            bool collided = CheckWallHit();
            particleEngine.Update();

            // Stops the issue where ball was oscillating on the ceiling or floor
            if (collided)
            {
                position.X = oldPos.X + speed * (float)Math.Cos(direction);
                position.Y = oldPos.Y + speed * (float)Math.Sin(direction);
            }
        }

        #endregion

        #region Methods

        /// <summary>Checks for collision with the ceiling or floor. 2*Math.PI = 360 degrees</summary>
        private bool CheckWallHit()
        {
            while (direction > 2 * Math.PI)
            {
                direction -= 2 * Math.PI;
                return true;
            }
            while (direction < 0)
            {
                direction += 2 * Math.PI;
                return true;
            }
            if (position.Y <= 0 || (position.Y > resetPos.Y * 2 - size.Height))
            {
                direction = 2 * Math.PI - direction;
                return true;
            }
            return true;
        }

        #endregion
    }
}

namespace Pong
{
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;
    using System;

    public class Bat
    {
        public Vector2 Position;
        public float moveSpeed;
        public Rectangle size;
        private int points;
        private int yHeight;
        private Texture2D leftBat;
        public float turbo;
        public float recharge;
        public float interval;
        public bool isTurbo;

        /// <summary>Constructor for the bat</summary>
        public Bat(ContentManager contentManager, Vector2 screenSize, bool side)
        {
            moveSpeed = 7f;
            turbo = 15f;
            recharge = 100f;
            points = 0;
            interval = 5f;
            leftBat = contentManager.Load<Texture2D>(@"gfx/bats/batGrey");
            size = new Rectangle(0, 0, leftBat.Width, leftBat.Height);

            // True means left bat, false means right bat.
            if (side)
                Position = new Vector2(30, screenSize.Y / 2 - size.Height / 2);
            else
                Position = new Vector2(screenSize.X - 30, screenSize.Y / 2 - size.Height / 2);

            yHeight = (int)screenSize.Y;
        }

        public void IncreaseSpeed() { moveSpeed += .5f; }

        /// <summary>The speed of the bat when Turbo is activated</summary>
        public void Turbo() { moveSpeed += 8.0f; }

        /// <summary>Returns the speed of the bat back to normal after Turbo is deactivated</summary>
        public void DisableTurbo()
        {
            moveSpeed = 7.0f;
            isTurbo = false;
        }

        /// <summary>Returns the bat to the normal size after the Grow/Shrink powerup has expired</summary>
        public void NormalSize() { size = new Rectangle(0, 0, leftBat.Width, leftBat.Height); }

        /// <summary>Checks for the size of the bat</summary>
        public Rectangle GetSize() { return size; }

        /// <summary>Adds a point to the player or the AI after scoring. Currently disabled.</summary>
        public void IncrementPoints() { points++; }

        /// <summary>Checks for the number of points at the moment</summary>
        public int GetPoints() { return points; }

        /// <summary>Sets the default starting position for the bats</summary>
        public void SetPosition(Vector2 position)
        {
            if (position.Y < 0)
            {
                position.Y = 0;
            }
            if (position.Y > yHeight - size.Height)
            {
                position.Y = yHeight - size.Height;
            }
            this.Position = position;
        }

        /// <summary>Checks for the current position of the bat</summary>
        public Vector2 GetPosition() { return Position; }

        /// <summary>Controls the bat moving up the screen</summary>
        public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); }

        /// <summary>Controls the bat moving down the screen</summary>
        public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); }

        /// <summary>Updates the position of the AI bat, in order to track the ball</summary>
        public virtual void UpdatePosition(Ball ball)
        {
            size.X = (int)Position.X;
            size.Y = (int)Position.Y;
        }

        /// <summary>Resets the bat to the center location after a new game starts</summary>
        public void ResetPosition() { SetPosition(new Vector2(GetPosition().X, yHeight / 2 - size.Height)); }

        /// <summary>Used for the Growbat powerup; doubles the size of the bat collision</summary>
        public void GrowBat() { size = new Rectangle(0, 0, leftBat.Width * 2, leftBat.Height * 2); }

        /// <summary>Used for the Shrinkbat powerup; halves the size of the bat collision</summary>
        public void ShrinkBat() { size = new Rectangle(0, 0, leftBat.Width / 2, leftBat.Height / 2); }

        /// <summary>Draws the bats</summary>
        public virtual void Draw(SpriteBatch batch)
        {
            batch.Draw(leftBat, size, Color.White);
        }
    }
}
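A note on the error itself, since the question hinges on it: the bat and aiBat fields in Ball are declared but never assigned, so they are still null when BatHit dereferences aiBat.Position or bat.Position, which is exactly the "haven't created an instance" complaint. One common fix is to hand the already-created bats to the ball instead of letting it declare its own. The sketch below is illustrative only; the constructor change and the game-class variable names are assumptions, and AIBat is assumed to share Bat's constructor shape:

// Ball: accept references to the existing bats (hypothetical signature change)
public Ball(ContentManager contentManager, Vector2 screenSize, Bat playerBat, AIBat enemyBat)
{
    // ... existing constructor body from the question goes here ...
    bat = playerBat;   // BatHit can now read bat.Position safely
    aiBat = enemyBat;  // and aiBat.Position is no longer null
}

// Game class: build the bats first, then pass them to the ball (hypothetical names)
var bat = new Bat(Content, screenSize, true);
var aiBat = new AIBat(Content, screenSize, false);
var ball = new Ball(Content, screenSize, bat, aiBat);

Constructor injection keeps the dependency explicit; a static Bat would also compile, but it makes it harder to have two differently configured bats in play at once.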

    Read the article

  • TFS API Change WorkItem CreatedDate And ChangedDate To Historic Dates

    - by Tarun Arora
There may be times when you need to modify the values of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you may (or may not) need to set these fields to historic dates. In this blog post I'll show you how to:
- Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
- Change the value of the field 'System.CreatedDate' to a historic date
- Simulate the historic burn down of a task type work item in a sprint
- Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

Rules of Play

1. You need to be a member of the Project Collection Service Accounts group.
2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

    // Instantiate Work Item Store with the BypassRules flag
    _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

3. You cannot set the ChangedDate
   - less than the changed date of the previous revision
   - greater than the current date

Walkthrough

The walkthrough contains 5 parts:

00 – Required References
01 – Connect to TFS Programmatically
02 – Create a Work Item Programmatically
03 – Set the values of fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
04 – Results of our experiment

Let's get started…

00 – Required References

Microsoft.TeamFoundation.dll
Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.dll

01 – Connect to TFS Programmatically

I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will let you connect to TFS using the Team Project Picker.

    // Services I need access to globally
    private static TfsTeamProjectCollection _tfs;
    private static ProjectInfo _selectedTeamProject;
    private static WorkItemStore _wis;

    // Connect to TFS using the Team Project Picker
    public static bool ConnectToTfs()
    {
        var isSelected = false;

        // The user is allowed to select only one project
        var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
        tfsPp.ShowDialog();

        // The TFS project collection
        _tfs = tfsPp.SelectedTeamProjectCollection;

        if (tfsPp.SelectedProjects.Any())
        {
            // The selected Team Project
            _selectedTeamProject = tfsPp.SelectedProjects[0];
            isSelected = true;
        }

        return isSelected;
    }

02 – Create a Work Item Programmatically

In the code snippet below I create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – you will have to set the ChangedDate to a historic date at the time you create the work item. Remember, if you try to set the ChangedDate to a value earlier than its last assigned value you will receive the following exception:

TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
    // Create Linked Work Items and return Ids
    private static List<int> CreateWorkItemsProgrammatically()
    {
        // Instantiate Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

        // List of work items to return
        var listOfWorkItems = new List<int>();

        // Create a new Product Backlog Item
        var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
        p.Title = "This is a new PBI";
        p.Description = "Description";
        p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        p.AreaPath = _selectedTeamProject.Name;
        p["Effort"] = 10;

        // Just double checking that BypassRules is set to true
        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
            listOfWorkItems.Add(p.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
        t.Title = "This is a task";
        t.Description = "Task Description";
        t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
        t.AreaPath = _selectedTeamProject.Name;
        t["Remaining Work"] = 10;

        if (_wis.BypassRules)
        {
            t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
        }

        if (t.Validate().Count == 0)
        {
            t.Save();
            listOfWorkItems.Add(t.Id);
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in t.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
        p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

        if (_wis.BypassRules)
        {
            p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
        }

        if (p.Validate().Count == 0)
        {
            p.Save();
        }
        else
        {
            Console.WriteLine(">> Following exception(s) encountered during work item save: ");
            foreach (var e in p.Validate())
            {
                Console.WriteLine(" - '{0}' ", e);
            }
        }

        return listOfWorkItems;
    }

03 – Set the value of "Created Date" and Change the value of "Changed Date" to Historic Dates

The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
    // Let's do a work item effort burn down simulation by updating the ChangedDate & CreatedDate to historic values
    private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
    {
        foreach (var id in listOfWorkItems)
        {
            var wi = _wis.GetWorkItem(id);
            switch (wi.Type.Name)
            {
                case "ProductBacklogItem":
                    if (wi.State.ToLower() == "new")
                        wi.State = "Approved";
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate to the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
                case "Task":
                    // Advance the changed date by a few seconds
                    wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    // Set the CreatedDate to the ChangedDate
                    wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                    wi.Save();
                    break;
            }
        }

        // A mock sprint start date
        var sprintStart = DateTime.Today.AddDays(-5);
        // A mock sprint end date
        var sprintEnd = DateTime.Today.AddDays(5);
        // What is the total sprint duration
        var totalSprintDuration = (sprintEnd - sprintStart).Days;
        // How much of the sprint have we already covered
        var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
        // Get the effort assigned to our tasks
        var totalEffortRemaining = QueryTaskTotalEffortRemaining(listOfWorkItems);
        // Defining how much effort to burn every day
        decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration;
        // We have just created one task
        var totalNoOfTasks = 1;

        var simulation = sprintStart;
        var currentDate = DateTime.Today.Date;

        // Carry on till effort has been burned down from sprint start to today
        while (simulation.Date != currentDate.Date)
        {
            var dailyBurnRate1 = dailyBurnRate;

            // A fixed amount needs to be burned down each day
            while (dailyBurnRate1 > 0)
            {
                // Burn down bit by bit from all unfinished task type work items
                foreach (var id in listOfWorkItems)
                {
                    var wi = _wis.GetWorkItem(id);
                    var isDirty = false;

                    // Set the status to in progress
                    if (wi.State.ToLower() == "to do")
                    {
                        wi.State = "In Progress";
                        isDirty = true;
                    }

                    // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                    if (QueryTaskTotalEffortRemaining(listOfWorkItems) > dailyBurnRate1)
                    {
                        // If there is less than 1 unit of effort left in the task, burn it all
                        if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                        {
                            // Capture the remaining effort *before* zeroing it; the original
                            // subtracted after the assignment, which always subtracted zero
                            var burned = Convert.ToDecimal(wi["Remaining Work"]);
                            wi["Remaining Work"] = 0;
                            dailyBurnRate1 = dailyBurnRate1 - burned;
                            isDirty = true;
                        }
                        else
                        {
                            // How much to burn from each task?
                            var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 1 : (dailyBurnRate / totalNoOfTasks);

                            // Check that the task has enough effort to allow burning toBurn effort
                            if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                            {
                                wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                                dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                isDirty = true;
                            }
                            else
                            {
                                var burned = Convert.ToDecimal(wi["Remaining Work"]);
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - burned;
                                isDirty = true;
                            }
                        }
                    }
                    else
                    {
                        dailyBurnRate1 = 0;
                    }

                    if (isDirty)
                    {
                        if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                        {
                            wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                        }
                        else
                        {
                            wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                        }
                        wi.Save();
                    }
                }
            }

            // Increase date by 1 to perform the burn down day by day
            simulation = Convert.ToDateTime(simulation).AddDays(1);
        }
    }

    // Get the total effort remaining in the current sprint
    private static decimal QueryTaskTotalEffortRemaining(IEnumerable<int> listOfWorkItems)
    {
        var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
            new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
        var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
        var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
        var results = q.RunLinkQuery();

        var wis = new List<WorkItem>();
        foreach (var result in results)
        {
            var _wi = _wis.GetWorkItem(result.TargetId);
            if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                wis.Add(_wi);
        }

        return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
    }

04 – The Results

If you are still reading, the results are beautiful!

Image 1 – Create the work item with ChangedDate pre-set to a historic date
Image 2 – Set the CreatedDate to a historic date (same as the ChangedDate)
Image 3 – Simulation of the effort burn down on a task via the TFS API
Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day

Sprint Burn Down Chart – What's not possible?

The sprint burn down chart is calculated from System.AuthorizedDate and not from System.ChangedDate/System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

Image 1 – By changing the Created Date and Changed Date to '18/Oct/2012' you would have expected the burn down to be impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field 'System.AuthorizedDate'.

Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead the burn down shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

Now you would ask… "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately that's not possible!
You will run into the exception ValidationException – "TF26194: The value for field 'Authorized Date' cannot be changed."

Conclusion

- You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
- You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules
- The System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
- The System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
- You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use System.AuthorizedDate and NOT System.ChangedDate & System.CreatedDate
- System.AuthorizedDate cannot be set to a historic date using the TFS API

Read other posts on using the TFS API here… Enjoy!
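To make that last point concrete, here is a minimal sketch of what the failure looks like. The work item id is a placeholder, and it assumes the _wis store from the walkthrough above; the behaviour (ValidationException with TF26194) is as described in the post:

    // Sketch only: _wis created with WorkItemStoreFlags.BypassRules as above
    var wi = _wis.GetWorkItem(42); // 42 is a placeholder work item id
    try
    {
        // AuthorizedDate is maintained by the server; this assignment is
        // expected to fail validation, per the TF26194 error quoted above
        wi.Fields["System.AuthorizedDate"].Value = Convert.ToDateTime("2012-01-01");
        wi.Save();
    }
    catch (ValidationException ex)
    {
        // "TF26194: The value for field 'Authorized Date' cannot be changed."
        Console.WriteLine(ex.Message);
    }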

    Read the article

  • Locating Rogue Perl Script

    - by Gary Garside
I've been trying to source the location of a perl script which is causing havoc on a server which I control. I'm also trying to find out exactly how this script was installed on the server - my best guess is through a WordPress exploit. The server is a basic web setup running Ubuntu 9.04, Apache and MySQL. I use IPTables for the firewall, the server runs around 20 sites and the load never really creeps above 0.7. From what I can see, the script is making outbound connections to other servers (most likely trying to brute-force entry). Here is a top dump of one of the processes:

PID   USER     PR NI VIRT  RES  SHR S %CPU %MEM TIME+    COMMAND
22569 www-data 20  0 22784 3216 780 R  100  0.2 47:00.60 perl

The command the process is running is /usr/sbin/sshd. I've tried to find an exact file name but I'm having no luck... I've run lsof -p PID and here is the output:

COMMAND PID   USER     FD  TYPE DEVICE SIZE    NODE   NAME
perl    22569 www-data cwd DIR  8,6    4096    2      /
perl    22569 www-data rtd DIR  8,6    4096    2      /
perl    22569 www-data txt REG  8,6    10336   162220 /usr/bin/perl
perl    22569 www-data mem REG  8,6    26936   170219 /usr/lib/perl/5.10.0/auto/Socket/Socket.so
perl    22569 www-data mem REG  8,6    22808   170214 /usr/lib/perl/5.10.0/auto/IO/IO.so
perl    22569 www-data mem REG  8,6    39112   145112 /lib/libcrypt-2.9.so
perl    22569 www-data mem REG  8,6    1502512 145124 /lib/libc-2.9.so
perl    22569 www-data mem REG  8,6    130151  145113 /lib/libpthread-2.9.so
perl    22569 www-data mem REG  8,6    542928  145122 /lib/libm-2.9.so
perl    22569 www-data mem REG  8,6    14608   145125 /lib/libdl-2.9.so
perl    22569 www-data mem REG  8,6    1503704 162222 /usr/lib/libperl.so.5.10.0
perl    22569 www-data mem REG  8,6    135680  145116 /lib/ld-2.9.so
perl    22569 www-data 0r  FIFO 0,6    157216  pipe
perl    22569 www-data 1w  FIFO 0,6    197642  pipe
perl    22569 www-data 2w  FIFO 0,6    197642  pipe
perl    22569 www-data 3w  FIFO 0,6    197642  pipe
perl    22569 www-data 4u  IPv4 383991 0t0     TCP    outsidesoftware.com:56869->server12.34.56.78.live-servers.net:www (ESTABLISHED)

My gut feeling is that outsidesoftware.com is also under attack, or possibly being used as a tunnel. I've managed to find a number of rogue files in /tmp and /var/tmp; here is a brief excerpt of one of these files:

#!/usr/bin/perl
# this spreader is coded by xdh
# xdh@xxxxxxxxxxx
# only for testing...
my @nickname = ("vn");
my $nick = $nickname[rand scalar @nickname];
my $ircname = $nickname[rand scalar @nickname];
#system("kill -9 `ps ax |grep httpdse |grep -v grep|awk '{print $1;}'`");
my $processo = '/usr/sbin/sshd';

The full file contents can be viewed here: http://pastebin.com/yenFRrGP

I'm trying to achieve a couple of things here. Firstly, I need to stop these processes from running, either by disabling outbound SSH or with IPTables rules; these scripts have been running for around 36 hours now and my main concern is to stop them running and respawning by themselves. Secondly, I need to try and source where and how these scripts were installed. If anybody has any advice on what to look for in access logs or anything else, I would be really grateful. Thanks in advance
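A rough containment and triage sketch for a case like this. The paths and the PID are illustrative assumptions (Apache logs under /var/log/apache2/, site roots under /home/*/www/ as the question implies), not something the question confirms:

    # Kill the running process (PID taken from top/lsof above)
    kill -9 22569

    # Temporarily block outbound connections from the web user while investigating
    iptables -A OUTPUT -m owner --uid-owner www-data -p tcp --dport 6667 -j REJECT

    # Look for recently changed files in the web roots (a likely dropper or shell)
    find /home/*/www/ -type f -mtime -3 -name '*.php' -ls

    # Cross-reference suspicious POST requests around those file timestamps
    grep -i 'POST' /var/log/apache2/access.log | grep -Ei 'wp-|timthumb|upload'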

    Read the article

  • How to configure emacs by using this file?

    - by Andy Leman
From http://public.halogen-dg.com/browser/alex-emacs-settings/.emacs?rev=1346 I got:

(setq load-path (cons "/home/alex/.emacs.d/" load-path))
(setq load-path (cons "/home/alex/.emacs.d/configs/" load-path))

(defconst emacs-config-dir "~/.emacs.d/configs/" "")

(defun load-cfg-files (filelist)
  (dolist (file filelist)
    (load (expand-file-name (concat emacs-config-dir file)))
    (message "Loaded config file:%s" file)))

(load-cfg-files '("cfg_initsplit" "cfg_variables_and_faces" "cfg_keybindings"
                  "cfg_site_gentoo" "cfg_conf-mode" "cfg_mail-mode"
                  "cfg_region_hooks" "cfg_apache-mode" "cfg_crontab-mode"
                  "cfg_gnuserv" "cfg_subversion" "cfg_css-mode" "cfg_php-mode"
                  "cfg_tramp" "cfg_killbuffer" "cfg_color-theme" "cfg_uniquify"
                  "cfg_tabbar" "cfg_python" "cfg_ack" "cfg_scpaste"
                  "cfg_ido-mode" "cfg_javascript" "cfg_ange_ftp" "cfg_font-lock"
                  "cfg_default_face" "cfg_ecb" "cfg_browser" "cfg_orgmode"
                  ; "cfg_gnus"
                  ; "cfg_cyrillic"
                  ))

; enable disabled advanced features
(put 'downcase-region 'disabled nil)
(put 'scroll-left 'disabled nil)
(put 'upcase-region 'disabled nil)

; narrow cursor
;(setq-default cursor-type 'hbar)
(cua-mode)

; highlight current line
(global-hl-line-mode 1)

; AV: non-aggressive scrolling
(setq scroll-conservatively 100)
(setq scroll-preserve-screen-position 't)
(setq scroll-margin 0)

(custom-set-variables
 ;; custom-set-variables was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(ange-ftp-passive-host-alist (quote (("redbus2.chalkface.com" . "on") ("zope.halogen-dg.com" . "on") ("85.119.217.50" . "on"))))
 '(blink-cursor-mode nil)
 '(browse-url-browser-function (quote browse-url-firefox))
 '(browse-url-new-window-flag t)
 '(buffers-menu-max-size 30)
 '(buffers-menu-show-directories t)
 '(buffers-menu-show-status nil)
 '(case-fold-search t)
 '(column-number-mode t)
 '(cua-enable-cua-keys nil)
 '(user-mail-address "[email protected]")
 '(cua-mode t nil (cua-base))
 '(current-language-environment "UTF-8")
 '(file-name-shadow-mode t)
 '(fill-column 79)
 '(grep-command "grep --color=never -nHr -e * | grep -v .svn --color=never")
 '(grep-use-null-device nil)
 '(inhibit-startup-screen t)
 '(initial-frame-alist (quote ((width . 80) (height . 40))))
 '(initsplit-customizations-alist (quote (("tabbar" "configs/cfg_tabbar.el" t) ("ecb" "configs/cfg_ecb.el" t) ("ange\\-ftp" "configs/cfg_ange_ftp.el" t) ("planner" "configs/cfg_planner.el" t) ("dired" "configs/cfg_dired.el" t) ("font\\-lock" "configs/cfg_font-lock.el" t) ("speedbar" "configs/cfg_ecb.el" t) ("muse" "configs/cfg_muse.el" t) ("tramp" "configs/cfg_tramp.el" t) ("uniquify" "configs/cfg_uniquify.el" t) ("default" "configs/cfg_font-lock.el" t) ("ido" "configs/cfg_ido-mode.el" t) ("org" "configs/cfg_orgmode.el" t) ("gnus" "configs/cfg_gnus.el" t) ("nnmail" "configs/cfg_gnus.el" t))))
 '(ispell-program-name "aspell")
 '(jabber-account-list (quote (("[email protected]"))))
 '(jabber-nickname "AVK")
 '(jabber-password nil)
 '(jabber-server "halogen-dg.com")
 '(jabber-username "alex")
 '(remember-data-file "~/Plans/remember.org")
 '(safe-local-variable-values (quote ((dtml-top-element . "body"))))
 '(save-place t nil (saveplace))
 '(scroll-bar-mode (quote right))
 '(semantic-idle-scheduler-idle-time 432000)
 '(show-paren-mode t)
 '(svn-status-hide-unmodified t)
 '(tool-bar-mode nil nil (tool-bar))
 '(transient-mark-mode t)
 '(truncate-lines f)
 '(woman-use-own-frame nil))

; answer with y or n instead of yes or no
(fset 'yes-or-no-p 'y-or-n-p)

(custom-set-faces
 ;; custom-set-faces was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(compilation-error ((t (:foreground "tomato" :weight bold))))
 '(cursor ((t (:background "red1"))))
 '(custom-variable-tag ((((class color) (background dark)) (:inherit variable-pitch :foreground "DarkOrange" :weight bold))))
 '(hl-line ((t (:background "grey24"))))
 '(isearch ((t (:background "orange" :foreground "black"))))
 '(message-cited-text ((((class color) (background dark)) (:foreground "SandyBrown"))))
 '(message-header-name ((((class color) (background dark)) (:foreground "DarkGrey"))))
 '(message-header-other ((((class color) (background dark)) (:foreground "LightPink2"))))
 '(message-header-subject ((((class color) (background dark)) (:foreground "yellow2"))))
 '(message-separator ((((class color) (background dark)) (:foreground "thistle"))))
 '(region ((t (:background "brown"))))
 '(tooltip ((((class color)) (:inherit variable-pitch :background "IndianRed1" :foreground "black")))))

The above is a Python-oriented Emacs configuration file. Where should I put it to use it? And are there any other changes I need to make?
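A hedged pointer, since the question is where the file should live: an init file like this is normally saved as ~/.emacs (or ~/.emacs.d/init.el), and the hard-coded /home/alex paths would need to point at your own home directory. A sketch, assuming you have checked out the repository into a directory named alex-emacs-settings (the name is illustrative):

    # The split config files referenced above are expected in ~/.emacs.d/configs/
    mkdir -p ~/.emacs.d/configs
    cp alex-emacs-settings/.emacs ~/.emacs
    cp alex-emacs-settings/configs/*.el ~/.emacs.d/configs/

    # Point the hard-coded paths at your own home directory
    sed -i 's#/home/alex#'"$HOME"'#g' ~/.emacs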

    Read the article

  • Apache cyclic redirection problem

    - by slicedlime
I have an extremely weird problem with one of my sites. I run a number of blogs off a single Apache2 server with a shared WordPress install. Each site has a www.domain.com main domain, but a ServerAlias of domain.com. This works fine for all the blogs except one, which instead of redirecting to www.domain.com redirects to domain.com, causing a cyclic redirection. The configuration for each host looks like this:

<VirtualHost *:80>
    ServerName www.domain.com
    ServerAlias domain.com
    DocumentRoot "/home/www/www.domain.com"
    <Directory "/home/www/www.domain.com">
        Options MultiViews Indexes Includes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

As this didn't work, I tried a mod_rewrite rule for it, which still didn't redirect correctly. The weird thing here is that if I rewrite it to redirect to any other domain it will redirect correctly, even to another subdomain. So a rewrite rule for domain.com that redirects to foo.domain.com works, but not one to www.domain.com. In the same way, trying to redirect to www.domain.com/foo/ leaves me with a redirection to domain.com/foo/. Even weirder, I tried setting up domain.com as a completely separate virtual host, and ran this PHP test script as index.php on it:

<?php header('Location: http://www.domain.com/' . $_SERVER["request_uri"]); ?>

Hitting domain.com still redirects to domain.com! Checking the headers sent to the server verifies that I get exactly the redirect URL I wanted, except with the "www." stripped. The same script works like a charm if I replace www. with foo, or redirect to any other domain. This has now foiled me for a long time. I've diffed the vhost configs for a working domain and the faulty one, and the only difference is the domain name itself. I've diffed the .htaccess files for both sites, and the only difference is a path related to the sharing of the WordPress installation for the blogs:

php_value include_path ".:/home/www/www.domain.com/local/:/home/www/www.domain.com/"

I searched through everything in /etc (including the Apache conf) for the domain name of the faulty host and found nothing weird, searched through everything in /etc for one of the working ones to make sure it didn't differ, and I even went so far as to check the DNS setup of the two domains to make sure there wasn't anything strange going on. Here's the response for the faulty one:

user@localhost dir $ wget -S domain.com
--2010-03-20 21:47:24-- http://domain.com/
Resolving domain.com... x.x.x.x
Connecting to domain.com|x.x.x.x|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 301 Moved Permanently
  Via: 1.1 ISA
  Connection: Keep-Alive
  Proxy-Connection: Keep-Alive
  Content-Length: 0
  Date: Sat, 20 Mar 2010 20:47:24 GMT
  Location: http://domain.com/
  Content-Type: text/html; charset=UTF-8
  Server: Apache
  X-Powered-By: PHP/5.2.10-pl0-gentoo
  X-Pingback: http://domain.com/xmlrpc.php
  Keep-Alive: timeout=15, max=100
Location: http://domain.com/ [following]

And a working one:

user@localhost dir $ wget -S domain.com
--2010-03-20 21:51:33-- http://domain.com/
Resolving domain.com... x.x.x.x
Connecting to domain.com|x.x.x.x|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 301 Moved Permanently
  Via: 1.1 ISA
  Connection: Keep-Alive
  Proxy-Connection: Keep-Alive
  Content-Length: 0
  Date: Sat, 20 Mar 2010 20:51:33 GMT
  Location: http://www.domain.com/
  Content-Type: text/html; charset=UTF-8
  Server: Apache
  X-Powered-By: PHP/5.2.10-pl0-gentoo
  X-Pingback: http://www.domain.com/xmlrpc.php
  Keep-Alive: timeout=15, max=100
Location: http://www.domain.com/ [following]

I'm stumped. I've had this problem for a long time, and I feel like I've tried everything. I can't see why the different domains would act differently under the same installation with the same settings. Help :(
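One detail worth noticing in the captures above: the faulty site's X-Pingback header already points at domain.com, while the working one points at www.domain.com. That suggests (an assumption, not something the question confirms) that WordPress itself, rather than Apache, is issuing the canonical redirect from its stored siteurl/home options. A quick way to check, sketched with a placeholder database name and the default wp_ table prefix:

    -- Hypothetical names: adjust the database and table prefix to your install
    SELECT option_name, option_value
    FROM wordpress.wp_options
    WHERE option_name IN ('siteurl', 'home');

    -- If these come back as http://domain.com, pointing them at the www host
    -- should stop WordPress's canonical redirect fighting the vhost config
    UPDATE wordpress.wp_options
    SET option_value = 'http://www.domain.com'
    WHERE option_name IN ('siteurl', 'home');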

    Read the article

  • shutdown.exe on Win Server2k3 64-bit cannot be found

    - by normalocity
    Server 2003 SP2 64-bit Verified my path settings are correct, as I can run other executables within the "system32" folder without issue. If I cd to "c:\windows\system32\" folder, and try to run "shutdown /?" I get "shutdown is not recognized as a valid..." you know - the message you get when you type a command that doesn't exist. Doing a "dir *.exe" inside the "system32" folder, also doesn't return "shutdown.exe" as one of the results. HOWEVER - if I go through Windows Explorer - there it is! I can see shutdown.exe. Also, if I go to "Start - Run" and type "shutdown /?", it works fine. So, works in the GUI, not on the command line. very strange. This is an excerpt of the last portion of "dir *.exe" when run on the "system32" folder. Note the lack of commonly used executables such as "shutdown.exe" and "tsadmin.exe" 02/18/2007 07:00 AM 24,064 route.exe 02/18/2007 07:00 AM 29,184 routemon.exe 02/18/2007 07:00 AM 14,848 rsh.exe 02/18/2007 07:00 AM 67,072 rsopprov.exe 02/18/2007 07:00 AM 77,824 rtcshare.exe 02/18/2007 07:00 AM 18,432 runas.exe 02/18/2007 07:00 AM 34,816 rundll32.exe 02/18/2007 07:00 AM 18,432 runonce.exe 02/18/2007 07:00 AM 13,312 savedump.exe 03/19/2009 07:51 PM 49,152 sc.exe 02/18/2007 07:00 AM 90,112 scardsvr.exe 02/18/2007 07:00 AM 152,064 schtasks.exe 02/18/2007 07:00 AM 16,384 schupgr.exe 02/18/2007 07:00 AM 31,232 sdbinst.exe 02/18/2007 07:00 AM 36,352 secedit.exe 02/18/2007 07:00 AM 32,768 sethc.exe 06/28/2006 12:12 AM 31,232 SetLACState.exe 02/18/2007 07:00 AM 41,472 setup.exe 02/18/2007 07:00 AM 25,088 setup16.exe 02/18/2007 07:00 AM 20,480 setupn.exe 02/18/2007 07:00 AM 60,416 setx.exe 02/18/2007 07:00 AM 10,752 sfc.exe 02/18/2007 07:00 AM 76,288 sfmprint.exe 02/18/2007 07:00 AM 11,776 sfmpsexe.exe 02/18/2007 07:00 AM 65,024 sfmsvc.exe 02/18/2007 07:00 AM 38,400 shmgrate.exe 02/18/2007 07:00 AM 71,168 sigverif.exe 02/18/2007 07:00 AM 26,112 skeys.exe 02/18/2007 07:00 AM 96,256 smlogsvc.exe 02/18/2007 07:00 AM 53,760 smss.exe 02/18/2007 07:00 AM 40,960 snmp.exe 02/18/2007 07:00 AM 25,088 sort.exe 02/18/2007 07:00 AM 9,728 sprestrt.exe 02/18/2007 07:00 AM 10,240 subst.exe 02/18/2007 07:00 AM 14,848 svchost.exe 02/18/2007 07:00 AM 54,272 syncapp.exe 02/18/2007 07:00 AM 18,896 sysedit.exe 02/18/2007 07:00 AM 29,696 syskey.exe 02/18/2007 07:00 AM 107,520 sysocmgr.exe 02/18/2007 07:00 AM 79,360 systeminfo.exe 02/18/2007 07:00 AM 3,072 systray.exe 02/18/2007 07:00 AM 58,880 takeown.exe 02/18/2007 07:00 AM 32,768 tapicfg.exe 02/18/2007 07:00 AM 84,480 taskkill.exe 02/18/2007 07:00 AM 87,552 tasklist.exe 02/18/2007 07:00 AM 168,960 taskmgr.exe 02/18/2007 07:00 AM 13,824 tcmsetup.exe 02/18/2007 07:00 AM 21,504 tcpsvcs.exe 02/18/2007 07:00 AM 28,672 timeout.exe 02/18/2007 07:00 AM 419,328 tracerpt.exe 02/18/2007 07:00 AM 12,800 tracert.exe 02/18/2007 07:00 AM 26,624 tsecimp.exe 02/18/2007 07:00 AM 37,376 typeperf.exe 10/24/2008 04:12 PM 64,000 tzchange.exe 02/18/2007 07:00 AM 5,632 unlodctr.exe 02/18/2007 07:00 AM 321,024 upg351db.exe 02/18/2007 07:00 AM 16,896 ups.exe 02/18/2007 07:00 AM 4,096 user.exe 02/18/2007 07:00 AM 26,112 userinit.exe 02/18/2007 07:00 AM 49,152 utilman.exe 02/18/2007 07:00 AM 47,104 uwdf.exe 02/18/2007 07:00 AM 29,184 verclsid.exe 02/18/2007 07:00 AM 112,640 verifier.exe 02/18/2007 07:00 AM 1,129 vwipxspx.exe 02/18/2007 07:00 AM 55,296 w32tm.exe 02/18/2007 07:00 AM 38,400 waitfor.exe 02/18/2007 07:00 AM 39,424 wdfmgr.exe 02/18/2007 07:00 AM 62,464 wextract.exe 02/18/2007 07:00 AM 38,400 where.exe 02/18/2007 07:00 AM 48,640 whoami.exe 02/18/2007 
07:00 AM 36,864 winchat.exe 08/13/2007 06:45 PM 206,336 WinFXDocObj.exe 02/18/2007 07:00 AM 8,704 winhlp32.exe 02/18/2007 07:00 AM 12,800 winmsd.exe 02/18/2007 07:00 AM 2,112 winspool.exe 02/18/2007 07:00 AM 6,656 winver.exe 08/21/2002 05:13 AM 189,952 WISPTIS.EXE 02/18/2007 07:00 AM 67,072 wlbs.exe 02/18/2007 07:00 AM 10,560 wowexec.exe 02/18/2007 07:00 AM 10,752 wowreg32.exe 02/18/2007 07:00 AM 31,232 wpnpinst.exe 02/18/2007 07:00 AM 5,632 write.exe 02/18/2007 07:00 AM 114,688 wscript.exe 02/18/2007 07:00 AM 30,720 xcopy.exe
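One hedged avenue to rule out (a guess, not something the question establishes): on 64-bit Windows a 32-bit cmd.exe is silently redirected from System32 to SysWOW64, which can make files that Explorer shows "disappear" at the prompt. A quick check, using tools visible in the listing above:

    rem Are we in a 32-bit or 64-bit shell? (x86 suggests redirection is in play)
    echo %PROCESSOR_ARCHITECTURE%

    rem where.exe is present in the listing, so use it to hunt for the binary
    where shutdown

    rem Compare what the shell actually sees in each directory
    dir %windir%\System32\shutdown.exe
    dir %windir%\SysWOW64\shutdown.exe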

    Read the article

  • Screen -X exec commands not working until manually attached

    - by James Watt
    I have a batch script that starts a java server application inside of a screen. The command looks like this: cd /dir/ && screen -A -m -d -S javascreen java -Xms640M -Xmx1024M -jar javaserverapp.jar nogui After I run the batch script, it starts the server and puts it inside the correct screen. If I list my screens after, I see something like this: user@gtwy /dir $ screen -list There is a screen on: 16180.javascreen (Detached) 1 Socket in /var/run/screen/S-user. However, I have a second batch script that sends automated commands to this server and runs on a different crontab interval. Because of the way the application works, I send commands to it like this (this command tells it to alert connected users "testing 123"): screen -X exec .\!\! echo say testing 123 I've also tried: screen -R -X exec .\!\! echo say testing 123 screen -S javascreen -X exec .\!\! echo say testing 123 Unfortunately, these commands DO NOT WORK. They don't even give me an error message, they just do nothing. HOWEVER - If I manually attach to the screen first (with the below command) and then detach, now I can run any of the above commands flawlessly. I can demonstrate this with a video, if I wasn't clear enough here. screen -r -d Thanks in advance. Update: here is the important parts of /etc/screenrc. It should be totally vanilla, I've never edited this file. # VARIABLES # =============================================================== # No annoying audible bell, using "visual bell" # vbell on # default: off # vbell_msg " -- Bell,Bell!! -- " # default: "Wuff,Wuff!!" # Automatically detach on hangup. autodetach on # default: on # Don't display the copyright page startup_message off # default: on # Uses nethack-style messages # nethack on # default: off # Affects the copying of text regions crlf off # default: off # Enable/disable multiuser mode. Standard screen operation is singleuser. # In multiuser mode the commands acladd, aclchg, aclgrp and acldel can be used # to enable (and disable) other user accessing this screen session. # Requires suid-root. multiuser off # Change default scrollback value for new windows defscrollback 1000 # default: 100 # Define the time that all windows monitored for silence should # wait before displaying a message. Default 30 seconds. silencewait 15 # default: 30 # bufferfile: The file to use for commands # "readbuf" ('<') and "writebuf" ('>'): bufferfile $HOME/.screen_exchange # # hardcopydir: The directory which contains all hardcopies. # hardcopydir ~/.hardcopy # hardcopydir ~/.screen # # shell: Default process started in screen's windows. # Makes it possible to use a different shell inside screen # than is set as the default login shell. # If begins with a '-' character, the shell will be started as a login shell. # shell zsh # shell bash # shell ksh shell -$SHELL # shellaka '> |tcsh' # shelltitle '$ |bash' # emulate .logout message pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended." # caption always " %w --- %c:%s" # caption always "%3n %t%? @%u%?%? [%h]%?%=%c" # advertise hardstatus support to $TERMCAP # termcapinfo * '' 'hs:ts=\E_:fs=\E\\:ds=\E_\E\\' # set every new windows hardstatus line to somenthing descriptive # defhstatus "screen: ^En (^Et)" # don't kill window after the process died # zombie "^["
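For what it's worth, a hedged workaround sketch: screen's -X usually needs an explicit target window when the session has never been attached, and the stuff command types text into that window as if it came from the keyboard. The window number 0 is an assumption; list the session's windows first if unsure:

    # Send the command line, followed by a carriage return, into window 0
    screen -S javascreen -p 0 -X stuff "say testing 123$(printf '\r')"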

    Read the article

  • Mod_Rewrite Apache ProxyPass ?

    - by Anon
I have two websites: OLDSITE and NEWSITE. OLDSITE has 120 IP addresses associated with it, and NEWSITE has 5. I want to be able to separate everything from OLDSITE and NEWSITE so they are not tied together, but use them on the same Linux computer. My current Apache setup is this:

Listen 80
NameVirtualHost *

<VirtualHost *>
    ServerName oldsite.com
    ServerAdmin [email protected]
    DocumentRoot /var/www/
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^([^.]+)\.oldsite\.com$
    RewriteCond /home/%1/ -d
    RewriteRule ^(.+) %{HTTP_HOST}$1
    RewriteRule ^([^.]+)\.oldsite\.com/media/(.*) /home/$1/dir/media/$2
    RewriteRule ^([^.]+)\.oldsite\.com/(.*) /home/$1/www/$2
</VirtualHost>

<VirtualHost newsite.com>
    ServerName newsite.com
    ServerAdmin [email protected]
    DocumentRoot /var/newsite/
    <Directory /var/newsite/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^([^.]+)\.newsite\.com$
    RewriteCond /home/%1/ -d
    RewriteRule ^(.+) %{HTTP_HOST}$1
    RewriteRule ^([^.]+)\.newsite\.com/media/(.*) /home/$1/dir/media/$2
    RewriteRule ^([^.]+)\.newsite\.com/(.*) /home/$1/www/$2
</VirtualHost>

<VirtualHost *>
    ServerName panel.oldsite.com
    ProxyPass / http://panel.oldsite.com:10000/
    ProxyPassReverse / http://panel.oldsite.com:10000/
    <Proxy *>
        allow from all
    </Proxy>
</VirtualHost>

<VirtualHost *>
    ServerName panel.newsite.com
    ProxyPass / http://panel.newsite.com:10000/
    ProxyPassReverse / http://panel.newsite.com:10000/
    <Proxy *>
        allow from all
    </Proxy>
</VirtualHost>

I want anything that is newsite.com to go to /var/newsite unless there is a home directory, and if it's panel.newsite.com I want it to automatically do a ProxyPass to panel.newsite.com:10000.

With this setup, it works perfectly for oldsite.com (both the proxy and the web pages). However, having the VirtualHost set to newsite.com renders the ProxyPass worthless. If I change the VirtualHost for newsite.com to a wildcard, the ProxyPass will work, but anything that's a subdomain of newsite.com won't: newsite.com will work, but www.newsite.com will not load correctly. If I change the order of them, the opposite is true. So apparently it doesn't like me using two wildcards. I am assuming that with everything wildcarded, the ServerName somewhat acts like a RewriteCond and only applies the configuration to that exact host: the first VirtualHost * (oldsite.com) lets ANYTHING.oldsite.com work, but the second VirtualHost * (newsite.com) only works for newsite.com itself.

I tried just making the ServerName *.newsite.com... but that would be too easy. I am not sure what I can do to get what I want. Perhaps I should include the ProxyPass in the VirtualHosts and use something like:

RewriteCond %{HTTP_HOST} ^panel\.newsite\.com$ [NC]
RewriteRule ^(.*)$ http://panel.newsite.com:10000/ [P]
ProxyPassReverse / http://panel.newsite.com:10000/

but that doesn't want to log in to Webmin; it loads the login page but isn't working the way the ProxyPass & ProxyPassReverse do.
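A hedged sketch of one way out, assuming name-based hosting on *:80 throughout and that Webmin listens locally on port 10000 (both assumptions). Apache matches name-based vhosts in file order and a ServerAlias may contain wildcards, so the exact panel name is listed before the wildcard alias:

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName panel.newsite.com
        ProxyPass / http://127.0.0.1:10000/
        ProxyPassReverse / http://127.0.0.1:10000/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName newsite.com
        # Wildcard alias so www.newsite.com and friends match this vhost
        ServerAlias *.newsite.com
        # DocumentRoot /var/newsite/ and the rewrite rules from above go here
    </VirtualHost>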

    Read the article

  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So.. I've installed Logstash, and instead of using the logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration. # Use traditional timestamp format $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat $IncludeConfig /etc/rsyslog.d/*.conf # Provides kernel logging support (previously done by rklogd) $ModLoad imklog # Provides support for local system logging (e.g. via logger command) $ModLoad imuxsock # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron # Everybody gets emergency messages *.emerg * # Save news errors of level crit and higher in a special file. uucp,news.crit /var/log/spooler # Save boot messages also to boot.log local7.* /var/log/boot.log In /etc/rsyslog.d/logstash.conf there are 28 file monitor blocks using imfile $ModLoad imfile # Load the imfile input module $ModLoad imklog # for reading kernel log messages $ModLoad imuxsock # for reading local syslog messages $InputFileName /var/log/rabbitmq/startup_err $InputFileTag rmq-err: $InputFileStateFile state-rmq-err $InputFileFacility local6 $InputRunFileMonitor .... $InputFileName /var/log/some.other.custom.log $InputFileTag cust-log: $InputFileStateFile state-cust-log $InputFileFacility local6 $InputRunFileMonitor .... *.* @@10.90.0.110:5514 There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile.. 
If I run [root@secret-gm02 ~]# lsof|grep rsyslog rsyslogd 5380 root cwd DIR 253,0 4096 2 / rsyslogd 5380 root rtd DIR 253,0 4096 2 / rsyslogd 5380 root txt REG 253,0 278976 1015955 /sbin/rsyslogd rsyslogd 5380 root mem REG 253,0 58400 1868123 /lib64/libgcc_s-4.1.2-20080825.so.1 rsyslogd 5380 root mem REG 253,0 144776 1867778 /lib64/ld-2.5.so rsyslogd 5380 root mem REG 253,0 1718232 1867780 /lib64/libc-2.5.so rsyslogd 5380 root mem REG 253,0 23360 1867787 /lib64/libdl-2.5.so rsyslogd 5380 root mem REG 253,0 145872 1867797 /lib64/libpthread-2.5.so rsyslogd 5380 root mem REG 253,0 85544 1867815 /lib64/libz.so.1.2.3 rsyslogd 5380 root mem REG 253,0 53448 1867801 /lib64/librt-2.5.so rsyslogd 5380 root mem REG 253,0 92816 1868016 /lib64/libresolv-2.5.so rsyslogd 5380 root mem REG 253,0 20384 1867990 /lib64/rsyslog/lmnsd_ptcp.so rsyslogd 5380 root mem REG 253,0 53880 1867802 /lib64/libnss_files-2.5.so rsyslogd 5380 root mem REG 253,0 23736 1867800 /lib64/libnss_dns-2.5.so rsyslogd 5380 root mem REG 253,0 20768 1867988 /lib64/rsyslog/lmnet.so rsyslogd 5380 root mem REG 253,0 11488 1867982 /lib64/rsyslog/imfile.so rsyslogd 5380 root mem REG 253,0 24040 1867983 /lib64/rsyslog/imklog.so rsyslogd 5380 root mem REG 253,0 11536 1867987 /lib64/rsyslog/imuxsock.so rsyslogd 5380 root mem REG 253,0 13152 1867989 /lib64/rsyslog/lmnetstrms.so rsyslogd 5380 root mem REG 253,0 8400 1867992 /lib64/rsyslog/lmtcpclt.so rsyslogd 5380 root 0r REG 0,3 0 4026531848 /proc/kmsg rsyslogd 5380 root 1u IPv4 1200589517 0t0 TCP 10.10.10.90 t:40629->10.10.10.90:5514 (ESTABLISHED) rsyslogd 5380 root 2u IPv4 1200589527 0t0 UDP *:45801 rsyslogd 5380 root 3w REG 253,3 17999744 2621483 /var/log/messages rsyslogd 5380 root 4w REG 253,3 13383 2621484 /var/log/secure rsyslogd 5380 root 5w REG 253,3 7180 2621493 /var/log/maillog rsyslogd 5380 root 6w REG 253,3 43321 2621529 /var/log/cron rsyslogd 5380 root 7w REG 253,3 0 2621494 /var/log/spooler rsyslogd 5380 root 8w REG 253,3 0 2621495 /var/log/boot.log rsyslogd 5380 root 9r REG 253,3 1064271998 2621464 /var/log/custom-application.monolog.log rsyslogd 5380 root 10u unix 0xffff81081fad2e40 0t0 1200589511 /dev/log You can see that there are nowhere near 28 logfiles actually being read. I really had to get one file monitored, so I moved it to the top, and it picked it up, but I'd like to be able to monitor all 28+ files, and not have to worry. OS is Centos 5.5 Kernel 2.6.18-308.el5 rsyslogd 3.22.1, compiled with: FEATURE_REGEXP: Yes FEATURE_LARGEFILE: Yes FEATURE_NETZIP (message compression): Yes GSSAPI Kerberos 5 support: Yes FEATURE_DEBUG (debug build, slow code): No Atomic operations supported: Yes Runtime Instrumentation (slow code): No Questions: Why is rsyslogd only monitoring a very small subset of the files? How can I fix this so that all the files are monitored?
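A hedged debugging sketch, none of which comes from the post itself: running rsyslogd in the foreground with debug output usually shows which imfile monitors actually registered, and stale or colliding state files (one per $InputFileStateFile) are a common reason a monitor silently drops out:

    # Run in the foreground with debugging to watch each imfile monitor register
    rsyslogd -d -n 2>&1 | grep -i imfile | head -50

    # imfile keeps its read position in state files under the work directory;
    # the default location varies, so check both common paths
    ls -l /var/lib/rsyslog/ 2>/dev/null || ls -l /var/spool/rsyslog/ 2>/dev/null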

    Read the article

  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
I started with two clean machines this time. My master is running 12.04: Version: 2.7.11-1ubuntu2, Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2). My client is 10.04: Version: 2.6.3-0ubuntu1~lucid1, Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8.

To set up Puppet I followed this tutorial: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/
To connect master and client: http://shapeshed.com/connecting-clients-to-a-puppet-master/

The first time I tried to connect master to client it failed with an SSL_connect error, so I did rm -rf /etc/puppet/ssl/ to remove all the keys inside the ssl folders. It looked like that worked... BUT:

client# puppet agent --server puppet --waitforcert 60 --test
/usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
/usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
info: Creating a new SSL key for giab10
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for mybox123
info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for mybox123
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
warning: Not using cache on failed catalog

It cached but then it couldn't retrieve it. Let me stop here... worrying I would mess something up. But let's check the master's status:

* master is not running

Wow... ???

master# service puppetmaster start
 * Starting puppet master          [ OK ]
master# service puppetmaster status
 * master is not running

I think time is in sync. Well, we are behind a firewall so the port to sync time is disabled, but I checked with date and the machines seem okay. What about the master not running? Is that the cause? Any help is appreciated. Thanks!

/var/lib/puppet/log/masterhttp.log:

[2012-06-30 00:13:25] INFO  WEBrick 1.3.1
[2012-06-30 00:13:25] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2012-06-30 00:13:25] WARN  TCPServer Error: Address already in use - bind(2)
[2012-06-30 00:19:40] INFO  WEBrick 1.3.1
[2012-06-30 00:19:40] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2012-06-30 00:19:40] WARN  TCPServer Error: Address already in use - bind(2)
[2012-06-30 00:28:58] INFO  WEBrick 1.3.1
[2012-06-30 00:28:58] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2012-06-30 00:28:58] WARN  TCPServer Error: Address already in use - bind(2)
[2012-06-30 15:31:25] INFO  WEBrick 1.3.1
[2012-06-30 15:31:25] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2012-06-30 15:31:25] WARN  TCPServer Error: Address already in use - bind(2)

1 S puppet 5186    1 0 80 0 - 29410 poll_s 15:44 ?     00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140
4 S root   5235 5005 0 80 0 -  2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet

kill -9 5186
puppet master
service puppetmaster status
 * master is not running

I always have this error, but I always ignored it: http://pastebin.com/exbpArjv
What could it mean? Time sync? Package not installed? Then how could we do puppetca in the first place?
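A hedged next step, given the "Address already in use - bind(2)" lines above. The commands are standard, but the diagnosis is an assumption: something is already holding port 8140 (for example a manually started puppet master), so the init script's instance dies at startup and "service puppetmaster status" reports the master as not running. A sketch:

    # Who is holding the master port?
    netstat -tlnp | grep 8140

    # Stop any manually started master, then use the init script alone
    pkill -f 'puppet master'
    service puppetmaster start

    # On the master: clear the agent's old certificate
    puppetca --clean mybox123    # 'puppet cert clean mybox123' on newer versions

    # On the client: the default ssldir is often /var/lib/puppet/ssl
    # (not /etc/puppet/ssl), so clear that before re-running the agent
    rm -rf /var/lib/puppet/ssl
    puppet agent --server puppet --waitforcert 60 --test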

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >