  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
    Introduction jQuery enjoys living inside pages which are built on top of ASP.NET MVC Framework. The ASP.NET MVC is a place where things are organized very well and it is quite hard to make them dirty, especially because the pattern enforces you on purity (you can still make it dirty if you want so ;) ). We all know how easy is to build a HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easy, but, the result will be pure HTML table which only shows data, but does not includes sorting, pagination or some other advanced features that we were used to have in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from pure table in our own clean style? In one of my recent projects, I’ve been using the jQuery tablesorter and tablesorter.pager plugins that go along. You don’t need to know jQuery to make this work… You need to know little CSS to create nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach this plugin to your pure html table and a div for pagination and make your table with advanced sorting and pagination features.   Demo Project Resources The resources I’m using for this demo project are shown in the following solution explorer window print screen: Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don’t want to change anything in the CSS we will built later. Content/Site.css – The main css theme, where we will add the theme for our table too Controllers/HomeController.cs – The controller I’m using for this project Models/Person.cs – For this demo, I’m using Person.cs class Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – required script to make the magic happens Views/Home/Index.cshtml – Index view (razor view engine) the other items are not important for the demo. ASP.NET MVC 1. Model In this demo I use only one Person class which defines Person entity with several properties. You can use your own model, maybe one which will access data from database or any other resource. Person.cs public class Person {     public string Name { get; set; }     public string Surname { get; set; }     public string Email { get; set; }     public int? Phone { get; set; }     public DateTime? DateAdded { get; set; }     public int? Age { get; set; }     public Person(string name, string surname, string email,         int? phone, DateTime? dateadded, int? age)     {         Name = name;         Surname = surname;         Email = email;         Phone = phone;         DateAdded = dateadded;         Age = age;     } } 2. View In our example, we have only one Index.chtml page where Razor View engine is used. Razor view engine is my favorite for ASP.NET MVC because it’s very intuitive, fluid and keeps your code clean. 3. Controller Since this is simple example with one page, we use one HomeController.cs where we have two methods, one of ActionResult type (Index) and another GetPeople() used to create and return list of people. 
HomeController.cs public class HomeController : Controller {     //     // GET: /Home/     public ActionResult Index()     {         ViewBag.People = GetPeople();         return View();     }     public List<Person> GetPeople()     {         List<Person> listPeople = new List<Person>();                  listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070,DateTime.Now, 25));                     listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));         listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));         listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));         listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));         listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));         listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));         listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));         listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));         listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));         listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));         listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));         listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));         return listPeople;     } }   TABLE CSS/HTML DESIGN Now, lets start with the implementation. First of all, lets create the table structure and the main CSS. 1. HTML Structure @{     Layout = null;     } <!DOCTYPE html> <html> <head>     <title>ASP.NET & jQuery</title>     <!-- referencing styles, scripts and writing custom js scripts will go here --> </head> <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th> value </th>                 </tr>             </thead>             <tbody>                 <tr>                     <td>value</td>                 </tr>             </tbody>             <tfoot>                 <tr>                     <th> value </th>                 </tr>             </tfoot>         </table>         <div id="pager">                      </div>     </div> </body> </html> So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced once ;). As you see, our table has class tablesorter and also we have a div with id pager. In the next steps we will use both these to create the needed functionalities. 
The complete Index.cshtml coded to get the data from controller and display in the page is: <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </thead>             <tbody>                 @{                     foreach (var p in ViewBag.People)                     {                                 <tr>                         <td>@p.Name</td>                         <td>@p.Surname</td>                         <td>@p.Email</td>                         <td>@p.Phone</td>                         <td>@p.DateAdded</td>                     </tr>                     }                 }             </tbody>             <tfoot>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </tfoot>         </table>         <div id="pager" style="position: none;">             <form>             <img src="@Url.Content("~/Content/images/first.png")" class="first" />             <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />             <input type="text" class="pagedisplay" />             <img src="@Url.Content("~/Content/images/next.png")" class="next" />             <img src="@Url.Content("~/Content/images/last.png")" class="last" />             <select class="pagesize">                 <option selected="selected" value="5">5</option>                 <option value="10">10</option>                 <option value="20">20</option>                 <option value="30">30</option>                 <option value="40">40</option>             </select>             </form>         </div>     </div> </body> So, mainly the structure is the same. I have added @Razor code to create table with data retrieved from the ViewBag.People which has been filled with data in the home controller. 2. CSS Design The CSS code I’ve created is: /* DEMO TABLE */ body {     font-size: 75%;     font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;     color: #232323;     background-color: #fff; } table { border-spacing:0; border:1px solid gray;} table.tablesorter thead tr .header {     background-image: url(images/bg.png);     background-repeat: no-repeat;     background-position: center right;     cursor: pointer; } table.tablesorter tbody td {     color: #3D3D3D;     padding: 4px;     background-color: #FFF;     vertical-align: top; } table.tablesorter tbody tr.odd td {     background-color:#F0F0F6; } table.tablesorter thead tr .headerSortUp {     background-image: url(images/asc.png); } table.tablesorter thead tr .headerSortDown {     background-image: url(images/desc.png); } table th { width:150px;            border:1px outset gray;            background-color:#3C78B5;            color:White;            cursor:pointer; } table thead th:hover { background-color:Yellow; color:Black;} table td { width:150px; border:1px solid gray;} PAGINATION AND SORTING Now, when everything is ready and we have the data, lets make pagination and sorting functionalities 1. 
jQuery Scripts referencing <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script> 2. jQuery Sorting and Pagination script   <script type="text/javascript">     $(function () {         $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })         .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });     }); </script> So, with only two lines of code, I’m using both tablesorter and tablesorterPager plugins, giving some options to both these. Options added: tablesorter - widthFixed: true – gives fixed width of the columns tablesorter - sortList[[0,0]] – An array of instructions for per-column sorting and direction in the format: [[columnIndex, sortDirection], ... ] where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for Ascending and 1 for Descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/) tablesorterPager – container: $(“#pager”) – tells the pager container, the div with id pager in our case. tablesorterPager – size: the default size of each page, where I get the default value selected, so if you put selected to any other of the options in your select list, you will have this number of rows as default per page for the table too. END RESULTS 1. Table once the page is loaded (default results per page is 5 and is automatically sorted by 1st column as sortList is specified) 2. Sorted by Phone Descending 3. Changed pagination to 10 items per page 4. Sorted by Phone and Name (use SHIFT to sort on multiple columns) 5. Sorted by Date Added 6. Page 3, 5 items per page   ADDITIONAL ENHANCEMENTS We can do additional enhancements to the table. We can make search for each column. I will cover this in one of my next blogs. Stay tuned. DEMO PROJECT You can download demo project source code from HERE.CONCLUSION Once you finish with the demo, run your page and open the source code. You will be amazed of the purity of your code.Working with pagination in client side can be very useful. One of the benefits is performance, but if you have thousands of rows in your tables, you will get opposite result when talking about performance. Hence, sometimes it is nice idea to make pagination on back-end. So, the compromise between both approaches would be best to combine both of them. I use at most up to 500 rows on client-side and once the user reach the last page, we can trigger ajax postback which can get the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return page results from repository. I hope this was helpful post for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan
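    P.S. To make the hybrid client/server pagination idea from the conclusion a little more concrete, here is a rough sketch of how the client side could fetch the next batch of rows. The GetPeoplePage action and the btnLoadMore button are hypothetical names used only for illustration (they are not part of the demo project above), and depending on the pager plugin version you may need extra wiring to refresh its page cache, so treat this only as a starting point.

    <script type="text/javascript">
        $(function () {
            var nextBatch = 1; // the first batch was rendered by the server

            $("#btnLoadMore").click(function () {
                // Hypothetical MVC action returning the next 500 people as JSON,
                // e.g. public JsonResult GetPeoplePage(int batch, int size) { ... }
                $.getJSON("/Home/GetPeoplePage", { batch: nextBatch, size: 500 }, function (people) {
                    var rows = "";
                    $.each(people, function (i, p) {
                        rows += "<tr><td>" + p.Name + "</td><td>" + p.Surname + "</td><td>"
                              + (p.Email || "") + "</td><td>" + (p.Phone || "") + "</td><td>"
                              + p.DateAdded + "</td></tr>";
                    });
                    $("table.tablesorter tbody").append(rows);
                    // Let tablesorter know the tbody changed so sorting includes the new rows.
                    $("table.tablesorter").trigger("update");
                    nextBatch++;
                });
            });
        });
    </script>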

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

    Introduction

    A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

    This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

    An Axiom of Value

    In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

    A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value.
    The equals and hashCode operations are strictly component-wise.
    If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

    A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as an value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value type will be distinguishable as long as their component values are equal. 
More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: “null-unsafe" qq = (Complex) obj;  // warning: “null-unsafe" qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect programs automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidently unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.  
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems something have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if no it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from te fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values. Similar to String, we could build an efficient multi-precision decimal type along these lines: public final class DecimalValue extends ValueType {     protected final long header;     protected private final BigInteger digits;     public DecimalValue valueOf(int value, int scale) {         assert(scale >= 0);         return new DecimalValue(((long)value << 32) + scale, null);     }     public DecimalValue valueOf(long value, int scale) {         if (value == (int) value)             return valueOf((int)value, scale);         return new DecimalValue(-scale, new BigInteger(value));     } } Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.) Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic. Although the JVM emphasizes portability, system dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values. More Daydreams A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, a Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception. 
@ValueSafe public interface ValueType extends java.io.Serializable,         Unsynchronizable, Interchangeable { … public class Complex implements ValueType {     // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe     … It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code. We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to boxed.  For example: complex pi = complex.valueOf(Math.PI, 0); Complex boxPi = pi;  // convert to boxed myList.add(boxPi); complex z = myList.get(0);  // unbox Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types. As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure.  These considerations must wait for another day and another note. More Work It seems to me that a good sequence for introducing such value types would be as follows: Add the value-safety restrictions to an experimental version of javac. Code some sample applications with value types, including Complex and DecimalValue. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing. A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM: Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  
    Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. That’s enough musing from me for now.  Back to work!

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I’ve included the complete code for the IndexedDbDataSource at the very bottom of this blog entry. What is IndexedDB? IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And, IndexedDB provides you with transactions, indices, and cursors – the elements of any modern database. An IndexedDB database might be different than the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores. You store JavaScript objects in an IndexedDB object store. You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here’s how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist: var reqOpen = window.indexedDB.open(“TasksDB”, 2); reqOpen.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); }; reqOpen.onsuccess = function () { var db = reqOpen.result; // Do something with db }; When you call window.indexedDB.open(), and the database does not already exist, then the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with the database. You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store: var transaction = db.transaction(“tasks”, “readwrite”); var reqAdd = transaction.objectStore(“tasks”).add({ name: “Feed the dog” }); reqAdd.onsuccess = function() { // Tasks added successfully }; The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised. Creating a WinJS IndexedDbDataSource The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object. 
The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice. You can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source. Implementing the IListDataSource Interface You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called. A data adapter is passed to the _baseDataSourceConstructor() method. The code above creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. The code above also overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store. Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface which contains the following methods: · change() · getCount() · insertAfter() · insertAtEnd() · insertAtStart() · insertBefore() · itemsFromDescription() · itemsFromEnd() · itemsFromIndex() · itemsFromKey() · itemsFromStart() · itemSignature() · moveAfter() · moveBefore() · moveToEnd() · moveToStart() · remove() · setNotificationHandler() · compareByIdentity Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods. Implementing the getCount() Method The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000. 
Here’s how I implemented the getCount() method: getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); } The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete(). Implementing the itemsFromIndex method When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints and you can return as many items before and after the request index as you please. Here’s how I implemented the itemsFromIndex method: itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); } In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results. Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface. Each item should have a key and a data property. 
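    Since every IndexedDB call in the adapter follows the same onsuccess/onerror pattern, it can help to wrap requests in promises in one place. The helper below is a small sketch of that idea; the name wrapRequest is mine (it is not part of WinJS or of the data source shown here), and it simply assumes the WinJS base library is loaded.

    // Minimal sketch: turn an IDBRequest into a WinJS.Promise.
    function wrapRequest(req) {
        return new WinJS.Promise(function (complete, error) {
            req.onsuccess = function (evt) { complete(evt.target.result); };
            req.onerror = function (evt) { error(evt.target.error); };
        });
    }

    // With a helper like this, getCount() could be reduced to:
    // getCount: function () {
    //     return this._getObjectStore().then(function (store) {
    //         return wrapRequest(store.count());
    //     });
    // }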
Implementing the insertAtEnd() Method When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here’s how I implemented the insertAtEnd() method for the IndexedDbDataSource: insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet = store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); } When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself. Implementing the remove() Method Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this: var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal() . Here’s what the removeInternal() method looks like: setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); } The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.remove() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.remove() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method. 
You can raise the following types of notifications using the _notificationHandler: · beginNotifications() · changed() · endNotifications() · inserted() · invalidateAll() · moved() · removed() · reload() These methods are all part of the IListDataNotificationHandler interface in the WinJS library. Implementing the nuke() Method I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method: nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); } Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update. Using the IndexedDbDataSource To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting an item (swipe or right-click) and clicking the Delete button. Here’s the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>DataSources</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" /> <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script> <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script> <!-- DataSources references --> <link href="indexedDb.css" rel="stylesheet" /> <script type="text/javascript" src="indexedDbDataSource.js"></script> <script src="indexedDb.js"></script> </head> <body> <div id="tmplTask" data-win-control="WinJS.Binding.Template"> <div class="taskItem"> Id: <span data-win-bind="innerText:id"></span> <br /><br /> Name: <span data-win-bind="innerText:name"></span> </div> </div> <div id="lvTasks" data-win-control="WinJS.UI.ListView" data-win-options="{ itemTemplate: select('#tmplTask'), selectionMode: 'single' }"></div> <form id="frmAdd"> <fieldset> <legend>Add Task</legend> <label>New Task</label> <input id="inputTaskName" required /> <button>Add</button> </fieldset> </form> <button id="btnNuke">Nuke</button> <button id="btnDelete">Delete</button> </body> </html> And here is the JavaScript code for the TaskList app: /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" /> /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" /> function init() { WinJS.UI.processAll().done(function () { var lvTasks = document.getElementById("lvTasks").winControl; // Bind the ListView to its data source var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; // Wire-up Add, Delete, Nuke buttons document.getElementById("frmAdd").addEventListener("submit", function (evt) { evt.preventDefault(); tasksDataSource.beginEdits(); tasksDataSource.insertAtEnd(null, { name: document.getElementById("inputTaskName").value }).done(function (newItem) { tasksDataSource.endEdits(); document.getElementById("frmAdd").reset(); lvTasks.ensureVisible(newItem.index); }); }); document.getElementById("btnDelete").addEventListener("click", function () { if 
(lvTasks.selection.count() == 1) { lvTasks.selection.getItems().done(function (items) { tasksDataSource.remove(items[0].data.id); }); } }); document.getElementById("btnNuke").addEventListener("click", function () { tasksDataSource.nuke(); }); // This method is called to initialize the IndexedDb database function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } }); } document.addEventListener("DOMContentLoaded", init); The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code: var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade); lvTasks.itemDataSource = tasksDataSource; The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database. The upgrade function creates a new object store named tasks with an auto-increment property named id: function upgrade(evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true }); } The Complete Code for the IndexedDbDataSource Here’s the complete code for the IndexedDbDataSource: (function () { /************************************************ * The IndexedDBDataAdapter enables you to work * with a HTML5 IndexedDB database. *************************************************/ var IndexedDbDataAdapter = WinJS.Class.define( function (dbName, dbVersion, objectStoreName, upgrade, error) { this._dbName = dbName; // database name this._dbVersion = dbVersion; // database version this._objectStoreName = objectStoreName; // object store name this._upgrade = upgrade; // database upgrade script this._error = error || function (evt) { console.log(evt.message); }; }, { /******************************************* * IListDataAdapter Interface Methods ********************************************/ getCount: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore().then(function (store) { var reqCount = store.count(); reqCount.onerror = that._error; reqCount.onsuccess = function (evt) { complete(evt.target.result); }; }); }); }, itemsFromIndex: function (requestIndex, countBefore, countAfter) { var that = this; return new WinJS.Promise(function (complete, error) { that.getCount().then(function (count) { if (requestIndex >= count) { return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist)); } var startIndex = Math.max(0, requestIndex - countBefore); var endIndex = Math.min(count, requestIndex + countAfter + 1); that._getObjectStore().then(function (store) { var index = 0; var items = []; var req = store.openCursor(); req.onerror = that._error; req.onsuccess = function (evt) { var cursor = evt.target.result; if (index < startIndex) { index = startIndex; cursor.advance(startIndex); return; } if (cursor && index < endIndex) { index++; items.push({ key: cursor.value[store.keyPath].toString(), data: cursor.value }); cursor.continue(); return; } results = { items: items, offset: requestIndex - startIndex, totalCount: count }; complete(results); }; }); }); }); }, insertAtEnd:function(unused, data) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function(store) { var reqAdd = store.add(data); reqAdd.onerror = that._error; reqAdd.onsuccess = function (evt) { var reqGet 
= store.get(evt.target.result); reqGet.onerror = that._error; reqGet.onsuccess = function (evt) { var newItem = { key:evt.target.result[store.keyPath].toString(), data:evt.target.result } complete(newItem); }; }; }); }); }, setNotificationHandler: function (notificationHandler) { this._notificationHandler = notificationHandler; }, /***************************************** * IndexedDbDataSource Method ******************************************/ removeInternal: function(key) { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqDelete = store.delete (key); reqDelete.onerror = that._error; reqDelete.onsuccess = function (evt) { that._notificationHandler.removed(key.toString()); complete(); }; }); }); }, nuke: function () { var that = this; return new WinJS.Promise(function (complete, error) { that._getObjectStore("readwrite").done(function (store) { var reqClear = store.clear(); reqClear.onerror = that._error; reqClear.onsuccess = function (evt) { that._notificationHandler.reload(); complete(); }; }); }); }, /******************************************* * Private Methods ********************************************/ _ensureDbOpen: function () { var that = this; // Try to get cached Db if (that._cachedDb) { return WinJS.Promise.wrap(that._cachedDb); } // Otherwise, open the database return new WinJS.Promise(function (complete, error, progress) { var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion); reqOpen.onerror = function (evt) { error(); }; reqOpen.onupgradeneeded = function (evt) { that._upgrade(evt); that._notificationHandler.invalidateAll(); }; reqOpen.onsuccess = function () { that._cachedDb = reqOpen.result; complete(that._cachedDb); }; }); }, _getObjectStore: function (type) { type = type || "readonly"; var that = this; return new WinJS.Promise(function (complete, error) { that._ensureDbOpen().then(function (db) { var transaction = db.transaction(that._objectStoreName, type); complete(transaction.objectStore(that._objectStoreName)); }); }); }, _get: function (key) { return new WinJS.Promise(function (complete, error) { that._getObjectStore().done(function (store) { var reqGet = store.get(key); reqGet.onerror = that._error; reqGet.onsuccess = function (item) { complete(item); }; }); }); } } ); var IndexedDbDataSource = WinJS.Class.derive( WinJS.UI.VirtualizedDataSource, function (dbName, dbVersion, objectStoreName, upgrade, error) { this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error); this._baseDataSourceConstructor(this._adapter); }, { nuke: function () { this._adapter.nuke(); }, remove: function (key) { this._adapter.removeInternal(key); } } ); WinJS.Namespace.define("DataSources", { IndexedDbDataSource: IndexedDbDataSource }); })(); Summary In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications — such as a removed or invalidateAll notification — by taking advantage of the methods of the IListDataNotificationHandler interface.


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and then delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
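The console application that generates these thousands of object definitions is not listed in the article; a minimal sketch of the nested-loop approach it describes might look like the following. The class name, grid size, spacing, and pin radii are assumptions chosen only to illustrate the two offset, intersecting grids of pins; they are not taken from the original project.

using System;
using System.Globalization;
using System.IO;
using System.Text;

// Hypothetical sketch of the scene-generator console application: nested loops
// emit PolyRay object definitions for two offset grids of pins, giving the
// intersecting matrix layout of a real pin-board toy.
class PinBoardSceneWriter
{
    const double Spacing = 0.14;        // assumed distance between pin centres
    const int Rows = 76, Cols = 76;     // two 76x76 grids give roughly 11,500 pins

    static void Main()
    {
        var scene = new StringBuilder();
        for (int grid = 0; grid < 2; grid++)             // two intersecting grids
        {
            double offset = grid * Spacing / 2;          // second grid is shifted diagonally
            for (int row = 0; row < Rows; row++)
                for (int col = 0; col < Cols; col++)
                {
                    double x = -5.2 + col * Spacing + offset;
                    double y = -1.2 + row * Spacing + offset;
                    scene.AppendLine(Pin(x, y, 0.0));    // z is later driven by the depth image
                }
        }
        File.WriteAllText("pinboard.pi", scene.ToString());
    }

    // A pin is a chrome sphere (head) intersecting a chrome cylinder (shaft),
    // matching the single-pin example shown earlier in the scene file.
    static string Pin(double x, double y, double z)
    {
        string fx = F(x), fy = F(y), fz = F(z), shaftEnd = F(z - 1.2);
        return "object { sphere <" + fx + ", " + fy + ", " + fz + ">, 0.07 chrome }" + Environment.NewLine
             + "object { cylinder <" + fx + ", " + fy + ", " + fz + ">, <" + fx + ", " + fy + ", " + shaftEnd + ">, 0.02 chrome }";
    }

    static string F(double value) => value.ToString("0.000", CultureInfo.InvariantCulture);
}

Writing the scene as plain text in this way is what makes it easy to later vary the Z coordinate of each pin per frame, as described next.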
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used: 256 instance hours at the $0.12 per hour implied by the $1.92 per hour charge for 16 instances works out at $30.72. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

· On-Premise
  o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
  o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
  o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
· Windows Azure
  o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. · Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. · Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle. · If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. · Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks (a minimal sketch is shown after this list). Bear in mind that adding a start-up task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. · Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
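As a rough illustration of the start-up task approach mentioned in the list above, a task element can be added to the worker role in the service definition. This is a sketch of the standard ServiceDefinition start-up task syntax rather than anything taken from the CloudRay project, and install.cmd is a hypothetical script that would silently install the third-party software:

<WorkerRole name="CloudRayWorkerRole" vmsize="Small">
  <Startup>
    <!-- install.cmd is a hypothetical script that installs the third-party software silently -->
    <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
  <!-- the Imports, ConfigurationSettings and LocalResources elements shown earlier remain unchanged -->
</WorkerRole>

A simple task runs to completion before the role starts, which is usually what an installer needs, but it also delays the point at which the instance begins taking work from the queue.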


  • Building applications with WCF - Intro

    - by skjagini
    I am going to write series of articles using Windows Communication Framework (WCF) to develop client and server applications and this is the first part of that series. What is WCF As Juwal puts in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, provides runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy and it’s implementation provides a set of industry standards and off the shelf plumbing including service hosting, instance management, reliability, transaction management, security etc such that it greatly increases productivity Scenario: Lets consider a typical bank customer trying to create an account, deposit amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services and each of them working with database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, data access layer, unit tests to verify end to end communication etc, nothing big stuff here and we dig into other features of the WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces i.e. services and clients. Services as the name implies provide functionality to execute various pieces of business logic on the server, and clients providing interaction to the end user. Services can be built with Web Services or with WCF. Service built on WCF have the advantage of binding independent, i.e. can run against TCP and HTTP protocol without any significant changes to the code. Solution Services Profile: For creating a new bank customer, getting details about existing customer ProfileContract ProfileService Checking Account: To get checking account balance, deposit or withdraw amount CheckingAccountContract CheckingAccountService Savings Account: To get savings account balance, deposit or withdraw amount SavingsAccountContract SavingsAccountService ServiceHost: To host services, i.e. running the services at particular address, binding and contract where client can connect to Client: Helps end user to use services like creating account and amount transfer between the accounts BankDAL: Data access layer to work with database     BankDAL It’s no brainer not to use an ORM as many matured products are available currently in market including Linq2Sql, Entity Framework (EF), LLblGenPro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5 with code first approach. There are two approaches when working with data, data driven and code driven. In data driven we start by designing tables and their constrains in database and generate entities in code while in code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by the EF to create tables and table constrains. In previous versions the entity classes had  to derive from EF specific base classes. In EF 4 it  is not required to derive from any EF classes, the entities are not only persistence ignorant but also enable full test driven development using mock frameworks.  Application consists of 3 entities, Customer entity which contains Customer details; CheckingAccount and SavingsAccount to hold the respective account balance. 
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
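For reference, the post never lists the CheckingAccount and SavingsAccount entities themselves. Judging from the repository and service code that follows, which uses CustomerId and Balance properties and CheckingAccounts/SavingsAccounts sets on BankDbContext, they presumably look something like the sketch below; treat the property list as an inference rather than the author's actual classes.

using System;

namespace BankDAL.Model
{
    // Hypothetical sketch of the account entities, inferred from how the
    // repositories and services below use them; not shown in the original post.
    public class CheckingAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }

    public class SavingsAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}

BankDbContext would then also expose DbSet<CheckingAccount> CheckingAccounts and DbSet<SavingsAccount> SavingsAccounts alongside the Customers set shown above, so that the repositories can query and add accounts through the context.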
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository. 
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined a base address for each of the contracts:
<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>
We then have to open the services by creating a service host for each, which will handle the incoming requests from clients.
using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}
BankClient
The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking the project references, choosing 'Add Service Reference' and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract DLLs and update the client's configuration file to point to the service endpoint, provided both server and client are built on the .NET Framework. One of the pros of the manual approach is that you don't have to work against messy generated code files.
<system.serviceModel>
  <client>
    <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding" contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>
The client uses a façade to connect to the services:
using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}
With that in place, let's get our unit tests going:
using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}
We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with creating a Customer and with depositing, withdrawing and transferring money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from Savings to Checking, which raises an overdraft exception on Savings while the $30 has already been deposited to Checking. Because the two requests are not run inside a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
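One practical note on the proxy facade above, as a hedged sketch rather than part of the original sample: a proxy returned by ChannelFactory<T>.CreateChannel also implements ICommunicationObject, so it can be closed explicitly after use (or aborted on failure) instead of being left open until garbage collection. The customer id used here is just a placeholder.
// Hedged sketch: close the channel created by the facade when done, abort on error.
IProfile profile = ProxyFacade.ProfileProxy();
try
{
    Customer customer = profile.GetCustomer(1);   // any operation from the contract
    ((ICommunicationObject)profile).Close();      // graceful shutdown of the channel
}
catch
{
    ((ICommunicationObject)profile).Abort();      // tear the channel down if anything failed
    throw;
}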

    Read the article

  • Batch file script for Enable & disable the "use automatic Configuration Script"

    - by Tijo Joy
    My intention is to create a .bat file that toggles the "Use automatic configuration script" check box in Internet Settings. The following is my script:
    @echo OFF
    setlocal ENABLEEXTENSIONS
    set KEY_NAME="HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
    set VALUE_NAME=AutoConfigURL
    FOR /F "usebackq skip=1 tokens=1-3" %%A IN (`REG QUERY %KEY_NAME% /v %VALUE_NAME% 2^>nul`) DO (
        set ValueName=%%A
        set ValueType=%%B
        set ValueValue=%%C
    )
    @echo Value Name = %ValueName%
    @echo Value Type = %ValueType%
    @echo Value Value = %ValueValue%
    IF NOT %ValueValue%==yyyy (
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "yyyy" /f
        echo Proxy Enabled
    ) else (
        echo Hai
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "" /f
        echo Proxy Disabled
    )
    The output I'm getting for the Proxy Enabled part is:
    Value Name = AutoConfigURL
    Value Type = REG_SZ
    Value Value = yyyy
    Hai
    The operation completed successfully.
    Proxy Disabled
    But the Proxy Enable part isn't working; the output I get is:
    Value Name = AutoConfigURL
    Value Type = REG_SZ
    Value Value =
    ( was unexpected at this time.
    The variable ValueValue is not getting set when we try to do the proxy enable.
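    A minimal sketch of one possible fix, assuming the "( was unexpected at this time." error comes from %ValueValue% expanding to nothing when AutoConfigURL is empty or missing (the IF line then becomes "IF NOT ==yyyy ("); quoting both sides keeps the comparison valid either way:
    REM hedged sketch: quote both sides so an empty ValueValue still compares cleanly
    IF NOT "%ValueValue%"=="yyyy" (
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "yyyy" /f
        echo Proxy Enabled
    ) ELSE (
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "" /f
        echo Proxy Disabled
    )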

    Read the article

  • Windows 7, IIS 7.5, Selfssl

    - by Steve
    The windows iis6 resource kit won't install on Windows 7 (Home Premium) so I copied it from another machine and selfssl.exe is giving me: Failed to generate the cryptographic key: 0x5 I tried the instructions here but am still getting the above error. I'm trying to set the common name of the certificate to a name other than the machine name so I can avoid the certificate errors in the browser. This is a test web application. I know I can just test with the browser errors, but I'd like to mimic real world conditions as much as possible. Is there any other way to generate your own ssl certificates for iis7.5?
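    One hedged alternative to selfssl, assuming makecert.exe from the Windows SDK or Visual Studio tools is available: generate a self-signed server-authentication certificate with an arbitrary common name straight into the machine store, then bind it to the site in IIS Manager. The host name dev.mysite.local and the dates are placeholders:
    makecert -r -pe -n "CN=dev.mysite.local" -b 01/01/2010 -e 01/01/2020 ^
             -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange ^
             -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12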

    Read the article

  • Disable reverse PTR check in Zimbra and force accept from invalid domains

    - by ewwhite
    I've moved an older Sendmail/Dovecot system to a Zimbra community edition system. I need to be able to receive messages from certain standalone Linux hosts that may not have valid A records or proper reverse DNS entries established (e.g. AT&T is the ISP or systems sitting on a consumer-level ISP). Establishing the reverse DNS or setting a SMARTHOST is not an option. The error I get in zimbra.log is: zimbra postfix/smtp[2200]: DB83B231B53: to=<root@host_name.baddomain.com>, relay=none, delay=0.07, delays=0.06/0/0/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=host_name.baddomain.com type=A: Host not found How can I override this? Is this more of a Postfix issue or is it Zimbra? edit - The problem seems to be with an underscore in the hostname of the server. So it's a problem with root@host_name.baddomain.com. Again, how can I override this in Zimbra?

    Read the article

  • Sage 50 Accounts 2010 won't run on Windows 7

    - by admintech
    I have successfully installed Sage 50 Accounts 2010 onto my 32-bit Windows 7 machine, yet whenever I try to run it I encounter:
    Log Name: Application
    Source: Application Error
    Date: 24/05/2010 17:14:13
    Event ID: 1000
    Task Category: (100)
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: LukeThomas-PC.domain.co.uk
    Description:
    Faulting application name: Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe, version: 2.0.0.91, time stamp: 0x4a8c22fe
    Faulting module name: igdumd32.dll, version: 8.15.10.1872, time stamp: 0x4a848a05
    Exception code: 0xc0000409
    Fault offset: 0x00012f96
    Faulting process id: 0x1778
    Faulting application start time: 0x01cafb5c2493b609
    Faulting application path: C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe
    Faulting module path: C:\Windows\system32\igdumd32.dll
    Report Id: 63e2246b-674f-11df-96ba-002564c97988
    Event Xml: 1000 2 100 0x80000000000000 5062 Application LukeThomas-PC.domain.co.uk Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe 2.0.0.91 4a8c22fe igdumd32.dll 8.15.10.1872 4a848a05 c0000409 00012f96 1778 01cafb5c2493b609 C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe C:\Windows\system32\igdumd32.dll 63e2246b-674f-11df-96ba-002564c97988
    Any help would be appreciated, as I can't find anything about this error.

    Read the article

  • Windows Server 2003 can't see Vista machine

    - by Django Reinhardt
    Hi there, I've got a real PITA problem that I'm sure has a really simple solution. I have a Windows Server 2003 machine that needs to be able to see the network name of a Vista box, but it refuses to. It can see the Vista box (and even access its shared folder) if I enter the Vista box's IP address. Problem is: SQL Server refuses to do replication with anything other than the "actual server name". That means the 2003 machine needs to be able to connect through the Vista machine's network name, not just its IP address. I'm guessing it's a simple incompatibility between OSes, but I'm sure there's got to be a simple way of fixing it. Note: yes, the Vista machine can connect to the 2003 machine, no problem. And other machines in the office can connect to both the Vista machine and the 2003 machine (they have more recent OSes). Thanks for any help!
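    One hedged workaround, assuming it is plain name resolution (rather than authentication) that is failing: add a static entry for the Vista box to the 2003 machine's hosts file so the "actual server name" resolves without relying on NetBIOS browsing. The name and address below are placeholders:
    # C:\Windows\System32\drivers\etc\hosts on the 2003 machine
    192.168.1.50    VISTABOX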

    Read the article

  • How to fix error with "class" when creating a dhcp split-scope on Windows 2008?

    - by Jonas Stensved
    I'm trying to split a DHCP scope between an existing domain controller TTP01 and the new DC TTP02 to provide failover. I'm using the wizard in DHCP > [my scope] > Advanced > Split-Scope. In the final step I get an error saying: Migration of Scope Options on Added DHCP Server: Failed. Error: 0x00004E4C - The class name being used is unknown or incorrect. I can't find anything that seems to be wrong with my Scope Options settings; they are:
    Option Name            Vendor    Value          Class
    003 Router             Standard  192.168.137.1  None
    006 DNS Server         Standard  192.168.137.2  None
    015 DNS Domain Name    Standard  ttp.local      None
    060 PXEClient          Standard  PXEClient      None
    I can't find any information about this error except this MSDN article when searching. How do I make it work?

    Read the article

  • Visual Studio 2012 setup crashes when trying to install

    - by Shyju
    On my Win7 (64-bit) PC, I installed the VS 2012 Ultimate trial version a few days back, and today I got my MSDN subscription for VS 2012 Premium. So I uninstalled the trial and was trying to run the setup exe for VS 2012, and it is crashing. These are the error details I am seeing. Does anybody know how to fix this?
    Problem signature:
    Problem Event Name: BEX
    Application Name: en_visual_studio_premium_2012_x86_web_installer_920759.exe
    Application Version: 11.0.50727.1
    Application Timestamp: 4fd9f28c
    Fault Module Name: igdumd32.dll
    Fault Module Version: 8.15.10.2057
    Fault Module Timestamp: 4b5e4895
    Exception Offset: 00015216
    Exception Code: c0000409
    Exception Data: 00000000
    OS Version: 6.1.7601.2.1.0.256.48
    Locale ID: 1033
    Additional Information 1: 1d75
    Additional Information 2: 1d7537ede8bee0a1d08a5f0d2036cc52
    Additional Information 3: b4a4
    Additional Information 4: b4a4e02d592ed99de97ca18a461b34ee
    Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
    If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Read the article

  • NullPointerException on jetty 7 startup with jetty deployment descriptor

    - by Draemon
    I'm getting the following error when I start Jetty: 2010-03-01 12:30:19.328:WARN::Failed startup of context WebAppContext@15ddf5@15ddf5/webapp,null,/path/to/jetty-distribution-7.0.1.v20091125/webapps-plus/webapp.war With this commandline: java -jar start.jar OPTIONS=All lib=/path/to/jetty-distribution-7.0.1.v20091125/lib/ext etc/jetty.xml etc/jetty-plus.xml /path/to/webapp/src/configuration/test.xml And test.xml contains: <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="contextPath">/webapp</Set> <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps-plus/webapp.war</Set> </Configure> If I don't include test.xml on the commandline it works fine.

    Read the article

  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings one would need to do to make this work properly on Windows, so I thought I would post it here. To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS Aliases) for different machine roles. Then instead of changing the Windows computername of the actual machine name, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken. Outline The Problem The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) Providing browse capabilities for multiple NetBIOS names (OptionalNames) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) References 1. The Problem On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP Address. By default, however, filesharing will not work with arbitrary DNS aliases. To enable filesharing and other Windows services to work with DNS aliases, you must make registry changes as detailed below and reboot the machine. 2. The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However this change will not allow a machine to connect to itself via a hostname, see BackConnectionHostNames below). Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1. Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) This change is necessary for a DNS alias to work with filesharing from a machine to find itself. This creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps for all the nodes on the client computer: To the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0, add new Multi-String Value BackConnectionHostNames In the Value data box, type the CNAME or the DNS alias, that is used for the local shares on the computer, and then click OK. Note: Type each host name on a separate line. Providing browse capabilities for multiple NetBIOS names (OptionalNames) Allows ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String Add in a newline delimited list of names that should be registered under the NetBIOS browse entries Names should match NetBIOS conventions (i.e. not FQDN, just hostname) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) NOTE: Should not need to do this for basic functions to work, documented here for completeness. We had one situation in which the DNS alias was not working because there was an old SPN record interfering, so if other steps aren't working check if there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. 
If you do not do this, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN. To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools. You can install Windows Server 2003 Support Tools from the Support\Tools folder of the Windows Server 2003 startup disk. How to use the tool to list all records for a computername: setspn -L computername To register the SPN for the DNS alias (CNAME) records, use the Setspn tool with the following syntax: setspn -A host/your_ALIAS_name computername setspn -A host/your_ALIAS_name.company.com computername 3. References All the Microsoft references work via: http://support.microsoft.com/kb/ Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer. KB281308 Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path" Covers how to make the DNS alias work with file sharing from the file server itself. KB926642 How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server Covers more complex scenarios in which records in Active Directory may need to be updated for certain services to work properly and for browsing for such services to work properly, how to register the Kerberos service principal names (SPNs). KB870911 Distributed File System update to support consolidation roots in Windows Server 2003 Covers even more complex scenarios with DFS (discusses OptionalNames). KB829885
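    Pulling the registry changes from section 2 together, the same edits can be scripted; a hedged sketch where fs-alias is a placeholder for your CNAME (a reboot is still required afterwards, as noted above):
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "fs-alias\0fs-alias.company.com" /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v OptionalNames /t REG_MULTI_SZ /d "fs-alias" /f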

    Read the article

  • Virus that tries to brute force attack Active Directory users (in alphabetical order)?

    - by Nate Pinchot
    Users started complaining about slow network speed so I fired up Wireshark. Did some checking and found many PCs sending packets similar to the following: (screenshot) http://imgur.com/45VlI.png I blurred out the text for the username, computer name and domain name (since it matches the internet domain name). Computers are spamming the Active Directory servers trying to brute force hack passwords. It will start with Administrator and go down the list of users in alphabetical order. Physically going to the PC finds no one anywhere near it and this behavior is spread across the network so it appears to be a virus of some sort. Scanning computers which have been caught spamming the server with Malwarebytes, Super Antispyware and BitDefender (this is the antivirus the client has) yields no results. This is an enterprise network with about 2500 PCs so doing a rebuild is not a favorable option. My next step is to contact BitDefender to see what help they can provide. Has anybody seen anything like this or have any ideas what it could possibly be?

    Read the article

  • Powershell or WMI to pull Printer Properties and Additional Drivers?

    - by JeremiahJohnson
    What I am trying to accomplish: use a PowerShell script (WMI or cmdlets directly, or a combination) to query a 2003 or 2008 server with the PrintServer role, enumerate the shared printers, then list the drivers in use for each printer and specifically whether an x86 or x64 driver is being used (or both). I've looked at Win32_Printer, Win32_PrinterDriver, Get-Printer, etc. None of these seem to be able to tell me about x64 drivers or when multiple platform-specific drivers are loaded. Something like:
    gwmi win32_printer -computername lebowski | %{
        $name = $_.name
        $supported = $_.getrelated('Win32_PrinterDriver') | select supportedplatform, driverpath, version
        Write-Host $name
        return $supported
    }
    Produces the following:
    PCLOADLETTER
    supportedplatform : Windows NT x86
    driverpath        : C:\WINDOWS\system32\spool\DRIVERS\W32X86\3\RIC54Dc.DLL
    version           : 3
    However, the problem is that that particular printer also has x64 drivers loaded. I really don't want to manually check the properties tab of 100 printers just to see if they have the x64 driver loaded.
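    A hedged sketch of a different angle: query Win32_PrinterDriver directly on the server and split its Name property (formatted "DriverName,Version,Environment"), so the x86 and x64 copies of the same driver sort next to each other. lebowski is the print server from the example above:
    # list every installed driver package with its environment (Windows NT x86 / Windows x64)
    gwmi Win32_PrinterDriver -ComputerName lebowski |
        Select-Object @{n='Driver';e={($_.Name -split ',')[0]}},
                      @{n='Environment';e={($_.Name -split ',')[2]}},
                      DriverPath |
        Sort-Object Driver, Environment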

    Read the article

  • JBoss AS 5: starts but can't connect (Windows, remote)

    - by Nuwan
    Hello I installed Jboss 5.0GA and Its works fine in localhost.But I want It to access through remote Machine.Then I bind my IP address to my server and started it.This is the command I used run.bat -b 10.17.62.63 Then the server Starts fine This is the console log when starting the server > =============================================================================== > > JBoss Bootstrap Environment > > JBOSS_HOME: C:\jboss-5.0.0.GA > > JAVA: C:\Program Files\Java\jdk1.6.0_34\bin\java > > JAVA_OPTS: -Dprogram.name=run.bat -server -Xms128m -Xmx512m > -XX:MaxPermSize=25 6m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Ds un.rmi.dgc.server.gcInterval=3600000 > > CLASSPATH: C:\jboss-5.0.0.GA\bin\run.jar > > =============================================================================== > > run.bat: unused non-option argument: ûb run.bat: unused non-option > argument: 0.0.0.0 13:43:38,179 INFO [ServerImpl] Starting JBoss > (Microcontainer)... 13:43:38,179 INFO [ServerImpl] Release ID: JBoss > [Morpheus] 5.0.0.GA (build: SV NTag=JBoss_5_0_0_GA date=200812041714) > 13:43:38,179 INFO [ServerImpl] Bootstrap URL: null 13:43:38,179 INFO > [ServerImpl] Home Dir: C:\jboss-5.0.0.GA 13:43:38,179 INFO > [ServerImpl] Home URL: file:/C:/jboss-5.0.0.GA/ 13:43:38,195 INFO > [ServerImpl] Library URL: file:/C:/jboss-5.0.0.GA/lib/ 13:43:38,195 > INFO [ServerImpl] Patch URL: null 13:43:38,195 INFO [ServerImpl] > Common Base URL: file:/C:/jboss-5.0.0.GA/common/ > > 13:43:38,195 INFO [ServerImpl] Common Library URL: > file:/C:/jboss-5.0.0.GA/comm on/lib/ 13:43:38,195 INFO [ServerImpl] > Server Name: default 13:43:38,195 INFO [ServerImpl] Server Base Dir: > C:\jboss-5.0.0.GA\server 13:43:38,195 INFO [ServerImpl] Server Base > URL: file:/C:/jboss-5.0.0.GA/server/ > > 13:43:38,210 INFO [ServerImpl] Server Config URL: > file:/C:/jboss-5.0.0.GA/serve r/default/conf/ 13:43:38,210 INFO > [ServerImpl] Server Home Dir: C:\jboss-5.0.0.GA\server\defaul t > 13:43:38,210 INFO [ServerImpl] Server Home URL: > file:/C:/jboss-5.0.0.GA/server/ default/ 13:43:38,210 INFO > [ServerImpl] Server Data Dir: C:\jboss-5.0.0.GA\server\defaul t\data > 13:43:38,210 INFO [ServerImpl] Server Library URL: > file:/C:/jboss-5.0.0.GA/serv er/default/lib/ 13:43:38,210 INFO > [ServerImpl] Server Log Dir: C:\jboss-5.0.0.GA\server\default \log > 13:43:38,210 INFO [ServerImpl] Server Native Dir: > C:\jboss-5.0.0.GA\server\defa ult\tmp\native 13:43:38,210 INFO > [ServerImpl] Server Temp Dir: C:\jboss-5.0.0.GA\server\defaul t\tmp > 13:43:38,210 INFO [ServerImpl] Server Temp Deploy Dir: > C:\jboss-5.0.0.GA\server \default\tmp\deploy 13:43:39,710 INFO > [ServerImpl] Starting Microcontainer, bootstrapURL=file:/C:/j > boss-5.0.0.GA/server/default/conf/bootstrap.xml 13:43:40,851 INFO > [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.pl > ugins.cache.IterableTimedVFSCache] 13:43:40,866 INFO > [VFSCacheFactory] Using VFSCache [IterableTimedVFSCache{lifet > ime=1800, resolution=60}] 13:43:41,616 INFO [CopyMechanism] VFS temp > dir: C:\jboss-5.0.0.GA\server\defaul t\tmp 13:43:41,648 INFO > [ZipEntryContext] VFS force nested jars copy-mode is enabled. > > 13:43:44,288 INFO [ServerInfo] Java version: 1.6.0_34,Sun > Microsystems Inc. 13:43:44,288 INFO [ServerInfo] Java VM: Java > HotSpot(TM) Server VM 20.9-b04,Sun Microsystems Inc. 
13:43:44,288 > INFO [ServerInfo] OS-System: Windows XP 5.1,x86 13:43:44,569 INFO > [JMXKernel] Legacy JMX core initialized 13:43:50,148 INFO > [ProfileServiceImpl] Loading profile: default from: org.jboss > .system.server.profileservice.repository.SerializableDeploymentRepository@e72f0c > (root=C:\jboss-5.0.0.GA\server, > key=org.jboss.profileservice.spi.ProfileKey@143b > 82c3[domain=default,server=default,name=default]) 13:43:50,148 INFO > [ProfileImpl] Using repository:org.jboss.system.server.profil > eservice.repository.SerializableDeploymentRepository@e72f0c(root=C:\jboss-5.0.0. > GA\server, > key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,s > erver=default,name=default]) 13:43:50,148 INFO [ProfileServiceImpl] > Loaded profile: ProfileImpl@8b3bb3{key=o > rg.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,na > me=default]} 13:43:54,804 INFO [WebService] Using RMI server > codebase: http://127.0.0.1:8083 / 13:44:12,147 INFO [CXFServerConfig] > JBoss Web Services - Stack CXF Runtime Serv er 13:44:12,147 INFO > [CXFServerConfig] 3.1.2.GA 13:44:29,788 INFO > [Ejb3DependenciesDeployer] Encountered deployment AbstractVFS > DeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-e > jb.jar} 13:44:29,819 INFO [Ejb3DependenciesDeployer] Encountered > deployment AbstractVFS > DeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-e > jb.jar} 13:44:29,819 INFO [Ejb3DependenciesDeployer] Encountered > deployment AbstractVFS > DeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-e > jb.jar} 13:44:29,819 INFO [Ejb3DependenciesDeployer] Encountered > deployment AbstractVFS > DeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-e > jb.jar} 13:44:37,116 INFO [JMXConnectorServerService] JMX Connector > server: service:jmx > :rmi://127.0.0.1/jndi/rmi://127.0.0.1:1090/jmxconnector 13:44:38,022 > INFO [MailService] Mail Service bound to java:/Mail 13:44:43,162 WARN > [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RI SK. It > has been detected that the MessageSucker component which sucks > messages f rom one node to another has not had its password changed > from the installation d efault. Please see the JBoss Messaging user > guide for instructions on how to do this. 13:44:43,209 WARN > [AnnotationCreator] No ClassLoader provided, using TCCL: org. > jboss.managed.api.annotation.ManagementComponent 13:44:43,600 INFO > [TransactionManagerService] JBossTS Transaction Service (JTA version) > - JBoss Inc. 
13:44:43,600 INFO [TransactionManagerService] Setting up property manager MBean and JMX layer 13:44:44,366 INFO > [TransactionManagerService] Initializing recovery manager 13:44:44,678 > INFO [TransactionManagerService] Recovery manager configured > 13:44:44,678 INFO [TransactionManagerService] Binding > TransactionManager JNDI R eference 13:44:44,787 INFO > [TransactionManagerService] Starting transaction recovery man ager > 13:44:46,428 INFO [Http11Protocol] Initializing Coyote HTTP/1.1 on > http-127.0.0 .1-8080 13:44:46,459 INFO [AjpProtocol] Initializing > Coyote AJP/1.3 on ajp-127.0.0.1-80 09 13:44:46,459 INFO > [StandardService] Starting service jboss.web 13:44:46,475 INFO > [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.1.GA > 13:44:46,616 INFO [Catalina] Server startup in 350 ms 13:44:46,709 > INFO [TomcatDeployment] deploy, ctxPath=/web-console, vfsUrl=manag > ement/console-mgr.sar/web-console.war 13:44:48,553 INFO > [TomcatDeployment] deploy, ctxPath=/juddi, vfsUrl=juddi-servi > ce.sar/juddi.war 13:44:48,678 INFO [RegistryServlet] Loading jUDDI > configuration. 13:44:48,694 INFO [RegistryServlet] Resources loaded > from: /WEB-INF/juddi.prope rties 13:44:48,709 INFO [RegistryServlet] > Initializing jUDDI components. 13:44:48,991 INFO [TomcatDeployment] > deploy, ctxPath=/invoker, vfsUrl=http-invo ker.sar/invoker.war > 13:44:49,162 INFO [TomcatDeployment] deploy, ctxPath=/jbossws, > vfsUrl=jbossws.s ar/jbossws-management.war 13:44:49,475 INFO > [RARDeployment] Required license terms exist, view vfszip:/C: > /jboss-5.0.0.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml > 13:44:49,569 INFO [RARDeployment] Required license terms exist, view > vfszip:/C: > /jboss-5.0.0.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml > 13:44:49,741 INFO [RARDeployment] Required license terms exist, view > vfszip:/C: > /jboss-5.0.0.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml > 13:44:49,819 INFO [RARDeployment] Required license terms exist, view > vfszip:/C: > /jboss-5.0.0.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml > 13:44:49,912 INFO [RARDeployment] Required license terms exist, view > vfszip:/C: > /jboss-5.0.0.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml > 13:44:50,069 INFO [SimpleThreadPool] Job execution threads will use > class loade r of thread: main 13:44:50,115 INFO [QuartzScheduler] > Quartz Scheduler v.1.5.2 created. 13:44:50,131 INFO [RAMJobStore] > RAMJobStore initialized. 13:44:50,131 INFO [StdSchedulerFactory] > Quartz scheduler 'DefaultQuartzSchedule r' initialized from default > resource file in Quartz package: 'quartz.properties' > > 13:44:50,131 INFO [StdSchedulerFactory] Quartz scheduler version: > 1.5.2 13:44:50,131 INFO [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUS TERED started. 
13:44:51,194 INFO > [ConnectionFactoryBindingService] Bound ConnectionManager 'jb > oss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name > 'java:DefaultDS' 13:44:51,819 WARN [QuartzTimerServiceFactory] sql > failed: CREATE TABLE QRTZ_JOB > _DETAILS(JOB_NAME VARCHAR(80) NOT NULL, JOB_GROUP VARCHAR(80) NOT NULL, DESCRIPT ION VARCHAR(120) NULL, JOB_CLASS_NAME VARCHAR(128) NOT > NULL, IS_DURABLE VARCHAR( 1) NOT NULL, IS_VOLATILE VARCHAR(1) NOT > NULL, IS_STATEFUL VARCHAR(1) NOT NULL, R EQUESTS_RECOVERY VARCHAR(1) > NOT NULL, JOB_DATA BINARY NULL, PRIMARY KEY (JOB_NAM E,JOB_GROUP)) > 13:44:51,912 INFO [SimpleThreadPool] Job execution threads will use > class loade r of thread: main 13:44:51,928 INFO [QuartzScheduler] > Quartz Scheduler v.1.5.2 created. 13:44:51,928 INFO [JobStoreCMT] > Using db table-based data access locking (synch ronization). > 13:44:51,944 INFO [JobStoreCMT] Removed 0 Volatile Trigger(s). > 13:44:51,944 INFO [JobStoreCMT] Removed 0 Volatile Job(s). > 13:44:51,944 INFO [JobStoreCMT] JobStoreCMT initialized. 13:44:51,944 > INFO [StdSchedulerFactory] Quartz scheduler 'JBossEJB3QuartzSchedu > ler' initialized from an externally provided properties instance. > 13:44:51,959 INFO [StdSchedulerFactory] Quartz scheduler version: > 1.5.2 13:44:51,959 INFO [JobStoreCMT] Freed 0 triggers from 'acquired' / 'blocked' st ate. 13:44:51,975 INFO [JobStoreCMT] > Recovering 0 jobs that were in-progress at the time of the last > shut-down. 13:44:51,975 INFO [JobStoreCMT] Recovery complete. > 13:44:51,975 INFO [JobStoreCMT] Removed 0 'complete' triggers. > 13:44:51,975 INFO [JobStoreCMT] Removed 0 stale fired job entries. > 13:44:51,990 INFO [QuartzScheduler] Scheduler > JBossEJB3QuartzScheduler_$_NON_CL USTERED started. 13:44:52,381 INFO > [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started 13:44:52,569 > INFO [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pa > geSize=2000, downCacheSize=2000 13:44:52,584 INFO [QueueService] > Queue[/queue/ExpiryQueue] started, fullSize=20 0000, pageSize=2000, > downCacheSize=2000 13:44:52,709 INFO [ConnectionFactory] Connector > bisocket://127.0.0.1:4457 has l easing enabled, lease period 10000 > milliseconds 13:44:52,709 INFO [ConnectionFactory] > org.jboss.jms.server.connectionfactory.Co nnectionFactory@1a8ac5e > started 13:44:52,725 WARN [ConnectionFactoryJNDIMapper] > supportsFailover attribute is t rue on connection factory: > jboss.messaging.connectionfactory:service=ClusteredCo nnectionFactory > but post office is non clustered. So connection factory will *no t* > support failover 13:44:52,725 WARN [ConnectionFactoryJNDIMapper] > supportsLoadBalancing attribute is true on connection factory: > jboss.messaging.connectionfactory:service=Cluste redConnectionFactory > but post office is non clustered. 
So connection factory wil l *not* > support load balancing 13:44:52,740 INFO [ConnectionFactory] > Connector bisocket://127.0.0.1:4457 has l easing enabled, lease period > 10000 milliseconds 13:44:52,740 INFO [ConnectionFactory] > org.jboss.jms.server.connectionfactory.Co nnectionFactory@1d43178 > started 13:44:52,740 INFO [ConnectionFactory] Connector > bisocket://127.0.0.1:4457 has l easing enabled, lease period 10000 > milliseconds 13:44:52,756 INFO [ConnectionFactory] > org.jboss.jms.server.connectionfactory.Co nnectionFactory@52728a > started 13:44:53,084 INFO [ConnectionFactoryBindingService] Bound > ConnectionManager 'jb > oss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name > 'java:JmsXA' 13:44:53,225 INFO [TomcatDeployment] deploy, ctxPath=/, > vfsUrl=ROOT.war 13:44:53,553 INFO [TomcatDeployment] deploy, > ctxPath=/jmx-console, vfsUrl=jmx-c onsole.war 13:44:53,975 INFO > [TomcatDeployment] deploy, ctxPath=/TestService, vfsUrl=TestS > erviceEAR.ear/TestService.war 13:44:55,662 INFO [JBossASKernel] > Created KernelDeployment for: myE-ejb.jar 13:44:55,709 INFO > [JBossASKernel] installing bean: jboss.j2ee:jar=myE-ejb.jar,n > ame=RPSService,service=EJB3 13:44:55,725 INFO [JBossASKernel] with > dependencies: 13:44:55,725 INFO [JBossASKernel] and demands: > 13:44:55,725 INFO [JBossASKernel] > jboss.ejb:service=EJBTimerService 13:44:55,725 INFO [JBossASKernel] > and supplies: 13:44:55,725 INFO [JBossASKernel] > jndi:RPSService/remote 13:44:55,725 INFO [JBossASKernel] Added > bean(jboss.j2ee:jar=myE-ejb.jar,name=RP SService,service=EJB3) to > KernelDeployment of: myE-ejb.jar 13:44:56,772 INFO > [SessionSpecContainer] Starting jboss.j2ee:jar=myE-ejb.jar,na > me=RPSService,service=EJB3 13:44:56,803 INFO [EJBContainer] STARTED > EJB: com.monz.rpz.RPSService ejbName: RPSService 13:44:56,819 INFO > [JndiSessionRegistrarBase] Binding the following Entries in G lobal > JNDI: > > > 13:44:57,381 INFO [DefaultEndpointRegistry] register: > jboss.ws:context=myE-ejb, endpoint=RPSService 13:44:57,428 INFO > [DescriptorDeploymentAspect] Add Service id=RPSService > address=http://127.0.0.1:8080/myE-ejb/RPSService > implementor=com.monz.rpz.RPSService > invoker=org.jboss.wsf.stack.cxf.InvokerEJB3 mtomEnabled=false > 13:44:57,459 INFO [DescriptorDeploymentAspect] JBossWS-CXF > configuration genera ted: > file:/C:/jboss-5.0.0.GA/server/default/tmp/jbossws/jbossws-cxf1864137209199 > 110130.xml 13:44:57,569 INFO [TomcatDeployment] deploy, ctxPath=/myE-ejb, vfsUrl=myE-ejb.j ar 13:44:57,709 WARN [config] > Unable to process deployment descriptor for context '/myE-ejb' > 13:44:59,334 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on > http-127.0.0.1-8 080 13:44:59,397 INFO [AjpProtocol] Starting Coyote > AJP/1.3 on ajp-127.0.0.1-8009 13:44:59,459 INFO [ServerImpl] JBoss > (Microcontainer) [5.0.0.GA (build: SVNTag= JBoss_5_0_0_GA > date=200812041714)] Started in 1m:21s:233ms But Still I cant connect to It when I Type my IP address in my browser thanks
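    One hedged observation from the log itself: run.bat reports "unused non-option argument: ûb" and every listener ends up on 127.0.0.1, which suggests the -b switch was never parsed (for example because the dash was a non-ASCII character pasted from a document). A sketch of retyping the option with a plain ASCII hyphen, using the address from the question (or 0.0.0.0 to bind to all interfaces):
    run.bat -b 10.17.62.63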

    Read the article

  • Unable to resolve FQDN, hostname works

    - by HannesFostie
    We are having an issue where computers who are not part of the domain cannot resolve the FQDN of a server (but regular hostname and ip do resolve). The strange thing is that this does work when the computer is added to the network. Our domain name is rather long, its something along the lines of "team.dept.company.com", could that be it? DHCP server passes along the proper DNS, Name and WINS servers, as well as the domain name. I thought that should've solved the problem, but apparently not really. Our domain is still windows2003 EDIT: I am starting to believe I can narrow this down to a problem either with the vmware tools NIC drivers that are embedded in my winPE boot image, or to the fact that I'm trying to do this from inside a VM. Pinging a FQDN at the same time when using a different task sequence on a physical machine works.

    Read the article

  • mappoint 2013 randomly crashes on import

    - by ErocM
    We are sending routes to MapPoint 2013 from our application using an Access database. The crash also seems to happen with MapPoint 2010 and 2011. It doesn't happen on all of our clients, and it happens randomly on those it does happen to. This is the message:
    Problem signature:
    Problem Event Name: BEX
    Application Name: MapPoint.exe
    Application Version: 19.0.18.1100
    Application Timestamp: 4fd664bb
    Fault Module Name: StackHash_94b0
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 00000000
    Exception Offset: 7f82c94f
    Exception Code: c0000005
    Exception Data: 00000008
    OS Version: 6.0.6002.2.2.0.18.10
    Locale ID: 1033
    Additional Information 1: 94b0
    Additional Information 2: 30950b6006304277980cdff17dfbd104
    Additional Information 3: 098a
    Additional Information 4: 31c80150ac0b74b2dcb7884aa8fa1dac
    Does anyone know where I'd find out more information on this or how to resolve it? If this is not the correct exchange, please point me to the right one and I'll delete and repost it. Thanks!

    Read the article

  • iptables syn flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here : I have to set INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests) : #!/bin/sh IPT=/sbin/iptables echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X echo "Defining logging policy for dropped packets" $IPT -N LOGDROP $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: " $IPT -A LOGDROP -j DROP echo "Setting firewall policy" $IPT -P INPUT DROP # Deny all incoming connections $IPT -P OUTPUT ACCEPT # Allow all outgoing connections $IPT -P FORWARD DROP # Deny all forwaring echo "Allowing connections from/to lo and incoming connections from eth0" $IPT -I INPUT -i lo -j ACCEPT $IPT -I OUTPUT -o lo -j ACCEPT #$IPT -I INPUT -i eth0 -j ACCEPT echo "Setting SYN flood countermeasures" $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP echo "Allowing outgoing traffic corresponding to already initiated connections" $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT echo "Allowing incoming SSH" $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)" $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR" $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR" $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p icmp -j ACCEPT echo "Allowing outgoing FTP backup" $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT echo "Dropping and logging everything else" $IPT -A INPUT -s 0/0 -j LOGDROP $IPT -A OUTPUT -j LOGDROP $IPT -A FORWARD -j LOGDROP echo "Firewall loaded." 
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
10 ICMP requests per second, logging and dropping everything else. The result proved unexpected : while the server was unavailable to SSH connections, even already established, I ping-flooded it from another server, and the ping rate was restricted to 10 requests per second. During this test, I also tried to open new SSH connections, which remained unanswered until the script flushed rules. Here comes the iptables stats written after these tests : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 6 360 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535 105 8820 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 10/sec burst 5 830 69720 LOGDROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:8983 16 1684 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 owner UID match 33 0 0 LOGDROP udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 owner UID match 33 116 11136 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.18 tcp dpts:20:21 7 1249 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (11 references) pkts bytes target prot opt in out source destination 35 3156 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 859 73013 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Here 
Here is the log content captured during this test:

Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9
Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59
Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152
Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246
Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339
Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432
Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524
Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617
Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711
Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804
Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897
Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0
Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0
Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0

If I interpret these results correctly, they say that the ICMP rules were applied correctly by iptables, but the SSH rules were not. This does not make any sense... Does somebody understand where my error comes from? EDIT #3: After some more tests, I found out that commenting out the SYN flood countermeasure removes the problem. I am continuing to investigate in this direction but, in the meantime, if somebody can spot the error in my anti-SYN-flood rule...
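For reference, the rejected packets above include ACK and PSH segments of an already-established SSH session, which is exactly what happens when a SYN flood countermeasure matches more than just new SYN packets. Below is a minimal sketch of a limit that only ever sees connection attempts; the chain name and limit values are illustrative, not the rules from this question (which are not shown here):

    # Only packets with the SYN flag set enter the rate-limited chain,
    # so established SSH traffic (ACK/PSH) is never dropped by it.
    iptables -N SYN_FLOOD
    iptables -A INPUT -p tcp --syn -j SYN_FLOOD
    iptables -A SYN_FLOOD -m limit --limit 2/second --limit-burst 6 -j RETURN
    iptables -A SYN_FLOOD -j DROP

A rule that rate-limits all TCP traffic, or a chain that drops everything once the limit is exceeded regardless of the SYN flag, would produce exactly the kind of rejected ACK packets logged above, which is consistent with the problem disappearing when the countermeasure is commented out.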

    Read the article

  • Windows Explorer keeps crashing in Windows 8.1

    - by Jonathan Allen
    This started a few days ago on a machine that has been running Win 8.1 for over a month. I just finished a clean install of Windows 8.1 with a handful of apps like Dropbox and Office. Yet I'm still getting Windows Explorer crashes such as this:
    Faulting application name: Explorer.EXE, version: 6.3.9600.16408, time stamp: 0x523d251b
    Faulting module name: SHELL32.dll, version: 6.3.9600.16409, time stamp: 0x523fac81
    Exception code: 0xc00000fd
    Fault offset: 0x0000000000189247
    Faulting process id: 0xee8
    Faulting application start time: 0x01ced2a62f6e5b2e
    Faulting application path: C:\WINDOWS\Explorer.EXE
    Faulting module path: C:\WINDOWS\system32\SHELL32.dll
    Report Id: e25c6b13-3eb4-11e3-be7a-2cd05a597b87
    Faulting package full name:
    Faulting package-relative application ID:
    I assume that it is some sort of shell extension, but I don't know how to figure out which one.

    Read the article

  • Multiple different websites on the same IIS server

    - by Krystian
    I've got this kind of situation: I have a Windows 2003 server with a DNS server on the same machine. It is bound to an address, for example siteA.com. Now I want to add to this machine a website whose name will be siteB.com. I created a new website named siteB.com on the IIS 6 server, but I don't know how to set up the DNS server. My primary DNS administrator created an alias for my server and described it to me like this: 'siteB.com is an alias for siteA.com', and then he said that I have to configure my DNS server on my own. I've tried to add a new alias in my existing DNS zone (for siteA.com), but it binds the FQDN like this: siteB.siteA.com, which I suppose is wrong. Can anybody explain how I can bind these two websites to my server?
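    The siteB.siteA.com result is the classic symptom of a relative name in a zone: inside the siteA.com zone, any record name that is not fully qualified (no trailing dot) gets the zone origin appended to it. A minimal BIND-style sketch of that behaviour (the names are illustrative, not the poster's actual zone data):

        ; zone file for siteA.com
        $ORIGIN siteA.com.
        siteB    IN  CNAME  siteA.com.   ; this record answers for siteB.siteA.com, not siteB.com

    A record for the bare name siteB.com cannot live inside the siteA.com zone at all; it needs its own zone (or the alias the upstream administrator has already created), and IIS 6 can then tell the two sites apart on the same IP address via host headers.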

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first transaction log backup from folder1 into folder2, and then I take the second transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I'm asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Example: say you have a full backup, then you take a transaction log backup of, let's say, 200 MB; if you immediately take another transaction log backup, that second one should be considerably smaller than the first, because no or almost no transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that the second backup is pretty much the same size as the first one, and I'm wondering whether the reason is that I moved the first transaction log backup to a different folder, so SQL Server now thinks all I have is the full backup and therefore puts all the transactions that happened since the full backup into the second transaction log backup. Can anyone please explain whether my assumptions are right? Thanks...
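    For context on how the chain is tracked: SQL Server records which log records have already been backed up inside the database itself (by log sequence number) and keeps backup history in msdb; it never looks at the .trn files sitting in a folder, so moving older log backups to folder2 cannot change what the next log backup contains. A hedged sketch, with hypothetical database and path names, for checking that the chain is intact:

        -- Take the next log backup; it starts where the previous one ended,
        -- wherever the earlier backup files have been moved to.
        BACKUP LOG [MyDatabase]
            TO DISK = N'D:\Backups\folder1\MyDatabase_log.trn';

        -- Inspect the chain: each log backup's first_lsn should match the
        -- previous log backup's last_lsn.
        SELECT database_name, type, backup_start_date, first_lsn, last_lsn
        FROM msdb.dbo.backupset
        WHERE database_name = N'MyDatabase'
        ORDER BY backup_start_date;

    If consecutive log backups really do cover only a few minutes of activity and are still the same size, comparing these LSN ranges is a quick way to confirm whether each backup truly starts where the previous one ended.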

    Read the article

  • How do I obtain a valid DNS resolution given just an IP address?

    - by Dee Newcum
    Is there a publicly-available DNS server somewhere that will respond to requests like: 74_125_225_50.anyip.com and will return 74.125.225.50 for the above request? That is, every single possible IP address can be queried by name instead of number. http://ipq.co/ is close to what I'm looking for, but it requires you to first register an IP address before you can query its DNS name. I want a service that does a straightforward mapping from domain name to IP address. Why do I want to do this? I have a program that we use at work that requires a DNS lookup, but I need to be able to give it bare IP addresses. (long story... it's a server that I don't control, so I can't work around it using /etc/hosts)

    Read the article

  • Shorten Long DNS names

    - by user32425
    Hi, Amazon gives us very long DNS names, i.e. c-123-123-123-255.compute-1.amazonaws.com. Is there a way to map this name to a shorter name? Essentially, what I want to do is modify the /etc/hosts file and map the long name to a short one, i.e. aws1 c-123-123-123-255.compute-1.amazonaws.com but because the /etc/hosts file only accepts IP address mappings, I cannot do that. Is there any other way to do this? Thanks
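    As the question notes, /etc/hosts maps an IP address to one or more names, so a short alias can only be pinned to the instance's current address. A minimal sketch with a made-up address (labelled hypothetical, since EC2 public addresses can change and this mapping would then go stale):

        # /etc/hosts -- 203.0.113.25 is a placeholder, not a real instance address
        203.0.113.25   aws1   c-123-123-123-255.compute-1.amazonaws.com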

    Read the article

< Previous Page | 430 431 432 433 434 435 436 437 438 439 440 441  | Next Page >