Search Results

Search found 12900 results on 516 pages for 'rules engine'.


  • SEO Always Necessary Or Not Really?

    SEO, or Search Engine Optimization, is a tactic used by companies and advertisers to get a website to the top of search engine results pages. The beauty of this tactic is that the results are more sustainable than almost any other form of advertising. In other words, once a website has attained a top position on a search engine for a given keyword, it will remain there or thereabouts for some time to come.

    Read the article

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP application. When the user accesses it for the first time, the app executes this query to create the whole database:

        CREATE TABLE `databases` (
          `id` bigint(20) NOT NULL auto_increment,
          `driver` varchar(45) NOT NULL,
          `server` text NOT NULL,
          `user` text NOT NULL,
          `password` text NOT NULL,
          `database` varchar(200) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `modules`
        CREATE TABLE `modules` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(100) NOT NULL,
          `type` varchar(150) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29;

        -- Table structure for table `modules_data`
        CREATE TABLE `modules_data` (
          `id` bigint(20) NOT NULL auto_increment,
          `module_id` bigint(20) unsigned NOT NULL,
          `key` varchar(150) NOT NULL,
          `value` tinytext,
          PRIMARY KEY (`id`),
          KEY `fk_modules_data_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184;

        -- Table structure for table `modules_position`
        CREATE TABLE `modules_position` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          `column` smallint(1) default NULL,
          `line` smallint(1) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`,`module_id`),
          KEY `fk_modules_order_users` (`user_id`),
          KEY `fk_modules_order_tabs` (`tab_id`),
          KEY `fk_modules_order_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `tabs`
        CREATE TABLE `tabs` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(60) NOT NULL,
          `columns` smallint(1) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12;

        -- Table structure for table `tabs_has_modules`
        CREATE TABLE `tabs_has_modules` (
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`tab_id`,`module_id`),
          KEY `fk_tabs_has_modules_tabs` (`tab_id`),
          KEY `fk_tabs_has_modules_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `users`
        CREATE TABLE `users` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `login` varchar(60) NOT NULL,
          `password` varchar(64) NOT NULL,
          `email` varchar(100) NOT NULL,
          `name` varchar(250) default NULL,
          `user_level` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_users_user_levels` (`user_level`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table `users_has_tabs`
        CREATE TABLE `users_has_tabs` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `order` smallint(2) NOT NULL,
          `columns_width` varchar(255) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`),
          KEY `fk_users_has_tabs_users` (`user_id`),
          KEY `fk_users_has_tabs_tabs` (`tab_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `user_levels`
        CREATE TABLE `user_levels` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `level` smallint(2) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `user_meta`
        CREATE TABLE `user_meta` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `user_id` bigint(20) unsigned default NULL,
          `key` varchar(150) NOT NULL,
          `value` longtext NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_user_meta_users` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Constraints for table `modules_data`
        ALTER TABLE `modules_data`
          ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        -- Constraints for table `modules_position`
        ALTER TABLE `modules_position`
          ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        -- Constraints for table `users`
        ALTER TABLE `users`
          ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;

        -- Constraints for table `user_meta`
        ALTER TABLE `user_meta`
          ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        INSERT INTO `user_levels` VALUES(1, 10);
        INSERT INTO `user_levels` VALUES(2, 1);
        INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1);
        INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1);

    In some environments I get this error: SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150). I tried everything that I could find on Google, but nothing works. The strange part is that if I run this query in phpMyAdmin, it creates my database without any error.

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

    Introduction

    A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

    This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

    An Axiom of Value

    In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

    - A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value.
    - The equals and hashCode operations are strictly component-wise.
    - If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

    A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
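    The note's observation that Integer only approximately behaves as a value type is easy to demonstrate with a standard example (this snippet is ours, not from the note): because the JVM caches small boxed Integers, reference comparison with == gives inconsistent answers for equal component values, which is exactly why the axiom rules it out.

        public class AcmpPitfall {
            public static void main(String[] args) {
                Integer a = 127, b = 127;        // autoboxing reuses the small-value cache
                Integer c = 128, d = 128;        // distinct boxes outside the cached range
                System.out.println(a == b);      // true: same cached box
                System.out.println(c == d);      // false: different boxes, equal values
                System.out.println(c.equals(d)); // true: component-wise comparison is reliable
            }
        }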
    These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

    Representations of Value

    Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

        /*Complex x = Complex.valueOf(1.0, 2.0):*/
        double x_re = 1.0, x_im = 2.0;

    These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

        double distance(/*Complex x:*/ double x_re, double x_im,
                        /*Complex y:*/ double y_re, double y_im) {
            /*Complex z = x.minus(y):*/
            double z_re = x_re - y_re, z_im = x_im - y_im;
            /*return z.abs():*/
            return Math.sqrt(z_re*z_re + z_im*z_im);
        }

    A boxed representation groups component values under a single object reference. The reference is to a 'wrapper class' that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no 'array of structs' feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and pose some problems for boxed values. Any 'real' JVM type should have a story for returns, arrays, and API interoperability.

    The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
    If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions.

    Let's Start Coding

    Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type.

        @ValueSafe
        public final class Complex implements java.io.Serializable {
            // immutable component structure:
            public final double re, im;
            private Complex(double re, double im) {
                this.re = re; this.im = im;
            }
            // interoperability methods:
            public String toString() { return "Complex("+re+","+im+")"; }
            public List<Double> asList() { return Arrays.asList(re, im); }
            public boolean equals(Complex c) {
                return re == c.re && im == c.im;
            }
            public boolean equals(@ValueSafe Object x) {
                return x instanceof Complex && equals((Complex) x);
            }
            public int hashCode() {
                return 31*Double.valueOf(re).hashCode()
                        + Double.valueOf(im).hashCode();
            }
            // factory methods:
            public static Complex valueOf(double re, double im) {
                return new Complex(re, im);
            }
            public Complex changeRe(double re2) { return valueOf(re2, im); }
            public Complex changeIm(double im2) { return valueOf(re, im2); }
            public static Complex cast(@ValueSafe Object x) {
                return x == null ? ZERO : (Complex) x;
            }
            // utility methods and constants:
            public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }
            public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }
            public double abs() { return Math.sqrt(re*re + im*im); }
            public static final Complex PI = valueOf(Math.PI, 0.0);
            public static final Complex ZERO = valueOf(0.0, 0.0);
        }

    This is not a minimal definition, because it includes some utility methods and other optional parts. The essential elements are as follows:

    - The class is marked as a value type with an annotation.
    - The class is final, because it does not make sense to create subclasses of value types.
    - The fields of the class are all non-private and final. (I.e., the type is immutable and structurally transparent.)
    - From the supertype Object, all public non-final methods are overridden.
    - The constructor is private.

    Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types:

    - One or more factory methods are responsible for value creation, including a component-wise valueOf method.
    - There are utility methods for complex arithmetic and instance creation, such as plus and changeIm.
    - There are static utility constants, such as PI.
    - The type is serializable, using the default mechanisms.
    - There are methods for converting to and from dynamically typed references, such as asList and cast.

    The Rules

    In order to use value types properly, the programmer must avoid value-unsafe operations. A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations. No such compilers exist today, but to simplify our account here, we will pretend that they do exist.

    A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type. If a value-safe class is marked final, it is in fact a value type.
    All other value-safe classes must be abstract. The non-static fields of a value class must be non-public and final, and all its constructors must be private.

    Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

        @ValueSafe
        public interface ValueType extends java.io.Serializable {
            // All methods listed here must get redefined.
            // Definitions must be value-safe, which means
            // they may depend on component values only.
            List<? extends Object> asList();
            int hashCode();
            boolean equals(@ValueSafe Object c);
            String toString();
        }

        //@ValueSafe inherited from supertype:
        public final class Complex implements ValueType { …

    The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.

    Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:

    - The type of E is a value-safe type.
    - E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
    - E is a call to a method whose declaration is marked @ValueSafe.
    - E is an assignment to a value-safe variable, field reference, or array reference.
    - E is a cast to a value-safe type from a value-safe expression.
    - E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

    Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.

    In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
    More Code

    To illustrate these rules, here are some usage examples for Complex:

        Complex pi = Complex.valueOf(Math.PI, 0);
        Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
        ValueType vtype = pi;
        @SuppressWarnings("value-unsafe")
        Object obj = pi;
        @ValueSafe Object obj2 = pi;
        obj2 = new Object();  // ok
        List<Complex> clist = new ArrayList<Complex>();
        clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
        List<ValueType> vlist = new ArrayList<ValueType>();
        vlist.add(pi);  // (ok)
        List<Object> olist = new ArrayList<Object>();
        olist.add(pi);  // warning: "value-unsafe"
        boolean z = pi.equals(zero);
        boolean z1 = (pi == zero);  // error: reference comparison on value type
        boolean z2 = (pi == null);  // error: reference comparison on value type
        boolean z3 = (pi == obj2);  // error: reference comparison on value type
        synchronized (pi) { }  // error: synch of value, unpredictable result
        synchronized (obj2) { }  // unpredictable result
        Complex qq = pi;
        qq = null;  // possible NPE; warning: "null-unsafe"
        qq = (Complex) obj;  // warning: "null-unsafe"
        qq = Complex.cast(obj);  // OK
        @SuppressWarnings("null-unsafe")
        Complex empty = null;  // possible NPE
        qq = empty;  // possible NPE (null pollution)

    The Payoffs

    It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

    Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect the patterns automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

    Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

    There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

    Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
    Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:

        public interface java.util.List<@ValueSafe T> extends Collection<T> { …
        public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

    (The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

    With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.

    Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know whether an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

    Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

    It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
    Implicit Method Definitions

    The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

    Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:

        public class Complex implements ValueType {  // implicitly final
            public double re, im;  // implicitly public final
            // implicit methods are defined elementwise from the fields:
            //   toString, asList, equals(2), hashCode, valueOf, cast
            // optionally, explicit methods (plus, abs, etc.) would go here
        }

    In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

    Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

        public @ValueType class Complex { …  // implicitly final, implements ValueType

    (But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

    Implicitly Defined Value Types

    So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or a Cartesian point. The name and the methods convey the intended meaning.

    But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.

    For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

        public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
            double $1, $2;
        }

    Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
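    To make the divideAndRemainder example concrete, here is a small sketch in today's Java (ours, not from the note; $DD is purely hypothetical): the nominal wrapper below is exactly the boilerplate that a structural tuple type would make unnecessary.

        public final class DivRem {
            // Today's workaround: a nominal type invented solely so that
            // one method can return two doubles.
            static final class QuotientAndRemainder {
                final double quotient, remainder;
                QuotientAndRemainder(double q, double r) { quotient = q; remainder = r; }
            }

            static QuotientAndRemainder divideAndRemainder(double a, double b) {
                double q = Math.floor(a / b);
                return new QuotientAndRemainder(q, a - q * b);
            }

            public static void main(String[] args) {
                QuotientAndRemainder qr = divideAndRemainder(7.0, 2.0);
                System.out.println(qr.quotient + " rem " + qr.remainder);  // 3.0 rem 1.0
                // Under the proposed scheme the JVM would spin java.lang.tuple.$DD
                // on demand, and both doubles could travel unboxed through the return.
            }
        }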
    The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types. (Specifics of naming and name mangling need some tasteful engineering.)

    Tuples also seem to demand, even more than nominal types, some support from the language. (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.) At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists. Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection. That is a task for another day.

    Other Use Cases

    Besides complex numbers and simple tuples there are many use cases for value types. Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

    Other types have a variable-length 'tail' of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include 'header' information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array.

    Field types and their order must be a globally visible part of the API. The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements. This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations. A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile. Perhaps constant pool references to value types need to declare the field order as assumed by each API user.

    This brings up the delicate status of private fields in a value type. It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.
    If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

    My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

    Similar to String, we could build an efficient multi-precision decimal type along these lines:

        public final class DecimalValue implements ValueType {
            protected final long header;
            protected final BigInteger digits;
            private DecimalValue(long header, BigInteger digits) {
                this.header = header; this.digits = digits;
            }
            public static DecimalValue valueOf(int value, int scale) {
                assert(scale >= 0);
                return new DecimalValue(((long)value << 32) + scale, null);
            }
            public static DecimalValue valueOf(long value, int scale) {
                if (value == (int) value)
                    return valueOf((int)value, scale);
                return new DecimalValue(-scale, BigInteger.valueOf(value));
            }
        }

    Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.

    (Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

    Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

    Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxing arrays can be packed with such values.

    More Daydreams

    A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
        @ValueSafe
        public interface ValueType extends java.io.Serializable,
                Unsynchronizable, Interchangeable { …

        public class Complex implements ValueType {
            // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
            …

    It seems possible that Integer and the other wrapper types could be retrofitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retrofit String, since the prominent operation String.intern is often used with value-unsafe code.

    We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

        complex pi = complex.valueOf(Math.PI, 0);
        Complex boxPi = pi;  // convert to boxed
        myList.add(boxPi);
        complex z = myList.get(0);  // unbox

    Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

    As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.

    More Work

    It seems to me that a good sequence for introducing such value types would be as follows:

    1. Add the value-safety restrictions to an experimental version of javac.
    2. Code some sample applications with value types, including Complex and DecimalValue.
    3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
    4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

    A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

    A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

    1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
    2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
    3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

    That's enough musing for now. Back to work!

    Read the article

  • Silverlight and clientaccesspolicy.xml

    - by JerA
    I have a Silverlight app hosted on "dev.system-engine.com" which creates a TCP connection to "dev.system-engine.com". The Silverlight app and the policy file are both on the "dev.system-engine.com" web server. "system-engine.com" and "dev.system-engine.com" are separate servers. I cannot seem to get it to work. I have been struggling with the policy file:

        <?xml version="1.0" encoding="utf-8" ?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from>
                <domain uri="http://dev.system-engine.com:80"/>
              </allow-from>
              <grant-to>
                <domain uri="http://dev.system-engine.com:80"/>
                <socket-resource port="80" protocol="tcp"/>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>

    Read the article

  • Sharp architecture; Accessing Validation Results

    - by nabeelfarid
    I am exploring Sharp Architecture and I would like to know how to access the validation results after calling Entity.IsValid(). I have two scenarios:

    1) If entity.IsValid() returns false, I would like to add the errors to the ModelState.AddModelError() collection in my controller. E.g. in the Northwind sample we have an EmployeesController.Create() action where we do employee.IsValid(); how can I get access to the errors?

        public ActionResult Create(Employee employee)
        {
            if (ViewData.ModelState.IsValid && employee.IsValid())
            {
                employeeRepository.SaveOrUpdate(employee);
            }
            // ....
        }

    [I already know that when an action method is called, the model binder enforces validation rules (NHibernate Validator attributes) as it parses incoming values and tries to assign them to the model object, and if it can't parse the incoming values it registers those as errors in ModelState for each model object property. But what if I have some custom validation? That's why we do ModelState.IsValid first.]

    2) In my test methods I would like to test the NHibernate validation rules as well. I can do entity.IsValid(), but that only returns true/false. I would like to assert against the actual error, not just true/false.

    In my previous projects, I normally use a wrapper service layer for the repositories: instead of calling repository methods directly from the controller, controllers call service layer methods, which in turn call repository methods. All my custom validation rules reside in the service layer, and its methods throw a custom exception with a NameValueCollection of errors which I can easily add to ModelState in my controller. This way I can also easily implement sophisticated business rules in the service layer. I know Sharp Architecture also provides a Service Layer project. But what I am interested in, and my next question, is: how can I use NHibernate Validators to implement sophisticated custom business rules (not just null/empty, range, etc.) and make Entity.IsValid() verify those rules too?

    Read the article

  • How to approach ninject container/kernel in inheritance situation

    - by Bas
    I have the following situation:

        class RuleEngine {}
        abstract class RuleImplementation {}
        class RootRule : RuleImplementation {}
        class Rule1 : RuleImplementation {}
        class Rule2 : RuleImplementation {}

    The RuleEngine is injected by Ninject and has a kernel at its disposal. The role of the RuleEngine is to fire off the root rule, which in turn will load all the other rules, also using Ninject, but using a different module and creating a new kernel. Now my question is: some of the rules require dependencies which I want to inject using Ninject. What would be the best way to create the kernel for these rules and still do proper unit testing with it? (The kernel shouldn't become a real pain in my tests.) I've been thinking of the following possibilities:

    - The kernel that I use in the RuleEngine class could be tossed around to RuleImplementation and thus be available for every rule. But tossing around kernels isn't really something I wish to do.
    - When creating the rules, I could give the kernel (which creates the rules) as a constructor argument for each rule.
    - I could create a method inside RuleImplementation which creates a kernel and makes it possible for the rules to retrieve the kernel using a get() in the abstract class.

    What's the convention for passing around/creating kernels? Just create new kernels, or reuse them?

    Read the article

  • Why linking doesn't work in my Xtext-based DSL?

    - by reprogrammer
    The following is the Xtext grammar for my DSL.

        Model:
            variableTypes=VariableTypes predicateTypes=PredicateTypes
            variableDeclarations=VariableDeclarations rules=Rules;
        VariableType: name=ID;
        VariableTypes: 'var types' (variableTypes+=VariableType)+;
        PredicateTypes: 'predicate types' (predicateTypes+=PredicateType)+;
        PredicateType: name=ID '(' (variableTypes+=[VariableType|ID])+ ')';
        VariableDeclarations: 'vars' (variableDeclarations+=VariableDeclaration)+;
        VariableDeclaration: name=ID ':' type=[VariableType|ID];
        Rules: 'rules' (rules+=Rule)+;
        Rule: head=Head ':-' body=Body;
        Head: predicate=Predicate;
        Body: (predicates+=Predicate)+;
        Predicate: predicateType=[PredicateType|ID] '(' (terms+=Term)+ ')';
        Term: variable=Variable;
        Variable: variableDeclaration=[VariableDeclaration|ID];
        terminal WS: (' ' | '\t' | '\r' | '\n' | ',')+;

    And the following is a program in the above DSL.

        var types Node
        predicate types Edge(Node, Node) Path(Node, Node)
        vars x : Node y : Node z : Node
        rules
        Path(x, y) :- Edge(x, y)
        Path(x, y) :- Path(x, z) Path(z, y)

    When I used the generated Switch class to traverse the EMF object model corresponding to the above program, I realized that the nodes are not linked together properly. For example, the getPredicateType() method on a Predicate node returns null. Having read the Xtext user's guide, my impression is that the Xtext default linking semantics should work for my DSL. But, for some reason, the AST nodes of my DSL don't get linked together properly. Can anyone help me in diagnosing this problem?
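    One thing worth checking (a sketch, not a definitive diagnosis): Xtext resolves cross-references lazily, so proxies are only resolved when the model is loaded through the Xtext resource infrastructure and resolution is forced. If the model comes straight from the parser, reference getters can return null or unresolved proxies. Assuming a generated setup class named MyDslStandaloneSetup (the real name depends on your grammar), standalone loading would look roughly like this:

        import org.eclipse.emf.common.util.URI;
        import org.eclipse.emf.ecore.resource.Resource;
        import org.eclipse.emf.ecore.util.EcoreUtil;
        import org.eclipse.xtext.resource.XtextResourceSet;
        import com.google.inject.Injector;

        public class ModelLoader {
            public static Resource load(String path) {
                // Generated per-DSL setup class; registers the EMF machinery.
                Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
                XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
                Resource resource = resourceSet.getResource(URI.createFileURI(path), true);
                // Force lazy linking to run before traversing with the Switch class.
                EcoreUtil.resolveAll(resource);
                // Linking failures show up here rather than as silent nulls.
                System.out.println("Errors: " + resource.getErrors());
                return resource;
            }
        }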

    Read the article

  • What is the Rule of Thumb on Exposing Encapsulated Class Methods

    - by javamonkey79
    Consider the following analogy: if we have a class "Car", we might expect it to have an instance of "Engine" in it. As in: "The car HAS-A engine." Similarly, in the "Engine" class we would expect an instance of "Starting System" or "Cooling System", which each have their appropriate sub-components. By the nature of encapsulation, is it not true that the car "HAS-A" "radiator hose" in it as well as the engine? Therefore, is it appropriate OO to do something like this:

        public class Car {
            private Engine _engine;

            public Engine getEngine() { return _engine; }

            // is it ok to use 'convenience' methods of inner classes?
            // are the following 2 methods "wrong" from an OO point of view?
            public RadiatorHose getRadiatorHose() { return getCoolingSystem().getRadiatorHose(); }
            public CoolingSystem getCoolingSystem() { return _engine.getCoolingSystem(); }
        }

        public class Engine {
            private CoolingSystem _coolingSystem;
            public CoolingSystem getCoolingSystem() { return _coolingSystem; }
        }

        public class CoolingSystem {
            private RadiatorHose _radiatorHose;
            public RadiatorHose getRadiatorHose() { return _radiatorHose; }
        }

        public class RadiatorHose { //...
        }

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node Data Auditors under Oracle Module in Projects Navigator. What is a data auditor and how is it used? I will give an introduction to data auditors and show their usage by example.

    A data auditor is an important tool in ensuring that data quality levels meet business requirements. It validates data against a set of data rules to determine which records comply and which do not, and it gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors are occurring against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals.

    How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly, create a data auditor containing the objects. You can optionally configure the data auditor and set physical deployment parameters for it, which will be used while running the data auditor. Then deploy and run the data auditor, either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables.

    Here is an example. We have two tables, DEPARTMENTS and EMPLOYEES, imported into OWB (Pic-1: EMPLOYEES; Pic-2: DEPARTMENTS). We want to gather statistical metrics on how well the data in these two tables satisfies the following requirements:

    a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers.
    b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK.
    c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID.

    1. To determine legal data within EMPLOYEES, or legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables.

    a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule EMPLOYEE_PATTERN_RULE here. The pattern is defined in the Oracle Database regular expression syntax as

        ^([0-9]{3})$

    Apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES.
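    As a side note, the pattern above happens to be valid java.util.regex syntax as well, so the rule's intent can be illustrated with a quick plain-Java check (illustrative only; this is not how OWB evaluates the rule):

        import java.util.regex.Pattern;

        public class EmployeePatternCheck {
            // Same pattern as EMPLOYEE_PATTERN_RULE: exactly three digits.
            private static final Pattern THREE_DIGITS = Pattern.compile("^([0-9]{3})$");

            public static boolean complies(String employeeId) {
                return employeeId != null && THREE_DIGITS.matcher(employeeId).matches();
            }

            public static void main(String[] args) {
                System.out.println(complies("101"));   // true  -> complies with the rule
                System.out.println(complies("1001"));  // false -> would land in the error table
            }
        }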

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs, so I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit strange for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID; instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently: apparently it is not possible to specify the command-line parameters with bash-script comments, etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub". I tried:

        qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?

    Read the article

  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
    IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn't readily available. That's the protocol: HTTP or HTTPS.

    You can easily check if a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn't a variable available to dynamically set the protocol in the action part of the rule. What I wish is that there were a variable like {HTTP_PROTOCOL} which would have a value of 'HTTP' or 'HTTPS'. There is a server variable called {HTTPS}, but the values of 'on' and 'off' aren't practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren't useful in the action.

    Let me illustrate. The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

        <rule name="Redirect to www">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="http://www.localtest.me/{R:1}" />
        </rule>

    The problem is that it forces the request to HTTP even if the original request was for HTTPS.

    Interestingly enough, I planned to blog about this topic this week when I noticed in my twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple days. However, I figured I would still write my blog post on this topic. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it's a commonly asked question that isn't documented well enough on the web yet, so having another article on the web won't hurt.

    I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don't let the choices overwhelm you though. Let's keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation. All four options will work for most situations.

    Option 1 – CACHE_URL, single rule

    There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5). All we need to do is extract the HTTP or HTTPS and we'll be set. This tends to be my preferred way to handle this situation.

    Indeed, Jeff did briefly mention this in his blog post:

        … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical "or" match for conditions.

    Thus the problem. If you have multiple conditions set to "Match Any" rather than "Match All", then this option won't work. However, I find that 95% of all rules that I write use "Match All", and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use "Match Any" then you must consider one of the next two options.

    Enough with the preamble. Here's how it works. Add a condition that checks {CACHE_URL} with a pattern of "^(.+)://" (shown in the complete rule below). Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.
    In URL Rewrite 2.0 or greater you can check "Track capture groups across conditions", make that condition the first condition, and you have yourself a back-reference of {C:1}. The "Redirect to www" example, with support for maintaining the protocol, becomes:

        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{CACHE_URL}" pattern="^(.+)://" />
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
        </rule>

    It's not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it's pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable since it's self-contained within a single rule.

    Option 2 – Using a Rewrite Map

    For a safer rule that works for both "Match Any" and "Match All" situations, you can use the Rewrite Map solution that Jeff proposed. It's a perfectly good solution, with the only drawback being the ever so slight extra effort to set it up, since you need to create a rewrite map before you create the rule. In other words, if you choose to use this as your sole method of handling the protocol, you'll be safe.

    After you create a Rewrite Map called MapProtocol, you can use "{MapProtocol:{HTTPS}}" for the protocol within any rule action. Following is an example using a Rewrite Map.

        <rewrite>
          <rules>
            <rule name="Redirect to www" stopProcessing="true">
              <match url="(.*)" />
              <conditions trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
              </conditions>
              <action type="Redirect"
                url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
            </rule>
          </rules>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
        </rewrite>

    Option 3 – CACHE_URL, multi-rule

    If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this rule is that if you don't have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission.

    First, create a rule and place it at the top of the set of rules. You can create this at the server, site or subfolder level. However, if you create it at the site or subfolder level then the HTTP_PROTOCOL server variable needs to be approved at the server level. This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on "View Server Variables" from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary. Following is an example of the first rule, which creates HTTP_PROTOCOL, and then a rule that uses it. The "Create HTTP_PROTOCOL" rule only needs to be created once on the server.
<rule name="Create HTTP_PROTOCOL"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{CACHE_URL}" pattern="^(.+)://" /> </conditions> <serverVariables> <set name="HTTP_PROTOCOL" value="{C:1}" /> </serverVariables> <action type="None" /> </rule>   <rule name="Redirect to www" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> </conditions> <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" /> </rule> Option 4 – Multi-rule Just to be complete I’ll include an example of how to achieve the same thing with multiple rules. I don’t see any reason to use it over the previous examples, but I’ll include an example anyway.  Note that it will only work with the “Match All” setting for the conditions. <rule name="Redirect to www - http" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="off" /> </conditions> <action type="Redirect" url="http://www.localtest.me/{R:1}" /> </rule> <rule name="Redirect to www - https" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="on" /> </conditions> <action type="Redirect" url="https://www.localtest.me/{R:1}" /> </rule> Conclusion Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule.  You can use whichever method you most prefer.  I’ve listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice.  In any of the cases, hopefully you can use this as a reference for when you need to use the protocol in the rule’s action when writing your URL Rewrite rules. Further information: Viewing all Server Variable for a site. URL Parts available to URL Rewrite Rules Further URL Rewrite articles

    Read the article

  • TF30004: The New Team Project Wizard encountered an unexpected error while initializing the Microsof

    - by Frozzare
    Hello, I get this error when I try to create a new project in Team Foundation Server. The server address is right and I checked all the ports. I don't know what I should do now and can't find any good information.

    2009-09-19 01:45:41Z | Module: Internal | Team Foundation Server proxy retrieved | Completion time: 0.338 seconds
    2009-09-19 01:45:41Z | Module: Internal | The template information for Team Foundation Server "TFSServer01" was retrieved from the Team Foundation Server. | Completion time: 0.099 seconds
    2009-09-19 01:45:41Z | Module: Wizard | Retrieved IAuthorizationService proxy | Completion time: 0.404 seconds
    2009-09-19 01:45:41Z | Module: Wizard | TF30227: Project creation permissions retrieved | Completion time: 0.015 seconds
    2009-09-19 01:45:44Z | Module: Engine | Thread: 5 | New project will be created with the "MSF for Agile Software Development - v4.2" methodology
    2009-09-19 01:45:44Z | Module: Engine | Retrieved IAuthorizationService proxy | Completion time: 0 seconds
    2009-09-19 01:45:44Z | Module: Engine | TF30227: Project creation permissions retrieved | Completion time: 0.01 seconds
    2009-09-19 01:45:45Z | Module: Engine | Wrote compressed process template file | Completion time: 0.001 seconds
    2009-09-19 01:45:46Z | Module: Engine | Extracted process template file | Completion time: 1.428 seconds
    2009-09-19 01:45:46Z | Module: Engine | Thread: 5 | Starting Project Creation for project "TestProject" in domain "TFSServer01"
    2009-09-19 01:45:46Z | Module: Engine | The user identity information was retrieved from the Group Security Service | Completion time: 0.045 seconds
    2009-09-19 01:45:46Z | Module: Initializer | Thread: 5 | The New Team Project Wizard is starting to initialize the plug-ins.
    2009-09-19 01:45:46Z | Module: CssStructureUploader | Thread: 5 | Entering Initialize in CssStructureUploader
    2009-09-19 01:45:46Z | Module: CssStructureUploader | Thread: 5 | Initialize for CssStructureUploader complete
    2009-09-19 01:45:46Z | Module: Initializer | Thread: 5 | The New Team Project Wizard successfully Initialized the plug-in Microsoft.ProjectCreationWizard.Classification.
    2009-09-19 01:45:46Z | Module: Rosetta | Thread: 5 | Entering Initialize in RosettaReportUploader
    2009-09-19 01:45:48Z | Module: Rosetta | Thread: 5 | Exiting Initialize for RosettaReportUploader
    2009-09-19 01:45:48Z | Module: Initializer | Thread: 5 | The New Team Project Wizard successfully Initialized the plug-in Microsoft.ProjectCreationWizard.Reporting.
    2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Entering Initialize in WssSiteCreator
    2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Site information: Title = "TestProject" Description = "This team project was created based on the 'MSF for Agile Software Development - v4.2' process template."
    2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Base site url: http://TFSServer01:14143/webbplatser
    2009-09-19 01:45:48Z | Module: WSS | Thread: 5 | Admin site url: http://TFSServer01:16183/_vti_adm/admin.asmx
    ---begin Exception entry---
    Time: 2009-09-19 01:46:27 Z
    Module: Initialize
    Event Description: TF30207: Initialization for plugin "Microsoft.ProjectCreationWizard.Portal" failed
    Exception Type: Microsoft.TeamFoundation.Client.PcwException
    Exception Message: The client discovered that the content-type of the request is text/html; charset=utf-8, but text/xml was expected. The request failed with error message: -- Unable to connect to the configuration database. --.
    Stack Trace:
    at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckPermissions(ProjectCreationContext ctxt)
    at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.Initialize(ProjectCreationContext context)
    at Microsoft.VisualStudio.TeamFoundation.EngineStarter.InitializePlugins(MsfTemplate template, PcwPluginCollection pluginCollection)
    -- Inner Exception --
    Exception Type: System.InvalidOperationException
    Exception Message: The client discovered that the content-type of the request is text/html; charset=utf-8, but text/xml was expected. The request failed with error message: -- Unable to connect to the configuration database. --.
    Stack Trace:
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.TeamFoundation.Proxy.Portal.Admin.GetLanguages()
    at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckPermissions(ProjectCreationContext ctxt)
    -- end Inner Exception --
    --- end Exception entry ---
    Thanks for your help.

    Read the article

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception?

    Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell and via PowerShell for MSBuild) rather than 'shelling out'. This was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the PowerShell script. The PowerShell script that calls MSBuild is shown below:

    function Run-MSBuild {
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")
        $engine = New-Object Microsoft.Build.BuildEngine.Engine
        $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"
        $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
        $project.Load("deploy.targets")
        $project.InitialTargets = "DoStuff"

        #
        # Set some initial Properties & Items
        #

        # Optionally set up some loggers (have also tried it without any loggers)
        $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
        $engine.RegisterLogger($consoleLogger)
        $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
        $fileLogger.Parameters = "verbosity=diagnostic"
        $engine.RegisterLogger($fileLogger)

        # Run the build - this is the line that throws a CmdletInvocationException
        $result = $project.Build()

        $engine.Shutdown()
    }

    When running the above script from a PowerShell command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

    // Build the DTO object that will be passed to Powershell
    dto = SetupDTO();

    RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
    using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
    {
        runspace.Open();
        IList errors;
        using (var scriptInvoker = new RunspaceInvoke(runspace))
        {
            // The Powershell script lives in a file that gets compiled as an embedded resource
            TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
            string script = tr.ReadToEnd();

            // Load the script into the Runspace
            scriptInvoker.Invoke(script);

            // Call the function defined in the script, passing the DTO as an input object
            var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
        }
    }

    Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations to the second .Invoke() call:

    var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
    var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
    var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
    var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls being made to it. Do the great and the good of Stack Overflow have any insight that might save my sanity?
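    One diagnostic worth sketching here: CmdletInvocationException wraps whatever exception the pipeline actually threw, so catching it around the Invoke call and dumping its ErrorRecord and inner-exception chain usually names the real culprit. This is a minimal, hedged sketch using the standard System.Management.Automation types, mirroring the invocation line above:

    try
    {
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
    }
    catch (System.Management.Automation.CmdletInvocationException ex)
    {
        // ErrorRecord describes the failure as PowerShell recorded it,
        // including which line of the script raised it
        Console.WriteLine(ex.ErrorRecord.FullyQualifiedErrorId);
        Console.WriteLine(ex.ErrorRecord.InvocationInfo.PositionMessage);

        // Walk the inner exceptions to find the original .NET exception
        for (var inner = ex.InnerException; inner != null; inner = inner.InnerException)
        {
            Console.WriteLine(inner.GetType().FullName + ": " + inner.Message);
        }
    }

    If the ErrorRecord points at the $project.Build() line, the registered ConsoleLogger is worth suspecting, since it writes to a console that a WCF-hosted runspace may not have.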

    Read the article

  • mysql server upgrade problem from 5.0 to 5.1

    - by Avinash
    Hi, I have upgraded my MySQL server from 5.0 to 5.1, but I am having a problem with tables that use the InnoDB storage engine. My default engine is InnoDB, so it is enabled on my server, but tables with the InnoDB engine are not displayed in phpMyAdmin. Tables with MyISAM display properly, and I also can't run a query against any table with the InnoDB engine. Thanks, Avinash
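    For anyone diagnosing the same symptom, a few standard MySQL statements can confirm whether the upgraded 5.1 server actually started the InnoDB engine and how it sees the affected tables. This is only a sketch; the schema name 'mydb' below is a placeholder:

    -- Check whether the upgraded server has InnoDB available/enabled
    SHOW ENGINES;

    -- List the tables the server believes are InnoDB; if the tables'
    -- files are on disk but nothing shows here, InnoDB likely failed
    -- to start, and the reason is usually in the server error log
    SELECT table_name, engine, table_comment
    FROM information_schema.tables
    WHERE table_schema = 'mydb' AND engine = 'InnoDB';

    -- Per-table status; a corrupt or unregistered InnoDB table usually
    -- surfaces an error in the Comment column
    SHOW TABLE STATUS FROM mydb;

    A common cause after a 5.0-to-5.1 upgrade is a changed InnoDB log file size or data directory setting that keeps the engine from initializing, in which case the tables silently fall back to being invisible.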

    Read the article

  • crystal report not working in sharepoint

    - by Kushal
    Hello all. When I try to deploy a Crystal Reports application on a SharePoint server I get the following error. The VS2008 application works fine locally, but when it is deployed on the SharePoint server it gives this type of error. Please help.

    Unknown Query Engine Error
    Error in File C:\WINDOWS\TEMP\EmpReport {C388758D-9AE2-4E60-A4D1-E18BFB6F3703}.rpt: Unknown Query Engine Error
    at CrystalDecisions.ReportAppServer.Controllers.DatabaseControllerClass.ReplaceConnection(Object oldConnection, Object newConnection, Object parameterFields, Object crDBOptionUseDefault)
    at CrystalDecisions.CrystalReports.Engine.Table.SetDataSource(Object val, Type type)
    at CrystalDecisions.CrystalReports.Engine.ReportDocument.SetDataSourceInternal(Object val, Type type)

    I really can't figure out the issue.

    Read the article

  • Is there an HTML browser rendering engine for Ruby?

    - by Jose
    Given a URL, I would like to be able to render the returned HTML and know the width and height of each div, the font size of each piece of text, the color of each element, the position of each element on screen, etc. A possible approach could be traversing the DOM tree with Hpricot and checking CSS styles by parsing the associated stylesheet using the css_parser gem, but this would not account for default styles, inheritance, floats, etc. In Java there's Cobra, a Java web renderer, which is able to render a web page and query attributes like width, font size, etc. for each fragment. I could use Cobra with JRuby or similar solutions, but I would prefer a native Ruby tool. Is there any library like this for Ruby?

    Read the article

  • Factory Pattern - when do you say you need a specialised factory

    - by dbones
    Hi, I am having a little design crisis. I am making a Plane class, which is made up of an engine, two wings, and N seats. The engine takes an engine size, and the wings have a span. Would it still be feasible to have a PlaneFactory when the factory may have to take in multiple parameters to set up the plane (wings, engine, number of seats)? A sketch of what I mean follows below. Thanks in advance, bones
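    To make the question concrete, here is one hedged sketch of such a factory; all type and member names are illustrative, derived only from the description above, not from any real API. The factory takes just the varying inputs and encapsulates the construction rules (always two wings, N identical seats):

    using System.Collections.Generic;
    using System.Linq;

    public class Engine
    {
        public int Size { get; }
        public Engine(int size) { Size = size; }
    }

    public class Wing
    {
        public double Span { get; }
        public Wing(double span) { Span = span; }
    }

    public class Seat { }

    public class Plane
    {
        public Engine Engine { get; }
        public IReadOnlyList<Wing> Wings { get; }
        public IReadOnlyList<Seat> Seats { get; }

        // Callers are expected to go through the factory
        public Plane(Engine engine, IReadOnlyList<Wing> wings, IReadOnlyList<Seat> seats)
        {
            Engine = engine;
            Wings = wings;
            Seats = seats;
        }
    }

    public static class PlaneFactory
    {
        // Only the varying parts are parameters; the assembly rules live here
        public static Plane Create(int engineSize, double wingSpan, int seatCount)
        {
            var engine = new Engine(engineSize);
            var wings = new[] { new Wing(wingSpan), new Wing(wingSpan) };
            var seats = Enumerable.Range(0, seatCount).Select(_ => new Seat()).ToArray();
            return new Plane(engine, wings, seats);
        }
    }

    If the parameter list keeps growing, a builder or a single parameter object tends to read better than widening the factory method.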

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was:

    class Customer : EntityBase<Customer>
    {
        public void ChangeEmail(string email)
        {
            if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
            if (!email.IsEmail()) throw new DomainException();
            if (email.Contains("@mailinator.com")) throw new DomainException();
        }
    }

    I actually do not like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex.

    So, following Udi Dahan's philosophy of making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this:

    class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
    {
        private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

        public bool IsSatisfiedBy(Customer customer)
        {
            return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
        }
    }

    But then I realized that in order to follow this approach I had to mutate my entities first, in order to pass in the value being validated (in this case the email), and mutating them would cause my domain events to be fired, which I wouldn't want to happen until the new email is valid.

    So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:

    class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
    {
        public void IsValid(Customer entity, ChangeEmailCommand command)
        {
            if (!command.Email.HasValidDomain()) throw new DomainException("...");
        }
    }

    That's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects they can have external dependencies injected if the validation requires it.

    Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed in the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be composed to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue this smells of an Anemic Domain Model), I think the trade-off is acceptable. But there is one thing that I have not figured out how to implement in a clean way: how should I use these components?
    Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options: pass the validators to each method of my entity, or validate my objects externally (from the command handler). I am not happy with option 1, so I will explain how I would do it with option 2:

    class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
    {
        // The validators required for this command would be injected here,
        // along with the unit of work
        private IEnumerable<IDomainInvariantValidator> validators;
        private IUnitOfWork unitOfWork;

        public void Execute(ChangeEmailCommand command)
        {
            using (var t = this.unitOfWork.BeginTransaction())
            {
                var customer = this.unitOfWork.Get<Customer>(command.CustomerId);

                // Run every validator against the entity and the command
                foreach (var validator in this.validators)
                    validator.IsValid(customer, command);

                // Here I know the command is valid;
                // the call to ChangeEmail will fire domain events as needed
                customer.ChangeEmail(command.Email);
                t.Commit();
            }
        }
    }

    Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?

    EDIT: I think it is not clear from my question, but the real problem is that hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of an app. Hence, implementing them with this in mind would let us extend them easily. Now imagine that in the future a rules engine is implemented: if the rules are encapsulated outside of the domain entities, this change would be easier to implement.

    Read the article

  • Can I suppress newlines after each template tag with Django's template engine?

    - by ento
    In Rails ERB, you can suppress newlines by adding a trailing hyphen to tags:

    <ul>
    <% for @item in @items -%>
      <li><%= @item %></li>
    <% end -%>
    </ul>

    becomes:

    <ul>
      <li>apple</li>
      <li>banana</li>
      <li>cacao</li>
    </ul>

    Is there a way to do this in Django? (Disclosure: I'm generating a CSV file with Django.)

    Edit: To clarify, the newlines I'm hunting down are the ones left behind after the template tags.
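    For the record, Django's template language has no counterpart to ERB's trailing hyphen, so the usual workaround is to arrange the template so each tag and its output share a physical line, leaving no tag-only lines to emit newlines. A sketch for the CSV case (field names are illustrative):

    {% for row in rows %}{{ row.name }},{{ row.value }}
    {% endfor %}

    Because the {% for %} tag and the row output sit on one line, the only newline inside the loop body is the one that terminates each CSV row; the newline after {% endfor %} is emitted just once, at the end of the file. Note that Django's {% spaceless %} tag is not a substitute here, since it only removes whitespace between HTML tags.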

    Read the article

  • How can I use a cron job in Social Engine?

    - by Rajendra Banker
    Hi all, I am new to cron jobs in PHP. Basically, I want to send email to users at certain intervals: daily, weekly, monthly, quarterly, yearly, or every specific number of days. I want to use this type of function from a Smarty template. Does anybody know how to do this?
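    For context, the usual pattern is to let cron invoke a PHP script for each schedule and have the script do the mailing. The crontab entries below are a hedged sketch with hypothetical paths and script names (the fields are minute, hour, day-of-month, month, day-of-week):

    # Adjust the PHP binary path and script locations for your install
    0 6 * * *        /usr/bin/php /var/www/socialengine/cron/send_daily_mail.php
    0 6 * * 1        /usr/bin/php /var/www/socialengine/cron/send_weekly_mail.php
    0 6 1 * *        /usr/bin/php /var/www/socialengine/cron/send_monthly_mail.php
    0 6 1 1,4,7,10 * /usr/bin/php /var/www/socialengine/cron/send_quarterly_mail.php
    0 6 1 1 *        /usr/bin/php /var/www/socialengine/cron/send_yearly_mail.php

    "Every N days" intervals are typically handled by running a daily job that checks a last-sent date stored per user, rather than by cron syntax alone.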

    Read the article

  • Windows 2008 R2 IPsec encryption in tunnel mode, hosts in same subnet

    - by fission
    In Windows there appear to be two ways to set up IPsec:

    1. The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000).
    2. The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista).

    My question concerns #2 – I already figured out what I need to know for #1. (But I want to use the 'new' snap-in for its improved encryption capabilities.)

    I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet:

    server2 172.16.11.20
    server3 172.16.11.30

    My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is: IP / ESP / IP / …etc.

    First, on each computer, I created a Connection Security Rule:

    Endpoint 1: (local IP address), e.g. 172.16.11.20 for server2
    Endpoint 2: (remote IP address), e.g. 172.16.11.30
    Protocol: Any
    Authentication: Require inbound and outbound, Computer (Kerberos V5)
    IPsec tunnel: Exempt IPsec protected connections
    Local tunnel endpoint: Any
    Remote tunnel endpoint: (remote IP address), e.g. 172.16.11.30

    At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting Attempt to detect/decode NULL encrypted ESP payloads) and the Monitoring > Security Associations > Quick Mode display shows ESP Encryption: None.

    Then on each server, I created Inbound and Outbound Rules:

    Protocol: Any
    Local IP addresses: (local IP address), e.g. 172.16.11.20
    Remote IP addresses: (remote IP address), e.g. 172.16.11.30
    Action: Allow the connection if it is secure, Require the connections to be encrypted

    The problem: though I created the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.) When the traffic arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen – fun!] The Windows Firewall log says, e.g.:

    2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE

    I've tried varying a few things:

    In the Rules, setting the local IP address to Any
    Toggling the Exempt IPsec protected connections setting
    Disabling rules (e.g. disabling one or both sets of Inbound or Outbound rules)
    Changing the protocol (e.g. to just TCP)

    But realistically there aren't that many knobs to turn. Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall? I've successfully got it set up in transport mode (i.e. no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.

    Read the article

  • Javascript (and HTML rendering) engine without a GUI for automation?

    - by MTsoul
    Are there any libraries or frameworks that provide the functionality of a browser but do not need to actually render physically onto the screen? I want to automate navigation on web pages (Mechanize does this, for example), but I want the full browser experience, including JavaScript. Thus, I'd like to have a virtual browser of some sort that I can use to "click on links" programmatically, have DOM elements and JS scripts render within it, and manipulate these elements. A solution in Python is preferred, but I can manage others.

    Read the article

  • Javascript on a desktop wallpaper (XP)

    - by Arcath
    I have a desktop wallpaper shipped out by my domain; it's an HTML wallpaper that displays information like today's date, etc. It also shows the internet usage rules. Using jQuery I've made it so that the rules are hidden and then displayed when you click on the section heading. The JS runs fine and it hides the rules, but it appears that once the desktop "loses focus" the JavaScript stops and clicking the headings does nothing. Does anyone know a way to keep the JavaScript running?

    Read the article
