Search Results


  • How can you force a floating div to be the height of its parent?

    - by ErnieStings
    HTML markup:

    <div class="planRisk">
      <div class="innerPlanRiskRight">
        <div class="rmPlanFrequency">10</div>
        <div class="rmPlanSeverity">5</div>
        <div class="rmPlanRiskFactor">50</div>
        <div class="rmPlanNumSolutions">2</div>
        <div class="rmPlanPercentComplete">34%</div>
        <div class="rmPlanDeletePlanRisk">X</div>
      </div>
      <div class="rmPlanRiskTitle">Pandemic Influenza</div>
    </div>

    CSS:

    .planRisk{background-color:#DEECD1; border:1px solid #BEBEBE;}
    .innerPlanRiskRight{float:right; color:#000000;}
    .rmPlanFrequency{float:left; width:46px; background-color:#d9dee1; text-align:center; border-right:1px solid #ebebeb; padding:0.2em;}
    .rmPlanSeverity{float:left; width:46px; background-color:#dbe1d4; text-align:center; border-right:1px solid #ebebeb; padding:0.2em;}
    .rmPlanRiskFactor{float:left; width:46px; background-color:#e5d5da; text-align:center; border-right:1px solid #ebebeb; padding:0.2em;}
    .rmPlanNumSolutions{float:left; width:46px; background-color:#dae4e4; text-align:center; border-right:1px solid #ebebeb; padding:0.2em;}
    .rmPlanPercentComplete{float:left; width:46px; background-color:#dddddd; text-align:center; padding:0.2em;}
    .rmPlanDeletePlanRisk{float:left; width:30px; background-color:#DEECD1; text-align:center; padding:0.2em;}
    .rmPlanRiskTitle{padding:0.2em;}
    .rmPlanSolutionContainer{background-color:#f0f9e8; border: 0 1px 1px; border-left:1px solid #CDCDCD; border-right:1px solid #cdcdcd;}
    .innerSolutionRight{float:right;}
    .rmPlanSolution{border-bottom:1px solid #CDCDCD; padding-left:1em;}
    .rmPlanSolutionPercentComplete{float:left; width:46px; background-color:#E2EADA; padding-left:0.2em; padding-right:0.2em; text-align:center;}
    .rmPlanDeleteSolution{float:left; width:30px; text-align:center; padding-left:0.2em; padding-right:0.2em;}
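    One common fix (a minimal sketch of mine, not from the original question; it assumes a browser with flexbox support) is to make the parent a flex container, so each column stretches to the parent's height instead of floating:

    /* Flex children stretch to the tallest sibling by default
       (align-items: stretch), so the right-hand block and its cells match
       the height of .planRisk. Floats inside a flex container are ignored. */
    .planRisk { display: flex; }
    .rmPlanRiskTitle { flex: 1; }                    /* title takes the remaining width */
    .innerPlanRiskRight { display: flex; order: 2; } /* keeps the cells on the right, full height */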


  • NHibernate child deletion problem.

    - by JMSA
    Suppose I have saved some permissions in the database using this code:

    RoleRepository roleRep = new RoleRepository();
    Role role = new Role();
    role.PermissionItems = Permission.GetList();
    roleRep.SaveOrUpdate(role);

    Now I need this code to delete the PermissionItem(s) associated with a Role when role.PermissionItems == null. Here is the code:

    RoleRepository roleRep = new RoleRepository();
    Role role = roleRep.Get(roleId);
    role.PermissionItems = null;
    roleRep.SaveOrUpdate(role);

    But the deletion is not happening. What is the correct way to cope with this situation? What should I change: the hbm files or the persistence code?

    Role.cs:

    public class Role
    {
        public virtual string RoleName { get; set; }
        public virtual bool IsActive { get; set; }
        public virtual IList<Permission> PermissionItems { get; set; }
    }

    Role.hbm.xml:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO">
      <class name="Role" table="Role">
        <id name="ID" column="ID">
          <generator class="native" />
        </id>
        <property name="RoleName" column="RoleName" />
        <property name="IsActive" column="IsActive" type="System.Boolean" />
        <bag name="PermissionItems" table="Permission" cascade="all" inverse="true">
          <key column="RoleID"/>
          <one-to-many class="Permission" />
        </bag>
      </class>
    </hibernate-mapping>

    Permission.cs:

    public class Permission
    {
        public virtual string MenuItemKey { get; set; }
        public virtual int RoleID { get; set; }
        public virtual Role Role { get; set; }
    }

    Permission.hbm.xml:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO">
      <class name="Permission" table="Permission">
        <id name="ID" column="ID">
          <generator class="native"/>
        </id>
        <property name="MenuItemKey" column="MenuItemKey" />
        <property name="RoleID" column="RoleID" />
        <many-to-one name="Role" column="RoleID" not-null="true" cascade="all" />
      </class>
    </hibernate-mapping>
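    A likely fix (a sketch of mine, not part of the original question): with inverse="true", NHibernate tracks changes from the Permission side, and setting the parent's collection to null does not mark the children as orphans. Mapping the bag with cascade="all-delete-orphan" and clearing the collection instead of nulling it usually gets the children deleted:

    <bag name="PermissionItems" table="Permission" cascade="all-delete-orphan" inverse="true">
      <key column="RoleID"/>
      <one-to-many class="Permission" />
    </bag>

    // Clear the collection rather than setting it to null, so NHibernate
    // can see which children were removed.
    Role role = roleRep.Get(roleId);
    role.PermissionItems.Clear();
    roleRep.SaveOrUpdate(role);

    Dropping cascade="all" from the many-to-one on the Permission side is also worth considering, since cascading from child back to parent is rarely what's intended.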


  • Java: how to avoid circular references when dumping object information with reflection?

    - by Tom
    I've modified an object dumping method to avoid circular references causing a StackOverflowError. This is what I ended up with:

    // returns all fields of the given object in a string
    public static String dumpFields(Object o, int callCount, ArrayList excludeList) {
        // add this object to the exclude list to avoid circular references in the future
        if (excludeList == null) excludeList = new ArrayList();
        excludeList.add(o);
        callCount++;
        StringBuffer tabs = new StringBuffer();
        for (int k = 0; k < callCount; k++) {
            tabs.append("\t");
        }
        StringBuffer buffer = new StringBuffer();
        Class oClass = o.getClass();
        if (oClass.isArray()) {
            buffer.append("\n");
            buffer.append(tabs.toString());
            buffer.append("[");
            for (int i = 0; i < Array.getLength(o); i++) {
                if (i > 0) buffer.append(",");
                Object value = Array.get(o, i);
                if (value != null) {
                    if (excludeList.contains(value)) {
                        buffer.append("circular reference");
                    } else if (value.getClass().isPrimitive()
                            || value.getClass() == java.lang.Long.class
                            || value.getClass() == java.lang.String.class
                            || value.getClass() == java.lang.Integer.class
                            || value.getClass() == java.lang.Boolean.class) {
                        buffer.append(value);
                    } else {
                        buffer.append(dumpFields(value, callCount, excludeList));
                    }
                }
            }
            buffer.append(tabs.toString());
            buffer.append("]\n");
        } else {
            buffer.append("\n");
            buffer.append(tabs.toString());
            buffer.append("{\n");
            while (oClass != null) {
                Field[] fields = oClass.getDeclaredFields();
                for (int i = 0; i < fields.length; i++) {
                    if (fields[i] == null) continue;
                    buffer.append(tabs.toString());
                    fields[i].setAccessible(true);
                    buffer.append(fields[i].getName());
                    buffer.append("=");
                    try {
                        Object value = fields[i].get(o);
                        if (value != null) {
                            if (excludeList.contains(value)) {
                                buffer.append("circular reference");
                            } else if (value.getClass().isPrimitive()
                                    || value.getClass() == java.lang.Long.class
                                    || value.getClass() == java.lang.String.class
                                    || value.getClass() == java.lang.Integer.class
                                    || value.getClass() == java.lang.Boolean.class) {
                                buffer.append(value);
                            } else {
                                buffer.append(dumpFields(value, callCount, excludeList));
                            }
                        }
                    } catch (IllegalAccessException e) {
                        System.out.println("IllegalAccessException: " + e.getMessage());
                    }
                    buffer.append("\n");
                }
                oClass = oClass.getSuperclass();
            }
            buffer.append(tabs.toString());
            buffer.append("}\n");
        }
        return buffer.toString();
    }

    The method is initially called like this:

    System.out.println(dumpFields(obj, 0, null));

    So, basically, I added an excludeList which contains all the previously checked objects. Now, if an object contains another object, and that object links back to the original object, it should not follow that object further down the chain. However, my logic seems to have a flaw, as I still get stuck in an infinite loop. Does anyone know why this is happening?
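    A plausible cause (my diagnosis, not part of the original question): ArrayList.contains() compares with equals(), so two distinct objects that compare equal, or an equals()/hashCode() implementation that itself walks the object graph, can defeat the exclude list. Tracking visited objects by reference identity avoids both problems. A minimal sketch:

    import java.util.Collections;
    import java.util.IdentityHashMap;
    import java.util.Set;

    public class CycleCheckDemo {
        public static void main(String[] args) {
            // A set that compares elements by reference (==), not by equals().
            Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());
            Object a = new String("x");
            Object b = new String("x");         // equal to a, but a different object
            System.out.println(visited.add(a)); // true  - first visit
            System.out.println(visited.add(b)); // true  - distinct object, not a cycle
            System.out.println(visited.add(a)); // false - a genuine revisit is detected
        }
    }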


  • Java JNI leak in C++ process

    - by user662056
    Hi all. I am a beginner in Java. My problem is: I am calling a Java class's method from C++ using JNI. Everything works correctly, but I have a memory leak in the C++ process. So I made a simple example:

    1) I create a Java VM (jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);)
    2) Then I get a pointer to a Java class (jclass cls = env->FindClass("test_jni");)
    3) After that I create an instance of the Java class by calling its constructor (testJavaObject = env->NewObject(cls, testConstruct);). At this very moment, 10 MB of memory are allocated in the C++ process.
    4) Next I delete the class, the object, and the Java VM. At this moment, the 10 MB of memory are not freed.

    Below is the C++ program:

    void main()
    {
        {
            // Env
            JNIEnv *env;
            // Java virtual machine
            JavaVM *jvm;
            JavaVMOption* options = new JavaVMOption[1];
            // class paths
            options[0].optionString = "-Djava.class.path=C:/Sun/SDK/jdk/lib;D:/jms_test/java_jni_leak;";
            // other options
            JavaVMInitArgs vm_args;
            vm_args.version = JNI_VERSION_1_6;
            vm_args.options = options;
            vm_args.nOptions = 1;
            vm_args.ignoreUnrecognized = false;

            // alloc part of memory (for test) before CreateJavaVM
            char* testMem0 = new char[1000];
            for(int i = 0; i < 1000; ++i) testMem0[i] = 'a';

            // create the Java VM
            jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);

            // alloc part of memory (for test) after CreateJavaVM
            char* testMem1 = new char[1000];
            for(int i = 0; i < 1000; ++i) testMem1[i] = 'b';

            // find the Java class
            jclass cls = env->FindClass("test_jni");
            // id of the class constructor
            jmethodID testConstruct = env->GetMethodID(cls, "<init>", "()V");
            // the Java object; calling the constructor allocates 10 MB of memory in the C++ process
            jobject testJavaObject = env->NewObject(cls, testConstruct);

            // at this very moment the memory is not freed
            env->DeleteLocalRef(testJavaObject);
            env->DeleteLocalRef(cls);

            res = jvm->DestroyJavaVM();

            delete[] testMem0;
            delete[] testMem1;
            // at this very moment the memory is still not freed
        }
        int gg = 0;
    }

    The Java class (it just allocates some memory):

    import java.util.*;

    public class test_jni
    {
        ArrayList<String> testStringList;

        test_jni()
        {
            System.out.println("start constructor");
            testStringList = new ArrayList<String>();
            for(int i = 0; i < 1000000; ++i)
            {
                // fill the list to allocate some memory
                testStringList.add("TEEEEEEEEEEEEEEEEST");
            }
        }
    }

    Process memory view after creating the JavaVM and the Java object (testMem0 and testMem1 are test allocations made by the C++ code):

    **************
    testMem0
    **************
    JNI_CreateJavaVM
    **************
    testMem1
    **************
    // create java object
    jobject testJavaObject = env->NewObject(cls, testConstruct);
    **************

    Process memory view after destroying the JavaVM and deleting the reference to the Java object (testMem0 and testMem1 are deleted too):

    **************
    JNI_CreateJavaVM
    **************
    // create java object
    jobject testJavaObject = env->NewObject(cls, testConstruct);
    **************

    So testMem0 and testMem1 are deleted, but the memory for the JavaVM and the Java object is not. What am I doing wrong, and how can I free that memory in the C++ process?
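    For context, a common explanation (my note, not from the original post) is that the 10 MB is the Java heap that the JVM reserves inside the process: DestroyJavaVM does not hand that memory back to the OS, and the classic Sun/Oracle JVM does not support destroying and re-creating a VM in the same process at all. One mitigation is to cap the heap when creating the VM; a sketch against the question's setup (-Xms/-Xmx are standard HotSpot options, the sizes here are illustrative):

    // Sketch: pass heap-size options to JNI_CreateJavaVM so the VM
    // reserves less memory inside the C++ process.
    JavaVMOption* options = new JavaVMOption[3];
    options[0].optionString = "-Djava.class.path=D:/jms_test/java_jni_leak;";
    options[1].optionString = "-Xms16m"; // initial Java heap
    options[2].optionString = "-Xmx64m"; // maximum Java heap the VM may reserve
    vm_args.nOptions = 3;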


  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed.

    In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ.

    LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

    double min = collection
        .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
        .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database.

    PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations.

    This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created.  Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather on any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples:

    double min = collection
        .AsParallel()
        .Select(item => item.SomeOperation())
        .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
        .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

    double min = collection
        .Select(item => item.SomeOperation())
        .AsParallel()
        .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
        .Min(item => item.PerformComputation());

    In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel.

    PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

    double min = collection
        .AsParallel()
        .Select(item => item.SomeOperation())
        .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
        .AsEnumerable()
        .Min(item => item.PerformComputation());

    Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially.

    This could also be written as two statements, which would allow us to use the language integrated syntax for the first portion:

    var tempCollection = from item in collection.AsParallel()
                         let e = item.SomeOperation()
                         where (e.SomeProperty > 6 && e.SomeProperty < 24)
                         select e;
    double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ-style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately.

    The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of the source collection.  This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request that the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved.

    For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.
    In LINQ to Objects, our code might look something like:

    // Using IEnumerable<SourceClass> collection
    IEnumerable<ResultClass> results = collection
        .Select(e => e.CreateResult())
        .Take(100);

    If we just converted this to a parallel query naively, like so:

    IEnumerable<ResultClass> results = collection
        .AsParallel()
        .Select(e => e.CreateResult())
        .Take(100);

    We could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use:

    IEnumerable<ResultClass> results = collection
        .AsParallel()
        .AsOrdered()
        .Select(e => e.CreateResult())
        .Take(100);

    This requests that PLINQ process our sequence in a way that guarantees that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness.

    PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
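    As a quick, self-contained illustration of the ordering difference (a sketch of mine, not from the article):

    using System;
    using System.Linq;

    class OrderingDemo
    {
        static void Main()
        {
            var source = Enumerable.Range(0, 10);

            // Unordered parallel query: the order of the results is not guaranteed.
            var unordered = source.AsParallel().Select(x => x * x).ToArray();

            // AsOrdered() preserves the source ordering, at some bookkeeping cost.
            var ordered = source.AsParallel().AsOrdered().Select(x => x * x).ToArray();

            Console.WriteLine(string.Join(",", unordered)); // e.g. 16,25,0,1,... on multiple cores
            Console.WriteLine(string.Join(",", ordered));   // always 0,1,4,9,...
        }
    }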


  • Anti-Forgery Request Recipes For ASP.NET MVC And AJAX

    - by Dixin
    Background

    To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism:

    - The server prints tokens to a cookie and inside the form;
    - When the form is submitted to the server, the token in the cookie and the token inside the form are both sent with the HTTP request;
    - The server validates the tokens.

    To print tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

    <% using (Html.BeginForm()) { %>
        <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%>
        <%-- Other fields. --%>
        <input type="submit" value="Submit" />
    <% } %>

    This invocation generates a token, then writes it inside the form:

    <form action="..." method="post">
        <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" />
        <!-- Other fields. -->
        <input type="submit" value="Submit" />
    </form>

    and also writes it into the cookie:

    __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

    When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify which controllers or actions validate them:

    [HttpPost]
    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public ActionResult Action(/* ... */)
    {
        // ...
    }

    This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on the controller (not on each action)

    The server-side problem is that it would be natural to declare [ValidateAntiForgeryToken] once on the controller, but it actually has to be declared on each POST action. Because there are usually many more POST actions than controllers, the work gets a little crazy.

    Problem

    Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually only the HTTP POST requests should be validated. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:

    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute.
    {
        [HttpGet]
        public ActionResult Index() // Index() cannot work.
        {
            // ...
        }

        [HttpPost]
        public ActionResult PostAction1(/* ... */)
        {
            // ...
        }

        [HttpPost]
        public ActionResult PostAction2(/* ... */)
        {
            // ...
        }

        // ...
    }

    If the browser sends an HTTP GET request by clicking a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:

    public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes.
    {
        [HttpGet]
        public ActionResult Index() // Works.
        {
            // ...
        }

        [HttpPost]
        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public ActionResult PostAction1(/* ... */)
        {
            // ...
        }

        [HttpPost]
        [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
        public ActionResult PostAction2(/* ... */)
        {
            // ...
        }

        // ...
    }

    This is a little bit crazy, because one application can have a lot of POST actions.
    Solution

    To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful; it lets the HTTP verbs to validate be specified:

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
    public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
    {
        private readonly ValidateAntiForgeryTokenAttribute _validator;
        private readonly AcceptVerbsAttribute _verbs;

        public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs)
            : this(verbs, null)
        {
        }

        public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt)
        {
            this._verbs = new AcceptVerbsAttribute(verbs);
            this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
        }

        public void OnAuthorization(AuthorizationContext filterContext)
        {
            string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
            if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
            {
                this._validator.OnAuthorization(filterContext);
            }
        }
    }

    When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

    [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
    public class SomeController : Controller
    {
        // GET actions are not affected.
        // Only HTTP POST requests are validated.
    }

    Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute; that would be easy to implement.

    Specify a non-constant salt at runtime

    By default, the salt has to be a compile-time constant, so it can be used with the [ValidateAntiForgeryToken] or [ValidateAntiForgeryTokenWrapper] attribute.

    Problem

    One Web product might be sold to many clients. If a constant salt is evaluated at compile time, then after the product is built and deployed to many clients, they all have the same salt. Of course, clients do not like this. Some clients might even want to specify a custom salt in configuration. In these scenarios, the salt is required to be a runtime value.

    Solution

    In the above [ValidateAntiForgeryToken] and [ValidateAntiForgeryTokenWrapper] attributes, the salt is passed through the constructor. So one solution is to remove this parameter:

    public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
    {
        public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs)
        {
            this._verbs = new AcceptVerbsAttribute(verbs);
            this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = AntiForgeryToken.Value };
        }

        // Other members.
    }

    But here the injected dependency becomes a hard dependency.
    So the other solution is to move the validation code into the controller, to work around the limitation of attributes:

    public abstract class AntiForgeryControllerBase : Controller
    {
        private readonly ValidateAntiForgeryTokenAttribute _validator;
        private readonly AcceptVerbsAttribute _verbs;

        protected AntiForgeryControllerBase(HttpVerbs verbs, string salt)
        {
            this._verbs = new AcceptVerbsAttribute(verbs);
            this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
        }

        protected override void OnAuthorization(AuthorizationContext filterContext)
        {
            base.OnAuthorization(filterContext);
            string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
            if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
            {
                this._validator.OnAuthorization(filterContext);
            }
        }
    }

    Then make the controller classes inherit from this AntiForgeryControllerBase class. Now the salt is no longer required to be a compile-time constant.

    Submit the token via AJAX

    On the browser side, once the server turns on anti-forgery validation for HTTP POST, all AJAX POST requests will fail by default.

    Problem

    In AJAX scenarios, the HTTP POST request is not sent by a form. Take jQuery as an example:

    $.post(url, {
        productName: "Tofu",
        categoryId: 1 // Token is not posted.
    }, callback);

    This kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

    Solution

    Basically, the tokens must be printed to the browser and then sent back to the server. So, first of all, HtmlHelper.AntiForgeryToken() needs to be called somewhere, so the browser has the token in both the HTML and the cookie. Then jQuery must find the printed token in the HTML and append it to the data before sending:

    $.post(url, {
        productName: "Tofu",
        categoryId: 1,
        __RequestVerificationToken: getToken() // Token is posted.
    }, callback);

    To be reusable, this can be encapsulated into a tiny jQuery plugin:

    /// <reference path="jquery-1.4.2.js" />
    (function ($) {
        $.getAntiForgeryToken = function (tokenWindow, appPath) {
            // HtmlHelper.AntiForgeryToken() must be invoked to print the token.
            tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window;
            appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : "";
            // The name attribute is either __RequestVerificationToken,
            // or __RequestVerificationToken_{appPath}.
            var tokenName = "__RequestVerificationToken" + appPath;
            // Finds the <input type="hidden" name={tokenName} value="..." /> in the specified window.
            // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']");
            var inputElements = tokenWindow.document.getElementsByTagName("input");
            for (var i = 0; i < inputElements.length; i++) {
                var inputElement = inputElements[i];
                if (inputElement.type === "hidden" && inputElement.name === tokenName) {
                    return { name: tokenName, value: inputElement.value };
                }
            }
            return null;
        };

        $.appendAntiForgeryToken = function (data, token) {
            // Converts data if not already a string.
            if (data && typeof data !== "string") {
                data = $.param(data);
            }
            // Gets the token from the current window by default.
            token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window).
            data = data ? data + "&" : "";
            // If the token exists, appends {token.name}={token.value} to data.
            return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data;
        };

        // Wraps $.post(url, data, callback, type).
        $.postAntiForgery = function (url, data, callback, type) {
            return $.post(url, $.appendAntiForgeryToken(data), callback, type);
        };

        // Wraps $.ajax(settings).
        $.ajaxAntiForgery = function (settings) {
            settings.data = $.appendAntiForgeryToken(settings.data);
            return $.ajax(settings);
        };
    })(jQuery);

    In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and $.ajax() with $.ajaxAntiForgery():

    $.postAntiForgery(url, {
        productName: "Tofu",
        categoryId: 1
    }, callback); // Token is posted.

    There might be some scenarios with a custom token, where $.appendAntiForgeryToken() is useful:

    data = $.appendAntiForgeryToken(data, token);
    // Token is already in data. No need to invoke $.postAntiForgery().
    $.post(url, data, callback);

    And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here, the token's container window can be specified for $.getAntiForgeryToken():

    data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent));
    // Token is already in data. No need to invoke $.postAntiForgery().
    $.post(url, data, callback);

    If you have a better solution, please do tell me.


  • Loading a Template From a User Control

    - by Ricardo Peres
    What if you wanted to load a template (an ITemplate property) from an external user control (.ascx) file? Yes, it is possible; there are a number of ways to do it. The one I'll talk about here is through a type converter: you apply a TypeConverterAttribute to your ITemplate property, specifying a custom type converter that does the job. This type converter relies on InstanceDescriptor. Here is the code for it:

    public class TemplateTypeConverter : TypeConverter
    {
        public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        {
            return ((sourceType == typeof(String)) || (base.CanConvertFrom(context, sourceType) == true));
        }

        public override Boolean CanConvertTo(ITypeDescriptorContext context, Type destinationType)
        {
            return ((destinationType == typeof(InstanceDescriptor)) || (base.CanConvertTo(context, destinationType) == true));
        }

        public override Object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType)
        {
            if (destinationType == typeof(InstanceDescriptor))
            {
                Object objectFactory = value.GetType().GetField("_objectFactory", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(value);
                Object builtType = objectFactory.GetType().BaseType.GetField("_builtType", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(objectFactory);
                MethodInfo loadTemplate = typeof(TemplateTypeConverter).GetMethod("LoadTemplate");

                return (new InstanceDescriptor(loadTemplate, new Object[] { "~/" + (builtType as Type).Name.Replace('_', '/').Replace("/ascx", ".ascx") }));
            }

            return base.ConvertTo(context, culture, value, destinationType);
        }

        public static ITemplate LoadTemplate(String virtualPath)
        {
            using (Page page = new Page())
            {
                return (page.LoadTemplate(virtualPath));
            }
        }
    }

    And, on your control:

    public class MyControl : Control
    {
        [Browsable(false)]
        [TypeConverter(typeof(TemplateTypeConverter))]
        public ITemplate Template { get; set; }
    }

    This allows the template to be declared in markup and loaded from an external .ascx file. Hope this helps!
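    The markup sample from the original post did not survive transcription. Given that the converter accepts strings, a plausible declaration might look like the following (the "my" tag prefix and the .ascx path are assumptions for illustration, not the author's original sample):

    <%-- Hypothetical markup: the TypeConverter turns the string into an
         ITemplate by calling Page.LoadTemplate on the virtual path. --%>
    <my:MyControl runat="server" Template="~/Templates/SomeTemplate.ascx" />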


  • Adding objects to the environment at timed intervals

    - by david
    I am using an ArrayList to handle objects, and at each interval of 120 frames I am adding a new object of the same type at a random location along the z-axis, at x = 60. The problem is, it doesn't add just one; how many get added depends on how many are in the list. If I kill the Fox before the time interval when one is supposed to spawn, then no Fox is spawned at all. If I don't kill any foxes, the population grows exponentially. I only want one Fox to be added every 120 frames. This problem never happened before, when I created new ones and added them to the environment directly. Any insights? Here is my code:

    /**** FOX CLASS ****/

    import env3d.EnvObject;
    import java.util.ArrayList;

    public class Fox extends Creature {

        private int frame = 0;

        public Fox(double x, double y, double z) {
            super(x, y, z);
            // Must use the mutators, as the fields have private access
            // in the parent class.
            setTexture("models/fox/fox.png");
            setModel("models/fox/fox.obj");
            setScale(1.4);
        }

        public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) {
            frame++;
            setX(getX() - 0.2);
            setRotateY(270);

            if (frame > 120) {
                Fox f = new Fox(60, 1, (int) (Math.random() * 28) + 1);
                new_creatures.add(f);
                frame = 0;
            }

            for (Creature c : creatures) {
                if (this.distance(c) < this.getScale() + c.getScale() && c instanceof Tux) {
                    dead_creatures.add(c);
                }
            }

            for (Creature c : creatures) {
                if (c.getX() < 1 && c instanceof Fox) {
                    dead_creatures.add(c);
                }
            }
        }
    }

    /**** GAME CLASS ****/

    import env3d.Env;
    import java.util.ArrayList;
    import org.lwjgl.input.Keyboard;

    /**
     * A predator and prey simulation. Fox is the predator and Tux is the prey.
     */
    public class Game {

        private Env env;
        private boolean finished;
        private ArrayList<Creature> creatures;
        private KingTux king;
        private Snowball ball;
        private int tuxcounter;
        private int kills;

        /**
         * Constructor for the Game class. It sets up the foxes and tuxes.
         */
        public Game() {
            // We use a separate ArrayList to keep track of each animal.
            // Our room is 50 x 50.
            creatures = new ArrayList<Creature>();
            for (int i = 0; i < 10; i++) {
                creatures.add(new Tux((int) (Math.random() * 10) + 1, 1, (int) (Math.random() * 28) + 1));
            }
            for (int i = 0; i < 1; i++) {
                creatures.add(new Fox(60, 1, (int) (Math.random() * 28) + 1));
            }
            king = new KingTux(25, 1, 35);
            ball = new Snowball(-400, -400, -400);
        }

        /**
         * Play the game.
         */
        public void play() {
            finished = false;

            // Create the new environment. Must be done in the same
            // method as the game loop.
            env = new Env();

            // Make the room 50 x 50.
            env.setRoom(new Room());

            // Add all the animals into the environment for display.
            for (Creature c : creatures) {
                env.addObject(c);
            }
            for (Creature c : creatures) {
                if (c instanceof Tux) {
                    tuxcounter++;
                }
            }
            env.addObject(king);
            env.addObject(ball);

            // Set up the camera.
            env.setCameraXYZ(30, 50, 55);
            env.setCameraPitch(-63);

            // Turn off the default controls.
            env.setDefaultControl(false);

            // Lists to keep track of dead and newly spawned creatures.
            ArrayList<Creature> dead_creatures = new ArrayList<Creature>();
            ArrayList<Creature> new_creatures = new ArrayList<Creature>();

            // The main game loop.
            while (!finished) {
                if (env.getKey() == 1 || tuxcounter == 0) {
                    finished = true;
                }
                env.setDisplayStr("Tuxes: " + tuxcounter, 15, 0);
                env.setDisplayStr("Kills: " + kills, 140, 0);
                processInput();
                ball.move();
                king.check();

                // Move each fox and tux.
                for (Creature c : creatures) {
                    c.move(creatures, dead_creatures, new_creatures);
                }

                for (Creature c : creatures) {
                    if (c.distance(ball) < c.getScale() + ball.getScale() && c instanceof Fox) {
                        dead_creatures.add(c);
                        ball.setX(-400);
                        ball.setY(-400);
                        ball.setZ(-400);
                        kills++;
                    }
                }

                // Clean up the dead tuxes.
                for (Creature c : dead_creatures) {
                    if (c instanceof Tux) {
                        tuxcounter--;
                    }
                    env.removeObject(c);
                    creatures.remove(c);
                }
                for (Creature c : new_creatures) {
                    creatures.add(c);
                    env.addObject(c);
                }

                // We clear the ArrayLists for the next loop. We could create new ones
                // every loop, but that would be very inefficient.
                dead_creatures.clear();
                new_creatures.clear();

                // Update the display.
                env.advanceOneFrame();
            }

            // Just a little clean-up.
            env.exit();
        }

        private void processInput() {
            int keyDown = env.getKeyDown();
            int key = env.getKey();
            if (keyDown == 203) {
                king.setX(king.getX() - 1);
            } else if (keyDown == 205) {
                king.setX(king.getX() + 1);
            }
            if (ball.getX() <= -400 && key == Keyboard.KEY_S) {
                ball.setX(king.getX());
                ball.setY(king.getY());
                ball.setZ(king.getZ());
            }
        }

        /**
         * Main method to launch the program.
         */
        public static void main(String args[]) {
            (new Game()).play();
        }
    }

    /**** CREATURE CLASS ****/
    /* (Parent class to Tux, Fox, and KingTux.) */

    import env3d.EnvObject;
    import java.util.ArrayList;

    abstract public class Creature extends EnvObject {

        private int frame;
        private double rand;

        /**
         * Constructor for objects of class Creature.
         */
        public Creature(double x, double y, double z) {
            setX(x);
            setY(y);
            setZ(z);
            setScale(1);
            rand = Math.random();
        }

        private void randomGenerator() {
            rand = Math.random();
        }

        public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) {
            frame++;
            if (frame > 12) {
                randomGenerator();
                frame = 0;
            }

            // if (rand < 0.25) {
            //     setX(getX()+0.3);
            //     setRotateY(90);
            // } else if (rand < 0.5) {
            //     setX(getX()-0.3);
            //     setRotateY(270);
            // } else if (rand < 0.75) {
            //     setZ(getZ()+0.3);
            //     setRotateY(0);
            // } else if (rand < 1) {
            //     setZ(getZ()-0.3);
            //     setRotateY(180);
            // }

            if (rand < 0.5) {
                setRotateY(getRotateY() - 7);
            } else if (rand < 1) {
                setRotateY(getRotateY() + 7);
            }

            setX(getX() + Math.sin(Math.toRadians(getRotateY())) * 0.5);
            setZ(getZ() + Math.cos(Math.toRadians(getRotateY())) * 0.5);

            if (getX() < getScale()) setX(getScale());
            if (getX() > 50 - getScale()) setX(50 - getScale());
            if (getZ() < getScale()) setZ(getScale());
            if (getZ() > 50 - getScale()) setZ(50 - getScale());

            // The move method now handles collision detection.
            if (this instanceof Fox) {
                for (Creature c : creatures) {
                    if (c.distance(this) < c.getScale() + this.getScale() && c instanceof Tux) {
                        dead_creatures.add(c);
                    }
                }
            }
        }
    }

    The rest of the classes are a bit trivial to this specific problem.
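    A likely explanation (my reading of the code, not from the post): every Fox runs its own 120-frame counter inside move(), and each living Fox spawns a new Fox when its counter expires, so spawning scales with the population and grows exponentially; it also stops entirely when the last Fox dies. One fix is to move the spawn timing out of Fox and into the game loop, as in this sketch:

    // In Game.play(): one spawn counter for the whole game, so exactly one
    // Fox is added every 120 frames regardless of how many foxes are alive.
    int spawnFrame = 0;
    while (!finished) {
        spawnFrame++;
        if (spawnFrame > 120) {
            Fox f = new Fox(60, 1, (int) (Math.random() * 28) + 1);
            creatures.add(f);
            env.addObject(f);
            spawnFrame = 0;
        }
        // ... rest of the existing loop body, with the frame/spawn
        // block removed from Fox.move() ...
    }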


  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in session or pageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed.

    In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between the business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course, that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component.

    So here's the problem. I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component-reference sin is to go and ask the tabset what the selection is. That, of course, is fine in context - e.g. in a handler within the same request-scoped bean that holds the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope, so you can simply re-check it in the same way - leading, of course, to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component, and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state.
    The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map, in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix.

    This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in the faces-config.xml. The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

    <managed-bean>
        <managed-bean-name>uiManager</managed-bean-name>
        <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
        <managed-bean-scope>session</managed-bean-scope>
    </managed-bean>

    Which is then injected into any backing bean that needs it:

    <managed-bean>
        <managed-bean-name>mainPageBB</managed-bean-name>
        <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
        <managed-bean-scope>request</managed-bean-scope>
        <managed-property>
            <property-name>uiManager</property-name>
            <property-class>oracle.demo.view.state.UIManager</property-class>
            <value>#{uiManager}</value>
        </managed-property>
    </managed-bean>

    In this case, the backing bean in question needs a member variable to hold and reference the UIManager:

    private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state.

    UI components in the page can, of course, directly reference the uiManager bean in their properties. For example, going back to the tabset example, you might have something like this:

    <af:panelTabbed>
        <af:showDetailItem text="First"
                           disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
            ...
        </af:showDetailItem>
        <af:showDetailItem text="Second"
                           disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
            ...
        </af:showDetailItem>
        ...
    </af:panelTabbed>

    Where, in this case, the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab, or whatever - how you choose to store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is no great strain to implement for an application, and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach, rather than committing the sin of persistent component references, which will bite you in the future, or scattering shotgun UI flags across the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!
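    For reference, a minimal sketch of what such a class might look like (the article deliberately leaves the implementation open; the map-of-settings shape below is just one of the options it describes, with the class and package name taken from the faces-config entry above):

    package oracle.demo.view.state;

    import java.util.HashMap;
    import java.util.Map;

    // A session-scoped holder for persistent UI state. EL such as
    // #{uiManager.settings['MAIN_TABSET_STATE']} resolves through getSettings().
    public class UIManager {

        private final Map<String, Object> settings = new HashMap<String, Object>();

        public Map<String, Object> getSettings() {
            return settings;
        }
    }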


  • How can I make a game like Doodle Jump in XNA (C#)?

    - by Ramy
    I want to know how to make the background scroll down, like in Doodle Jump. I have a game already made, and I have to transform it so it's like Doodle Jump, but I'm wondering how, or where to look, to make the background keep moving - as in progressing through the background - until, let's say, the character dies.

    namespace IFM20884
    {
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        public abstract class BackgroundScroll : Sprite
        {
            private float speedOfBackground = 0.2f; // speed that the background moves

            public BackgroundScroll(GraphicsDeviceManager graphics)
                : base(graphics.GraphicsDevice.Viewport.Width / 2f, graphics.GraphicsDevice.Viewport.Height / 2f)
            {
            }

            // Getter/setter. (Renamed: a property cannot share the name of its
            // backing field, as in the original code.)
            public float SpeedOfBackground
            {
                get { return this.speedOfBackground; }
                set { this.speedOfBackground = value; }
            }

            public override void Update(GameTime gameTime, GraphicsDeviceManager graphics)
            {
                // Makes the background go down.
                ForcePosition(Position.X, Position.Y + (gameTime.ElapsedGameTime.Milliseconds * this.speedOfBackground));

                if (Position.Y - (Height / 2) > graphics.GraphicsDevice.Viewport.Height)
                {
                    ForcePosition(Position.X, Position.Y - this.Height);
                }
            }

            public override void Draw(SpriteBatch spriteBatch)
            {
                // Draw the sprite twice, one tile above the other, so the
                // wrap-around is seamless.
                ForcePosition(Position.X, Position.Y - this.Height);
                base.Draw(spriteBatch);
                ForcePosition(Position.X, Position.Y + this.Height); // was "ForcerPosition", a typo
                base.Draw(spriteBatch);
            }
        }
    }
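    For the wiring, a sketch of how a class like this might be driven from the standard XNA game loop (hypothetical: background is assumed to be a concrete subclass of BackgroundScroll with its texture set, and graphics/spriteBatch are the fields from the stock Game1 template):

    protected override void Update(GameTime gameTime)
    {
        background.Update(gameTime, graphics); // scrolls the texture down each frame
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        background.Draw(spriteBatch); // draws two tiles so the wrap-around is seamless
        spriteBatch.End();
        base.Draw(gameTime);
    }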


  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.

    Thinking about the enterprise cloud, many possible configurations and new opportunities for enterprise environments come to mind. The various customer needs that serve as guides to this new trend are often very different, but they are almost always united by two main objectives:

    - elasticity of the infrastructure, both hardware and software;
    - investments that track the progressive needs of the current infrastructure;

    both with characteristics of innovation and economy.

    A concrete use case that I worked on recently demanded the fulfillment of both basic requirements, economy and innovation. The client needed to manage a variety of data caches that could process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to handle the load peaks that are likely during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be routed during peak periods; the desire, however, was to avoid tying up investment in owned hardware and software architectures. So, to respond to this need, we were asked to look for a solution based on cloud technologies and architectures already offered by the market.

    Coherence can already address the requirement of large caches spread across different nodes in a cluster, providing search and parallel-computing technology that makes simultaneous use of all the hardware infrastructure resources. Moreover, thanks to its "Push Replication" functionality, which can replicate and update the information contained in the cache to a site hosted in the cloud, it also satisfies the need for a resilient infrastructure that can rely on nodes temporarily housed in cloud architectures.

    There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:

    - Active - Passive (Hub and Spoke)
    - Active - Active
    - Multi Master
    - Centralized Replication

    Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to change this into an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by the specific project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, in two different domain contexts: one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, and a simple client will be able to insert data only into Site 1. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
    To implement these features, you need four simple classes:

    - CachedResponse.java: the POJO class that will be inserted into the cache; it holds information about hypothetical link navigations.
    - ResponseSimulatorHelper.java: a link simulator, whose task is to randomly create objects of type CachedResponse to be added to the caches.
    - CacheCommands.java: the model of our example; it receives instructions from the controller and performs basic operations against the cache, such as inserting, deleting, updating, and listening to objects within the cache.
    - Shell.java: our controller, which issues the commands to be executed against the caches of the two sites.

    So, in summary, we execute the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1.

    Now, let's write the configurations of the two respective sites/clusters: Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed", with "read and write" features, using the cache store class for "push replication", a functionality offered by the "incubator" project of Oracle Coherence. For "Site Cloud" we likewise define a "distributed" cache, with the TCP proxy feature enabled, so that it can receive updates from Site 1.

    Coherence cache config XML file for the "storage node" on "Site 1": site1-prod-cache-config.xml
    Coherence cache config XML file for the "storage node" on "Site Cloud": site2-prod-cache-config.xml

    For the two "Shell" clients, which will connect respectively to the two clusters, we provide two simple access configurations:

    Coherence cache config XML file for the Shell on "Site 1": site1-shell-prod-cache-config.xml
    Coherence cache config XML file for the Shell on "Site Cloud": site2-shell-prod-cache-config.xml

    Now, we just have to put everything together and run our tests.
    To start at least one "storage" node (which holds the data) for "Site Cloud", we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

    -Xmx128m
    -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9002
    -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for "Site 1", we can again run the standard Coherence class com.tangosol.net.DefaultCacheServer, with the following parameters and values:

    -Xmx128m
    -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9001
    -Dtangosol.coherence.site=Site1

    Then we start the first client "Shell" for "Site Cloud", launching the java class it.javac.Shell with these parameters and values:

    -Xmx64m
    -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=false
    -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9002
    -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second client "Shell" for "Site 1", launching a new instance of the class it.javac.Shell with the following parameters and values:

    -Xmx64m
    -Xms64m
    -Dcom.sun.management.jmxremote
    -Dtangosol.coherence.management=all
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.distributed.localstorage=false
    -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
    -Dtangosol.coherence.clusterport=9001
    -Dtangosol.coherence.site=Site1

    And now, let's execute some tests to validate and better understand our configuration.

    TEST 1

    The purpose of this test is to load objects into the "Site 1" cache and see how many objects end up cached on "Site Cloud".

    Within the "Shell" launched with parameters to access "Site 1", write and run the command:

    load test/100

    Within the "Shell" launched with parameters to access "Site Cloud", write and run the command:

    size passive-cache

    Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently, the "push replication" functionality has updated "Site Cloud" by sending the 100 objects to the second cluster, where they will have been put into the corresponding cache, which we named "passive-cache".

    TEST 2

    The purpose of this test is to listen for delete and add events happening on "Site 1" and replicated to the cache on "Site Cloud".
    In the "Shell" launched with parameters to access "Site Cloud", write and run the command:

    listen passive-cache/name like '%'

    (or a "cohql" query with your preferred parameters).

    In the "Shell" launched with parameters to access "Site 1", write and run the following commands:

    load test/10
    load test2/20
    delete test/50

    Expected result: if all is OK, the "Shell" on Site Cloud lets us listen to all the add and delete events within the cache "passive-cache" whose objects satisfy the query condition "name like '%'" (i.e., every object in the cache; you could change the tests and create different queries). Through the Shell on "Site 1" we launched the commands to add and delete objects in different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed, printed, or in a log file) that its cache was filled with the events and related objects generated by the commands executed from the "Shell" on "Site 1", thanks to the "push replication" feature.

    Other tests can be performed as well, such as subscribing to the events on "Site 1" too, using different "cohql" queries, or changing the cache configuration, to effectively demonstrate both the potential and the versatility produced by these different configurations, even in the cloud, as in our case.

    More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home

    More information on the Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html

    To download and execute the whole source and configuration of the example explained in the above post, click here to download it; afterwards, download the latest available version of the Push Replication Pattern library implementation from the Oracle Coherence Incubator site, and also download the related and required version of Oracle Coherence. For simplicity, the .jars required to execute the example (which can be found in the Push Replication Pattern download and the Coherence distribution download) are:

    activemq-core-5.3.1.jar, activemq-protobuf-1.0.jar, aopalliance-1.0.jar, coherence-commandpattern-2.8.4.32329.jar, coherence-common-2.2.0.32329.jar, coherence-eventdistributionpattern-1.2.0.32329.jar, coherence-functorpattern-1.5.4.32329.jar, coherence-messagingpattern-2.8.4.32329.jar, coherence-processingpattern-1.4.4.32329.jar, coherence-pushreplicationpattern-4.0.4.32329.jar, coherence-rest.jar, coherence.jar, commons-logging-1.1.jar, commons-logging-api-1.1.jar, commons-net-2.0.jar, geronimo-j2ee-management_1.0_spec-1.0.jar, geronimo-jms_1.1_spec-1.1.1.jar, http.jar, jackson-all-1.8.1.jar, je.jar, jersey-core-1.8.jar, jersey-json-1.8.jar, jersey-server-1.8.jar, jl1.0.jar, kahadb-5.3.1.jar, miglayout-3.6.3.jar, org.osgi.core-4.1.0.jar, spring-beans-2.5.6.jar, spring-context-2.5.6.jar, spring-core-2.5.6.jar, spring-osgi-core-1.2.1.jar, spring-osgi-io-1.2.1.jar

    The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost

    Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Cannot install Crossover

    - by tech
    I can't install CrossOver from the ".deb" package. (A screenshot was included in the original post.) Here is what I got when trying to install from the terminal:

    young@jianyue:~$ cd /home/young/Desktop
    young@jianyue:~/Desktop$ sudo dpkg -i crossover.deb
    Selecting previously unselected package ia32-crossover.
    (Reading database ... 127804 files and directories currently installed.)
    Unpacking ia32-crossover (from crossover.deb) ...
    dpkg: dependency problems prevent configuration of ia32-crossover:
     ia32-crossover depends on libc6-i386; however:
      Package libc6-i386 is not installed.
     ia32-crossover depends on ia32-libs | ia32-apt-get; however:
      Package ia32-libs is not installed.
      Package ia32-apt-get is not installed.
     ia32-crossover depends on lib32gcc1; however:
      Package lib32gcc1 is not installed.
     ia32-crossover depends on lib32nss-mdns; however:
      Package lib32nss-mdns is not installed.
     ia32-crossover depends on lib32z1; however:
      Package lib32z1 is not installed.
     ia32-crossover depends on python-glade2; however:
      Package python-glade2 is not installed.
     ia32-crossover depends on lib32asound2; however:
      Package lib32asound2 is not installed.
    dpkg: error processing ia32-crossover (--install):
     dependency problems - leaving unconfigured
    Processing triggers for doc-base ...
    Processing 33 changed doc-base files, 1 added doc-base file...
    Errors were encountered while processing:
     ia32-crossover
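
    For what it's worth, the usual way out of this half-installed dpkg state is to let apt fetch the missing dependencies and finish the configuration; a hedged sketch, untested against this exact package:

        sudo apt-get update
        sudo apt-get -f install    # pulls in libc6-i386, lib32gcc1, etc.
                                   # and finishes configuring ia32-crossover

    On releases where ia32-libs is no longer in the archive, you may first need i386 multiarch (sudo dpkg --add-architecture i386 followed by sudo apt-get update); treat that as an assumption depending on your Ubuntu version.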

    Read the article

  • Silverlight Cream for November 27, 2011 -- #1176

    - by Dave Campbell
    In this Issue: Matt Eland, Parag Joshi, Jerrel Blankenship, and Joost van Schaik.

    Above the Fold: WP7: "Safe event detachment base class for Windows Phone 7 behaviors" - Joost van Schaik

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    31 Days of Mango | Day #22: App Connect - Matt Eland takes the reins of Jeff's blog for Day 22 and is talking about App Connect... App Connect allows apps to be listed on Quick Cards relative to an app's subject matter, and Quick Cards are items that appear in searches to let users find out more info... check out the blog post if you're not familiar with this.

    31 Days of Mango | Day #21: Sockets - Jeff's Day 21 is written by Parag Joshi, and is on sockets... building a WP7 app for posting restaurant orders to a Silverlight OOB app running on a host machine... a good-sized tutorial and discussion, plus a project to download and play with.

    31 Days of Mango | Day #20: Creating Ringtones - Jerrel Blankenship has Day 20 of Jeff Blankenburg's 31 Days of Mango and is discussing ringtones... how to create and save a custom ringtone for your user.

    Safe event detachment base class for Windows Phone 7 behaviors - Joost van Schaik revisits his Safe Event Detachment pattern for WP7 and built a base class to take care of the initialization involved, to be kind to us, the developers... code included.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone
    MIX10

    Read the article

  • Less is more: The making of a 37 signals style pizza

    - by Liam McLennan
    For years now we have been hearing from 37 signals that the way to bake a great web app is to build less – well, the same is true of pizza. Our western hedonism has led us to pursue ever cheesier and more stuffed crusts at the expense of simple flavours. All we are left with is a fatty, salty heart attack in waiting. The Italians know that the secret to great taste is simplicity. With that in mind I decided to base my pizza masterpiece on these simple flavours: tomato, sopressa (a spicy aged salami), mozzarella, garlic, and basil. Of course the first thing one needs when making pizza is a base. A freshly made base is extremely important, but unfortunately I was too lazy. Next up is the tomato sauce. My wife made the sauce by reducing some tomatoes and adding herbs and sugar. We had selected some fine ingredients for our topping: sopressa salami, fresh basil and the best mozzarella we could find. It is, according to Google, important to bake pizza at a high temperature, so I set the oven to 250C (480F). Here are the before and after shots (photos in the original post). Meanwhile, the dog did nothing.

    Read the article

  • Ubuntu 12.04 LTS 32bit does not detect 4Gb ram

    - by David
    I have recently installed 4GB of RAM in an existing 12.04 32-bit Ubuntu system. It's not all being recognised – only 3.2GB is showing. See:

    administrator@Root2:~$ free
                 total       used       free     shared    buffers     cached
    Mem:       3355256    1251112    2104144          0      48664     391972
    -/+ buffers/cache:     810476    2544780

    The system is PAE capable. See:

    administrator@Root2:~$ grep --color=always -i PAE /proc/cpuinfo
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts

    The system is fully patched, and I have tried a manual PAE upgrade. See:

    administrator@Root2:~$ sudo apt-get install linux-generic-pae linux-headers-generic-pae
    [sudo] password for administrator:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    linux-generic-pae is already the newest version.
    linux-headers-generic-pae is already the newest version.
    The following packages were automatically installed and are no longer required:
      language-pack-zh-hans language-pack-kde-en language-pack-kde-zh-hans language-pack-kde-en-base kde-l10n-engb kde-l10n-zhcn language-pack-zh-hans-base firefox-locale-zh-hans language-pack-kde-zh-hans-base
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    I am not sure what else to try to get the full physical memory recognised, other than loading 64-bit. Any thoughts? Thanks!

    Output of uname -r:

    administrator@Root2:~$ uname -r
    3.2.0-24-generic-pae
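
    Since uname -r already shows a -pae kernel, a plausible (hedged) next step is to check whether the BIOS/chipset is reserving the missing ~0.8 GB, for example for onboard video. A quick sketch with standard tools:

        uname -r                                  # confirm the -pae kernel is running
        dmesg | grep -i e820                      # memory map the BIOS hands the kernel
        sudo dmidecode -t memory | grep -i size   # modules physically detected

    If the e820 map itself stops short of 4 GB, the shortfall is hardware/BIOS reservation rather than anything a 32-bit vs 64-bit kernel choice will fix.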

    Read the article

  • Why is apt-get --auto-remove not removing all dependencies?

    - by Mike
    I just installed a package (dansguardian in this case) and apt told me that I had unmet dependencies.

    # sudo apt-get install dansguardian
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      clamav clamav-base clamav-freshclam libclamav6 libtommath0
    Suggested packages:
      clamav-docs squid libclamunrar6
    The following NEW packages will be installed:
      clamav clamav-base clamav-freshclam dansguardian libclamav6 libtommath0
    0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/4,956 kB of archives.
    After this operation, 14.4 MB of additional disk space will be used.
    Do you want to continue [Y/n]?

    So I installed it and the dependencies. So far so good. Later on, I decided that this package just isn't for me, so I wanted to remove it and all of the other junk it installed with it, since I'm not going to be needing any of it:

    # sudo apt-get remove --auto-remove --purge dansguardian
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      dansguardian
    0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
    After this operation, 1,816 kB disk space will be freed.
    Do you want to continue [Y/n]?

    However, it is only removing that one specific package. What about clamav, clamav-base, clamav-freshclam, libclamav6 and libtommath0? Not only did it not remove them, but clamav was actually running a daemon that loads every time the computer boots. I thought that --auto-remove would remove not only the package, but also the dependencies that were installed with it. So basically, without going through the apt history log file (if I even remember to do so, or even remember that a specific package I installed 3 months ago had dependencies), is there a way to remove a package and all of the dependencies that were installed with it, as in this case?
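
    A hedged sketch of how this can be inspected and cleaned up after the fact: autoremove only sweeps packages that are both marked "automatically installed" and no longer depended on, so check the marks first (package names taken from the transcript above):

        apt-mark showauto | grep -E 'clamav|libclamav6|libtommath0'   # still marked auto?
        sudo apt-mark auto clamav clamav-base clamav-freshclam        # re-mark if needed
        sudo apt-get autoremove --purge                               # sweep the orphans
        # or, in one pass when removing the original package:
        sudo apt-get purge --auto-remove dansguardian

    If the clamav packages survive the sweep, they were presumably recorded as manually installed (or something else still depends on them), which is why --auto-remove left them alone.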

    Read the article

  • For a Javascript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be made extensible.

    1. The library exports an object with both public and "protected" functions. A plugin can replace any of those functions, thus modifying the library's behavior. The advantages of this method are that it's simple and that the plugin's functions have full access to the library's "protected" functions. The disadvantages are that the library may be harder to maintain with a larger set of exposed functions, and that it could be hard to debug if multiple plugins are involved (how do you know which plugin modified which function?).

    2. The library provides an "add plugin" function that accepts an object with a specific interface. Internally, the library will use the plugin instead of its own code where appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface. This could also support having different plugin interfaces to modify different parts of the library. A disadvantage of this method is that plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported.

    3. The library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions on the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines the advantages and disadvantages of both the other methods.

    Read the article

  • Can't play Steel Storm, Burning Retribution

    - by Goytor
    I've bought Steel Storm, Burning Retribution in the Software Center, and every time I run it, it shows the following message:

    You have reached this menu due to missing or unlocable content/data
    You may consider adding -base dir /path/to/game to your launch commandline

    I've gone to the main menu in the preferences tab and changed the launcher, to no avail. I've tried running it from the console with /opt/steelstorm-episode2/steelstorm, and got:

    Game is Steel-Storm using base gamedir gamedata
    Steel-Storm Linux 01:07:07 Jun 11 2011 - release
    Playing shareware version.
    Skeletal animation uses SSE code path
    DPSOFTRAST available (SSE2 instructions detected)
    Failed to init SDL joystick subsystem:
    couldn't exec quake.rc
    couldn't exec default.cfg
    execing config.cfg
    couldn't exec autoexec.cfg
    Client using an automatically assigned port
    Client opened a socket on address 0.0.0.0:0
    Client opened a socket on address [0:0:0:0:0:0:0:0]:0
    Linked against SDL version 1.2.12
    Using SDL library version 1.2.14
    GL_VENDOR: NVIDIA Corporation
    GL_RENDERER: GeForce 6150SE nForce 430/PCI/SSE2/3DNOW!
    GL_VERSION: 2.1.2 NVIDIA 270.41.06
    vid.support.arb_multisample 1
    vid.mode.samples 0
    vid.support.gl20shaders 1
    Video Mode: fullscreen 640x480x32x0.00hz
    S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels...
    Wanted audio Specification:
     Channels : 2
     Format : 0x8010
     Frequency : 48000
     Samples : 2048
    Obtained audio specification:
     Channels : 2
     Format : 0x8010
     Frequency : 48000
     Samples : 1024
    Sound format: 48000Hz, 2 channels, 16 bits per sample
    CDAudio_Init: No CD in player.
    Can't get initial CD volume
    CD Audio Initialized

    If I type -base /opt/steelstorm-episode2/steelstorm on its own, the shell says "command not found".
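
    For what it's worth, -base is not a shell command on its own; the game's hint suggests an engine option passed to the game binary itself. A hedged sketch (the directory is an assumption based on the paths in the question, and the exact option spelling should be taken from the game's own message):

        cd /opt/steelstorm-episode2
        ./steelstorm -basedir /opt/steelstorm-episode2

    If that directory is wrong for your install, point the option at wherever the game's data (the "gamedata" directory mentioned in the console output) actually lives.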

    Read the article

  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, javascript, html and xml. I have some knowledge of all these in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces, abstract classes and understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and javascript operating on the same pages and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low level functionality in order to get a better grasp of what is going on at a high level?

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in formats dataA and dataB respectively) to a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself, but have other people provide the parsers and base-item logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers.

    Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs?

    Class Relations: (diagram in the original post)

    What I've Tried: Initially I thought there should be a different dll for each of my 2 traffic cops, each containing a parser and a base item. However, the duplication of base-item logic doesn't seem right (especially if the base-item logic changes). I then thought the base items could all have their own dll, and then somehow associate parsers and base items (GUIDs?), but I don't know if implementing the overhead of that id/association adds too much complexity.

    Read the article

  • Xna, after mouse click cpu usage goes 100%

    - by kosnkov
    Hi, I have the following code, and just clicking on the blue window is enough to send the CPU to 100% for at least a minute, even with my i7's 4 cores. I checked with an empty project too, and it is the same!

    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        private Texture2D cursorTex;
        private Vector2 cursorPos;
        GraphicsDevice device;
        float xPosition;
        float yPosition;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            // Centre the cursor coordinates in the viewport
            Viewport vp = GraphicsDevice.Viewport;
            xPosition = vp.X + (vp.Width / 2);
            yPosition = vp.Y + (vp.Height / 2);
            device = graphics.GraphicsDevice;
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            cursorTex = Content.Load<Texture2D>("strzalka");
        }

        protected override void UnloadContent()
        {
            // TODO: Unload any non ContentManager content here
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            // Clear, then draw the cursor texture at cursorPos
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            spriteBatch.Draw(cursorTex, cursorPos, Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }

    Read the article

  • How to configure ubuntu ldap client to get password policies from server?

    - by Rafaeldv
    I have an LDAP server on CentOS (389-ds). I configured the client, Ubuntu 12.04, to authenticate against that base, and it works very well. But it doesn't get the password policies from the server. For example, if I set the policy to force the user to change the password on first login, Ubuntu always ignores it and logs him in. How can I set up the client to get the policies? Here are the client files:

    /etc/nsswitch.conf

    passwd: files ldap
    group: files ldap
    shadow: files ldap
    hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    networks: files
    protocols: db files
    services: db files
    ethers: db files
    rpc: db files
    netgroup: nis
    sudoers: ldap files

    common-auth

    auth [success=2 default=ignore] pam_unix.so nullok_secure
    auth [success=1 default=ignore] pam_ldap.so use_first_pass
    auth requisite pam_deny.so
    auth required pam_permit.so
    auth optional pam_cap.so

    common-account

    account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
    account [success=1 default=ignore] pam_ldap.so
    account requisite pam_deny.so
    account required pam_permit.so

    common-password

    password requisite pam_cracklib.so retry=3 minlen=8 difok=3
    password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
    password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
    password requisite pam_deny.so
    password required pam_permit.so
    password optional pam_gnome_keyring.so

    common-session

    session [default=1] pam_permit.so
    session requisite pam_deny.so
    session required pam_permit.so
    session optional pam_umask.so
    session required pam_unix.so
    session optional pam_ldap.so
    session optional pam_ck_connector.so nox11
    session optional pam_mkhomedir.so skel=/etc/skel umask=0022

    /etc/ldap.conf

    base dc=a,dc=b,dc=c
    uri ldaps://a.b.c/
    ldap_version 3
    rootbinddn cn=directory manager
    pam_password md5
    sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
    pam_lookup_policy yes
    pam_check_host_attr yes
    nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,colord,daemon,games,gnats,hplip,irc,kernoops,libuuid,lightdm,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,sshd,sync,sys,syslog,usbmux,uucp,whoopsie,www-data

    /etc/ldap/ldap.conf

    BASE dc=a,dc=b,dc=c
    URI ldaps://a.b.c/
    ssl on
    use_sasl no
    tls_checkpeer no
    sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
    sudoers_debug 2
    pam_lookup_policy yes
    pam_check_host_attr yes
    pam_lookup_policy yes
    pam_check_host_attr yes
    TLS_CACERT /etc/ssl/certs/ca-certificates.crt
    TLS_REQCERT never
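
    As a hedged diagnostic sketch, you could first check whether the client can read any password-policy state from the server over the same ldaps:// URI at all; the attribute names below are common 389-ds password-policy operational attributes and are an assumption, as is the uid:

        # Query a user's password-policy state over the same URI the client uses.
        # passwordExpirationTime / passwordMustChange are 389-ds conventions.
        ldapsearch -x -H ldaps://a.b.c -b "dc=a,dc=b,dc=c" \
          "(uid=someuser)" passwordExpirationTime passwordMustChange

    If the server returns the attributes but the client still logs the user straight in, the PAM stack above is a reasonable place to look, since pam_lookup_policy only takes effect when pam_ldap is the module actually consulted during the login.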

    Read the article

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project. For example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will also change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work. (cough C++ WSDL clients...) So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services such as Git for Windows maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.

    Read the article

  • Legacy Code Retreat Questions

    - by MarkPearl
    I recently heard of the concept of a Legacy Code Retreat. Since I have attended and helped facilitate some normal Code Retreats, I thought it might be interesting to try a Legacy Code Retreat, but I have a few questions on how a legacy CR differs from a normal one. If anyone has attended a Legacy CR and has suggestions on how best to host these events, please leave a comment on what has worked for you in the past, or if you have any answers to my questions below... Should you restrict the languages that people can do the sessions in? In the normal CRs I have been involved in, people attended and coded in their programming language of choice. A normal CR lends itself to this because each session starts with no code. With a legacy CR, each session seems to start with an existing code base. Is there some sort of limitation on the languages that people can work in during the sessions? If not, how do you give them a base to start from? What happens at the beginning of each session? In the normal CRs that I have attended, each session would have a constraint set on it – i.e. no if statements, no primitives, etc. With a legacy CR it seems more like patterns for refactoring are learned. Does the facilitator explain the pattern before the session starts, or are participants just given a code base to start from and an objective to achieve?

    Read the article

  • How to educate business managers on the complexity of adding new features? [duplicate]

    - by Derrick Miller
    We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people onto the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features ad infinitum and that the cost of each feature will remain linear. I have repeatedly tried to explain to them that it doesn't work that way – that the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical and have no idea what goes into writing software. Is there a way that I can explain this using business language, that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >