Search Results

Search found 5277 results on 212 pages for 'fuzzy logic'.


  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care And Billing V2.4. This release includes new functionality and updates to existing functionality, and will be progressively released across the Oracle Utilities applications. The release is quite substantial, with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0. They include the following:
      - Configuration Migration Assistant (CMA) - A new data management capability that allows you to export and import Configuration Data from one environment to another, with support for Approval/Rejection of individual changes.
      - Database Connection Tagging - Additional tags have been added to the database connection to allow database administrators, Oracle Enterprise Manager and other Oracle technology to monitor and use individual database connection information.
      - Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode; now, to support advanced configuration and the ExaLogic platform, we are adding Native Support for Oracle WebLogic as a configuration option.
      - Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle Web Services calls; we now offer an alternative that uses the native Web Services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of Web Service standards support, enhanced security options and the ability to use the Web Services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual Web Services.
      - XML Data Type Support - Oracle Utilities Application Framework now allows implementors to define XML Data types used in Oracle in the definition of custom objects, to take advantage of XQuery and other XML features.
      - Fuzzy Operator Support - Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text to take advantage of the fuzzy searching capabilities within the database.
      - Global Batch View - A new JMX based API has been implemented to allow JSR120 compliant consoles to view batch execution across all threadpools in the Coherence based Named Cache Cluster.
      - Portal Personalization - It is now possible to store runtime customizations of query zones, such as preferred sorting, field order and filters, and reuse them as personal preferences each time that zone is used.
    These are just the major changes; there are quite a few more that have been delivered (and more to come in the service packs!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining new facilities that you may want to take advantage of.

    Read the article

  • Algorithm for optimal combination of two variables

    - by AlanChavez
    I'm looking for an algorithm that would be able to determine the optimal combination of two variables, but I'm not sure where to start looking. For example, if I have 10,000 rows in a database and each row contains a price and square footage, is there any algorithm out there that will be able to determine what combination of price and sq ft is optimal? I know this is vague, but I assume it's along the lines of fuzzy logic and fuzzy sets; I'm not sure, though, and I'd like to start digging in the right field to see if I can come up with something that solves my problem.
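    One common way to make "optimal combination" concrete for a two-variable problem like this is to normalize each variable to a 0-1 range and rank rows by a weighted score, where the weights express how much a low price matters relative to a large area. The sketch below only illustrates that idea, not necessarily what the asker needs; the Listing type, field names and default weights are all hypothetical.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical row type; in practice this would be loaded from the database.
        public record Listing(double Price, double SquareFeet);

        public static class ListingRanker
        {
            // Rank listings by a weighted sum of normalized criteria:
            // cheaper is better (1 - normalized price), bigger is better (normalized sq ft).
            public static IEnumerable<(Listing Listing, double Score)> Rank(
                IReadOnlyList<Listing> listings, double priceWeight = 0.5, double areaWeight = 0.5)
            {
                double minPrice = listings.Min(l => l.Price), maxPrice = listings.Max(l => l.Price);
                double minArea = listings.Min(l => l.SquareFeet), maxArea = listings.Max(l => l.SquareFeet);

                // Map a value into [0, 1]; if all values are equal, treat them as 0.
                static double Norm(double v, double min, double max) =>
                    max > min ? (v - min) / (max - min) : 0.0;

                return listings
                    .Select(l => (Listing: l,
                                  Score: priceWeight * (1.0 - Norm(l.Price, minPrice, maxPrice))
                                       + areaWeight * Norm(l.SquareFeet, minArea, maxArea)))
                    .OrderByDescending(x => x.Score);
            }
        }

    A Pareto-front approach (keep every row that is not dominated on both price and size) is another reasonable reading of "optimal" when no single weighting makes sense.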

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library.  In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build.  Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed.  It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database.  You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well.  This provides a really easy way to validate property values on your models.  I showed some code samples of this in action in my previous post.

    Class-Level Model Validation using IValidatableObject

    DataAnnotation attributes provide an easy way to validate individual property values on your model classes.  Several people have asked: “Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules that need to span multiple property values?”  It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes.

    IValidatableObject.Validate() Method

    Below is an example of using the IValidatableObject interface (which is built into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class.  The two rules ensure that:
      - New units can’t be ordered if the Product is in a discontinued state
      - New units can’t be ordered if there are already more than 100 units in stock
    We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI).

    Automatic Validation Enforcement

    EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved.  You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the “SaveChanges()” method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors.  Starting with CTP5, you can call the “GetValidationErrors()” method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with.  GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.
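    The code in the original post is shown as screenshots, which are not reproduced in this excerpt. Purely as a sketch (not Scott's original code), a Product class that enforces the two business rules described above could look roughly like the following; the property names mirror the Northwind-style Product model the post uses, but are otherwise assumptions:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class Product : IValidatableObject
        {
            public int ProductID { get; set; }

            [Required]
            public string ProductName { get; set; }

            public bool Discontinued { get; set; }
            public int UnitsInStock { get; set; }
            public int UnitsOnOrder { get; set; }

            // Class-level rules that span multiple property values.
            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                if (Discontinued && UnitsOnOrder > 0)
                {
                    // The property name hint tells the UI which field to highlight.
                    yield return new ValidationResult(
                        "Units can't be ordered for a discontinued product.",
                        new[] { "UnitsOnOrder" });
                }

                if (UnitsInStock > 100 && UnitsOnOrder > 0)
                {
                    yield return new ValidationResult(
                        "Units can't be ordered when more than 100 units are already in stock.",
                        new[] { "UnitsOnOrder" });
                }
            }
        }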
    Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges().

    ASP.NET MVC 3 and IValidatableObject

    ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC’s model binding infrastructure.  ASP.NET MVC 3 goes further and also honors the IValidatableObject interface.  This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur.  To see this in action, let’s consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two “Create” action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill out.  The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that it is valid, and if it is valid saves it in the database.  If there are validation issues it redisplays the form with the posted values.  The Razor view template of our “Create” view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it.  The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether it is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model’s DataAnnotations and IValidatableObject.Validate() method returned.  We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the “Create” button we got an error message.  This was because we ticked the “Discontinued” checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules).  You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox?  It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the “UnitsOnOrder” property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied.

    Keeping things DRY

    ASP.NET MVC and EF Code First enable you to keep your validation and business rules in one place (within your model layer), and avoid having them creep into your Controllers and Views.  Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application.  It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.
    This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future.

    Summary

    EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface.  This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced anytime someone tries to persist changes to a database.  ASP.NET MVC 3 now supports both DataAnnotations and IValidatableObject as well, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them too.  This makes it easy to build clean and highly maintainable applications. You don’t have to use DataAnnotations or IValidatableObject to perform your validation/business logic.  You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want.  But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions. Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
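    The controller code referenced above is likewise only available as screenshots in the original post. A rough sketch of the two Create() actions the post describes, plus a proactive GetValidationErrors() check, might look like this; the NorthwindContext name, its Products set, and the Index redirect are assumptions:

        using System.Linq;
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // Hypothetical DbContext subclass assumed to expose a DbSet<Product> Products.
            NorthwindContext db = new NorthwindContext();

            // GET: /Products/Create - display the empty form
            public ActionResult Create()
            {
                return View();
            }

            // POST: /Products/Create - save if valid, otherwise redisplay the form with errors
            [HttpPost]
            public ActionResult Create(Product product)
            {
                if (ModelState.IsValid)
                {
                    db.Products.Add(product);

                    // Proactive check (optional): EF reports DataAnnotation and
                    // IValidatableObject errors before we attempt to save.
                    if (!db.GetValidationErrors().Any())
                    {
                        db.SaveChanges();
                        return RedirectToAction("Index");
                    }
                }

                // Errors are already in ModelState, so the view's
                // Html.ValidationMessageFor() calls can display them.
                return View(product);
            }
        }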

    Read the article

  • Mike Cohn-style burndown charts in JIRA

    - by Fuzzy Purple Monkey
    We use Jira/GreenHopper for our project planning/tracking. The built-in graphs are great, but when a new issue/story is added during a project, the whole burn-down graph moves up rather than only increasing from the point at which the issue was added. Ideally I'd like to generate a Mike Cohn-style burndown graph, which shows a hump when new issues are added. Does anyone know of any plugins that support this, or has anyone been able to extract this data directly from the database?

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit to this; in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries. Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software. Of course CQRS provides some great scalability benefits; not only scalability, but I have to assume it also provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, and the only commands for that aggregate can be handled in a “pass-through” manner with the command handler directly generating events, is it possible to have an aggregate that isn’t modeled in the domain model at all? Are there any issues or downsides to this? I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not? I like the idea of using the Query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it.
    Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that’s probably going to be just as much work to document as it would be to just implement it. Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation. Like I mentioned in my last post, I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this with some example situations in an upcoming post.
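    To make the query-side work described above concrete, an event-processing projection that keeps a "Total Sales By Customer" read table up to date could look roughly like the sketch below. This is only an illustration; the event type, read-model row and class names are all hypothetical, not taken from Greg's or anyone else's framework.

        using System;
        using System.Collections.Generic;

        // Hypothetical denormalized read-model row.
        public class CustomerSalesSummary
        {
            public Guid CustomerId { get; set; }
            public decimal TotalSales { get; set; }
        }

        // Hypothetical domain event published by the command side.
        public class OrderPlaced
        {
            public Guid CustomerId { get; set; }
            public decimal OrderTotal { get; set; }
        }

        // Query-side projection: the "domain knowledge" lives in knowing which
        // events affect which read tables and how to apply them.
        public class CustomerSalesProjection
        {
            private readonly IDictionary<Guid, CustomerSalesSummary> readTable;

            public CustomerSalesProjection(IDictionary<Guid, CustomerSalesSummary> readTable)
            {
                this.readTable = readTable;
            }

            public void Handle(OrderPlaced e)
            {
                if (!readTable.TryGetValue(e.CustomerId, out var row))
                {
                    row = new CustomerSalesSummary { CustomerId = e.CustomerId };
                    readTable[e.CustomerId] = row;
                }
                row.TotalSales += e.OrderTotal;   // keep the aggregate query pre-computed
            }
        }

    Serving the query itself then really is "dead simple": read the pre-computed row and map it to a DTO.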

    Read the article

  • Tips or techniques to use when you don't know how to code something?

    - by janoChen
    I have a background as a UI designer, and I've realized that it is a bit hard for me to write a piece of logic. Sometimes I get it right, but most of the time I end up with something hacky (and it usually takes a lot of time). It's not that I don't like programming; in fact, I'm starting to like it as much as design. It's just that sometimes I think that I'm better at dealing with colors and shapes than with numbers and logic (but I want to change that). What I usually do is search for the solution on the Internet, copy the example, and insert it into my app (I know this is not a very good practice). I've heard that one tip is to write the logic in plain English as a comment before writing the actual code. What other tips and techniques can I use?

    Read the article

  • Testing with Profiler Custom Events and Database Snapshots

    We've all had them. One of those stored procedures that is huge and contains complex business logic which may or may not be executed. These procedures make it an absolute nightmare when it comes to debugging problems because they're so complex and have so many logic offshoots that it's very easy to get lost when you're trying to determine the path that the procedure code took when it ran. Fortunately Profiler lets you define custom events that you can raise in your code and capture in a trace so you get a better window into the sub events occurring in your code. I found it very useful to use custom events and a database snapshot to debug some code recently and we'll explore both in this article. I find raising these events and running Profiler to be very useful for testing my stored procedures on my own as well as when my code is going through official testing and user acceptance. It's a simple approach and a great way to catch any performance problems or logic errors.

    Read the article

  • ASP.NET C# Session Variable

    - by SAMIR BHOGAYTA
    You can make changes in the web.config. You can give the location path, i.e. the pages to which you want to apply the security. For example:
    1) In the first case the page can be accessed by everyone.
      <!-- Allow ALL users to visit CreatingUserAccounts.aspx -->
      <location path="CreatingUserAccounts.aspx">
        <system.web>
          <authorization>
            <allow users="*" />
          </authorization>
        </system.web>
      </location>
    2) In this case only admin users can access the page.
      <!-- Allow ADMIN users to visit hello.aspx -->
      <location path="hello.aspx">
        <system.web>
          <authorization>
            <allow roles="ADMIN" />
            <deny users="*" />
          </authorization>
        </system.web>
      </location>
    Or, on every page you can check the authorization according to the page logic, e.g. call this on every page:
      if (Session["loggeduser"] != null)
      {
          DataSet dsUser = (DataSet)Session["loggeduser"];
          if (dsUser != null && dsUser.Tables.Count > 0 && dsUser.Tables[0] != null && dsUser.Tables[0].Rows.Count > 0)
          {
              string userType = dsUser.Tables[0].Rows[0]["UserType"].ToString();
              if (userType == "SuperAdmin")
              {
                  // your page logic here
              }
              if (userType == "Admin")
              {
                  // your page logic here
              }
          }
      }

    Read the article

  • Can't get past 2542 Threads in Java on 4GB iMac OSX 10.6.3 Snow Leopard (32bit)

    - by fuzzy lollipop
    I am running the following program, trying to figure out how to configure my JVM to get the maximum number of threads my machine can support. For those that might not know, Snow Leopard ships with Java 6. I tried starting it with the defaults and with the following command lines, and I always get the Out of Memory Error at Thread 2542 no matter what the JVM options are set to:

    java TestThreadStackSizes 100000
    java -Xss1024 TestThreadStackSizes 100000
    java -Xmx128m -Xss1024 TestThreadStackSizes 100000
    java -Xmx2048m -Xss1024 TestThreadStackSizes 100000
    java -Xmx2048m -Xms2048m -Xss1024 TestThreadStackSizes 100000

    No matter what I pass it, I get the same result: Out of Memory Error at 2542.

    public class TestThreadStackSizes {
        public static void main(final String[] args) {
            Thread.currentThread().setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(final Thread t, final Throwable e) {
                    System.err.println(e.getMessage());
                    System.exit(1);
                }
            });
            int numThreads = 1000;
            if (args.length == 1) {
                numThreads = Integer.parseInt(args[0]);
            }
            for (int i = 0; i < numThreads; i++) {
                try {
                    Thread t = new Thread(new SleeperThread(i));
                    t.start();
                } catch (final OutOfMemoryError e) {
                    throw new RuntimeException(String.format("Out of Memory Error on Thread %d", i), e);
                }
            }
        }

        private static class SleeperThread implements Runnable {
            private final int i;

            private SleeperThread(final int i) {
                this.i = i;
            }

            public void run() {
                try {
                    System.out.format("Thread %d about to sleep\n", this.i);
                    Thread.sleep(1000 * 60 * 60);
                } catch (final InterruptedException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    Any ideas on how I can affect these results?

    Read the article

  • Clean way to use mutable implementation of Immutable interfaces for encapsulation

    - by dsollen
    My code works with a composition relationship which creates a tree structure: class A has many children of type B, which has many children of type C, etc. The lowest level class, call it Bar, also points to a connected Bar class. This effectively makes nearly every object in my domain inter-connected. Immutable objects would be problematic due to the expense of rebuilding almost all of my domain to make a single change to one class. I chose to go with an interface approach. Every object has an Immutable interface which only publishes the getter methods. I have controller objects which construct the domain objects and thus have references to the full objects, and are thus capable of calling the setter methods; but they only ever publish the immutable interface. Any change requested will go through the controller. So something like this:

    public interface ImmutableFoo {
        public Bar getBar();
        public Location getLocation();
    }

    public class Foo implements ImmutableFoo {
        private Bar bar;
        private Location location;

        @Override
        public Bar getBar() {
            return bar;
        }

        public void setBar(Bar bar) {
            this.bar = bar;
        }

        @Override
        public Location getLocation() {
            return location;
        }
    }

    public class Controller {
        private Map<Location, Foo> fooMap;

        public ImmutableFoo addBar(Bar bar) {
            Foo foo = fooMap.get(bar.getLocation());
            if (foo != null)
                foo.setBar(bar);
            return foo;
        }
    }

    I felt the basic approach seemed sensible; however, when I speak to others they always seem to have trouble envisioning what I'm describing, which leaves me concerned that I may have a larger design issue than I'm aware of. Is it problematic to have domain objects so tightly coupled, or to use the quasi-mutable approach to modifying them? Assuming that the design approach itself isn't inherently flawed, the particular discussion which left me wondering about my approach had to do with the presence of business logic in the domain objects. Currently I have the setter methods in the mutable objects do error checking and all other logic required to verify and make a change to the object. It was suggested that this should be pulled out into a service class, which applies all the business logic, to simplify my domain objects. I understand the advantage in mocking/testing and general separation of logic into two classes. However, with a service method/object it seems I lose some of the advantage of polymorphism: I can't override a base class to add in new error checking or business logic. It seems, if my polymorphic classes were complicated enough, I would end up with a service method that has to check a dozen flags to decide what error checking and business logic applies. So, for example, if I wanted to have a ChildFoo which also had a size field which should be compared to bar before adding bar, my current approach would look something like this:

    public class Foo implements ImmutableFoo {
        public void addBar(Bar bar) {
            if (!getLocation().equals(bar.getLocation()))
                throw new LocationException();
            this.bar = bar;
        }
    }

    public interface ImmutableChildFoo extends ImmutableFoo {
        public int getSize();
    }

    public class ChildFoo extends Foo implements ImmutableChildFoo {
        private int size;

        @Override
        public int getSize() {
            return size;
        }

        @Override
        public void addBar(Bar bar) {
            if (getSize() < bar.getSize()) {
                throw new LocationException();
            }
            super.addBar(bar);
        }
    }

    My colleague was suggesting instead having a service object that looks something like this (over-simplified; the 'service' object would likely be more complex):
    public interface ImmutableFoo {
        // original interface, presumably used in other methods
        public Location getLocation();
        public boolean isChildFoo();
    }

    public interface ImmutableSizedFoo extends ImmutableFoo {
        public int getSize();
    }

    public class Foo implements ImmutableSizedFoo {
        public Bar bar;

        public void addBar(Bar bar) {
            this.bar = bar;
        }

        @Override
        public int getSize() {
            // default size if no size is known
            return 0;
        }

        @Override
        public boolean isChildFoo() {
            return false;
        }
    }

    public class ChildFoo extends Foo {
        private int size;

        @Override
        public int getSize() {
            return size;
        }

        @Override
        public boolean isChildFoo() {
            return true;
        }
    }

    public class Controller {
        private Map<Location, Foo> fooMap;

        public ImmutableSizedFoo addBar(Bar bar) {
            Foo foo = fooMap.get(bar.getLocation());
            Service.addBarToFoo(foo, bar);
            return foo;
        }
    }

    public class Service {
        public static void addBarToFoo(Foo foo, Bar bar) {
            if (foo == null)
                return;
            if (!foo.getLocation().equals(bar.getLocation()))
                throw new LocationException();
            if (foo.isChildFoo() && foo.getSize() < bar.getSize())
                throw new LocationException();
            foo.addBar(bar);
        }
    }

    Is the recommended approach of using services and inversion of control inherently superior, or superior in certain cases, to overriding methods directly? If so, is there a good way to go with the service approach while not losing the power of polymorphism to override some of the behavior?

    Read the article

  • Can't get CouchDB external HTTP handlers to work

    - by fuzzy lollipop
    following the instructions here http://wiki.apache.org/couchdb/ExternalProcesses this is what I get { * error: "{{badarg,[{erlang,port_command, [#Port<0.2056>, [123, [34,<<"info">>,34], 58, [123, [34,"db_name",34], 58, [34,<<"transfer_central">>,34], 44, [34,"doc_count",34], 58,"39441",44, [34,"doc_del_count",34], 58,"0",44, [34,"update_seq",34], 58,"56508",44, [34,"purge_seq",34], 58,"0",44, [34,"compact_running",34], 58,<<"false">>,44, [34,"disk_size",34], 58,"43593828",44, [34,"instance_start_time",34], 58, [34,<<"1272560477320483">>,34], 44, [34,"disk_format_version",34], 58,"5",125], 44, [34,<<"id">>,34], 58,<<"null">>,44, [34,<<"method">>,34], 58, [34,"GET",34], 44, [34,<<"path">>,34], 58, [91, [34,<<"transfer_central">>,34], 44, [34,<<"_test">>,34], 93], 44, [34,<<"query">>,34], 58,<<"{}">>,44, [34,<<"headers">>,34], 58, [123, [34,<<"Accept">>,34], 58, [34, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>, 34], 44, [34,<<"Accept-Charset">>,34], 58, [34,<<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>,34], 44, [34,<<"Accept-Encoding">>,34], 58, [34,<<"gzip,deflate">>,34], 44, [34,<<"Accept-Language">>,34], 58, [34,<<"en-us,en;q=0.5">>,34], 44, [34,<<"Connection">>,34], 58, [34,<<"keep-alive">>,34], 44, [34,<<"Host">>,34], 58, [34,<<"127.0.0.1:5984">>,34], 44, [34,<<"Keep-Alive">>,34], 58, [34,<<"115">>,34], 44, [34,<<"User-Agent">>,34], 58, [34, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>, 34], 125], 44, [34,<<"body">>,34], 58, [34,"undefined",34], 44, [34,<<"peer">>,34], 58, [34,<<"127.0.0.1">>,34], 44, [34,<<"form">>,34], 58,<<"{}">>,44, [34,<<"cookie">>,34], 58,<<"{}">>,44, [34,<<"userCtx">>,34], 58, [123, [34,<<"db">>,34], 58, [34,<<"transfer_central">>,34], 44, [34,<<"name">>,34], 58,<<"null">>,44, [34,<<"roles">>,34], 58,<<"[]">>,125], 125,10]]}, {couch_os_process,writeline,2}, {couch_os_process,writejson,2}, {couch_os_process,handle_call,3}, {gen_server,handle_msg,5}, {proc_lib,init_p_do_apply,3}]}, {gen_server,call, [<0.110.0>, {prompt,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, {[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}}" * reason: "{gen_server,call, [<0.109.0>, {execute,{[{<<"info">>, {[{db_name,<<"transfer_central">>}, {doc_count,39441}, {doc_del_count,0}, {update_seq,56508}, {purge_seq,0}, {compact_running,false}, {disk_size,43593828}, {instance_start_time,<<"1272560477320483">>}, {disk_format_version,5}]}}, {<<"id">>,null}, {<<"method">>,'GET'}, {<<"path">>,[<<"transfer_central">>,<<"_test">>]}, {<<"query">>,{[]}}, {<<"headers">>, 
{[{<<"Accept">>, <<"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json">>}, {<<"Accept-Charset">>, <<"ISO-8859-1,utf-8;q=0.7,*;q=0.7">>}, {<<"Accept-Encoding">>,<<"gzip,deflate">>}, {<<"Accept-Language">>,<<"en-us,en;q=0.5">>}, {<<"Connection">>,<<"keep-alive">>}, {<<"Host">>,<<"127.0.0.1:5984">>}, {<<"Keep-Alive">>,<<"115">>}, {<<"User-Agent">>, <<"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3">>}]}}, {<<"body">>,undefined}, {<<"peer">>,<<"127.0.0.1">>}, {<<"form">>,{[]}}, {<<"cookie">>,{[]}}, {<<"userCtx">>, {[{<<"db">>,<<"transfer_central">>}, {<<"name">>,null}, {<<"roles">>,[]}]}}]}}, infinity]}" }

    Read the article

  • How does a search functionality fit in DDD with CQRS?

    - by Songo
    In Vaughn Vernon's book Implementing Domain-Driven Design and the accompanying sample application, I found that he implemented a CQRS approach in the iddd_collaboration bounded context. He presents the following classes in the application service layer: CalendarApplicationService.java, CalendarEntryApplicationService.java, CalendarEntryQueryService.java, CalendarQueryService.java. I'm interested to know: if an application has a search page featuring numerous drop-downs and check boxes, with a smart text box to match different search patterns, how would you structure all that search logic? In a command service or a query service? Taking a look at CalendarQueryService.java I can see that it has 2 methods for a huge query, but no logic at all to mix and match any search filters, for example. I've heard that the application layer shouldn't have any business logic, so where should I construct my dynamic query? Or should I just put everything in the query service?
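    Not something from the book, just an illustration of what "putting the mix-and-match filter logic on the query side" could look like: a query service whose method takes a criteria object and composes only the filters that were actually supplied. Every name below is hypothetical (the query-service name is borrowed from the classes listed above, but the shape of the code is not Vernon's):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical criteria object built from the search page's drop-downs and check boxes.
        public class CalendarEntrySearchCriteria
        {
            public string TextContains { get; set; }   // smart text box
            public string Owner { get; set; }          // drop-down
            public DateTime? From { get; set; }        // date filters
            public DateTime? To { get; set; }
        }

        // Denormalized read-model row; not a domain aggregate.
        public class CalendarEntryData
        {
            public string Description { get; set; }
            public string Owner { get; set; }
            public DateTime Starts { get; set; }
        }

        // Query-side service: no domain behavior, just filter composition over the read model.
        public class CalendarEntryQueryService
        {
            private readonly IQueryable<CalendarEntryData> entries;

            public CalendarEntryQueryService(IQueryable<CalendarEntryData> entries)
            {
                this.entries = entries;
            }

            public IReadOnlyList<CalendarEntryData> Search(CalendarEntrySearchCriteria c)
            {
                IQueryable<CalendarEntryData> q = entries;
                if (!string.IsNullOrEmpty(c.TextContains)) q = q.Where(e => e.Description.Contains(c.TextContains));
                if (!string.IsNullOrEmpty(c.Owner)) q = q.Where(e => e.Owner == c.Owner);
                if (c.From.HasValue) q = q.Where(e => e.Starts >= c.From.Value);
                if (c.To.HasValue) q = q.Where(e => e.Starts <= c.To.Value);
                return q.ToList();
            }
        }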

    Read the article

  • I need a simple command line program to transform XML using XSLT

    - by fuzzy lollipop
    I am on OSX Snow Leopard (10.6.2) and I can install anything I need to. I would prefer a Python or Java solution. I have searched on Google and found lots of information on writing my own program to do this, but this is just a quick and dirty experiment, so I don't want to invest a lot of time writing a bunch of code for it; I am sure someone else has done this already.

    Read the article

  • Easiest turn-based games you can think of?

    - by Edgar Miranda
    I'm planning to get into programming multiplayer turn-based games. I would like to start off by making some of the simplest (yet fun) multiplayer turn-based games out there. What are some that you can suggest? For example:
      - Tic-Tac-Toe
      - Rock-Paper-Scissors
      - Checkers
    Some not-so-easy games:
      - 4 in a row
      - chess
      - poker
    In terms of "ease" of implementation I'm mainly looking at logic. For example, Rock-Paper-Scissors has very simple logic, while chess has logic that is more complicated. So far I have the following:
      - Hexagon
      - Heroes of Might and Magic
      - Nine Men's Morris
      - Connect 4
      - 21 (card game)
      - Pen the Pig (The Dot game)
      - Memory Match

    Read the article

  • How to get jquery.couch.app.js to work with IE8

    - by fuzzy lollipop
    I have tested this on Windows XP SP3 and Windows 7 Ultimate, in IE7 and IE8 (in all compatibility modes), and it fails the same way on both. I am running the latest HEAD from the couchapp repository. This works fine on my OSX 10.6.3 development machine. I have tested with Chrome 4.1.249.1064 (45376) and Firefox 3.6 and they both work fine, as do Safari 4 and Firefox 3.6 on OSX 10.6.3. Here is the error message:

    Webpage error details
    User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)
    Timestamp: Wed, 28 Apr 2010 03:32:55 UTC
    Message: Object doesn't support this property or method
    Line: 159
    Char: 7
    Code: 0
    URI: http://192.168.0.105:5984/test/_design/test/vendor/couchapp/jquery.couch.app.js

    and here is the "offending" bit of code from the file jquery.couch.app.js, which works on Chrome, Firefox and Safari just fine. It says the failure is on the line with qs.forEach():

    157 var qs = document.location.search.replace(/^\?/,'').split('&');
    158 var q = {};
    159 qs.forEach(function(param) {
    160   var ps = param.split('=');
    161   var k = decodeURIComponent(ps[0]);
    162   var v = decodeURIComponent(ps[1]);
    163   if (["startkey", "endkey", "key"].indexOf(k) != -1) {
    164     q[k] = JSON.parse(v);
    165   } else {
    166     q[k] = v;
    167   }
    168 });

    Read the article

  • CouchDB Map/Reduce raises exception in reduce function?

    - by fuzzy lollipop
    My view generates keys in this format: ["job_id:1234567890", 1271430291000], where the first key element is a unique key and the second is a timestamp in milliseconds. I run my view with elapsed_time?startkey=["123"]&endkey=["123",{}]&group=true&group_level=1 and here is my reduce function; the intention is to reduce the output to get the earliest and latest timestamps and return the difference between them and now:

    function(keys, values, rereduce) {
      var now = new Date().valueOf();
      var first = Number.MIN_VALUE;
      var last = Number.MAX_VALUE;
      if (rereduce) {
        first = Math.max(first, values[0].first);
        last = Math.min(last, values[0].last);
      } else {
        first = keys[0][0][1];
        last = keys[keys.length-1][0][1];
      }
      return {first: now - first, last: now - last};
    }

    When processing a query it constantly raises the following exception: function raised exception (new TypeError("keys has no properties", "", 1)). I am making sure not to reference keys inside my rereduce block. Why does this function constantly raise this exception?

    Read the article

  • Formula for three competing heroes, each has one they can beat and one they're beaten by

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is:
      - 3 types of heroes
      - 3 stats per hero
      - There are no levels involved, so the differences must come from the stats.
    Fight logic: type1hero has a good chance of beating type2hero, type2hero has a good chance of beating type3hero, and type3hero has a good chance of beating type1hero. For over a week I have been trying to find a stats-based formula that satisfies this, but I can't. I was meddling with numbers yesterday and got something decent, but I can't extract a formula out of it. Could you please guide me, or give me hints on how I should start creating formulas for a level-less game that fulfills this fight logic?
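    Purely as an illustration of one way to encode that rock-paper-scissors style advantage in the stats (not a complete answer to the balance problem): keep the three types symmetric and apply a small type-advantage multiplier to whatever damage or win-chance formula the stats feed into. All names and numbers below are made up.

        using System;

        enum HeroType { Type1 = 0, Type2 = 1, Type3 = 2 }

        static class FightLogic
        {
            // advantage[attacker, defender]: > 1 means the attacker's type counters the defender's.
            // Type1 beats Type2, Type2 beats Type3, Type3 beats Type1.
            static readonly double[,] Advantage =
            {
                { 1.00, 1.25, 0.80 },
                { 0.80, 1.00, 1.25 },
                { 1.25, 0.80, 1.00 },
            };

            // Example damage formula: raw stats scaled by the type advantage.
            public static double Damage(HeroType attacker, int attack, HeroType defender, int defense)
            {
                double raw = Math.Max(1.0, attack - 0.5 * defense);
                return raw * Advantage[(int)attacker, (int)defender];
            }
        }

    Because the multiplier matrix is symmetric under rotation of the three types, two heroes with identical stats still keep the intended advantage cycle, and tuning the 1.25/0.80 pair controls how strong the counter relationship feels.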

    Read the article

  • Is it a good practice to use branches to maintain different editions of the same software?

    - by Tamás Szelei
    We have a product that has a few different editions. The differences are minor: different strings here and there, very little additional logic in one, very little difference in logic in the other. When the software is being developed, most changes need to be added to each edition; however, there are a few that don't and a few that need to differ. Is it a valid use of branches if I have release-editionA and release-editionB (...etc) branches? Are there any gotchas? Good practices? Update: Thanks for the insight everyone, lots of good answers here. The general consensus seems to be that it is a bad idea to use branches for this purpose. For anyone wondering, my final solution to the problem is to externalize the strings as configuration and externalize the differing logic as plugins or scripts.

    Read the article

  • Tips or techniques to use when you don't know how to code something?

    - by janoChen
    I have a background as a UI designer, and I've realized that it is a bit hard for me to write a piece of logic. Sometimes I get it right, but most of the time I end up with something hacky (and it usually takes a lot of time). It's not that I don't like programming; in fact, I'm starting to like it as much as design. It's just that sometimes I think that I'm better at dealing with colors and shapes than with numbers and logic (but I want to change that). What I usually do is search for the solution on the Internet, copy the example, and insert it into my app (I know this is not a very good practice). I've heard that one tip is to write the logic in plain English as a comment before writing the actual code. What other tips and techniques can I use?

    Read the article

  • Is there benefit to maintaining a large project with bad code?

    - by upton
    I'm currently maintaining a large project with more than 100,000 LOC. The code uses MFC as its framework; in general, it only has an interface part, which heavily uses the MFC API, and a business logic part, which is full of bad code and confusing logic. The company delivers some small features to the customer each year (most features just add code to the existing project or find some reference to an API or variable, and it's no different from fixing 3-4 bugs); most of the tasks are to resolve issues and optimize performance. Like other companies with maintenance positions, it values people who know a lot about the logic of its product. There are people who can quickly finish the job on such a project. Is it worth training myself to be that kind of programmer? Is there any benefit to working on such a project for a long time?

    Read the article

  • Is there benefit to maintaining a large project with bad code?

    - by upton
    I'm currently maintaining a large project with more than 100,000 LOC. The code uses MFC as its framework; in general, it only has an interface part, which heavily uses the MFC API, and a business logic part, which is full of bad code and confusing logic. The company delivers some small features to the customer each year (most features just add code to the existing project or find some reference to an API or variable, and it's no different from fixing 3-4 bugs); most of the tasks are to resolve issues and optimize performance. Like other companies with maintenance positions, it values people who know a lot about the logic of its product. There are people who can quickly finish the job on such a project. Is it worth training myself to be that kind of programmer? Is there any benefit to working on such a project for a long time?

    Read the article

  • How many threads should an Android game use?

    - by kvance
    At minimum, an OpenGL Android game has a UI thread and a Renderer thread created by GLSurfaceView. Renderer.onDrawFrame() should be doing a minimum of work to get the highest FPS. The physics, AI, etc. don't need to run every frame, so we can put those in another thread. Now we have:
      - Renderer thread - update animations and draw polys
      - Game thread - logic & periodic physics, AI, etc. updates
      - UI thread - Android UI interaction only
    Since you don't ever want to block the UI thread, I run one more thread for the game logic. Maybe that's not necessary though? Is there ever a reason to run game logic in the renderer thread?

    Read the article
