Search Results

Search found 15485 results on 620 pages for 'product updates'.


  • Spring MVC return ajax response using Jackson

    - by anshumn
    I have a scenario where I am filling a dropdown box in a JSP through an AJAX response from the server. In the controller, I am returning a Collection of Product objects and have annotated the return type with @ResponseBody. Controller - @RequestMapping(value="/getServicesForMarket", method = RequestMethod.GET) public @ResponseBody Collection<Product> getServices(@RequestParam(value="marketId", required=true) int marketId) { Collection<Product> products = marketService.getProducts(marketId); return products; } And Product is @Entity @Table(name = "PRODUCT") public class Product implements Serializable { private static final long serialVersionUID = 1L; private int id; private Market market; private Service service; private int price; @Id @GeneratedValue(strategy = GenerationType.AUTO) public int getId() { return id; } public void setId(int id) { this.id = id; } @ManyToOne(fetch=FetchType.LAZY) @JoinColumn(name = "MARKET_ID") public Market getMarket() { return market; } public void setMarket(Market market) { this.market = market; } @ManyToOne(fetch=FetchType.LAZY) @JoinColumn(name = "SERVICE_ID") public Service getService() { return service; } public void setService(Service service) { this.service = service; } @Column(name = "PRICE") public int getPrice() { return price; } public void setPrice(int price) { this.price = price; } } Service is @Entity @Table(name="SERVICE") public class Service implements Serializable { private static final long serialVersionUID = 1L; private int id; private String name; private String description; @Id @GeneratedValue @Column(name="ID") public int getId() { return id; } public void setId(int id) { this.id = id; } @Column(name="NAME") public String getName() { return name; } public void setName(String name) { this.name = name; } @Column(name="DESCRIPTION") public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } } In the JSP, I need to get the data from the service field of Product as well, so in my jQuery callback function I have written product.service.description to get the data. It seems that by default Jackson is not mapping the associated service object (or any other custom object), and I am not getting any exception; in the JSP, I simply do not get the data. It works fine when I return a Collection of objects that do not contain any other custom objects as fields. Am I missing any settings for this to work? Thanks!
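
    A likely cause, worth checking (this is an assumption, not confirmed in the thread): both associations are FetchType.LAZY, so at serialization time Jackson sees uninitialized Hibernate proxies rather than real Market/Service objects, and the associated data never makes it into the JSON. A minimal sketch of one workaround, assuming plain Jackson with no Hibernate-aware module, is to fetch the association eagerly:

        // Sketch: make the association eager so the Service is fully
        // initialized before Jackson serializes the Product. A Hibernate-aware
        // Jackson module would be the less invasive alternative.
        @ManyToOne(fetch = FetchType.EAGER)
        @JoinColumn(name = "SERVICE_ID")
        public Service getService() {
            return service;
        }

    Eager fetching is the bluntest fix; explicitly initializing the associations in the service layer before returning the collection would avoid changing the mapping globally.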

    Read the article

  • LINQ Nested Where

    - by griegs
    If I have the following model: public List<RecommendedProduct> recommendations then public class RecommendedProduct public List<Product> Products then the Product: public class Product public string Code The recommendations list has, as an example, 10 items in it. Each recommendation has two Products in it. How, with LINQ, can I find the recommendations object whose products have both the "A" and "B" product codes?
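
    One way to express "contains both codes" (a sketch, not the thread's accepted answer) is an All over the required codes, with an Any check per code:

        // Keep the first recommendation whose product list contains both codes.
        var required = new[] { "A", "B" };
        var match = recommendations
            .FirstOrDefault(r => required.All(code =>
                r.Products.Any(p => p.Code == code)));

    Swapping FirstOrDefault for Where returns every recommendation that qualifies instead of just the first.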

    Read the article

  • Asp.Net MVC ActionLink

    - by Pino
    Can anyone explain why the following happens, and how to resolve it? Visual Studio 2010 and MVC2. <%= Html.ActionLink("Add New Option", "AddOption", "Product", new { @class = "lightbox" }, null)%> results in /Product/AddOption?class=lightbox <%= Html.ActionLink("Add New Option", "AddOption", "Product", new { @class = "lightbox" })%> results in /Product/AddOption?Length=7 Thanks
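
    This is overload resolution at work: in the five-argument call the anonymous object binds to routeValues (hence ?class=lightbox), and in the four-argument call the string "Product" itself binds to routeValues, so its public Length property (7) becomes a query parameter. A sketch of the call that should produce the intended markup, using the (linkText, actionName, controllerName, routeValues, htmlAttributes) overload:

        <%-- routeValues = null, htmlAttributes = the anonymous object --%>
        <%= Html.ActionLink("Add New Option", "AddOption", "Product",
                            null, new { @class = "lightbox" }) %>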

    Read the article

  • How to invite Beta Testers to test an application

    - by Sunil Kumar Sahoo
    I have created a text-messaging product which will be free, but I want beta testers for it. The product can be deployed on Windows, Mac and Linux. How do I invite beta testers to test my application? I would also like to know how to invite the Linux, Mac, Stack Overflow and other communities to beta test my product. Thanks Sunil Kumar Sahoo

    Read the article

  • PyYAML parse into arbitary object

    - by Philip Fourie
    I have the following Python 2.6 program and YAML definition (using PyYAML):

        import yaml

        x = yaml.load("""
        product:
          name : 'Product X'
          sku : 123
          features :
            - size : '10x30cm'
              weight : '10kg'
        """)

        print type(x)
        print x

    which results in the following output:

        <type 'dict'>
        {'product': {'sku': 123, 'name': 'Product X', 'features': [{'weight': '10kg', 'size': '10x30cm'}]}}

    Is it possible to create a strongly typed object from x? I would like to do the following: print x.features(0).size I am aware that it is possible to create an instance of an existing class, but that is not what I want for this particular scenario.
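
    A minimal sketch of one approach (the class name is mine, not from the thread): recursively wrap the parsed dicts in a small object so keys become attributes:

        class Namespace(object):
            """Recursively turn dicts/lists from yaml.load into attribute access."""
            def __init__(self, data):
                for key, value in data.items():
                    if isinstance(value, dict):
                        value = Namespace(value)
                    elif isinstance(value, list):
                        value = [Namespace(v) if isinstance(v, dict) else v
                                 for v in value]
                    setattr(self, key, value)

        product = Namespace(x['product'])
        print product.features[0].size   # prints: 10x30cm

    Note the list indexing is product.features[0], not features(0); Python attribute access plus indexing gets close to the desired syntax without defining a class per document shape.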

    Read the article

  • Mathematica equivalent of Ruby's inject

    - by Ben Alpert
    Is there a Mathematica function like inject in Ruby? For example, if I want the product of the elements in a list, in Ruby I can write: list.inject(1) { |prod,el| prod * el } I found I can just use Product in Mathematica: Apply[Product, list] However, this isn't general enough for me (like, if I don't just want the product or sum of the numbers). What's the closest equivalent to inject?
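
    The closest general-purpose analogue is Fold, which threads an accumulator through the list exactly the way inject does (a sketch):

        (* Fold[f, init, list] computes f[f[f[init, e1], e2], e3] ... *)
        Fold[#1 * #2 &, 1, list]       (* product of the elements *)
        Fold[Max, -Infinity, list]     (* works for non-sum/product reductions too *)

    FoldList is the variant that also returns every intermediate accumulator value.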

    Read the article

  • YAML front matter for Jekyll and nested lists

    - by motleydev
    I have a set of nested YAML lists with something like the following:

        title: the example
        image: link.jpg
        products:
          - top-level: Product One
            arbitrary: Value
            nested-products:
              - nested: Associated Product
                sub-arbitrary: Associated Value
          - top-level: Product Two
            arbitrary: Value
          - top-level: Product Three
            arbitrary: Value

    I can loop through the products with no problem using for item in page.products, and I can use a logic operator to determine whether nested products exist. What I CAN'T do is loop through multiple nested-products per iteration of top-level. I have tried using for subitem in item and other options, but I can't get it to work - any ideas?
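
    A sketch of Liquid that should do it, under the assumption that the front matter parses as above (hyphenated keys usually resolve with dot notation in Jekyll's Liquid; item["nested-products"] is the bracket fallback if not):

        {% for item in page.products %}
          {{ item.top-level }}
          {% if item.nested-products %}
            {% for sub in item.nested-products %}
              {{ sub.nested }} / {{ sub.sub-arbitrary }}
            {% endfor %}
          {% endif %}
        {% endfor %}

    The key point is looping over item.nested-products (the inner list) rather than over item itself.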

    Read the article

  • Magento user created attribute for products is not saved...

    - by Elzo Valugi
    I have been fighting an apparently simple thing for about two days now. I hope somebody can help. I created the my_update EAV attribute: class Company_Module_Model_Resource_Eav_Mysql4_Setup extends Mage_Eav_Model_Entity_Setup { public function getDefaultEntities() { return array( 'catalog_product' => array( 'entity_model' => 'catalog/product', 'attribute_model' => 'catalog/resource_eav_attribute', 'table' => 'catalog/product', 'additional_attribute_table' => 'catalog/eav_attribute', 'entity_attribute_collection' => 'catalog/product_attribute_collection', 'attributes' => array( 'my_update' => array( 'label' => 'My timestamp', 'type' => 'datetime', 'input' => 'date', 'default' => '', 'class' => 'validate-date', 'backend' => 'eav/entity_attribute_backend_datetime', 'frontend' => '', 'table' => '', 'source' => '', 'global' => Mage_Catalog_Model_Resource_Eav_Attribute::SCOPE_GLOBAL, 'visible' => true, 'required' => false, 'user_defined' => true, 'searchable' => false, 'filterable' => false, 'comparable' => false, 'visible_on_front' => false, 'visible_in_advanced_search' => false, 'unique' => false, 'apply_to' => 'simple', ) ) ) ); } } The attribute is created OK and on install appears correctly in the list of attributes. Later I do // for product 1 $product->setMyUpdate($stringDate); // string in this format: yyyy-MM-dd HH:mm:ss $product->save(); // and it saves without issues (in the admin module) But later when I do: $product = Mage::getModel('catalog/product')->load(1); var_dump($product->getMyUpdate()); // returns null Somehow the data is not really saved, or I am not retrieving it correctly. Please advise on how to get the data, and where the data is saved in the DB, so I can check whether the insert is done correctly. Thanks.
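
    Two things worth checking (assumptions on my part, not confirmed in the thread): the declaration has 'apply_to' => 'simple', so the value is only kept for simple products, and datetime attributes land in the catalog_product_entity_datetime table. A diagnostic sketch for a stock Magento 1.x install:

        // Check whether the value reached the datetime EAV table at all.
        $resource = Mage::getSingleton('core/resource');
        $read     = $resource->getConnection('core_read');
        $attrId   = Mage::getResourceModel('eav/entity_attribute')
            ->getIdByCode('catalog_product', 'my_update');
        $row = $read->fetchRow(
            'SELECT * FROM ' . $resource->getTableName('catalog_product_entity_datetime')
            . ' WHERE entity_id = ? AND attribute_id = ?',
            array(1, $attrId)
        );
        var_dump($row);   // a row here but null from getMyUpdate() points at loading, not saving

    If the row is missing, the save path (product type, store scope, or the datetime backend's expected date format) is the place to dig.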

    Read the article

  • One way Has-Many-Through

    - by Hock
    Hello, I have a Category, a Subcategory and a Product model. I have: Category has_many Subcategories Subcategory has_many Products Subcategory belongs_to Category Product belongs_to Subcategory Is there a way to have something like Category has_many Products through Subcategories? The 'normal' Rails way wouldn't seem to work because "subcategory" doesn't belong to product. Instead, I need the query to be something like SELECT * FROM products WHERE subcategory_id IN (category.subcategory_ids) Is there a way to do that? Thanks, Nicolás Hock Isaza
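
    A sketch of what should work here: has_many :through also accepts a has_many on the join model as its source (the Document/Section/Paragraph example from the Rails guides is exactly this shape), so no extra foreign keys are needed:

        class Category < ActiveRecord::Base
          has_many :subcategories
          has_many :products, :through => :subcategories
        end

        class Subcategory < ActiveRecord::Base
          belongs_to :category
          has_many :products
        end

        class Product < ActiveRecord::Base
          belongs_to :subcategory
        end

        # category.products issues one query joining subcategories, roughly:
        #   SELECT products.* FROM products
        #   INNER JOIN subcategories ON products.subcategory_id = subcategories.id
        #   WHERE subcategories.category_id = <id>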

    Read the article

  • Generate dynamic UPDATE command from Expression<Func<T, T>>

    - by Rui Jarimba
    I'm trying to generate an UPDATE command based on Expression trees (for a batch update). Assuming the following UPDATE command: UPDATE Product SET ProductTypeId = 123, ProcessAttempts = ProcessAttempts + 1 For an expression like this: Expression<Func<Product, Product>> updateExpression = entity => new Product() { ProductTypeId = 123, ProcessAttempts = entity.ProcessAttempts + 1 }; How can I generate the SET part of the command? SET ProductTypeId = 123, ProcessAttempts = ProcessAttempts + 1
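
    A rough sketch of one route (Translate here is a hypothetical helper, not an existing API): the lambda body is a MemberInitExpression, and each of its bindings corresponds to one SET assignment:

        // updateExpression.Body is the "new Product { ... }" expression.
        var init = (MemberInitExpression)updateExpression.Body;

        var assignments = init.Bindings
            .Cast<MemberAssignment>()
            .Select(b => b.Member.Name + " = " + Translate(b.Expression));

        string setClause = "SET " + string.Join(", ", assignments);

        // Translate(...) would be a small ExpressionVisitor that renders
        // ConstantExpressions as SQL literals and member access on the lambda
        // parameter (entity.ProcessAttempts) as bare column names.

    For ProductTypeId = 123 the binding expression is a constant; for ProcessAttempts = entity.ProcessAttempts + 1 it is a BinaryExpression whose left side is the parameter member access, which is what makes the visitor necessary.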

    Read the article

  • Rails 3 Abstract Class vs Inherited Class

    - by R. Yanchuleff
    In my Rails 3 app, I have two model classes: Product and Service. I want both to be of type InventoryItem, because I have another model called Store and Store has_many :inventory_items. This is what I'm trying to get to, but I'm not sure how to model it in my InventoryItem model and my Product and Service models. Should InventoryItem just be a parent class that Product and Service inherit from, or should InventoryItem be an abstract class that Product and Service extend? Thanks in advance for the advice!
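
    For the single has_many to return both kinds of records, single-table inheritance is the usual answer (a sketch; it assumes a "type" column on inventory_items). Marking InventoryItem abstract with self.abstract_class = true would instead give each subclass its own table and break the one shared association:

        class InventoryItem < ActiveRecord::Base
          belongs_to :store      # inventory_items table carries a "type" column
        end

        class Product < InventoryItem
        end

        class Service < InventoryItem
        end

        class Store < ActiveRecord::Base
          has_many :inventory_items   # yields Product and Service records alike
        end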

    Read the article

  • sorting array after array_count_values

    - by umermalik
    Hi to all! I have an array of products: $products = array_count_values($products); Now I have an array where the key is the product number and the value is how many times that product appears in the array. I want to sort this new array so that the products with the fewest "duplicates" come first, but whatever I use (rsort, krsort, ...) I lose the product numbers (keys). Any suggestions? Thanks.
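
    The sorts being tried reindex the array; the value sorts that preserve keys are asort/arsort (a sketch):

        $counts = array_count_values($products);
        asort($counts);     // ascending by count, keys (product numbers) preserved
        // arsort($counts); // most-duplicated first, if that ordering is needed instead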

    Read the article

  • Rails saving data to model that has multiple has_many

    - by Ajey
    So I have a Product model that looks like belongs_to :seller has_many :coupons and a Coupon model that looks like belongs_to :seller belongs_to :product In my Products controller I use @seller = current_user @coupon = @seller.coupons.create(params[:coupon]) to create the coupons for the seller. While the coupon is being created, I need to associate it with the product too, i.e. when a new coupon is created it should be saved for the seller AS WELL AS for the product.
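
    One way to do it (a sketch; @product stands in for however the action already looks the product up): build through the seller, then assign the product before saving, so both foreign keys are set on the same row:

        @seller = current_user
        @coupon = @seller.coupons.build(params[:coupon])
        @coupon.product = @product   # hypothetical lookup, e.g. Product.find(params[:product_id])
        @coupon.save

    Using build instead of create avoids writing the row once without the product and then updating it.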

    Read the article

  • Problem with RewriteCond %{QUERY_STRING}, backreference not displaying in final URL

    - by eb_Dev
    Hi, I have the following in my .htaccess file: RewriteCond %{QUERY_STRING} ^route\=product\/category\&path\=35\&page\=([0-9]+)$ RewriteRule ^index.php$ http://%{HTTP_HOST}/product/category/35/page_$1? [R=301,L] It's not behaving as expected though, when I enter the URL: http://mywebsite.com/index.php?route=product/category&path=35&page=2 It gets rewritten to: http://mywebsite.com/product/category/35/page_ Could someone tell me what I have done wrong please? Thanks, eb_dev
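
    The cause is almost certainly the backreference syntax: $1 only refers to groups captured by the RewriteRule pattern, while groups captured in a RewriteCond are referenced with %1. A sketch of the corrected pair:

        RewriteCond %{QUERY_STRING} ^route\=product\/category\&path\=35\&page\=([0-9]+)$
        RewriteRule ^index.php$ http://%{HTTP_HOST}/product/category/35/page_%1? [R=301,L]

    Everything else in the rule can stay as it is; only the $1 changes to %1.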

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks the way it does. It has all the traits whose conformance with requirements can easily be measured, and it's lacking the traits which cannot easily be measured.

    Evolvability (or Changeability) is such a trait. Whether an operation is correct, or fast enough, can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more: without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices that keep it high (enough) at all times.

    Chefs have known this for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess.

    What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance-checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations

    Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature, and then there are the pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like.

    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove, I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on three levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it by tonight before I leave.", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34 hours from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability

    Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material-goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands, and so trust is out of the window in no time.

    If we switch to delivering in short cycles, though, we can regain trust, because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt, promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches in the three sizes.

    Fast feedback

    What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises.

    Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues, because as long as work started is not released (accepted), it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving, or early the day after tomorrow. After acceptance, the developer(s) can start working on the next issue.

    Flexibility

    As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides incremental value to the customer/user, or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion

    Customers/management are used to fixing budgets in advance. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe somewhere in the world there's some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend striving for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.

    With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment, the Product Owner can do an experiment (or invite users to one) to see if a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.

    Pull on practices

    So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect: they exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good-enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

    With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32-hour promises. Spinning thus pulls developers toward adopting principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that, because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours. There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worth feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!" utterances, count the "No way!" utterances.

    In closing

    For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development, but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work with what's there, not what might be possible in the future. But I digress…

    In my view a Spinning cycle - which is not easy to reach, and which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser-known classes and keywords of C#. In this series of posts, we discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns. Last week's post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!). Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    Recap

    As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member. While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much.

    With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. This cuts both ways: you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand, if you just want simple synchronization, it creates more work.

    With .NET 4.0, we get the best of both worlds in generic collections. A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace. These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access. They are not meant to replace the generic collections, but simply to be an alternative to creating your own locking mechanisms.

    Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist. As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()).

    Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see the wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    ConcurrentDictionary – the fully thread-safe dictionary

    The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey,TValue> collection. Obviously, both are designed for quick – O(1) – lookups of data based on a key. If you think of algorithms where you need lightning-fast lookups of data and don't care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go.

    Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively. While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search.

    Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking. Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference. In such cases locking is completely non-productive.

    However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates. This is where the ConcurrentDictionary really shines! It achieves its thread-safety with no common lock, to improve efficiency. It actually uses a series of locks to provide concurrent updates, and has lockless reads! This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have.

    ConcurrentDictionary and Dictionary differences

    For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart, with a few differences. Some notable examples:

    - Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd(). It also means that you can't use a collection initializer with the concurrent dictionary.
    - TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn't already exist, we need an atomic operation to check whether the item exists and, if not, add it while still under an atomic lock.
    - TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be. If all these are true, we can update the item in one atomic step.
    - TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock.
    - AddOrUpdate() was added to attempt a thread-safe "upsert". There are many times when you want to insert into a dictionary if the key doesn't exist, or update the value if it does. This allows you to make a thread-safe add-or-update.
    - GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes you want to query whether an item exists in the cache, and if it doesn't, insert a starting value for it. This allows you to get the value if it exists and insert it if not.
    - The Count, Keys, and Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution.
    - ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked and then copied to an array as an O(n) operation.
    - GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary. The only downside is that, depending on timing, you may get dirty reads.

    Dirty reads during iteration

    The last point on GetEnumerator() bears some explanation. Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated. This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly. The problem is that items you already iterated over that are updated a split second after don't show the update, but items that you iterate over that were updated a split second before do show the update. Thus you may get a combination of items that are "stale" because you iterated before the update, and "fresh" because they were updated after GetEnumerator() but before the iteration reached them.

    Let's illustrate with an example. Let's say you load up a concurrent dictionary like this:

        // load up a dictionary.
        var dictionary = new ConcurrentDictionary<string, int>();

        dictionary["A"] = 1;
        dictionary["B"] = 2;
        dictionary["C"] = 3;
        dictionary["D"] = 4;
        dictionary["E"] = 5;
        dictionary["F"] = 6;

    Then you have one task (using the wonderful TPL!) to iterate using dirty reads:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates using a dirty read
            foreach (var pair in dictionary)
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    And one task to attempt updates in a separate thread (probably):

        // attempt updates in a separate thread
        var updateTask = new Task(() =>
        {
            // iterates, and updates the value by one
            foreach (var pair in dictionary)
            {
                dictionary[pair.Key] = pair.Value + 1;
            }
        });

    Now that we've done this, we can fire up both tasks and wait for them to complete:

        // start both tasks
        updateTask.Start();
        iterationTask.Start();

        // wait for both to complete.
        Task.WaitAll(updateTask, iterationTask);

    Now, if you didn't know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6). However, because the reads are dirty, we will quite possibly get a combination of some updated, some original. My own run netted this result:

        F:6
        E:6
        D:5
        C:4
        B:3
        A:2

    Note that, of course, iteration is not in order, because ConcurrentDictionary, like Dictionary, is unordered. Also note that both E and F show the value 6. This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively).

    If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates over a locked snapshot instead of a dirty read
            foreach (var pair in dictionary.ToArray())
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    The atomic Try…() methods

    As you can imagine, TryAdd() and TryRemove() have few surprises. Both first check the existence of the item to determine whether it can be added or removed, based on whether or not the key currently exists in the dictionary:

        // try add attempts an add and returns false if it already exists
        if (dictionary.TryAdd("G", 7))
            Console.WriteLine("G did not exist, now inserted with 7");
        else
            Console.WriteLine("G already existed, insert failed.");

    TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key:

        // attempt to remove the value, if it exists it is removed and the original is returned
        int removedValue;
        if (dictionary.TryRemove("C", out removedValue))
            Console.WriteLine("Removed C and its value was " + removedValue);
        else
            Console.WriteLine("C did not exist, remove failed.");

    Now TryUpdate() is an interesting creature. You might think from its name that TryUpdate() first checks for an item's existence and then updates if the item exists, otherwise returning false. Well, not quite... It turns out that when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update. If the item exists in the dictionary and it has the value you expected, it will update it to the new value atomically and return true. If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned.

        // attempt to update the value, if it exists and if it has the expected original value
        if (dictionary.TryUpdate("G", 42, 7))
            Console.WriteLine("G existed and was 7, now it's 42.");
        else
            Console.WriteLine("G either didn't exist, or wasn't 7.");

    The composite Add methods

    The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get.

    The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn't exist, or update the existing item if it does. For example, let's say you are creating a dictionary of counts of stock ticker symbols you've subscribed to from a market data feed:

        public sealed class SubscriptionManager
        {
            private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();

            // adds a new subscription, or increments the count of the existing one.
            public void AddSubscription(string tickerKey)
            {
                // add a new subscription with count of 1, or update existing count by 1 if exists
                var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);

                // now check the result to see if we just incremented the count, or inserted first count
                if (resultCount == 1)
                {
                    // subscribe to symbol...
                }
            }
        }

    Notice the update value factory Func delegate. If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary. The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated).

    Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value. This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn't you would put the item into the cache for the first time:

        public sealed class PriceCache
        {
            private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();

            // returns the cached price, computing and caching it on a miss.
            public double QueryPrice(string tickerKey)
            {
                // check for the price in the cache, if it doesn't exist it will call the delegate to create value.
                return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
            }

            private double GetCurrentPrice(string tickerKey)
            {
                // do code to calculate actual true price.
            }
        }

    There are other variations of these two methods, which vary whether a value is provided or a factory delegate, but otherwise they work much the same.

    Oddities with the composite Add methods

    The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic. It is important to note that the methods that use delegates execute those delegates outside of the lock. This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads.

    This is not necessarily an issue, per se, but it is something you must consider in your design. The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned! Consider this scenario: thread A calls GetOrAdd() and sees that the key does not currently exist, so it calls the delegate. Now thread B also calls GetOrAdd() and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary. Now A is done and goes to get the lock, and sees that the item now exists. In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and the time it attempted to add it.

    Let's illustrate. Assume this totally contrived example program, which has a dictionary of char to int. In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc.). So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked):

        public static class Program
        {
            private static int _nextNumber = 0;

            // the holder of the char to ordinal
            private static ConcurrentDictionary<char, int> _dictionary
                = new ConcurrentDictionary<char, int>();

            // get the next id value
            public static int NextId
            {
                get { return Interlocked.Increment(ref _nextNumber); }
            }

    Then we add a method that will perform our insert:

            public static void Inserter()
            {
                for (int i = 0; i < 26; i++)
                {
                    _dictionary.GetOrAdd((char)('A' + i), key => NextId);
                }
            }

    Finally, we run our test by starting two tasks to do this work and get the results:

            public static void Main()
            {
                // 2 tasks attempting to get/insert
                var tasks = new List<Task>
                {
                    new Task(Inserter),
                    new Task(Inserter)
                };

                tasks.ForEach(t => t.Start());
                Task.WaitAll(tasks.ToArray());

                foreach (var pair in _dictionary.OrderBy(p => p.Key))
                {
                    Console.WriteLine(pair.Key + ":" + pair.Value);
                }
            }
        }

    If you run this with only one task, you get the expected A:1, B:2, ..., Z:26. But running it in parallel you will get something a bit more complex. My run netted these results:

        A:1
        B:3
        C:4
        D:5
        E:6
        F:7
        G:8
        H:9
        I:10
        J:11
        K:12
        L:13
        M:14
        N:15
        O:16
        P:17
        Q:18
        R:19
        S:20
        T:21
        U:22
        V:23
        W:24
        X:25
        Y:26
        Z:27

    Notice that B is 3? This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist; thus they both called the generator, and one thread got back 2 and the other got back 3. However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won: the 3 was inserted and the 2 got discarded.

    This is why, on these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible.

    Summary

    The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.

    Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare

    Read the article

  • Trying to update debian not working

    - by Sean
    As root I type the command apt-get update and get these error messages:

        Err http://security.debian.org lenny/updates Release.gpg Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/main Translation-en_US Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/contrib Translation-en_US Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/non-free Translation-en_US Could not resolve 'security.debian.org'
        Err http://www.backports.org lenny-backports Release.gpg Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/main Translation-en_US Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/contrib Translation-en_US Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/non-free Translation-en_US Could not resolve 'www.backports.org'
        Err http://ftp.us.debian.org lenny Release.gpg Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/main Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/contrib Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/non-free Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://http.us.debian.org stable Release.gpg Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/main Translation-en_US Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/contrib Translation-en_US Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/non-free Translation-en_US Could not resolve 'http.us.debian.org'
        Reading package lists... Done
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Some index files failed to download, they have been ignored, or old ones used instead.
        W: You may want to run apt-get update to correct these problems

    This is on a DreamPlug Linux server, configured so that my network starts at 192.168.1.2 and my router port-forwards SSH to the server at 192.168.1.6.
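
    Every failure above is a DNS lookup error ("Could not resolve"), not an apt problem. A quick check worth running first (a sketch; the hostname is taken from the log):

        # If this fails while pinging a raw IP works, the box has no working
        # DNS resolver configured and the apt sources themselves are fine.
        nslookup security.debian.org
        cat /etc/resolv.conf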

    Read the article

  • Trouble with DNS and Debian update

    - by Sean
    I tried to update my Debian DreamPlug server with the command (running as root) apt-get update and received these errors:

        Err http://security.debian.org lenny/updates Release.gpg Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/main Translation-en_US Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/contrib Translation-en_US Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/non-free Translation-en_US Could not resolve 'security.debian.org'
        Err http://www.backports.org lenny-backports Release.gpg Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/main Translation-en_US Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/contrib Translation-en_US Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/non-free Translation-en_US Could not resolve 'www.backports.org'
        Err http://ftp.us.debian.org lenny Release.gpg Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/main Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/contrib Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/non-free Translation-en_US Could not resolve 'ftp.us.debian.org'
        Err http://http.us.debian.org stable Release.gpg Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/main Translation-en_US Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/contrib Translation-en_US Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/non-free Translation-en_US Could not resolve 'http.us.debian.org'
        Reading package lists... Done
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Some index files failed to download, they have been ignored, or old ones used instead.
        W: You may want to run apt-get update to correct these problems

    I am able to ping IP addresses but not hostnames, and can't seem to figure out the problem. My /etc/resolv.conf file contains nameserver 192.168.1.2, which is my router.
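
    Since IPs ping but hostnames don't, the router at 192.168.1.2 apparently isn't answering DNS queries. A possible test (an assumption on my part, not from the thread): temporarily point resolv.conf at a resolver known to work and retry:

        # e.g. your ISP's resolver, or a public one such as 8.8.8.8
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf
        ping -c 1 ftp.us.debian.org && apt-get update

    If that resolves, either leave a working nameserver in resolv.conf or fix DNS forwarding on the router.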

    Read the article

  • How to install Oracle Weblogic Server using OS-specific Package installer?(Linux/Solaris)

    - by PratikS -- Oracle
    Note: OS-specific Package installer As the name suggests the installer is platform specific. It is meant for installation with a 32bit JVM only. Both SUN and JROCKIT 32 bit JDKs come bundled with "OS-specific Package installer", so no need to install the JDK in advance. There are three different ways of installing Oracle Weblogic Server: Graphical mode Console mode Silent mode For Linux/Solaris: Steps to install OS-specific Package .bin installer(for Linux/Solaris) are almost same as windows except for the way we launch the installation.Installer: wls_<version>_<linux/solaris>32.bin (E.g. wls1036_linux32.bin/wls1036_solaris32.bin) 1) Graphical mode: Log in to the target UNIX system. Go to the directory that contains the installation program.(Make sure GUI is enabled or else it will default to console mode) Launch the installation by entering the following commands: [weblogic@pratik ~]$ pwd/home/oracle[weblogic@pratik ~]$ cd WLSInstallers/[weblogic@pratik WLSInstallers]$ ls -ltrtotal 851512-rw-rw-r-- 1 oracle oracle 871091023 Dec 22  2011 wls1036_linux32.bin[weblogic@pratik WLSInstallers]$ chmod a+x wls1036_linux32.bin[weblogic@pratik WLSInstallers]$ ls -ltrtotal 851512-rwxrwxr-x 1 oracle oracle 871091023 Dec 22  2011 wls1036_linux32.bin[weblogic@pratik WLSInstallers]$ ./wls1036_linux32.bin As soon as you run ./wls1036_linux32.bin with GUI enabled you would see the following screen: Rest of the screens and steps are similar to that of Graphical mode installation on windows, refer: How to install Oracle Weblogic Server using OS-specific Package installer?(Windows) 2) Console mode: Log in to the target UNIX system. Go to the directory that contains the installation program. Launch the installation by entering the following commands: [weblogic@pratik ~]$ pwd/home/oracle[weblogic@pratik ~]$ cd WLSInstallers/[weblogic@pratik WLSInstallers]$ ls -ltrtotal 851512-rw-rw-r-- 1 weblogic weblogic 871091023 Dec 22  2011 wls1036_linux32.bin[weblogic@pratik WLSInstallers]$ chmod a+x wls1036_linux32.bin[weblogic@pratik WLSInstallers]$ ls -ltrtotal 851512-rwxrwxr-x 1 weblogic weblogic 871091023 Dec 22  2011 wls1036_linux32.bin [weblogic@pratik WLSInstallers]$ ./wls1036_linux32.bin -mode=consoleExtracting 0%....................................................................................................100%<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Welcome:--------This installer will guide you through the installation of WebLogic 10.3.6.0.Type "Next" or enter to proceed to the next prompt.  If you want to change data entered previously, type "Previous".  
You may quit the installer at any time by typing "Exit".Enter [Exit][Next]> Next<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Choose Middleware Home Directory:--------------------------------- ->1|* Create a new Middleware Home   2|/home/oracle/wls_12cEnter index number to select OR [Exit][Previous][Next]> Next<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Choose Middleware Home Directory:---------------------------------    "Middleware Home" = [Enter new value or use default"/home/oracle/Oracle/Middleware"]Enter new Middleware Home OR [Exit][Previous][Next]> /home/oracle/WLS1036<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Choose Middleware Home Directory:---------------------------------    "Middleware Home" = [/home/oracle/WLS1036]Use above value or select another option:    1 - Enter new Middleware Home    2 - Change to default [/home/oracle/Oracle/Middleware]Enter option number to select OR [Exit][Previous][Next]> Next<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Register for Security Updates:------------------------------Provide your email address for security updates and  to initiate configuration manager.   1|Email:[]   2|Support Password:[]   3|Receive Security Update:[Yes]Enter index number to select OR [Exit][Previous][Next]> 3<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Register for Security Updates:------------------------------Provide your email address for security updates and  to initiate configuration manager.    "Receive Security Update:" = [Enter new value or use default "Yes"]Enter [Yes][No]? No<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Register for Security Updates:------------------------------Provide your email address for security updates and  to initiate configuration manager.    "Receive Security Update:" = [Enter new value or use default "Yes"]    ** Do you wish to bypass initiation of the configuration manager and    **  remain uninformed of critical security issues in your configuration?Enter [Yes][No]? Yes<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Register for Security Updates:------------------------------Provide your email address for security updates and  to initiate configuration manager.   1|Email:[]   2|Support Password:[]   3|Receive Security Update:[No]Enter index number to select OR [Exit][Previous][Next]>Next<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Register for Security Updates:------------------------------Provide your email address for security updates and  to initiate configuration manager.   1|Email:[]   2|Support Password:[]   3|Receive Security Update:[No]Enter index number to select OR [Exit][Previous][Next]> Next<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->Choose Install Type:--------------------Select the type of installation you wish to perform. 
 ->1|Typical
    |  Install the following product(s) and component(s):
    |  - WebLogic Server
    |  - Oracle Coherence
   2|Custom
    |  Choose software products and components to install and perform optional
    |  configuration.
Enter index number to select OR [Exit][Previous][Next]> Next
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Choose Product Installation Directories:
----------------------------------------
Middleware Home Directory: [/home/oracle/WLS1036]
Product Installation Directories:
   1|WebLogic Server: [/home/oracle/WLS1036/wlserver_10.3]
   2|Oracle Coherence: [/home/oracle/WLS1036/coherence_3.7]
Enter index number to select OR [Exit][Previous][Next]> Next
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
The following Products and JDKs will be installed:
--------------------------------------------------
    WebLogic Platform 10.3.6.0
    |_____WebLogic Server
    |    |_____Core Application Server
    |    |_____Administration Console
    |    |_____Configuration Wizard and Upgrade Framework
    |    |_____Web 2.0 HTTP Pub-Sub Server
    |    |_____WebLogic SCA
    |    |_____WebLogic JDBC Drivers
    |    |_____Third Party JDBC Drivers
    |    |_____WebLogic Server Clients
    |    |_____WebLogic Web Server Plugins
    |    |_____UDDI and Xquery Support
    |    |_____Evaluation Database
    |_____Oracle Coherence
    |    |_____Coherence Product Files
    |_____JDKs
         |_____SUN SDK 1.6.0_29
         |_____Oracle JRockit 1.6.0_29 SDK
    *Estimated size of installation: 1,276.0 MB
Enter [Exit][Previous][Next]> Next
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Installing files..
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Installing JDK....
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
Performing String Substitutions...
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Configuring OCM...
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
Creating Domains...
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Installation Complete
Congratulations! Installation is complete.
Press [Enter] to continue or type [Exit]>
[weblogic@pratik ~]$
Note: the values typed after each prompt (Next, /home/oracle/WLS1036, 3, No, Yes) are the user inputs. 3) Silent mode:
1) Log in to the target UNIX system.
2) Create a silent.xml file that defines the configuration settings normally entered by a user during an interactive installation process, such as graphical-mode or console-mode installation:
<?xml version="1.0" encoding="UTF-8"?>
<bea-installer>
    <input-fields>
        <data-value name="BEAHOME" value="/home/oracle/WLS1036" />
        <data-value name="WLS_INSTALL_DIR" value="/home/oracle/WLS1036/wlserver_10.3" />
        <data-value name="COMPONENT_PATHS" value="WebLogic Server|Oracle Coherence" />
    </input-fields>
</bea-installer>
Note: this sample silent.xml file is used to install all the components of WebLogic Server and Oracle Coherence. The data-value entries are the variables to adjust for your environment.
3) Place the silent.xml file in the same directory as the WebLogic Server Package installer.
4) Go to the directory that contains the installation program.
5) Start the installer as follows:
[weblogic@pratik WLSInstallers]$ chmod a+x wls1036_linux32.bin
[weblogic@pratik WLSInstallers]$ ls -ltr
total 851516
-rwxrwxr-x 1 weblogic weblogic 871091023 Dec 22  2011 wls1036_linux32.bin
-rw-rw-r-- 1 weblogic weblogic       331 Jul  5 03:48 silent.xml
[weblogic@pratik WLSInstallers]$ cat silent.xml
<?xml version="1.0" encoding="UTF-8"?>
<bea-installer>
        <input-fields>
                <data-value name="BEAHOME" value="/home/oracle/WLS1036" />
                <data-value name="WLS_INSTALL_DIR" value="/home/oracle/WLS1036/wlserver_10.3" />
                <data-value name="COMPONENT_PATHS" value="WebLogic Server|Oracle Coherence" />
        </input-fields>
</bea-installer>
[weblogic@pratik WLSInstallers]$ ./wls1036_linux32.bin -mode=silent -silent_xml=/home/oracle/WLSInstallers/silent.xml -log=/home/oracle/WLSInstallers/install.log
Extracting 0%....................................................................................................100%
[weblogic@pratik WLSInstallers]$
The -log=/home/oracle/WLSInstallers/install.log option creates an installation log (install.log) under "/home/oracle/WLSInstallers/". When the installation completes you will see the following printed in the log file:
2012-07-05 03:59:36,788 INFO  [WizardController] com.bea.plateng.wizard.silent.tasks.LogTask - The installation was successfull!
For other configurable values in silent.xml, refer to: Values for the Sample silent.xml File for WebLogic Server
Important links to refer to:
Running the Installation Program in Graphical Mode
Running the Installation Program in Console Mode
Running the Installation Program in Silent Mode

    Read the article

  • creating objects in list

    - by prince23
    List<Users> myList = new List<Users>
    {
        new Users{ Name="Kumar", Gender="M", Age=75, Parent="All"},
        new Users{ Name="Suresh", Gender="M", Age=50, Parent="Kumar"},
        new Users{ Name="Bennette", Gender="F", Age=45, Parent="Kumar"},
        new Users{ Name="kian", Gender="M", Age=20, Parent="Suresh"},
        new Users{ Name="Nathani", Gender="M", Age=15, Parent="Suresh"},
        new Users{ Name="Peter", Gender="M", Age=90, Parent="All"},
        new Users{ Name="Mica", Age=56, Gender="M", Parent="Peter"},
        new Users{ Name="Linderman", Gender="M", Age=51, Parent="Peter"},
        new Users{ Name="john", Age=25, Gender="M", Parent="Mica"},
        new Users{ Name="tom", Gender="M", Age=21, Parent="Mica"},
        new Users{ Name="Ando", Age=64, Gender="M", Parent="All"},
        new Users{ Name="Maya", Age=24, Gender="M", Parent="Ando"},
        new Users{ Name="Niki Sanders", Gender="F", Age=2, Parent="Maya"},
        new Users{ Name="Angela Patrelli", Gender="F", Age=3, Parent="Maya"},
    };
    Now I need to format this data: for each entry I need to check the Parent value and create child objects under the matching parent object. For example, new Users{ Name="Kumar", Gender="M", Age=75, Parent="All" } is a top-level item because its Parent property is "All". Under the parent object Kumar I then need to create new objects for the entries whose Parent is "Kumar":
    new Users{ Name="Suresh", Gender="M", Age=50, Parent="Kumar"},
    new Users{ Name="Bennette", Gender="F", Age=45, Parent="Kumar"},
    So what I need to do is check each Parent and create further objects under it. I want to achieve something like the sample below and am looking for the syntax to do it.
    public class SampleProjectData
    {
        public static ObservableCollection<Product> GetSampleData()
        {
            DateTime dtS = DateTime.Now;
            ObservableCollection<Product> teams = new ObservableCollection<Product>();
            teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3), });
            Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) };
            emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
            teams[0].Projects.Add(emp);
            emp = new Project() { PName = "Project2", OverallStartTime = dtS + TimeSpan.FromDays(1.5), OverallEndTime = dtS + TimeSpan.FromDays(5.5) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Victor's Task" });
            teams[0].Projects.Add(emp);
            emp = new Project() { PName = "Project3", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(5) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Jason's Task 1" });
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(7), EndTime = dtS + TimeSpan.FromDays(9), TaskName = "Jason's Task 2" });
            teams[0].Projects.Add(emp);
            teams.Add(new Product() { PDName = "Product2", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
            emp = new Project() { PName = "Project4", OverallStartTime = dtS + TimeSpan.FromDays(0.5), OverallEndTime = dtS + TimeSpan.FromDays(3.5) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Vicky's Task" });
            teams[1].Projects.Add(emp);
            emp = new Project() { PName = "Project5", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2.2), EndTime = dtS + TimeSpan.FromDays(3.8), TaskName = "Oleg's Task 1" });
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(6), TaskName = "Oleg's Task 2" });
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(8), EndTime = dtS + TimeSpan.FromDays(9.6), TaskName = "Oleg's Task 3" });
            teams[1].Projects.Add(emp);
            emp = new Project() { PName = "Project6", OverallStartTime = dtS + TimeSpan.FromDays(2.5), OverallEndTime = dtS + TimeSpan.FromDays(4.5) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(0.8), EndTime = dtS + TimeSpan.FromDays(2), TaskName = "Kim's Task" });
            teams[1].Projects.Add(emp);
            teams.Add(new Product() { PDName = "Product3", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
            emp = new Project() { PName = "Project7", OverallStartTime = dtS + TimeSpan.FromDays(5), OverallEndTime = dtS + TimeSpan.FromDays(7.5) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Balaji's Task 1" });
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(8), TaskName = "Balaji's Task 2" });
            teams[2].Projects.Add(emp);
            emp = new Project() { PName = "Project8", OverallStartTime = dtS + TimeSpan.FromDays(3), OverallEndTime = dtS + TimeSpan.FromDays(6.3) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.75), EndTime = dtS + TimeSpan.FromDays(2.25), TaskName = "Li's Task" });
            teams[2].Projects.Add(emp);
            emp = new Project() { PName = "Project9", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
            emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2), EndTime = dtS + TimeSpan.FromDays(3), TaskName = "Stacy's Task" });
            teams[2].Projects.Add(emp);
            return teams;
        }
    }
    The above is sample data that I group from static values. I need to do the same for data coming from the database and store it in lists. The data for the three tables (Product, Project and Task) comes from three different web services, and I store each result in its own list:
    List<Project> objpro = new List<Project>();
    List<Product> objproduct = new List<Product>();
    List<Task> objTask = new List<Task>();
    Now I need to do the mapping between these tables. As shown above, under each Product object I add Project objects, and under each Project object I add Task objects. From the flat data stored in these lists I need to build the same mapping between the classes and group the data; a sketch of the grouping I am after appears below, after the class definitions.
    public class Product : INotifyPropertyChanged
    {
        public Product() { this.Projects = new ObservableCollection<Project>(); }

        public string PDName { get; set; }
        public ObservableCollection<Project> Projects { get; set; }

        private DateTime _st;
        public DateTime OverallStartTime
        {
            get { return _st; }
            set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("OverallStartTime"); this.OverallEndTime = value + dur; } }
        }

        private DateTime _et;
        public DateTime OverallEndTime
        {
            get { return _et; }
            set { if (this._et != value) { this._et = value; this.OnPropertyChanged("OverallEndTime"); } }
        }

        #region INotifyPropertyChanged Members
        protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); }
        public event PropertyChangedEventHandler PropertyChanged;
        #endregion
    }

    public class Project : INotifyPropertyChanged
    {
        public Project() { this.Tasks = new ObservableCollection<Task>(); }

        public string PName { get; set; }
        public ObservableCollection<Task> Tasks { get; set; }

        DateTime _st;
        public DateTime OverallStartTime
        {
            get { return _st; }
            set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("OverallStartTime"); this.OverallEndTime = value + dur; } }
        }

        DateTime _et;
        public DateTime OverallEndTime
        {
            get { return _et; }
            set { if (this._et != value) { this._et = value; this.OnPropertyChanged("OverallEndTime"); } }
        }

        #region INotifyPropertyChanged Members
        protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); }
        public event PropertyChangedEventHandler PropertyChanged;
        #endregion
    }

    public class Task : INotifyPropertyChanged
    {
        public string TaskName { get; set; }

        DateTime _st;
        public DateTime StartTime
        {
            get { return _st; }
            set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("StartTime"); this.EndTime = value + dur; } }
        }

        private DateTime _et;
        public DateTime EndTime
        {
            get { return _et; }
            set { if (this._et != value) { this._et = value; this.OnPropertyChanged("EndTime"); } }
        }

        #region INotifyPropertyChanged Members
        protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); }
        public event PropertyChangedEventHandler PropertyChanged;
        #endregion
    }
    I hope my question is clear.
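    To make the ask concrete, here is a minimal sketch of the grouping step I am after. The ProductName and ProjectName key properties are hypothetical placeholders for whatever parent keys my services actually return (they are not in the classes above); the other types are the classes defined in this question.
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Linq;

    public static class HierarchyBuilder
    {
        // Builds the Product -> Project -> Task tree from three flat lists.
        // Assumes each Project record carries a ProductName key and each Task
        // record a ProjectName key (hypothetical foreign-key properties).
        public static ObservableCollection<Product> Build(
            List<Product> products, List<Project> projects, List<Task> tasks)
        {
            var result = new ObservableCollection<Product>(products);
            foreach (var product in result)
            {
                // Attach every project whose key matches this product...
                foreach (var project in projects.Where(p => p.ProductName == product.PDName))
                {
                    // ...and every task whose key matches that project.
                    foreach (var task in tasks.Where(t => t.ProjectName == project.PName))
                        project.Tasks.Add(task);
                    product.Projects.Add(project);
                }
            }
            return result;
        }
    }
    With the three lists from the services this would be called as var teams = HierarchyBuilder.Build(objproduct, objpro, objTask); the same Parent-matching idea applies to the flat Users list at the top of the question.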

    Read the article

  • Can't bind data to a table view

    - by sudhakarilla
    Hi, I have retrieved data from a JSON URL and displayed it in a table view. I have also included a button in the table view. On tapping the button the data must be transferred to another table view. The problem is that I could send the data to a view and display it on a label, but I couldn't bind the data to the table view. Here are some of the code snippets...
    Buy button:
    -(IBAction)Buybutton{
        /* UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"thank u" message:@"products" delegate:nil cancelButtonTitle:@"ok" otherButtonTitles:nil];
        [alert show];
        [alert release]; */
        Product *selectedProduct = [[data products] objectAtIndex:0];
        CartViewController *cartviewcontroller = [[[CartViewController alloc] initWithNibName:@"CartViewController" bundle:nil] autorelease];
        cartviewcontroller.product = selectedProduct;
        //NSString *productname = [product ProductName];
        //[currentproducts setproduct:productname];
        [self.view addSubview:cartviewcontroller.view];
    }
    Cart view:
    // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
    - (void)viewDidLoad {
        [super viewDidLoad];
        data = [GlobalData SharedData];
        NSMutableArray *prod = [[NSMutableArray alloc] init];
        prod = [data products];
        for (NSDictionary *product in prod) {
            Cart *myprod = [[Cart alloc] init];
            myprod.Description = [product Description];
            myprod.ProductImage = [product ProductImage];
            myprod.ProductName = [product ProductName];
            myprod.SalePrice = [product SalePrice];
            [data.carts addObject:myprod];
            [myprod release];
        }
        Cart *cart = [[data carts] objectAtIndex:0];
        NSString *productname = [cart ProductName];
        self.label.text = productname;
        NSLog(@"carts");
    }
    - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
        return 1;
    }
    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        return [data.carts count];
    }
    - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
        return 75;
    }
    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        NSLog(@"cellforrow");
        static NSString *CellIdentifier = @"Cell";
        ProductCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[[ProductCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
        }
        NSUInteger row = [indexPath row];
        Cart *cart = [[data carts] objectAtIndex:row];
        cell.productNameLabel.text = [cart ProductName];
        /* NSString *sale = [[NSString alloc] initWithFormat:@"SalePrice:%@", [cart SalePrice]];
        cell.salePriceLabel.text = sale;
        cell.DescriptionLabel.text = [cart Description];
        NSMutableString *imageUrl = [NSMutableString string];
        [imageUrl appendFormat:@"http://demo.s2commerce.net/DesktopModules/S2Commerce/Images/Products/%@", [product ProductImage]];
        NSLog(@"imageurl:%@", imageUrl);
        NSString *mapURL = [imageUrl stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
        NSData *imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:mapURL]];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        cell.productImageview.image = image;
        [imageData release];
        [image release]; */
        return cell;
    }
    I am also getting the following error in the console:
    2010-06-11 18:34:29.169 navigation[4109:207] *** -[CartViewController tableView:numberOfRowsInSection:]: message sent to deallocated instance 0xcb4d4f90

    Read the article

  • Need help to properly remove duplicates in NHibernate

    - by Michael D. Kirkpatrick
    Here is the problem I am having. I have a database with over 100 records in it. I am paging through the data to get 9 results at a time. When I added a check to see if items are active, the results started doubling up. A little background:
    "Product" is the actual product line
    "ProductSkus" are the actual products that exist in the product line
    When there is more than one ProductSku within a Product, a duplicate entry is returned. See the NHibernate query below:
    result = this.Session.CreateCriteria<Model.Product>()
        .Add(Expression.Eq("IsActive", true))
        .AddOrder(new Order("Name", true))
        .SetFirstResult(indexNumber).SetMaxResults(maxNumber)
        // This part of the query duplicates the products
        .CreateAlias("ProductSkus", "ProdSkus", JoinType.InnerJoin)
        .Add(Expression.Eq("ProdSkus.IsActive", true))
        .CreateAlias("ProductToSubcategory", "ProdToSubcat")
        .CreateAlias("ProdToSubcat.ProductSubcategory", "ProdSubcat")
        .Add(Expression.Eq("ProdSubcat.ID", subCatId))
        // This part takes out the duplicate products, but removes too many items.
        // It turns out that with .SetFirstResult(indexNumber).SetMaxResults(maxNumber)
        // it gets 9 records back and then the duplicates are removed.
        // Example:
        //   Total records: over 100
        //   Max = 9
        //   4 duplicates removed
        //   Yields 5 records when there should be 9
        // Why??? This line is run by NHibernate on the data after it has been extracted from the SQL server.
        .SetResultTransformer(new NHibernate.Transform.DistinctRootEntityResultTransformer())
        .List<Model.Product>();
    I added the DistinctRootEntityResultTransformer to clean up the duplicates. The problem is that the query pulls 9 records back that contain duplicates, and DistinctRootEntityResultTransformer then cleans up the duplicates within those 9 records. What I basically need is a DISTINCT statement run on the SQL server to begin with. However, DISTINCT on SQL is not going to work as-is, since NHibernate by default wants to add every field from every joined table to the select part of the statement, and I am only using the fields that belong to the root table (Model.Product). If I could tell NHibernate not to add the joined tables' fields to the select part of the statement, and to add DISTINCT, it would work.
I use NHibernare Profiler to see the actual query: SELECT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, prodskus1_.ID as ID373_0_, prodskus1_.Description as Descript2_373_0_, prodskus1_.PartNumber as PartNumber373_0_, prodskus1_.Price as Price373_0_, prodskus1_.IsKit as IsKit373_0_, prodskus1_.IsActive as IsActive373_0_, prodskus1_.IsFeaturedProduct as IsFeatur7_373_0_, prodskus1_.DateAdded as DateAdded373_0_, prodskus1_.Weight as Weight373_0_, prodskus1_.TimesViewed as TimesVi10_373_0_, prodskus1_.TimesOrdered as TimesOr11_373_0_, prodskus1_.ProductID as ProductID373_0_, prodskus1_.OverSizedBoxID as OverSiz13_373_0_, prodtosubc2_.ID as ID362_1_, prodtosubc2_.MasterSubcategory as MasterSu2_362_1_, prodtosubc2_.ProductID as ProductID362_1_, prodtosubc2_.ProductSubcategoryID as ProductS4_362_1_, prodsubcat3_.ID as ID352_2_, prodsubcat3_.Name as Name352_2_, prodsubcat3_.ProductCategoryID as ProductC3_352_2_, prodsubcat3_.ImageID as ImageID352_2_, prodsubcat3_.TriggerShow as TriggerS5_352_2_ FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc If I hand modify the query and run it directly on the SQL server I get the result set I want (I removed all the extra fields in the select section and added DISTINCT): SELECT DISTINCT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, FROM Product this_ inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1) inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */ ORDER BY this_.Name asc The big question I now must ask is... What must I change in the NHibernate Query to ultimately get the exact same result? Thanks in advance.

    Read the article

  • Reference Data Management

    - by rahulkamath
    Reference Data Management
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to
enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
What is reference data?
Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets (including master data) and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
Types & Codes | Taxonomies | Relationships / Mappings | Standards
Transaction Codes | Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601)
Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates
Why manage reference data?
Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
Sample Use Cases of Reference Data Management
Healthcare: Diagnostic Codes
The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
Spend Management: Product, Service & Supplier Codes
Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
Specialty Finance: Point-of-Sale Transaction Codes and Product Codes
In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.
Multi-National Companies: Industry Classification Schemes
As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must also invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and to assess which applications were impacted by a given change.

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some seemingly required tidbits. The contributions which I made were:
    To support StructureMap
    To include REST (JSON and XML) support for the service contract
    The examples which I have made, I want to format so they fit in with the current format of examples over on Agatha, and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense. I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code.
    Setting the scene
    I have followed the Pipes and Filters pattern with the IQueryable interface on my Repository and exposed the following methods to query Products:
    IQueryable<Product> GetProducts();
    IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName)
    Product ByProductCode(this IQueryable<Product> products, String productCode)
    I have an interface for the IProductRepository, but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data. Another good reason for following an interface-based approach is that it demonstrates usage of my first contribution, the StructureMap support. Finally, the two domain objects I have made are Product and Category, as shown below:
    public class Product
    {
        public String ProductCode { get; set; }
        public String Name { get; set; }
        public Decimal Price { get; set; }
        public Decimal Rrp { get; set; }
        public Category Category { get; set; }
    }

    public class Category
    {
        public String Name { get; set; }
    }
    Requirements for the REST Support
    One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model attributes like DataContract, DataMember etc. Unfortunately, from what I have seen, these are required if you want the same types to work with your REST endpoint. I have not tried, but I assume the same result can be achieved by simply decorating the same classes with the Serializable attribute. Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following attribute parameters:
    Name
    Namespace
    e.g.
    [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")]
    public class GetProductsRequest : Request
    {
    }
    Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the JSON. One of the things which you already know, and are then reminded of, is that each of your Requests and Responses ultimately inherits from an abstract base class, Request or Response respectively. This information needs to be represented in a way native to the format being used. I had seen this for XML, but I had not seen the format which is required for the JSON.
    JSON Consumer Example
    I have used jQuery to create the example, and I simply want to make two requests to the server, which as you will know with Agatha are transmitted inside an array to reduce the service calls.
I have also used a tool called json2, which is again over at Google Code, simply to convert my JSON expression into its string format for transmission. You will notice that I specify the type of each Request and the relevant Namespace it belongs to. Also notice that the second request has a parameter, so each of these two objects represents an abstract Request, with the parameters of the object describing it.
<script type="text/javascript">
    var bodyContent = $.ajax({
        url: "http://localhost:50348/service.svc/json/processjsonrequests",
        global: false,
        contentType: "application/json; charset=utf-8",
        type: "POST",
        processData: true,
        data: JSON.stringify([
            { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" },
            { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" }
        ]),
        dataType: "json",
        success: function(msg) {
            alert(msg);
        }
    }).responseText;
</script>
XML Consumer Example
For the XML consumer example I have chosen to use a simple console application and make a WebRequest to the service, using the XML below as the request. I have made a crude static method which simply reads from an XML file, replaces some values with a parameter and returns the formatted XML. I say crude, but it shows how XML templates could be made for each type of Request, with a wrapper utility in whatever language you use combining the requests which are required. The following XML is the same Request array as shown above, simply in the XML format.
<?xml version="1.0" encoding="utf-8" ?>
<ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/>
  <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests">
    <a:CategoryName>{CategoryName}</a:CategoryName>
  </Request>
</ArrayOfRequest>
It is funny, because I remember submitting a question to StackOverflow asking whether there was a REST client generation tool similar to what Microsoft used for their RestStarterKit, but which could be applied to existing services which have REST endpoints attached. I could not find any, but this is now definitely something which I am going to build, as I think it is extremely useful to have, and it should not be too difficult based on the information I now know about the above. Finally, I thought that the Strategy pattern would lend itself really well to this type of thing, so it can accommodate different languages; a rough sketch of that idea follows the console app code below. I think that is about it. I have included the code for the example console app which I made below, in case anyone wants to have a mooch at the code.
As I said above, I want to reformat these to fit in with the current examples over on the Agatha project, but also, now thinking about it, make a Documentation web method… {brain ticking} :-)
Cheers for now, and here is the final bit of code:
static void Main(string[] args)
{
    var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests");
    request.Method = "POST";
    request.ContentType = "text/xml";
    using (var writer = new StreamWriter(request.GetRequestStream()))
    {
        writer.WriteLine(GetExampleRequestsString("Category1"));
    }
    var response = request.GetResponse();
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
    Console.ReadLine();
}

static string GetExampleRequestsString(string categoryName)
{
    var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml"));
    data = data.Replace("{CategoryName}", categoryName);
    return data;
}
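As a footnote to the Strategy pattern idea above, here is a rough sketch of what a format-per-strategy shape could look like. None of this is part of Agatha; the interface and class names are invented for illustration, and the XML/JSON envelopes simply mirror the two payload formats shown earlier.
using System;

// One strategy per wire format; the consumer picks one and stays format-agnostic.
public interface IRequestPayloadStrategy
{
    string ContentType { get; }
    string BuildPayload(string[] requestFragments);
}

public class XmlRequestPayloadStrategy : IRequestPayloadStrategy
{
    public string ContentType { get { return "text/xml"; } }

    public string BuildPayload(string[] requestFragments)
    {
        // Wraps per-request XML fragments in the ArrayOfRequest envelope shown above.
        return "<ArrayOfRequest xmlns=\"http://schemas.datacontract.org/2004/07/Agatha.Common\""
             + " xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">"
             + String.Join("", requestFragments)
             + "</ArrayOfRequest>";
    }
}

public class JsonRequestPayloadStrategy : IRequestPayloadStrategy
{
    public string ContentType { get { return "application/json"; } }

    public string BuildPayload(string[] requestFragments)
    {
        // Joins per-request JSON object literals into the array Agatha expects.
        return "[" + String.Join(",", requestFragments) + "]";
    }
}
Usage would then be a matter of feeding the chosen strategy's ContentType and BuildPayload output into the WebRequest above, instead of the hard-coded XML.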

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >