Search Results

Search found 24784 results on 992 pages for 'process integration packs'.


  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<TKey, TValue>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux.

Recap

As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag, along with a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they compare to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here.

ConcurrentBag<T> – Thread-safe unordered collection

Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added to and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible that, for a period of time, new items could be consumed before older items given the right circumstances.

So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost: the ordering rules and the synchronization required to maintain that ordering can affect scalability a bit.  Thus the bag is sometimes great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting.

The way the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very little synchronization.  The problem comes in when a thread goes to consume an item but its local list is empty.  In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization, which adds a bit of overhead to the take operation.

So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less-than-ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
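To make that add/take pattern concrete before the fuller example below, here is a minimal, hedged sketch (not from the original article; the names and numbers are purely illustrative) in which every task both adds to and takes from the same bag – the shape of workload the thread-local lists are optimized for:

// Minimal sketch – assumes using System, System.Collections.Concurrent,
// System.Linq and System.Threading.Tasks; values are arbitrary.
var bag = new ConcurrentBag<int>(Enumerable.Range(1, 100));

Action worker = () =>
{
    int item;
    // TryTake() prefers this thread's local list and only steals from
    // other threads when the local list is empty.
    while (bag.TryTake(out item))
    {
        if (item > 1)
        {
            // decompose the item and put derived work back into the bag,
            // so this thread keeps feeding its own local list
            bag.Add(item / 2);
        }
    }
};

// each task is both a producer and a consumer, so steals stay rare
var tasks = new[] { new Task(worker), new Task(worker) };
Array.ForEach(tasks, t => t.Start());
Task.WaitAll(tasks);

As with the article's own TryTake() loops, a worker simply exits if it happens to observe an empty bag while the other task is mid-add, which is fine for a sketch.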
Like the other concurrent collections, there are some curiosities to keep in mind:

IsEmpty, Count, ToArray(), and GetEnumerator() lock the collection – Each of these needs to take a snapshot of the whole bag to determine its result, so they tend to be more expensive and cause Add() and TryTake() operations to block.
ToArray() and GetEnumerator() are static snapshots – Because they are based on a snapshot, they will not show updates made after the snapshot was taken.
Add() is lightweight – Since an add only goes to the thread-local list, there is very little overhead on Add().
TryTake() is lightweight if items are in the thread-local list – As long as items are in the thread-local list, TryTake() is very lightweight, much more so than on ConcurrentStack<T> and ConcurrentQueue<T>; however, if the local thread list is empty, it must steal work from another thread, which is more expensive.

Remember, a bag is not ideal for all situations.  It is mainly suited to situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it completes.  The main point is that the bag works best when each thread both takes and adds items.

For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity.

So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints.  The first int is the original number, and the second int is the last exponent tried for that number.  So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100):

var bag = new ConcurrentBag<Tuple<int, int>>
    {
        Tuple.Create(2, 1),
        Tuple.Create(3, 1),
        Tuple.Create(5, 1),
        Tuple.Create(7, 1)
    };

Then we can create a method that, given the bag, will take out an item and try the next power:

public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold)
{
    Tuple<int,int> pair;

    // while there are items to take, this will prefer local first, then steal if no local
    while (bag.TryTake(out pair))
    {
        // look at next power
        var result = Math.Pow(pair.Item1, pair.Item2 + 1);

        if (result < threshold)
        {
            // if smaller than threshold bump power by 1
            bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1));
        }
        else
        {
            // otherwise, we're done
            Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.",
                pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold);
        }
    }
}

Now that we have this, we can load up this method as an Action into our Tasks and run it:

// create array of tasks, start all, wait for all
var tasks = new[]
{
    new Task(() => FindHighestPowerUnder(bag, 100)),
    new Task(() => FindHighestPowerUnder(bag, 100)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag, as the thread-local lists will allow for quick processing.
However, if you need ordering, or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test, as your mileage will vary depending on your situation!

BlockingCollection<T> – A producer/consumer pattern collection

The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producer/consumer paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don't specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don't want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface.

So, for those who don't remember the classic producer/consumer computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming those items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item but there is no space to store it, it must wait until an item is consumed.  Likewise, if a consumer goes to consume an item and none exists, it must wait until an item is produced.

The BlockingCollection makes it trivial to implement any standard producer/consumer process by providing that "bin" where the items can be produced into and consumed from, with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or be (theoretically) unbounded, and you can specify timeouts on the blocking operations.

As far as your choice of "bin" goes, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter.

So let's look at some of the methods of note in BlockingCollection:

BoundedCapacity returns the capacity of the "bin" – If the bin is unbounded, the capacity is int.MaxValue.
Count returns an internally kept count of items – This makes it O(1), but if you modify the underlying collection directly (not recommended) it is unreliable.
CompleteAdding() is used to cut off further adds – This sets IsAddingCompleted and begins to wind down consumers once the bin is empty.
IsAddingCompleted is true when producers are "done" – Once you are done producing, you should complete the add process to alert consumers.
IsCompleted is true when producers are "done" and the "bin" is empty – Once you mark the producers done and all items have been removed, this will be true.
Add() is a blocking add to the collection – If the bin is full, it will wait until space frees up.
Take() is a blocking remove from the collection – If the bin is empty, it will wait until an item is produced or adding is completed.
GetConsumingEnumerable() is used to iterate over and consume items – Unlike the standard enumerator, this one consumes the items instead of just iterating over them (see the sketch after the example output below).
TryAdd() attempts an add but does not block indefinitely – If adding would block, it returns false instead; you can specify a TimeSpan to wait before giving up.
TryTake() attempts a take but does not block indefinitely – Like TryAdd(), if taking would block, it returns false instead; you can specify a TimeSpan to wait.

Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempt to TryAdd() or Add() after adding has been marked complete will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, after which Take() will throw an InvalidOperationException and TryTake() will return false.

So let's create a simple program to try this out.  Let's say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded).

var bin = new BlockingCollection<int>(5);

Now, we create a method to produce items:

public static void ProduceItems(BlockingCollection<int> bin, int numToProduce)
{
    for (int i = 0; i < numToProduce; i++)
    {
        // try for 10 ms to add an item
        while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is full, retrying...");
        }
    }

    // once done producing, call CompleteAdding()
    Console.WriteLine("Adding is completed.");
    bin.CompleteAdding();
}

And one to consume them:

public static void ConsumeItems(BlockingCollection<int> bin)
{
    // This will only be true if CompleteAdding() was called AND the bin is empty.
    while (!bin.IsCompleted)
    {
        int item;

        if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10)))
        {
            Console.WriteLine("Bin is empty, retrying...");
        }
        else
        {
            Console.WriteLine("Consuming item {0}.", item);
            Thread.Sleep(TimeSpan.FromMilliseconds(20));
        }
    }
}

Then we can fire them off:

// create one producer and two consumers
var tasks = new[]
{
    new Task(() => ProduceItems(bin, 20)),
    new Task(() => ConsumeItems(bin)),
    new Task(() => ConsumeItems(bin)),
};

Array.ForEach(tasks, t => t.Start());

Task.WaitAll(tasks);

Notice that the producer is faster than the consumers, so it should hit a full bin often and display the message after it times out on TryAdd():

Consuming item 0.
Consuming item 1.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 3.
Consuming item 2.
Bin is full, retrying...
Consuming item 4.
Consuming item 5.
Bin is full, retrying...
Consuming item 6.
Consuming item 7.
Bin is full, retrying...
Consuming item 8.
Consuming item 9.
Bin is full, retrying...
Consuming item 10.
Consuming item 11.
Bin is full, retrying...
Consuming item 12.
Consuming item 13.
Bin is full, retrying...
Bin is full, retrying...
Consuming item 14.
Adding is completed.
Consuming item 15.
Consuming item 16.
Consuming item 17.
Consuming item 19.
Consuming item 18.

Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit.
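One addition not in the original article: the GetConsumingEnumerable() method listed above could replace the manual IsCompleted/TryTake() loop. Here is a hedged sketch of the same ConsumeItems method written that way, assuming the producer still calls CompleteAdding() as shown:

public static void ConsumeItems(BlockingCollection<int> bin)
{
    // GetConsumingEnumerable() removes each item as it yields it; the foreach
    // blocks while the bin is empty and ends once CompleteAdding() has been
    // called and the bin has drained, so no IsCompleted check is needed.
    foreach (var item in bin.GetConsumingEnumerable())
    {
        Console.WriteLine("Consuming item {0}.", item);
        Thread.Sleep(TimeSpan.FromMilliseconds(20));
    }
}

The trade-off is that you lose the periodic "Bin is empty, retrying..." message, since the consuming enumerator simply waits instead of timing out.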
Summary

The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and steal only if not. However, in situations where you want fair consumption or ordering, or where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily, including waiting when the bin is full or empty. That's the end of my dive into the concurrent collections.  I'd also strongly recommend, once again, that you read the excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain by using these collections judiciously (here). Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders


  • CodePlex Daily Summary for Wednesday, June 13, 2012

    CodePlex Daily Summary for Wednesday, June 13, 2012Popular ReleasesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v1.8: Installation guide: Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI Note: if previous module installations are detected, they are removed during upgrade. Note: PowerShell 3.0 RC is now supported. Note: Windows Server 2012 is partially supported (output formatting is not working for me). Release notes for version 1.8.0: Version 1.8 introduces a set of new .NET clas...Metodología General Ajustada - MGA: 02.07.02: Cambios Parmenio: Corrección para que se generen los objetivos en el formato PRO_F03. Se debe ejcutar el script en la BD. Cambios John: Soporte técnico telefónico y por correo electrónico. Integración de código fuente. Se ajustan los siguientes formatos para que no dupliquen los anexos al hacer clic dos veces en Actualizar: FORMATO ID 03 - ANALISIS DE PARTICIPANTES, FORMATO ID 05 - OBJETIVOS.Generación de instaladores: Conectado y Desconectado.BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsQTP FT Uninstaller: QTP FT Uninstaller v3: - KnowledgeInbox has made this tool open source - Converted to C# - Better scanning & cleaning mechanismWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. 
The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.0: Everything seems perfect if you find any problem you can report to http://www.rbsoft.org/contact.htmlMedia Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...????????API for .Net SDK: SDK for .Net ??? Release 1: 6?12????? ??? - ?????.net2.0?.net4.0????Winform?Web???????。 NET2 - .net 2.0?Winform?Web???? NET4 - .net 4.0?Winform?Web???? Libs - SDK ???,??2.0?4.0?SDK??? 6?11????? ??? - ?Entities???????????EntityBase,???ToString()???????json???,??????4.0???????。2.0?3.5???! ??? - Request????????AccessToken??????source=appkey?????。????,????????,???????public_timeline?????????。 ?? - ???ClinetLogin??????????RefreshToken???????false???。 ?? 
- ???RepostTimeline????Statuses???null???。 ?? - Utility?Bui...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesNew Projects4SQ - Foursquare for WP7: This is a public Foursquare App, designed for the COMMUNITY for WP7.Alert Management Web Part: Business Problem: In a Project management site, When new project sites created, Project Manager creates alerts for all team members in multiple Lists/Libraries. Pain point is: He need to manually create alerts by going to each and every list/library >> Alert Me >> and then adding users to keep them informed. Yes, Its a pain for them to add users one by one in all of the lists and libraries, Solution: we started creating Custom Alert Management web part which will solve this problem. K...An Un-sql SQL Database Installer: D.E.M.O.N SQL Database Installer is an Un-SQL(SQL means boring here) Database server installer for many types of Database :MySQL, SQLite and others.Aqzgf: android for qzgfbaoming: ????????????,???dbentry.net 4.2??????。Comdaybe 2012_knockoutjs examples: This is a visual studio 2012 project containing all the demo's of #comdaybe 2012. more information on: http://www.communityday.be/ETExplorer: This application is a replacement for the standard windows explorer. When I began working with Windows 7 I noticed that some of the features I liked from XP were missing. After trying many freeware replacements and not finding any of them having all the features I needed, I decided to develop my own explorer replacement.ExamAnalysis: Exam Analysis Website (building...)FizzBuzz: FizzBuzzImproved Dnn Event Log Email Notification provider: The email notifications for log events in Dnn are poor. This project aims to improve them through a new logging provider. The initial work uses the existing logging provider and just overrides the SendLogNotifications method. Over time hopefully this will be improved to enhance the email sending capabilities of Dnn.InfoMap: a framework for mapsiTextSharp MPL for Silverlight & Windows Phone: This is a port of the iText Sharp (4.2) that was the last version published under the Mozilla Public License for Silverlight and Windows Phone. You can use this version in a commercial application and it does not require your app to be released as open source. You do not need any kind of license from anyone to use this unless you want to change the publishing information (where it sets iText as the publisher)ModifyMimeTypes: In Exchange 2007 and 2010, the Content-Type encoding for MIME attachments is determined by a list which is stored in the Exchange Organization container in Active Directory within the msExchMimeTypes property. The list can be easily viewed in the Exchange Management Shell using: •(Get-OrganizationConfig).MimeTypes The list is order specific. So if two entries exist for the same attachment type, the one earlier in the list will be the one that is used. One example of this is the encod...Monitor Reporting Services with PowerPivot: PowerPivot workbook with a connection to Reporting Service log. Several built in reports are already created in the workbook. You will need to connect to the execution log 2 view in your report server database. Typically the report server Database is named ReportServer. The view to query is named is ExecutionLog2. You may need to add a where clause to the query in the workbook to limit the data you want to see. I already have a where cluase that limits it to only rendered reports. 
The Date...QCats: QCats put hearts into socail relationship.Rad sa bazom: Ovo je samo proba, brsem brzoREJS: An easy to use, easy to implement Razor preparser for CSS and JS with support for caching, external data, LESS and much more.RexSharp: RexSharp is a relatively new web framework, that can easily be integrated with ASP.NET. It is based largely on the Websharper framework, written in F# - but with significant additions and optimizations. Like Websharper, RexSharp aims to make web and mobile app development a lot more fun, with F#. This is a preliminary release - detailed instructions for using the framework will be made available in a short while. ShareDev.Webparts: Coming soon...SharePoint Common Framework: SharepointCommon is a framework for Microsoft SharePoint 2010© Server and Foundation. It allows to map list items to simple hand-writen POCO classes and perform actions by manipulate with entities.Solution Extender for Microsoft Dynamics CRM 2011: Solution Extender makes it easier for Dynamics CRM 2011 integrators to export/import components that can't be included in solutions. You will be able to export/import Duplicate detection rules, Saved views and more to come It's developed in C#.System Center 2012 - Orchestrator Integration Packs and Utilities: System Center 2012 - Orchestrator Integration Packs and UtilitiesTalkBack: A small library to enable decoupled communication between different components.Tomson Scattering for GDT: Tomson is a software used on Gas Dynamic Trap setup in Institute of Nuclear Physics of Russian Academy of Sciences for electron temperature calculation in plasma based on tomson scattering method measurements.tony first project: first projectVirtual Card Table: This is a classroom project for explaining TDD.WellFound Yachts: WellFound Yachts


  • Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache

    - by Ricardo Ferreira
The concept and usage of data grids are becoming very popular these days, since this type of technology is evolving very fast, with some cool leading products like Oracle Coherence. Every once in a while, developers need a programmatic way to calculate the total size of a specific cache residing in the data grid. In this post, I will show how to accomplish this using the Oracle Coherence API. This example has been tested with the 3.6, 3.7 and 3.7.1 versions of Oracle Coherence.

To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure will also create some internal "fat", as I call it, that should considerably increase the size of each instance in heap memory. Create a Java class named "Person" as shown in the listing below.

package com.oracle.coherence.domain;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Random;

@SuppressWarnings("serial")
public class Person implements Serializable {

    private String firstName;
    private String lastName;
    private List<Object> fat;
    private String email;

    public Person() {
        generateFat();
    }

    public Person(String firstName, String lastName, String email) {
        setFirstName(firstName);
        setLastName(lastName);
        setEmail(email);
        generateFat();
    }

    private void generateFat() {
        fat = new ArrayList<Object>();
        Random random = new Random();
        for (int i = 0; i < random.nextInt(18000); i++) {
            HashMap<Long, Double> internalFat = new HashMap<Long, Double>();
            for (int j = 0; j < random.nextInt(10000); j++) {
                internalFat.put(random.nextLong(), random.nextDouble());
            }
            fat.add(internalFat);
        }
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

Now let's create a Java program that will start a data grid in Coherence and create a cache named "People" that will hold Person instances with sequential integer keys. Each person created in this program will trigger the execution of the custom constructor in the Person class that instantiates the internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below.

package com.oracle.coherence.demo;

import com.oracle.coherence.domain.Person;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CreatePeopleCacheAndPopulateWithData {

    public static void main(String[] args) {

        // Asks Coherence for a new cache named "People"...
        NamedCache people = CacheFactory.getCache("People");

        // Creates three people that will be put into the data grid. Each person
        // generates an internal fat that should increase its size in terms of bytes...
        Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]");
        Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]");
        Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]");

        // Insert three people into the data grid...
        people.put(1, pessoa1);
        people.put(2, pessoa2);
        people.put(3, pessoa3);

        // Waits for 5 minutes until the user runs the Java program
        // that calculates the total size of the people cache...
        try {
            System.out.println("---> Waiting for 5 minutes for the cache size calculation...");
            Thread.sleep(300000);
        } catch (InterruptedException ie) {
            ie.printStackTrace();
        }
    }
}

Finally, let's create a Java program that, using the Coherence API and JMX, will calculate the total size of each cache that the data grid is currently managing. The approach used in this example is to retrieve every cache that the data grid is currently managing, but if you are only interested in a specific cache, the same approach can be used; you just need to filter which cache to look for. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below.

package com.oracle.coherence.demo;

import java.text.DecimalFormat;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

import com.tangosol.net.CacheFactory;

public class CalculateTheSizeOfPeopleCache {

    @SuppressWarnings({ "unchecked", "rawtypes" })
    private void run() throws Exception {

        // Enable JMX support in this Coherence data grid session...
        System.setProperty("tangosol.coherence.management", "all");

        // Create a sample cache just to access the data grid...
        CacheFactory.getCache(MBeanServerFactory.class.getName());

        // Gets the JMX server from Coherence data grid...
        MBeanServer jmxServer = getJMXServer();

        // Creates an internal data structure that will maintain
        // the statistics from each cache in the data grid...
        Map cacheList = new TreeMap();
        Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null);
        for (Object jmxObject : jmxObjectList) {
            ObjectName jmxObjectName = (ObjectName) jmxObject;
            String cacheName = jmxObjectName.getKeyProperty("name");
            if (cacheName.equals(MBeanServerFactory.class.getName())) {
                continue;
            } else {
                cacheList.put(cacheName, new Statistics(cacheName));
            }
        }

        // Updates the internal data structure with statistic data
        // retrieved from caches inside the in-memory data grid...
        Set<String> cacheNames = cacheList.keySet();
        for (String cacheName : cacheNames) {
            Set resultSet = jmxServer.queryNames(
                new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null);
            for (Object resultSetRef : resultSet) {
                ObjectName objectName = (ObjectName) resultSetRef;
                if (objectName.getKeyProperty("tier").equals("back")) {
                    int unit = (Integer) jmxServer.getAttribute(objectName, "Units");
                    int size = (Integer) jmxServer.getAttribute(objectName, "Size");
                    Statistics statistics = (Statistics) cacheList.get(cacheName);
                    statistics.incrementUnit(unit);
                    statistics.incrementSize(size);
                    cacheList.put(cacheName, statistics);
                }
            }
        }

        // Finally... print the objects from the internal data
        // structure that represents the statistics from caches...
        cacheNames = cacheList.keySet();
        for (String cacheName : cacheNames) {
            Statistics estatisticas = (Statistics) cacheList.get(cacheName);
            System.out.println(estatisticas);
        }
    }

    public MBeanServer getJMXServer() {
        MBeanServer jmxServer = null;
        for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) {
            jmxServer = (MBeanServer) jmxServerRef;
            if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) {
                break;
            }
            jmxServer = null;
        }
        if (jmxServer == null) {
            jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN);
        }
        return jmxServer;
    }

    private class Statistics {

        private long unit;
        private long size;
        private String cacheName;

        public Statistics(String cacheName) {
            this.cacheName = cacheName;
        }

        public void incrementUnit(long unit) {
            this.unit += unit;
        }

        public void incrementSize(long size) {
            this.size += size;
        }

        public long getUnit() {
            return unit;
        }

        public long getSize() {
            return size;
        }

        public double getUnitInMB() {
            return unit / (1024.0 * 1024.0);
        }

        public double getAverageSize() {
            return size == 0 ? 0 : unit / size;
        }

        public String toString() {
            StringBuffer sb = new StringBuffer();
            sb.append("\nCache Statistics of '").append(cacheName).append("':\n");
            sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n");
            sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n");
            sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n");
            sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n");
            return sb.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        new CalculateTheSizeOfPeopleCache().run();
    }

    public static final DecimalFormat FORMAT = new DecimalFormat("###.###");
    public static final String DEFAULT_DOMAIN = "";
    public static final String DOMAIN_NAME = "Coherence";
}

I've commented the overall example, so I don't think you should have any trouble understanding it. Basically, we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client application (i.e., a JVM that will only retrieve values from the data grid and will not join the cluster). This can be done very easily using the runtime "tangosol.coherence.management" system property. Consult the Coherence documentation for JMX to understand the possible values that can be applied. The program creates an in-memory data structure that holds a custom class called "Statistics". This class represents the information that we are interested in, which in this case is the size in bytes and in MB of the caches. An instance of this class is created for each cache currently managed by the data grid. Using JMX-specific methods, we retrieve the information that is relevant for calculating the total size of the caches. To test this example, you should execute the CreatePeopleCacheAndPopulateWithData.java program first and then the CalculateTheSizeOfPeopleCache.java program.
The results in the console should be something like this: 2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml" 2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified 2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified Oracle Coherence Version 3.6.0.4 Build 19111 Grid Edition: Development mode Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved. 2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml" 2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml" 2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider 2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) 2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1 2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB Group{Address=224.3.6.0, Port=36000, TTL=4} MasterMemberSet ( ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) ActualMemberSet=MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith) Member(Id=2, 
Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle) ) RecycleMillis=1200000 RecycleSet=MemberSet(Size=0, BitSetCount=0 ) ) TcpRing{Connections=[1]} IpMonitor{AddressListSize=0} 2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1 2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1 2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions

Cache Statistics of 'People':
 - Total Entries of Cache -----> 3
 - Used Memory (Bytes) --------> 883920
 - Used Memory (MB) -----------> 0.843
 - Object Average Size --------> 294640

I hope this post saves you some time when calculating the total size of a Coherence cache becomes a requirement for your highly scalable system using data grids. See you!


  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.


  • CodePlex Daily Summary for Friday, June 15, 2012

    CodePlex Daily Summary for Friday, June 15, 2012Popular ReleasesStackBuilder: StackBuilder 1.0.8.0: + Corrected a few bugs in batch processor + Corrected a bug in layer thumbnail generatorAutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.SharePoint KnowledgeBase (ISC 2012): ISC.KBase.Advanced: A full-trust version of the Base application. This version uses a Managed Metadata column for categories and contains other enhancements.Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. 
Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsWebSocket4Net: WebSocket4Net 0.7: Changes included in this release: updated ClientEngine added proper exception handling code added state support for callback added property AllowUnstrustedCertificate for JsonWebSocket added properties for sending ping automatically improved JsBridge fixed a uri compatibility issueXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! 
Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingSP Sticky Notes: SPSticky Solution Package: Install steps:Install the sandbox solution to the site collection Activate the solution Add a SPSticky web part (from a 'Custom' group) to a page that resides on the same web as the StickyList list Initial version, supported functionality: Moving a sticky around (position is saved in list) Resizing of the sticky (size is not saved) Changing the text of the sticky (saved in list) Saving occurs when you click away from the sticky being edited Stickys are filtered based on the curre...New ProjectsAdvanced Utility Libs: This project provides many basic and advanced utility libs for developer.ATopSearchEngine: ATopSearchEngine, a class projectAudio Player 3D: A WPF 3D Audio Player.beginABC: day la project test :)BookmarkSave For VB6: An addin, written in VB.net, for VB6 that preserves Bookmarks and Breakpoints between runs of the IDE.Centos 5 & 6 Managements Packs For System Center Operations Manager 2012: Centos 5 & 6 Managements Packs For System Center Operations Manager 2012ChildProcesses - Child Process Management for .NET: ChildProcesses.NET is a child process management library for the .NET framework. It allows to create child processes, and provides bidirectional extendable interprocess communication based on WCF and NamedPipes out of the box Child and parent processes monitor each other and notify about termination or other events. Child processes are an alternative to AppDomains when parent must continue if the child crashes. It is also poossible to mix 32Bit and 64 Bit processes. Cookie Free Analytics: Cookie Free Analytics (CFA) is a free server side Google Analytics tracking solution for Windows based websites running IIS & ASP.NET. 100% javascript & cookie free - with CFA you can track your visitors & file downloads in Google Analytics without cookies or JavaScript. Ideal for mobile websites or those affected by the EU Cookie law.F# (FSharp) based Windows Phone 7.5 software to answer: Where am I?: Location Finder F# (FSharp) based Windows Phone 7.5 software to answer the simple question: - Where am I? Henyo Word: The summary is required.Houma-Thibodaux .NET User Group: The CodePlex home of the Houma-Thibodaux .NET user group.JqGridHelper: some code for using jqgrid 1. javacsript function for customer mutil-search 2. C# function for parse querystring and build sql query command 3. a sql procedure templete for searching NetFluid.IRC: Mibbit clone in NetFluid with the possibilities to use permanent eggdrop in C# nGo: nGo frameworkNinja Client-Server Game: Naruto based online client-server game.NoLinEq: NoLinEq is a mathematical tool for solving nonlinear equations using different methods.Oculus: Oculus is a real-time ETW listening service that is plugin-basedPinoy Henyo: Response.Redirect (https://henyoword.codeplex.com)Razorblade: Razorblade is intended to be a template for WebDesign/WebDevelopment. It is based on HTML5Boilerplate and uses ASP.NET wtih the Razor Syntax (C#). 
Original HTML5Boilerplate comments are intact with some added comments to explain some of the Razor Syntax.ServiceFramework: The ServiceFramework project is a framework built using Microsoft WCF to address architectural governance issues by tracking the activity of services.Sharp Console: Sharp Console is a Windows Command Line (WCL)alternative written in C#. It targets those who lack access to the WCL or simply wish to use the NET framework instead. It aims to provide the same (and more!) functionality as the WCL. Contribute anything you feel will make it better!SharpKit.on{X}: This project allows you to write on{x} rules using C#shequ01: shequ01 websiteSilena: Porta lectus ac sed scelerisque, pellentesque, cras a et urna enim ultricies, tristique magnis nec proin. Velit tristique, aliquet quis ut mid lorem eros? Dolor magna. Aliquet. Et? Sociis pellentesque, tincidunt aenean cursus, pulvinar! Placerat etiam! Mid eu? Vut a. Placerat duis, risus adipiscing lacus, sagittis magna vut etiam, sed. Et ridiculus! Et sit, enim scelerisque velit tristique vel? Adipiscing vel adipiscing eu. Enim proin egestas lorem, aenean lectus enim vel nisi, augue duis pla...Silverlight 5 Chat: Full-duplex Publish/Subscribe -pattern on TCP/IP: F-Sharp (F#) WCF "PollingDuplex" WebService with SL5 FSharp clientSimple CQRS implemented with F#: Simple CQRS on F# (F-Sharp) 3.0 SOD: sustainable office designerSolvis Control II Viewer: SolvisSC2Viewer kann die Logdaten der Solvis Heizung Steuerung SolvisControl 2 auslesen und grafisch darstellen. testdd061412012tfs01: bvtestddgit061420123: eswrtestddgit061420124: fgtestddhg061420121: sdtestgit061420121: xctesthg06142012hg3: dfsTurismo Digital: Turismo DigitlUmbraco 5 File Picker: ????Umbraco 5 ???????。????????????????????。WeatherFrame: Windows Phone weather app.

    Read the article

  • CodePlex Daily Summary for Wednesday, November 16, 2011

    CodePlex Daily Summary for Wednesday, November 16, 2011Popular ReleasesMVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...Public Key Infrastructure PowerShell module: PowerShell PKI Module v0.9.2: Installation guide: Use default installation path to install this module for all users To install this module for current user only — use the following path: %MyDocs%\WindowsPowerShell\Modules Note: you MUST uninstall previously installed module versions. Direct upgrade is not supported! Note: at this time PowerShell 3.0 CTP1 is not supported. Release notes for version 0.9.2: updated and corrected help for several commands. added online help for all commands. Get-CertificationAuthorit...Files Name Copier: 1.0.0.1: Files Name Copier is a simple easy to use utility that allows you to drag and drop any number of files onto it. The clipboard will now contain the list of files you just dropped.SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 3: 1. improved object search, now show previous/next navigation 2. shows server start up time 3. shows process cpu/ioCODE Framework: 4.0.11115.0: Added support for partial views in the WPF framework, as well as a new helper feature that allows hooking commands/actions to all WPF events.Silverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...DotNetNuke® Community Edition: 06.01.01: Major Highlights Fixed problem with the core skin object rendering CSS above the other framework inserted files, which caused problems when using core style skin objects Fixed issue with iFrames getting removed when content is saved Fixed issue with the HTML module removing styling and scripts from the content Fixed issue with inserting the link to jquery after the header of the page Security Fixesnone Updated Modules/Providers ModulesHTML version 6.1.0 ProvidersnoneDotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimimal performance. Please review and rate this release... (stars are welcome)SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. 
Added config.ini setting "enablePingTest" to control whether ping test is ru...Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.4: INSTALLATION Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsDesktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbRDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...New Projects#foo MigratorFoo: Migration generator for NHibernate projects with integrated NuGet console commands.Best music production software: Programmers needed to make the perfect open source music production software: I have a well structured creative idea based on the following principle "powerful simplicity"...Bricks: Bricks is a development foundation for different technologyCultiv Jupiter Contact Form: A contact form package for Umbraco v5 (Jupiter)DNN Fan Gate: A simple module for fan gating content in DotNetNukeDobo: DoboDotNetNuke Performance Settings: A set of SQL queries that will change the (Host & Schedule) settings of your DotNetNuke installation for maximal production performance or set it optimal for Development. These queries were used in my presentation on DotNetNuke World 2011.Experiment Manager: This project was developed as a response to the requirements for an experiment manager software for clinical psychology experiments. Feedbook: Feedbook supports any RSS and Atom news feeds, It can also download podcasts saliently and play it within Feedbook. It integrates with Twitter and Google Buzz very well, you can send tweets/post easily and can also subscribe to twitter timelines/buzz feeds. Characters like # & @ also recognize as special keywords, so that you can subscribe to feeds of any on going topic on twitter like #download, #movie or #cricket. http://feedbook.orgFiles Name Copier: Files Name Copier is a simple easy to use utility that allows you to drag and drop any number of files onto it. The clipboard will now contain the list of files you just dropped.Kinban: Kinect + TFS This is a WPF representation of a Kanban board that can be manipulated with the Kinect.MapTeamsPolls: This is an ASP.Net ajax polls user controls. Using ajax and jQuery. This user controls can be used in any ASP.Net 4.0 projects. Use the build script to create Umbraco package for the user controls and the admin page. As a bonus, there is a ruby rake build script for the package.MetroButton with Converter Image SVG2XAML Silverlight: - Control library SL4 with Button Metro Style - Windows application for download and convert SVG to Resource.xaml for silverlightMetroIoc - a simple WinRT IoC container: A WinRT port of the MicroIoc container... MetroIoc is a native WinRT Inversion of Control container, for use in Metro style apps. Check the license for usage rights of this projectMTN Library: .NET library with extensions for LINQ and Entity Framework as well as useful extensions. Contains many extensions, features for display methods using jquery Ajax. 
Has a proxy to create and use methods with cache as much as in the ajax methods.myDeck: Main goal of this project is to show best practices and patterns for unit testing in Silverlight Technology. It is developed as part of BAThesisNetwork extensions for .net Micro Framework and Gadgeteer: Adds .net Microframework support for - BinaryReader - BinaryWriter Adds a FEZ way for TCP and UDP networking to the Gadgeteer platformNewSunse: this card creating projectOrchard CMS ImageResizer: resize or scale images in orchard simply by url or using the build-in HtmlHelperReportCenter: ReportCenterSharePoint 2010 Extended Lookup Field: This project installs a custom lookup field, which allows the user to choose the item through a SharePoint 2010 popup. Here are the features offered by this field: - high extensibility - high performance when woking with very large lists - small size for new and edit list forms - posibility to specify you own search page - SharePoint 2010 lookup and feel The project is a VS2010 SharePoint 2010 project. Simplicity Framework: Simplicity is a framework to automatic generate dataccess an UI code for crudlstackoverflowstore: Homenaje a stackoverflow.Stoc: Aplicatia Stoc simplifica generarea facturilor si a retelor, gestionand stocul marfii unei firme. Student PAS: Student PAS is a simple webapp for assign tasks for students. It was created with educational purposes. Thus, it is not an serious usable software.System Center 2012 - Operations Manager Managed Module Samples: Sample Visual Studio 2010 projects that illustrate how you can create custom module types for System Center 2012 - Operations Manager. The project contains the C# code for creating data source, write action, probe and condition detection modules and more.Tecnosop: Sistema de facturacion de TecnosopUtilityLibrary.FormControl: FormControlWebRequestFrequencyControl: This project intends to solve the web request frequency control problem. This solution is based on HttpModule and highly configurable. If you need to limit the frequency of requests, you may interest in this.Windows Phone Marketplace Analytics: Libraries (and data) to help you access and analyze the different information provided by the MS marketplace.Windows Troubleshooting Platform Starter Packs: Starter Packs are open source packages based on the Windows Troubleshooting Platform that can be modified by developers to suit their specific needs. Starter Packs are an easy way for developers to create and distribute a Windows Troubleshooting Pack to diagnose computer problems.

    Read the article

  • OIM 11g : Multi-thread approach for writing custom scheduled job

    - by Saravanan V S
    In this post I have shared my experience of designing and developing an OIM scheduled job that uses a multi-threaded approach for updating data in OIM using the APIs. I have used the thread pool pattern (in particular, a fixed thread pool) in developing the scheduled job. The thread pooling pattern has noted advantages compared to the thread-per-task approach; a few of them are listed here:
    · Threads are reused
    · Creation and tear-down cost of threads is reduced
    · Task execution latency is reduced
    · Improved performance
    · Controlled and efficient management of memory and resources used by threads
    More about Java thread pools: http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
    The following diagram depicts the high-level architecture of the scheduled job, which processes input from a flat file to update OIM process form data using the fixed thread pool approach.
    The custom scheduled job shared in this post was developed to meet the following requirements:
    1) Process a CSV extract that contains the identity, an account identifying key and the list of data to be updated on an existing OIM resource account.
    2) The CSV file can contain data for multiple resources configured in OIM.
    3) The list of attributes to update and the mapping between CSV columns and OIM fields may vary between resources.
    The following are the three Java classes developed for this requirement (I have given only a prototype of the code that explains how to use thread pools in a scheduled task).
    CustomScheduler.java - Implementation of the TaskSupport class that reads the parameters configured on the schedule job and passes them to the thread executor class.
    package com.oracle.oim.scheduler;

    import java.util.HashMap;
    import com.oracle.oim.bo.MultiThreadDataRecon;
    import oracle.iam.scheduler.vo.TaskSupport;

    public class CustomScheduler extends TaskSupport {

        public void execute(HashMap options) throws Exception {
            /* Read schedule job parameters */
            String param1 = (String) options.get("Parameter1");
            // ...
            int noOfThreads = (Integer) options.get("No of Threads");
            // ...
            String paramN = (String) options.get("ParameterN");

            /* Provide all the required input configured on the schedule job to the
               thread pool executor implementation class, for example:
               1) Name of the file, 2) Delimiter, 3) Header row number,
               4) Line escape character, 5) Config and resource map lookup,
               6) Number of threads to create */
            new MultiThreadDataRecon(all_required_parameters, noOfThreads).reconcile();
        }

        public HashMap getAttributes() { return null; }

        public void setAttributes() {
        }
    }
    MultiThreadDataRecon.java – Helper class that reads data from the input file, initializes the thread executor and builds the task queue.
    package com.oracle.oim.bo;

    import <required file IO classes>;
    import <required java.util classes>;
    import <required OIM API classes>;
    import <csv reader api>;

    public class MultiThreadDataRecon {

        private int noOfThreads;
        private ExecutorService threadExecutor = null;

        public MultiThreadDataRecon(<required params>, int noOfThreads) {
            // Store parameters locally
            // ...
            this.noOfThreads = noOfThreads;
        }

        /**
         * Initialize
         */
        private void init() throws Exception {
            try {
                // Initialize CSV file reader API objects
                // Initialize OIM API objects

                /* Initialize the fixed thread pool executor if the number of
                   threads configured is more than 1 */
                if (noOfThreads > 1) {
                    threadExecutor = Executors.newFixedThreadPool(noOfThreads);
                } else {
                    threadExecutor = Executors.newSingleThreadExecutor();
                }

                /* Initialize the TaskProcessor class which will be executing
                   tasks from the queue */
                TaskProcessor.initializeConfig(params);
            } catch (***Exception e) {
                // TO DO
            }
        }

        /**
         * Method to reconcile data from CSV to OIM
         */
        public void reconcile() throws Exception {
            try {
                init();
                while (<csv file has line>) {
                    processRow(line);
                }
                /* Initiate thread shutdown */
                threadExecutor.shutdown();
                while (!threadExecutor.isTerminated()) {
                    // Wait for all tasks to complete.
                }
            } catch (Exception e) {
                // TO DO
            } finally {
                try {
                    // Close all the file handles
                } catch (IOException e) {
                    // TO DO
                }
            }
        }

        /**
         * Method to process a single row
         */
        private void processRow(String row) {
            // Create a task processor instance with the row data.
            // The following call pushes the task to the work queue, where it
            // waits for the next available thread to execute it.
            threadExecutor.execute(new TaskProcessor(rowData));
        }
    }
    TaskProcessor.java – Implementation of the "Runnable" interface that executes the required business logic to update data in OIM.
    package com.oracle.oim.bo;

    import <required APIs>;

    class TaskProcessor implements Runnable {

        // Required member variables

        /**
         * Constructor
         */
        public TaskProcessor(<row data>) {
            // Initialize and parse the CSV row
        }

        /*
         * Method to initialize the required objects for task execution
         */
        public static void initializeConfig(<params>) {
            // Process the params and initialize the required configs and objects
        }

        /*
         * (non-Javadoc)
         *
         * @see java.lang.Runnable#run()
         */
        public void run() {
            if (<is csv data valid>) {
                processData();
            }
        }

        /**
         * Process the received CSV input
         */
        private void processData() {
            try {
                // Find the user in OIM using the identity matching key value from the CSV
                // Find the account to update among the user's accounts based on the account identifying key in the CSV
                // Update the account with the data from the CSV
            } catch (***Exception e) {
                // TO DO
            }
        }
    }
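    Because the classes above are deliberately left as prototypes, the following is a minimal, self-contained sketch of the same fixed thread pool idea that compiles on its own. It uses no OIM APIs; the class name CsvThreadPoolSketch, the file name accounts.csv, the thread count and the processRow body are placeholder assumptions, not part of the scheduled job above. In a real job the processing step would call the OIM APIs as the post describes.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class CsvThreadPoolSketch {

        public static void main(String[] args) throws IOException, InterruptedException {
            int noOfThreads = 4;              // in the real job this comes from the schedule job parameters
            String csvFile = "accounts.csv";  // placeholder input file

            ExecutorService threadExecutor = (noOfThreads > 1)
                    ? Executors.newFixedThreadPool(noOfThreads)
                    : Executors.newSingleThreadExecutor();

            BufferedReader reader = Files.newBufferedReader(Paths.get(csvFile), StandardCharsets.UTF_8);
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    final String row = line;
                    // Each CSV row becomes one task on the executor's work queue.
                    threadExecutor.execute(new Runnable() {
                        public void run() {
                            processRow(row);
                        }
                    });
                }
            } finally {
                reader.close();
                threadExecutor.shutdown();                          // stop accepting new tasks
                threadExecutor.awaitTermination(1, TimeUnit.HOURS);  // wait for queued tasks to finish
            }
        }

        private static void processRow(String row) {
            // Placeholder for the real work: look up the identity, find the matching
            // account and update it through the OIM APIs.
            String[] columns = row.split(",");
            System.out.println(Thread.currentThread().getName() + " processed key " + columns[0]);
        }
    }

    The shutdown()/awaitTermination() pair mirrors what reconcile() does with its isTerminated() loop: the producer stops submitting, then blocks until every queued row has been handled before the job reports completion.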

    Read the article

  • Error deploying Spring application to Jboss due to ClassCastException

    - by Rafael
    When I try to deploy a spring application in Jboss, I get this error: 11:32:34,045 ERROR [AbstractKernelController] Error installing to Start: name=persistence.unit:unitName=#ehr-punit state=Create java.lang.RuntimeException: Specification violation [EJB3 JPA 6.2.1.2] - You have not defined a jta-data-source for a JTA enabled persistence context named: ehr-punit at org.jboss.jpa.deployment.PersistenceUnitInfoImpl.(PersistenceUnitInfoImpl.java:115) at org.jboss.jpa.deployment.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:275) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.reflect.plugins.introspection.ReflectionUtils.invoke(ReflectionUtils.java:59) at org.jboss.reflect.plugins.introspection.ReflectMethodInfoImpl.invoke(ReflectMethodInfoImpl.java:150) at org.jboss.joinpoint.plugins.BasicMethodJoinPoint.dispatch(BasicMethodJoinPoint.java:66) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction$JoinpointDispatchWrapper.execute(KernelControllerContextAction.java:241) at org.jboss.kernel.plugins.dependency.ExecutionWrapper.execute(ExecutionWrapper.java:47) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchExecutionWrapper(KernelControllerContextAction.java:109) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchJoinPoint(KernelControllerContextAction.java:70) at org.jboss.kernel.plugins.dependency.LifecycleAction.installActionInternal(LifecycleAction.java:221) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:54) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:42) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:774) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:540) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:121) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:51) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at 
org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) 11:32:35,615 INFO [TomcatDeployment] deploy, ctxPath=/ehr-web 11:32:35,986 INFO [[/ehr-web]] Initializing Spring root WebApplicationContext 11:32:35,986 INFO [ContextLoader] Root WebApplicationContext: initialization started 11:32:36,046 INFO [XmlWebApplicationContext] Refreshing org.springframework.web.context.support.XmlWebApplicationContext@1392743: display name [Root WebApplicationContext]; startup date [Mon Jul 20 11:32:36 BRT 2009]; root of context hierarchy 11:32:36,184 INFO [XmlBeanDefinitionReader] Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] 11:32:36,189 ERROR [ContextLoader] Context initialization failed org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.ClassCastException: 
org.apache.xerces.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:420) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:422) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3910) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4393) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at 
org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at 
org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source) at org.springframework.beans.factory.xml.DefaultDocumentLoader.createDocumentBuilderFactory(DefaultDocumentLoader.java:89) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:70) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) Does someone know what can I do to deploy this? Thanks
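    The root cause at the bottom of the trace ("org.apache.xerces.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory") usually means two copies of the JAXP API classes are visible to the deployment, for example a xerces or xml-apis jar bundled in WEB-INF/lib clashing with the classes JBoss already provides. A small diagnostic sketch such as the one below, which is not part of the original question (the class name JaxpDiagnostics is made up), can show which jar and class loader each side is coming from:

    import javax.xml.parsers.DocumentBuilderFactory;

    public class JaxpDiagnostics {

        /** Prints where the JAXP API class (and, if it can be created,
            the chosen implementation) were loaded from. */
        public static void dumpJaxpInfo() {
            printOrigin(DocumentBuilderFactory.class);
            try {
                printOrigin(DocumentBuilderFactory.newInstance().getClass());
            } catch (Throwable t) {
                // In a broken deployment the factory lookup itself may fail, as in the trace above.
                System.out.println("DocumentBuilderFactory.newInstance() failed: " + t);
            }
        }

        private static void printOrigin(Class<?> clazz) {
            java.security.CodeSource source = clazz.getProtectionDomain().getCodeSource();
            System.out.println(clazz.getName()
                    + " loaded by " + clazz.getClassLoader()
                    + " from " + (source != null ? source.getLocation() : "the bootstrap class path"));
        }
    }

    If this shows the javax.xml.parsers classes coming out of a jar inside the war, removing that duplicate jar is the usual next step, but that is an inference from the trace rather than something stated in the question.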

    Read the article

  • Time Service will not start on Windows Server - System error 1290

    - by paradroid
    I have been trying to sort out some time sync issues involving two domain controllers and seem to have ended up with a bigger problem. It's horrible. They are both virtual machines (one being on Amazon EC2), which I think may complicate things regarding time servers. The primary DC with all the FSMO roles is on the LAN. I reset its time server configuration like this (from memory): net stop w32time w23tm /unregister shutdown /r /t 0 w32tm /register w32tm /config /manualpeerlist:”0.uk.pool.ntp.org,1.uk.pool.ntp.org,2.uk.pool.ntp.org,3.uk.pool.ntp.org” /syncfromflags:manual /reliable:yes /update W32tm /config /update net start w32time reg QUERY HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags I checked to see if it was set to 0x05, which it was. The output for... w32tm /query /status Leap Indicator: 0(no warning) Stratum: 1 (primary reference - syncd by radio clock) Precision: -6 (15.625ms per tick) Root Delay: 0.0000000s Root Dispersion: 10.0000000s ReferenceId: 0x4C4F434C (source name: "LOCL") Last Successful Sync Time: 10/04/2012 15:03:27 Source: Local CMOS Clock Poll Interval: 6 (64s) While this was not what was intended, I thought I would sort it out after I made sure that the remote DC was syncing with it first. On the Amazon EC2 remote replica DC (Windows Server 2008 R2 Core)... net stop w32time w32tm /unregister shutdown /r /t 0 w32time /register net start w32time This is where it all goes wrong System error 1290 has occurred. The service start failed since one or more services in the same process have an incompatible service SID type setting. A service with restricted service SID type can only coexist in the same process with other services with a restricted SID type. If the service SID type for this service was just configured, the hosting process must be restarted in order to start this service. I cannot get the w32time service to start. I've tried resetting the time settings and tried to reverse what I have done. The Ec2Config service cannot start either, as it depends on the w32time service. All the solutions I have seen involve going into the telephony service registry settings, but as it is Server Core, it does not have that role, and I cannot see the relationship between that and the time service. w32time runs in the LocalService group and this telephony service which does not exist on Core runs in the NetworkService group. Could this have something to do with the process (svchost.exe) not being able to be run as a domain account, as it now a domain controller, but originally it ran as a local user group, or something like that? There seem to be a lot of cases of people having this problem, but the only solution has to do with the (non-existant on Core) telephony service. Who even uses that?

    Read the article

  • JSF Portlets in Liferay on JBoss

    - by JBirch
    I'm currently looking at working with and deploying JSF portlets into Liferay 6.0.5, sitting on JBoss 5.1.0. I ran into a lot of trouble trying to port some JSF-y/Seam-y/EJB-y stuff I had lying around, so I thought I'd start simple and work my way up. I could generate generic portlets using the NetBeans Maven archetype for Liferay portlets absolutely fine, but it's rather irrelevant because I wanted JSF portlets I took an example JSF portlet from http://www.liferay.com/downloads/liferay-portal/community-plugins/-/software_catalog/products/5546866 and attempted to deploy into a clean, vanilla installation of Liferay 6.0.5/JBoss 5.1.0 to no avail. The log messages are reproduced at the end of this. This particular example was actually tested for GlassFish and Tomcat, so it's not particularly helpful considering I'm deplying into JBoss. I tried ripping it apart and removing the jsf implementation contained within as there is a jsf implementation shipped with JBoss (Mojarra 1.2_12, in this case). 03:16:17,173 INFO [PortletAutoDeployListener] Copying portlets for /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war Expanding: /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war into /tmp/20110201031617188 Copying 1 file to /tmp/20110201031617188/WEB-INF Copying 1 file to /tmp/20110201031617188/WEB-INF/classes Copying 1 file to /tmp/20110201031617188/WEB-INF/classes Copying 47 files to /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/jboss-5.1.0/server/default/deploy/richfaces-sun-jsf1.2-facelets-portlet.war Copying 1 file to /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/jboss-5.1.0/server/default/deploy/richfaces-sun-jsf1.2-facelets-portlet.war Deleting directory /tmp/20110201031617188 03:16:20,075 INFO [PortletAutoDeployListener] Portlets for /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war copied successfully. Deployment will start in a few seconds. 03:16:23,632 INFO [TomcatDeployment] deploy, ctxPath=/richfaces-sun-jsf1.2-facelets-portlet 03:16:24,446 INFO [PortletHotDeployListener] Registering portlets for richfaces-sun-jsf1.2-facelets-portlet 03:16:24,492 INFO [faces] Init GenericFacesPortlet for portlet 1 03:16:24,495 INFO [faces] Bridge class name is org.jboss.portletbridge.AjaxPortletBridge 03:16:24,509 INFO [faces] The bridge does not support doHeaders method 03:16:24,510 INFO [faces] GenericFacesPortlet for portlet 1 initialized 03:16:24,555 INFO [PortletHotDeployListener] 1 portlet for richfaces-sun-jsf1.2-facelets-portlet is available for use 03:16:24,627 SEVERE [webapp] Initialization of the JSF runtime either failed or did not occurr. Review the server''s log for details. 
java.lang.InstantiationException: org.jboss.portletbridge.context.FacesContextFactoryImpl at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at javax.faces.FactoryFinder.getImplGivenPreviousImpl(FactoryFinder.java:537) at javax.faces.FactoryFinder.getImplementationInstance(FactoryFinder.java:394) at javax.faces.FactoryFinder.access$400(FactoryFinder.java:135) at javax.faces.FactoryFinder$FactoryManager.getFactory(FactoryFinder.java:717) at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:239) at javax.faces.webapp.FacesServlet.init(FacesServlet.java:164) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1048) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:950) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4122) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4421) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.GeneratedMethodAccessor286.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at 
org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) 03:16:24,629 INFO [2-facelets-portlet]] Marking servlet FacesServlet as unavailable 03:16:24,630 ERROR [2-facelets-portlet]] Servlet /richfaces-sun-jsf1.2-facelets-portlet threw load() exception javax.servlet.UnavailableException: Initialization of the JSF runtime either failed or did not occurr. Review the server''s log for details. 
at javax.faces.webapp.FacesServlet.init(FacesServlet.java:172) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1048) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:950) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4122) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4421) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.GeneratedMethodAccessor286.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at 
org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619)
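    The earliest failure in the log is the InstantiationException thrown from Class.newInstance() for org.jboss.portletbridge.context.FacesContextFactoryImpl. As a point of reference on the JDK behaviour only (this sketch does not use or reproduce the portlet bridge classes, and NewInstanceDemo is an invented name), Class.newInstance() can only call a public no-argument constructor; a class whose only constructor takes an argument fails in exactly this way:

    public class NewInstanceDemo {

        /** A class whose only constructor takes an argument, so there is no
            no-argument constructor for Class.newInstance() to call. */
        static class OnlyWrappingConstructor {
            OnlyWrappingConstructor(Object wrapped) {
            }
        }

        public static void main(String[] args) throws IllegalAccessException {
            try {
                OnlyWrappingConstructor.class.newInstance();
            } catch (InstantiationException e) {
                // Same exception type as reported for FacesContextFactoryImpl above.
                System.out.println("Cannot instantiate: " + e);
            }
        }
    }

    In practice this kind of failure during FactoryFinder initialisation often points at a version mismatch between the bundled portlet bridge jars and the Mojarra implementation shipped with JBoss, but the log alone does not say which jar is at fault.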

    Read the article

  • Diff 2 large XML files to produce a delta xml file

    - by aniln
    I need to diff two large (potentially very large) XML files and produce a delta XML file. The comparison will be part of a larger automated process running on the hardware/OS configuration below. Machine hardware: sun4v; OS version: 5.10; Processor type: sparc; Hardware: SUNW,SPARC-Enterprise-T5220. Please let me know if there is an installable application for Solaris that can be called from a ksh script. Example: run driver_script.ksh, which would contain a line such as xml_delta file1.xml file2.xml delta_file.xml, where xml_delta is the installed application that produces the delta file after comparing file1.xml and file2.xml.
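    One way to satisfy the xml_delta contract described above is a small Python script, since Solaris 10 generally ships with a Python interpreter (check that yours has xml.etree, i.e. Python 2.5 or later). The sketch below is a naive, hedged example rather than a production diff tool: it treats the delta as "top-level elements of file2 that have no identical counterpart in file1", loads both documents into memory (a concern for very large files), and ignores moves and reordering. The script name xml_delta.py simply mirrors the hypothetical command in the question; a dedicated tool such as the open-source xmldiff project would handle the general case better.

      #!/usr/bin/env python
      # xml_delta.py -- naive XML delta sketch: emit the top-level elements of
      # file2 that have no identical counterpart in file1. Loads both files into
      # memory, so truly huge inputs would need a streaming (iterparse) rewrite.
      import sys
      import xml.etree.ElementTree as ET

      def fingerprint(elem):
          # Identify an element by its tag, attributes, text, and children.
          return (elem.tag,
                  tuple(sorted(elem.attrib.items())),
                  (elem.text or "").strip(),
                  tuple(fingerprint(child) for child in elem))

      def main(old_file, new_file, delta_file):
          old_root = ET.parse(old_file).getroot()
          new_root = ET.parse(new_file).getroot()
          seen = set(fingerprint(child) for child in old_root)

          delta_root = ET.Element("delta")
          for child in new_root:
              if fingerprint(child) not in seen:
                  delta_root.append(child)      # element added or changed in new_file

          ET.ElementTree(delta_root).write(delta_file)

      if __name__ == "__main__":
          if len(sys.argv) != 4:
              sys.exit("usage: xml_delta.py file1.xml file2.xml delta_file.xml")
          main(sys.argv[1], sys.argv[2], sys.argv[3])

    Called from driver_script.ksh exactly as in the question (xml_delta.py file1.xml file2.xml delta_file.xml), it writes the delta document to the third argument.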

    Read the article

  • Proliant Support Pack and Ubuntu on older HP Proliants

    - by snl
    We have a couple of aging HP Proliant servers -- one DL385 G1 and one DL360 G5, to be exact -- that we'd like to upgrade from CentOS 5 to Ubuntu Lucid Lynx. The problem is that HP doesn't offer Ubuntu Proliant Support Packs for these particular models. Would you upgrade regardless and skip the PSPs altogether? Are there alternative hardware monitoring tools that match the functionality of the PSPs? Is there a hack to install the PSP RPMs on an Ubuntu system?
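    There is no confirmed answer here, but one route people try is converting the PSP RPMs to .deb packages with alien. The sketch below is a hedged illustration only: it assumes alien is installed (sudo apt-get install alien) and that the PSP RPMs have been unpacked into a psp-rpms directory, both of which are assumptions, and converted HP agents can still fail at runtime if their init scripts expect Red Hat style paths.

      #!/usr/bin/env python
      # Hedged sketch: batch-convert HP PSP RPMs into .deb packages with alien.
      # The psp-rpms directory is a placeholder; --scripts carries the RPM
      # maintainer scripts across, which HP's agents generally rely on.
      import glob
      import subprocess

      for rpm in sorted(glob.glob("psp-rpms/*.rpm")):
          print("converting %s" % rpm)
          subprocess.check_call(["sudo", "alien", "--to-deb", "--scripts", rpm])

    For plain health monitoring without the PSPs, generic tools such as lm-sensors, smartmontools, and IPMI-based monitoring are worth evaluating, though they will not match the PSP agents feature for feature.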

    Read the article

  • How can I run supervisord without using root?

    - by Jason Baker
    I seem to be having trouble figuring out why supervisord won't run as a non-root user. If I start it with the user set to jason (pid 1000), I get the following in the log file: 2010-05-24 08:53:32,143 CRIT Set uid to user 1000 2010-05-24 08:53:32,143 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:53:32,189 INFO RPC interface 'supervisor' initialized 2010-05-24 08:53:32,189 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:53:32,189 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:53:32,190 INFO daemonizing the supervisord process 2010-05-24 08:53:32,191 INFO supervisord started with pid 3444 ...then the process dies for some unknown reason. If I start it without sudo (under the user jason), I get similar output: 2010-05-24 08:51:32,859 INFO supervisord started with pid 3306 2010-05-24 08:52:15,761 CRIT Can't drop privilege as nonroot user 2010-05-24 08:52:15,761 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:52:15,807 INFO RPC interface 'supervisor' initialized 2010-05-24 08:52:15,807 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:52:15,807 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:52:15,808 INFO daemonizing the supervisord process 2010-05-24 08:52:15,809 INFO supervisord started with pid 3397 ...and it still doesn't run. If it's any help, here's the supervisord.conf file I'm using: [unix_http_server] file=/tmp/supervisor.sock ; path to your socket file [supervisord] logfile=./supervisord.log ; supervisord log file logfile_maxbytes=50MB ; maximum size of logfile before rotation logfile_backups=10 ; number of backed up logfiles loglevel=debug ; info, debug, warn, trace pidfile=./supervisord.pid ; pidfile location nodaemon=false ; run supervisord as a daemon minfds=1024 ; number of startup file descriptors minprocs=200 ; number of process descriptors user=jason ; default user childlogdir=./supervisord/ ; where child log files will live [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use unix:// schem for a unix sockets. [include] # Uncomment this line for celeryd for Python files=celeryd.conf # Uncomment this line for celeryd for Django. ;files=django/celeryd.conf ...and here's celeryd.conf: [program:celery] command=bin/celeryd --loglevel=INFO --logfile=./celeryd.log environment=PYTHONPATH='./tsched_worker', JIVA_DB_PLATFORM='oracle', ORACLE_HOME='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server', LD_LIBRARY_PATH='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib', TNS_ADMIN='/home/jason', CELERY_CONFIG_MODULE='tsched_worker.celeryconfig' directory=. user=jason numprocs=1 stdout_logfile=/var/log/celeryd.log stderr_logfile=/var/log/celeryd.log autostart=true autorestart=true startsecs=10 ; Need to wait for currently executing tasks to finish at shutdown. ; Increase this if you have very long running tasks. stopwaitsecs = 600 ; if rabbitmq is supervised, set its priority higher ; so it starts first priority=998 Can anyone help me figure out what's going on?
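    Two things stand out in those logs. First, "Can't drop privilege as nonroot user" is triggered by the user=jason line in [supervisord] when supervisord is already started as jason; that directive only makes sense when starting as root, so remove it for the non-root case. Second, running supervisord in the foreground shows the real reason it exits instead of hiding it behind daemonization. The sketch below is a hedged debugging aid, not a fix: the config path is a guess taken from the log excerpt, and it assumes supervisord is on PATH. Also check that the relative paths in the config (./supervisord.log, ./supervisord.pid, ./supervisord/) resolve where you expect, and that jason can actually write /var/log/celeryd.log; either is a common cause of a silent exit.

      #!/usr/bin/env python
      # Hedged sketch: run supervisord in the foreground (-n) with the question's
      # config so its exit reason is visible on the terminal. The CONF path is an
      # assumption taken from the log excerpt; adjust it to your checkout.
      import subprocess

      CONF = "/home/jason/src/tsched/supervisord.conf"

      proc = subprocess.Popen(
          ["supervisord", "-n", "-c", CONF],
          stdout=subprocess.PIPE,
          stderr=subprocess.STDOUT,
      )
      for line in proc.stdout:
          print(line.decode("utf-8", "replace").rstrip())
      print("supervisord exited with status %s" % proc.wait())

    Note too that "daemonizing the supervisord process" followed by "supervisord started with pid 3444" suggests the daemon may in fact still be running under the new pid; supervisorctl -c <conf> status is a quick way to check.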

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
    Hi all, I'm coming from a fresh install of Ubuntu server 9.10 and trying to install mysql-server by using 'sudo apt-get mysql-server' I get the following errors: dan@dev:~$ sudo apt-get install mysql-server [sudo] password for dan: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server-5.1 Suggested packages: dbishell libipc-sharedcache-perl tinyca The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1 0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded. Need to get 16.5MB of archives. After this operation, 39.0MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB] Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB] Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB] Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB] Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37- 1ubuntu5.1 [8,202kB] Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB] Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB] Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB] Fetched 16.5MB in 1min 34s (175kB/s) Preconfiguring packages ... Selecting previously deselected package libnet-daemon-perl. (Reading database ... 123083 files and directories currently installed.) Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ... Selecting previously deselected package libplrpc-perl. Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ... Selecting previously deselected package libdbi-perl. Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ... Selecting previously deselected package libdbd-mysql-perl. Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ... Selecting previously deselected package mysql-client-5.1. Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package mysql-server-5.1. Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up libnet-daemon-perl (0.43-1) ... Setting up libplrpc-perl (0.2020-2) ... Setting up libdbi-perl (1.609-1) ... Setting up libdbd-mysql-perl (4.011-1ubuntu1) ... Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ... Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ... * Stopping MySQL database server mysqld [ OK ] * Starting MySQL database server mysqld [fail] invoke-rc.d: initscript mysql, action "start" failed. 
dpkg: error processing mysql-server-5.1 (--configure): subprocess installed post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.1; however: Package mysql-server-5.1 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.1 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) What am I missing? [update] mysqld returns: dan@dev:~$ sudo mysqld [sudo] password for dan: 100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled. InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:18:17 InnoDB: Retrying to lock the first data file InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process This goes on for a while... InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. ^[[BInnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:19:57 InnoDB: Unable to open the first data file InnoDB: Error in opening ./ibdata1 100220 12:19:57 InnoDB: Operating system error number 11 in a file operation. InnoDB: Error number 11 means 'Resource temporarily unavailable'. InnoDB: Some operating system error numbers are described at InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html InnoDB: Could not open or create data files. InnoDB: If you tried to add new data files, and it failed here, InnoDB: you should now edit innodb_data_file_path in my.cnf back InnoDB: to what it was, and remove the new ibdata files InnoDB created InnoDB: in this failed attempt. InnoDB only wrote those files full of InnoDB: zeros, but did not yet use them in any way. But be careful: do not InnoDB: remove old data files which contain your precious data! 100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error. 100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use 100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ? 100220 12:19:57 [ERROR] Aborting 100220 12:19:57 [Warning] Forcing shutdown of 1 plugins 100220 12:19:57 [Note] mysqld: Shutdown complete How can I check what process is using port: 3306? [Update]: sudo netstat -anp | grep LISTEN returns dan@dev:~$ sudo netstat -anp | grep LISTEN [sudo] password for dan: tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1372/master tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 4391/mysqld tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1409/cupsd tcp6 0 0 ::1:631 :::* LISTEN 1409/cupsd [More Updates]: I can log into mysql if that makes a difference
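    To answer the embedded question directly (which process is using port 3306): the netstat output above already shows mysqld with pid 4391 listening on 127.0.0.1:3306, which is also why InnoDB cannot lock ./ibdata1. Below is a hedged sketch of the usual checks, assuming lsof and fuser are installed and that the datadir is the Debian/Ubuntu default /var/lib/mysql.

      #!/usr/bin/env python
      # Hedged sketch: show which process holds TCP port 3306 and the InnoDB data
      # file. lsof/fuser must be installed; the datadir path is the distro default
      # and may differ on your system.
      import subprocess

      def run(cmd):
          print("$ %s" % " ".join(cmd))
          subprocess.call(cmd)   # print whatever the tool reports; ignore exit code

      run(["sudo", "lsof", "-i", ":3306"])
      run(["sudo", "fuser", "-v", "/var/lib/mysql/ibdata1"])

    Once the stray mysqld is stopped (sudo service mysql stop, or kill on the pid as a last resort), re-running sudo dpkg --configure -a (or sudo apt-get -f install) usually lets the half-configured mysql-server-5.1 package finish its post-installation step.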

    Read the article

  • automate Clonezilla

    - by oidfrosty
    Is there a way to automate the process of cloning a disk with Clonezilla across different hardware (CentOS)? We need to make it as easy as possible for the end user; right now the process is quite involved.
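    Clonezilla live can run unattended if you hand its backend command to the boot loader via the ocs_live_run parameter, which is the usual way to hide the interactive wizard from end users. The sketch below is heavily hedged: it only builds the extra kernel arguments, which you then append to the kernel line copied from the isolinux.cfg/grub.cfg of the Clonezilla live release you actually deploy, and the ocs-sr options, image name, and disk device are assumptions to verify against Clonezilla's documentation (running the wizard once and copying the ocs-sr command it prints at the end is the safest source).

      #!/usr/bin/env python
      # Hedged sketch: build the extra kernel arguments for an unattended
      # Clonezilla live run. ocs_live_run passes a command to Clonezilla's
      # backend (ocs-sr); the options shown are typical but must be checked
      # against your Clonezilla version, as must the image name and disk.
      def unattended_args(mode, image, disk="sda", ocs_opts="-q2 -j2 -p reboot"):
          ocs_cmd = "ocs-sr %s %s %s %s" % (ocs_opts, mode, image, disk)
          return 'ocs_live_batch="yes" ocs_live_run="%s"' % ocs_cmd

      # Capture a golden image on one box, restore it on the rest (restoredisk
      # typically wants its own option set, e.g. grub-reinstall flags -- check).
      print(unattended_args("savedisk", "centos-golden-img"))
      print(unattended_args("restoredisk", "centos-golden-img"))

    Because the target machines differ in hardware, also plan for driver differences after restore (initrd rebuild, network device renaming), which Clonezilla itself does not handle.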

    Read the article

  • How to find other end of unix socket connection?

    - by depesz
    I have a process (dbus-daemon) which has many open connection over UNIX sockets. One of these connections is fd #36: =$ ps uw -p 23284 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND depesz 23284 0.0 0.0 24680 1772 ? Ss 15:25 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session =$ ls -l /proc/23284/fd/36 lrwx------ 1 depesz depesz 64 2011-03-28 15:32 /proc/23284/fd/36 -> socket:[1013410] =$ netstat -nxp | grep 1013410 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD =$ netstat -nxp | grep dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013953 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013825 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013726 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013471 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1013410 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012325 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012302 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012289 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1012151 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011957 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011937 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011900 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011775 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011771 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011769 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011766 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011663 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011635 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011627 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011540 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011480 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011349 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011312 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011284 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011250 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011231 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011155 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011061 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011049 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011035 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1011013 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010961 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD unix 3 [ ] STREAM CONNECTED 1010945 23284/dbus-daemon @/tmp/dbus-3XDU4PYEzD Based on number connections, I assume that dbus-daemon is actually server. Which is OK. But how can I find which process is connected to it - using the connection that is 36th file handle in dbus-launcher? Tried lsof and even greps on /proc/net/unix but I can't figure out a way to find the client process.
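    On a sufficiently new kernel (roughly 3.3+, with a matching iproute2), ss can report the peer inode of each unix socket via the UNIX_DIAG interface, which is exactly the hop that /proc/net/unix cannot give you: find the peer inode of socket 1013410, then find the process that owns a socket with that inode. The sketch below is hedged: the regexes assume the usual "local_inode * peer_inode" column layout of ss -xp, which can vary between iproute2 versions, and it should be run as root so the owning processes are visible; on older kernels this information simply is not exported and other tricks (or a kernel upgrade) are needed.

      #!/usr/bin/env python
      # Hedged sketch: find the process at the far end of a unix-socket
      # connection, starting from one end's inode (1013410 in the question).
      # Requires "ss -xp" output that includes peer inodes (UNIX_DIAG support);
      # the parsing assumes the common "local_inode * peer_inode" layout.
      import re
      import subprocess
      import sys

      def find_peer(local_inode):
          out = subprocess.check_output(["ss", "-xp"]).decode("utf-8", "replace")
          lines = out.splitlines()

          peer = None
          for line in lines:
              m = re.search(r"\b%s\s+\*\s+(\d+)" % local_inode, line)
              if m:
                  peer = m.group(1)          # inode of the socket on the other end
                  break
          if peer is None:
              return "no peer inode found (old kernel, or wrong inode?)"

          for line in lines:
              if re.search(r"\b%s\s+\*\s+" % peer, line):
                  owner = re.search(r"users:\((.*)\)", line)
                  return "peer inode %s, owner: %s" % (
                      peer, owner.group(1) if owner else line.strip())
          return "peer inode %s found, but no owning process visible (run as root?)" % peer

      if __name__ == "__main__":
          print(find_peer(sys.argv[1] if len(sys.argv) > 1 else "1013410"))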

    Read the article

  • VB6 Scheduled tasks on Windows Server 2008 Standard

    - by Terry
    Hello, this is my first time using this forum. Here is my situation: we are having issues with specific scheduled tasks that are, I'm told, written in VB6 (I am not a developer myself). The task is started by Task Scheduler and the process begins to run -- you can see it in Task Manager -- but it uses no resources (00 CPU, 760 K RAM) and nothing happens. Under normal operation the task uses about 25% CPU and up to 20 MB RAM. When the task fails this way, you can still end and start it via Task Scheduler, but nothing happens; if you run the exe directly, it runs fine. The problem only appears when the task is started via Task Scheduler, it occurs at random, and it always disappears after a server reboot. All of these tasks are VB6 applications on Windows Server 2008 Standard; some servers are SP1 and some are SP2, but both experience the issue. The task is configured to run with highest privileges and to run whether the user is logged on or not, and setting the exe's compatibility mode to 2003 makes no difference.
    Situation 1: "51 - ERROR - Program did not appear to complete, check server!! (Desc: Input past end of file)". The task shows as running in Task Scheduler and the process is visible in Task Manager. All the log file contains is: "12/17/2009 03:16 Starting T2 Populator version - 1.0.12". You can simply end the task and start it again via Task Scheduler and away it goes.
    Situation 2: "36 - ERROR - Program last ran on 16-Dec-2009". The task shows as running in Task Scheduler and the process is visible in Task Manager, but no resources are used (00 CPU, 760 K RAM) and nothing is logged. You can end the task via Task Scheduler, but you must run the exe manually for it to complete.
    Has anyone else experienced issues with VB6 tasks, or any tasks for that matter, on Server 2008?

    Read the article

  • Automate ripping TV show DVDs

    - by skarface
    RipIt & Handbrake do a really good job of ripping and compressing. For normal "single main feature" DVDs I have a good workflow, and for the most part handbrake does a good job of figuring out what the main title is. The process for ripping a DVD that has multiple episodes of something kinda sucks. Has anyone made any progress on automating (or at least simplifying) the process of getting show-s01e01.avi from a DVD?

    Read the article

  • Why can't I play DVDs on Windows 8 Pro with Media Center Pack?

    - by ligos
    I have a laptop running Windows 8 Pro with Media Center (64-bit), but neither Media Player nor Media Center can play DVDs. Have I done something wrong? Did the Feature Pack not install correctly? Should this work? Can I somehow uninstall and reinstall the Media Pack?
    Details: I upgraded my Windows 7 Home Premium laptop to Windows 8 Pro based on Microsoft's low pricing, grabbed the free Media Pack upgrade, and followed the instructions on that page to add the feature pack. Alas, I still cannot play DVDs in either Media Center or Media Player.
    Context: Thinking I might need to reinstall the pack, I found that I can no longer add any feature packs (searching for "add features" in Settings only shows "Turn Windows Features On and Off"). Media Center and Media Player are both enabled in Windows Features, and I see no way to remove or downgrade the Media Pack, nor to add further feature packs. I installed a 32-bit codec pack from Shark007, which let me play various other media files but still not DVDs. Media Player can play DTV recorded on another Windows 7 box, but Media Center cannot. VLC plays DVDs fine, but I would prefer to find the root cause of the problem. There were no errors or other indications that the Media Pack failed to install; the installation itself was quite smooth, although I have not checked the event log in detail. Before the upgrade, on Windows 7, I could play DVDs without problems.
    Screenshots showed: System Information reporting Windows 8 Pro with Media Center. When playing a DVD, Media Player gives the error "The selected file has an extension that is not recognised by Windows..."; clicking Yes then fails with "Windows Media Player cannot find the file...". Media Center says the file type is not recognised and cannot be played, along with some codec-related details. I can browse the files on any video DVD via My Computer.

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
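    One hedged first step that usually narrows this down: log how long nginx itself waits on Unicorn by adding $request_time and $upstream_response_time to the nginx log_format (both are standard nginx variables; the log_format change happens in nginx.conf and is not shown here). If the two numbers track each other on slow requests, the time is being spent inside the Sinatra/Unicorn/PostgreSQL stack; if $request_time is large while $upstream_response_time stays small, look instead at nginx, the listener between them, or slow clients. The sketch below assumes those two values were appended as the last two fields of each access-log line, which is an assumption about your log_format.

      #!/usr/bin/env python
      # Hedged sketch: flag slow requests in an nginx access log, assuming
      # $request_time and $upstream_response_time were appended as the last two
      # fields of each line. Threshold and default log path are assumptions.
      import sys

      THRESHOLD = 1.0                       # seconds
      LOG = sys.argv[1] if len(sys.argv) > 1 else "/var/log/nginx/access.log"

      with open(LOG) as f:
          for line in f:
              fields = line.split()
              if len(fields) < 2:
                  continue
              try:
                  request_time = float(fields[-2])    # total time nginx spent
                  upstream_time = float(fields[-1])   # time Unicorn took to answer
              except ValueError:
                  continue                            # "-" when there was no upstream
              if request_time >= THRESHOLD:
                  where = "unicorn/app" if upstream_time >= THRESHOLD else "nginx/network"
                  print("%6.2fs (%s) %s" % (request_time, where, line.strip()))

    If the slowness turns out to be in the app, the usual suspects given this setup are too few Unicorn workers for the traffic (all eight busy means new requests queue), long-running PostgreSQL queries, or the God memory condition recycling workers during peaks.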

    Read the article

  • Office 2007 and Office 2010 installed side by side

    - by BlueDevil
    I have Office 2007 and Office 2010 installed side by side. How do I stop the setup/configuration window from appearing each time I switch versions? If I open a 2007 application, it runs through the configuration process and then works fine until I open a 2010 application, at which point the 2010 application runs through the configuration process as well.
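    A commonly cited workaround for this first-launch repair loop is the NoReReg registry value, which tells each Word version not to re-register itself when it notices the other version ran last. I believe this matches Microsoft's published guidance for side-by-side installs, but verify it against the relevant KB article for your builds before applying it; it is set per user under HKCU. A hedged sketch using Python's standard winreg module:

      #!/usr/bin/env python
      # Hedged sketch (run on Windows as the affected user): set the commonly
      # cited NoReReg value for Word 2007 (12.0) and Word 2010 (14.0) so opening
      # one version stops triggering the configuration dialog of the other.
      # Verify the value against Microsoft's guidance before relying on it.
      import winreg

      for version in ("12.0", "14.0"):          # Office 2007 and Office 2010
          key_path = r"Software\Microsoft\Office\%s\Word\Options" % version
          key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path)
          winreg.SetValueEx(key, "NoReReg", 0, winreg.REG_DWORD, 1)
          winreg.CloseKey(key)
          print("set HKCU\\%s\\NoReReg = 1" % key_path)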

    Read the article

  • How to revert AIX service pack?

    - by unknown (google)
    I have an AIX 6.1 box at service pack level 6100-02-03-0909:
      swax23 # oslevel -s -q
      Known Service Packs
      6100-02-03-0909
      6100-02-02-0849
      6100-02-01-0847
      6100-02-00-0000
      6100-01-04-0909
      6100-01-03-0846
      6100-01-02-0834
      6100-01-01-0823
      6100-00-08-0909
      6100-00-07-0846
      6100-00-06-0834
      6100-00-05-0822
      6100-00-04-0815
      6100-00-03-0808
      6100-00-02-0750
      6100-00-01-0748
    I want to revert the latest service pack and go back to 6100-02-02-0849. How do I do that?

    Read the article

  • CPU Limits for Application Pools in IIS 7.5

    - by Kyle Brandt
    I see that in IIS 7.5 I can set a CPU % utilization limit over a specified interval for an application pool, and I can also have it kill the worker process if that limit is exceeded. If I tell it to do this, will the worker process restart automatically after it is killed, or is manual intervention required? Over at Stack Overflow there is mention that it can be restarted at the completion of the interval...

    Read the article

  • Python error after installing libboost-all-dev on debian [migrated]

    - by Cameron Metzke
    A friend of mine wanted the liboost libraries installed on our shared computer so after installing libboost-all-dev 1.49.0.1 ( A debian wheezy machine ), I get this error when using the "pydoc modules" command on the commandline. It spits out the following error -- root@debian:/usr/include/c++/4.7# pydoc modules Please wait a moment while I gather a list of all available modules... **[debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS -------------------------------------------------------------------------- -------------------------------------------------------------------------- It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): ompi_mpi_init: orte_init failed --> Returned "A system-required executable either could not be found or was not executable by this user" (-127) instead of "Success" (0) -------------------------------------------------------------------------- *** The MPI_Init() function was called before MPI_INIT was invoked. *** This is disallowed by the MPI standard. *** Your MPI job will now abort. [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!** root@debian:/usr/include/c++/4.7# I tried looking into the problem and ended up uninstalling the following to get it to work again. openmpi common all 1.4.5-1 libibverbs-dev amd64 1.1.6-1 libopenmpi-dev amd64 1.4.5-1 mpi-default-dev amd64 1.0.1 libboost-mpi-python1.49.0 although pydoc works again, I'm assuming the packages I removed are gunna hurt somethiong else down the track ? As you guessed im not a c/c++ programmer. So I guess my question is, will this hurt something later ? is their a way to install those packages without hurting python ?

    Read the article

< Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >