Search Results

Search found 34654 results on 1387 pages for 'start job'.


  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you hit any blockers you can be pretty sure that everyone who can deal with your problem is asleep. I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one who did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010
    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors. Figure: Even the SQL Setup looks good. I don’t know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases
    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008. Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

    John Liu [SSW] said: are you doing something to TFS :-O
    MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
    John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
    MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
    John Liu [SSW] said: I love you!
    - IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue. Figure: Backup all of the databases for TFS and include the Reporting Services, just in case. Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed: if we have any problems, or just plain run out of time, then we just turn the TFS 2008 server back on and all we have lost is one upgrade day, not 10 developer days.

    As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

    TFS2008 file count: Type 1: 1845, Type 2: 15770
    Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
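    The post doesn't show how these before/after counts were produced, and the meaning of “Type 1” and “Type 2” is not explained. As a purely illustrative sketch (my addition, not Martin's actual tooling), the TFS 2010 client object model can gather comparable numbers; the collection URL is a placeholder, and counting this way is only an approximation of whatever produced the stats above:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Server;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class VerifyUpgradeStats
    {
        static void Main()
        {
            // Hypothetical collection URL; point this at your own server.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://localhost:8080/tfs/DefaultCollection"));

            // Count every item under version control (slow on large servers).
            var vcs = collection.GetService<VersionControlServer>();
            var items = vcs.GetItems("$/", RecursionType.Full);
            Console.WriteLine("Version control items: {0}", items.Items.Length);

            // Count area and iteration nodes across all team projects.
            var css = collection.GetService<ICommonStructureService>();
            int nodes = 0;
            foreach (ProjectInfo project in css.ListProjects())
            {
                foreach (NodeInfo root in css.ListStructures(project.Uri))
                {
                    // GetNodesXml returns the whole subtree; each <Node> element is one area or iteration.
                    var xml = css.GetNodesXml(new[] { root.Uri }, true);
                    nodes += xml.SelectNodes("//Node").Count;
                }
            }
            Console.WriteLine("Areas & Iterations: {0}", nodes);
        }
    }

    Run the same query before and after the upgrade and compare the two sets of numbers.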
    Restore Team Foundation Server 2008 Databases
    Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems. Figure: Restore each of the databases to either the latest or a specific point in time. Figure: Restore all of the required databases.

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases
    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all 6 of the main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical files as well. This keeps everything neat and tidy.

    If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. The GUID is used by the clients, and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade, open a command prompt, change the path to “C:\Program Files\Microsoft Team Foundation Server 2010\Tools” and run the “import” command in “tfsconfig”.

    TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                     /collectionName:<Collection Name>
                     /confirmed

    Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console.

    EXAMPLES
    --------
    TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
    TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

    OPTIONS:
    --------
    sqlinstance         The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases. Ensure you have back-ups.
    collectionName      The name of the new Team Project Collection.
    confirmed           Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW’s production database, with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings.

    The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files, then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again you will get a conflict with project names, and you will be required to remove the Test Upgrade collection. This is fine; it just needs to be detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

    TFS2008 file count: Type 1: 1845, Type 2: 15770
    Areas & Iterations: 139

    Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

    TFS2010 file count: Type 1: 19288
    Areas & Iterations: 139

    Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 Project Collection
    Can we connect to the new collection and project? Figure: We can connect to the new collection and project. Figure: Make sure you can connect to the upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

    Upgrade Done!
    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template it was running before. You can only “enable” 2010 features in a process template; you can’t upgrade the template itself. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, as you can move or branch it, but work items are more difficult as you can’t move them between projects. This instance is complicated further as the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!

    Technorati Tags: TFS 2010,TFS 2008,VS ALM
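    The post ends there, with the work item migration still to be written. As a purely illustrative footnote (my sketch, not the script Martin went on to build), the TFS 2010 work item object model can copy work items between projects; the collection URL, project and type names below are placeholders, and attachments and history would still need handling of their own:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class CopyWorkItems
    {
        static void Main()
        {
            // Hypothetical collection URL and project/type names.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://localhost:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();
            var targetType = store.Projects["NewProject"].WorkItemTypes["Product Backlog Item"];

            var wiql = "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'OldProject'";
            foreach (WorkItem source in store.Query(wiql))
            {
                // Copy() carries field values across and links the copy back to the original;
                // attachments are not copied automatically, and Validate() is worth calling
                // before Save() when the target type has different required fields.
                WorkItem copy = source.Copy(targetType);
                copy.Save();
                Console.WriteLine("Copied #{0} -> #{1}", source.Id, copy.Id);
            }
        }
    }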

    Read the article

  • Complete Guide to Symbolic Links (symlinks) on Windows or Linux

    - by Matthew Guay
    Want to easily access folders and files from different folders without maintaining duplicate copies? Here’s how you can use Symbolic Links to link anything in Windows 7, Vista, XP, and Ubuntu.

    So What Are Symbolic Links Anyway?
    Symbolic links, otherwise known as symlinks, are basically advanced shortcuts. You can create symbolic links to individual files or folders, and these will then appear as though they are stored in the folder containing the symbolic link, even though the link only points to their real location. There are two types of symbolic links: hard and soft. Soft symbolic links work essentially the same as a standard shortcut. When you open a soft link, you will be redirected to the folder where the files are stored. However, a hard link makes it appear as though the file or folder actually exists at the location of the symbolic link, and your applications won’t know any different. Thus, hard links are of the most interest in this article.

    Why should I use Symbolic Links?
    There are many things we use symbolic links for, so here are some of the top uses we can think of:
    Sync any folder with Dropbox – say, sync your Pidgin profile across computers
    Move the settings folder for any program from its original location
    Store your Music/Pictures/Videos on a second hard drive, but make them show up in your standard Music/Pictures/Videos folders so they’ll be detected by your media programs (Windows 7 Libraries can also be good for this)
    Keep important files accessible from multiple locations
    And more!

    If you want to move files to a different drive or folder and then symbolically link them, follow these steps:
    Close any programs that may be accessing that file or folder
    Move the file or folder to the new desired location
    Follow the correct instructions below for your operating system to create the symbolic link.

    Caution: Make sure never to create a symbolic link inside of a symbolic link. For instance, don’t create a symbolic link to a file that’s contained in a symbolic linked folder. This can create a loop, which can cause millions of problems you don’t want to deal with. Seriously.

    Create Symlinks in Any Edition of Windows in Explorer
    Creating symlinks is usually difficult, but thanks to the free Link Shell Extension, you can create symbolic links in all modern versions of Windows pain-free. You need to download both the Visual Studio 2005 redistributable, which contains the necessary prerequisites, and the Link Shell Extension itself (links below). Download the correct version (32 bit or 64 bit) for your computer. Run and install the Visual Studio 2005 Redistributable installer first. Then install the Link Shell Extension on your computer. Your taskbar will temporarily disappear during the install, but will quickly come back.

    Now you’re ready to start creating symbolic links. Browse to the folder or file you want to create a symbolic link from. Right-click the folder or file and select Pick Link Source. To create your symlink, right-click in the folder where you wish to save the symbolic link, select “Drop as…”, and then choose the type of link you want. You can choose from several different options here; we chose the Hardlink Clone. This will create a hard link to the file or folder we selected. The Symbolic link option creates a soft link, while the Smart Copy option will fully copy a folder containing symbolic links without breaking them. These options can be useful as well. Here’s our hard-linked folder on our desktop.
    Notice that the folder looks like its contents are stored in Desktop\Downloads, when they are actually stored in C:\Users\Matthew\Desktop\Downloads. Also, when links are created with the Link Shell Extension, they have a red arrow on them so you can still differentiate them. And this works the same way in XP as well.

    Symlinks via Command Prompt
    Or, for geeks who prefer working via the command line, here’s how you can create symlinks in Command Prompt in Windows 7/Vista and XP.

    In Windows 7/Vista
    In Windows Vista and 7, we’ll use the mklink command to create symbolic links. To use it, we have to open an administrator Command Prompt. Enter “command” in your start menu search, right-click on Command Prompt, and select “Run as administrator”. To create a symbolic link, we need to enter the following in the command prompt:

    mklink /prefix link_path file/folder_path

    First, choose the correct prefix. Mklink can create several types of links, including the following:
    /D – creates a soft symbolic link, which is similar to a standard folder or file shortcut in Windows. This is the default option, and mklink will use it if you do not enter a prefix.
    /H – creates a hard link to a file
    /J – creates a hard link to a directory or folder

    So, once you’ve chosen the correct prefix, you need to enter the path you want for the symbolic link, and the path to the original file or folder. For example, if I wanted a folder in my Dropbox folder to appear as though it was also stored on my desktop, I would enter the following:

    mklink /J C:\Users\Matthew\Desktop\Dropbox C:\Users\Matthew\Documents\Dropbox

    Note that the first path is for the symbolic folder I want to create, while the second path is to the real folder. Here, in this command prompt screenshot, you can see that I created a symbolic link of my Music folder on my desktop. And here’s how it looks in Explorer. Note that all of my music is “really” stored in C:\Users\Matthew\Music, but here it looks like it is stored in C:\Users\Matthew\Desktop\Music.

    If your path has any spaces in it, you need to place quotes around it. Note also that the link can have a different name than the file it links to. For example, here I’m going to create a symbolic link to a document on my desktop:

    mklink /H “C:\Users\Matthew\Desktop\ebook.pdf” “C:\Users\Matthew\Downloads\Before You Call Tech Support.pdf”

    Don’t forget the syntax:

    mklink /prefix link_path target_file/folder_path

    In Windows XP
    Windows XP doesn’t include built-in command prompt support for symbolic links, but we can use the free Junction tool instead. Download Junction (link below), and unzip the folder. Now open Command Prompt (click Start, select All Programs, then Accessories, and select Command Prompt), and enter cd followed by the path of the folder where you saved Junction. Junction only creates hard symbolic links, since you can use shortcuts for soft ones. To create a hard symlink, we need to enter the following in the command prompt:

    junction -s link_path file/folder_path

    As with mklink in Windows 7 or Vista, if your file/folder path has spaces in it, make sure to put quotes around your paths. Also, as usual, your symlink can have a different name than the file/folder it points to. Here, we’re going to create a symbolic link to our My Music folder on the desktop. We entered:

    junction -s “C:\Documents and Settings\Administrator\Desktop\Music” “C:\Documents and Settings\Administrator\My Documents\My Music”

    And here’s the contents of our symlink.
    Note that the path looks as though these files are stored in a Music folder directly on the Desktop, when they are actually stored in My Documents\My Music. Once again, this works with both folders and individual files.

    Please Note: Junction would work the same in Windows 7 or Vista, but since those versions include a built-in symbolic link tool, we found it better to use that instead.

    Symlinks in Ubuntu
    Unix-based operating systems have supported symbolic links since their inception, so it is straightforward to create symbolic links in Linux distros such as Ubuntu. There’s no graphical way to create them like the Link Shell Extension for Windows, so we’ll just do it in the Terminal. Open the Terminal (open the Applications menu, select Accessories, and then click Terminal), and enter the following:

    ln -s file/folder_path link_path

    Note that this is the opposite of the Windows commands; you put the source for the link first, and the link path second. For example, let’s create a symbolic link of our Pictures folder on our Desktop. To do this, we entered:

    ln -s /home/maguay/Pictures /home/maguay/Desktop

    Once again, here are the contents of our symlink folder. The pictures look as if they’re stored directly in a Pictures folder on the Desktop, but they are actually stored in /home/maguay/Pictures.

    Delete Symlinks
    Removing symbolic links is very simple – just delete the link! Most of the command line utilities offer a way to delete a symbolic link via the command prompt, but you don’t need to go to the trouble.

    Conclusion
    Symbolic links can be very handy, and we use them constantly to help us stay organized and keep our hard drives from overflowing. Let us know how you use symbolic links on your computers!

    Download Link Shell Extension for Windows 7, Vista, and XP
    Download Junction for XP
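    As a supplement to the GUI and command-line methods above (not part of the original article), symlinks can also be created programmatically on Windows. This C# sketch P/Invokes the Win32 CreateSymbolicLink API, which creates the same kind of soft link as mklink's default mode; it requires Vista or later and an elevated process, and the paths are placeholders:

    using System;
    using System.Runtime.InteropServices;

    class SymlinkDemo
    {
        // CreateSymbolicLink exists from Windows Vista onward and normally requires
        // an elevated process (SeCreateSymbolicLinkPrivilege). It returns a 1-byte
        // BOOLEAN, hence the explicit marshaling attribute.
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]
        static extern bool CreateSymbolicLink(string lpSymlinkFileName, string lpTargetFileName, int dwFlags);

        const int SYMBOLIC_LINK_FLAG_FILE = 0x0;
        const int SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1;

        static void Main()
        {
            // Placeholder paths; substitute your own link location and target.
            string link = @"C:\Users\Matthew\Desktop\Music";
            string target = @"C:\Users\Matthew\Music";

            if (CreateSymbolicLink(link, target, SYMBOLIC_LINK_FLAG_DIRECTORY))
                Console.WriteLine("Created {0} -> {1}", link, target);
            else
                Console.WriteLine("Failed with Win32 error {0}", Marshal.GetLastWin32Error());
        }
    }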

    Read the article

  • I want to construct a web page for my department; I want your advice

    - by gcc
    I want to help freshmen, so I will build a web page for them. On that web page, I will have some topics:
    What should you consider while installing Ubuntu? (e.g., are there any driver conflicts?) [Freshmen do not know how to install Ubuntu, or they think everything is complete when the Ubuntu CD finishes its job]
    Problem-solving style
    Best book to learn language X
    General advice for the department
    How should I study programming languages?
    Some web pages that introduce Ubuntu in depth
    Some web pages that introduce Makefiles
    Assume you were in my position: would you build a web page like this? If you would, which topics would you add, and which would you remove?
    NOTE: If you do not like my language, you are free to give me advice to fix my faults.
    EDIT: I am a student. How do they expect me to ask a great question? If they haven't helped me fix it, how do they expect me to improve myself, or help others? I just want to help freshmen. Is that a big mistake?

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system, but when you have more than 10 services it can be. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of Discovery Service
    Almost all my services need to invoke at least one other service, so it would be a difficult task to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (the configuration dependency). A discovery service acts as a service dictionary which stores the relationships between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service will return all proper endpoints of service Y, and service X can use one of those endpoints to send its request to service Y. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service; then all the others will know its new endpoint.

    WCF 4.0 Discovery supports both the managed proxy discovery mode and the ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services which have enabled the discovery behavior will receive this message, and only the matching services will send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a discovery service. For more information about the ad-hoc mode please refer to MSDN.

    Service Announcement and Probe
    The main functionality of a discovery service is to return the proper endpoint addresses to the service that is looking for them. In most cases the consuming service (as a client) will send the contract it wants to request to the discovery service, and the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough; the metadata can also contain versioning and extension attributes. This post will only cover the case that includes the contract and endpoint.

    When a client (or sometimes a service which needs to invoke another service) needs to connect to a target service, it first requests the discovery service through the “Probe” method with the criteria. Basically the criteria contains the contract type name of the target service. The discovery service then searches its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called “Probe”.
    Finally the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service should take responsibility for knowing when a new service becomes available as it comes online, as well as when it stops and goes offline. This feature is named “Announcement”. When a service starts and stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include:

    1. An endpoint which receives the service online message, and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message, and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

    WCF 4.0's discovery infrastructure classes cover all of these features.

    Discovery Service in WCF 4.0
    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports the ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better service scalability and performance.

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method.
    3. OnBeginFind, OnEndFind: Invoked when a client sends the probe message to find the service endpoint information. We need to look for the proper endpoints by matching the client’s criteria against the repository in this method.
    4. OnBeginResolve, OnEndResolve: Invoked when a client sends the resolve message. Unlike the find method, with the resolve method the discovery service returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

    Let’s create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;
    using System.ServiceModel;

    namespace Phare.Service
    {
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class ManagedProxyDiscoveryService : DiscoveryProxy
        {
            protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndFind(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOfflineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOnlineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
            {
                throw new NotImplementedException();
            }
        }
    }

    Then let’s implement the online, offline and find methods one by one. The WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used in multi-threaded scenarios.

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class ManagedProxyDiscoveryService : DiscoveryProxy
    {
        // ConcurrentDictionary requires "using System.Collections.Concurrent;" (new in .NET 4.0).
        private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

        public ManagedProxyDiscoveryService()
        {
            _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
        }
    }

    Then we can simply implement the logic of service online and offline.
    protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
        return new OnOnlineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOnlineAnnouncement(IAsyncResult result)
    {
        OnOnlineAnnouncementAsyncResult.End(result);
    }

    protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        EndpointDiscoveryMetadata endpoint = null;
        _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
        return new OnOfflineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOfflineAnnouncement(IAsyncResult result)
    {
        OnOfflineAnnouncementAsyncResult.End(result);
    }

    Regarding the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which we can use to evaluate which service metadata satisfies the criteria. So the implementation of the find method would be like this:

    protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
    {
        _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
                 .Select(s => s.Value)
                 .All(meta =>
                 {
                     findRequestContext.AddMatchingEndpoint(meta);
                     return true;
                 });
        return new OnFindAsyncResult(callback, state);
    }

    protected override void OnEndFind(IAsyncResult result)
    {
        OnFindAsyncResult.End(result);
    }

    As you can see, we check all the endpoint metadata in the repository by invoking the IsMatch method, then add all matching endpoint metadata to the find request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the inherited classes used in the previous methods.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading;

    namespace Phare.Service
    {
        abstract internal class AsyncResult : IAsyncResult
        {
            AsyncCallback callback;
            bool completedSynchronously;
            bool endCalled;
            Exception exception;
            bool isCompleted;
            ManualResetEvent manualResetEvent;
            object state;
            object thisLock;

            protected AsyncResult(AsyncCallback callback, object state)
            {
                this.callback = callback;
                this.state = state;
                this.thisLock = new object();
            }

            public object AsyncState
            {
                get { return state; }
            }

            public WaitHandle AsyncWaitHandle
            {
                get
                {
                    if (manualResetEvent != null)
                    {
                        return manualResetEvent;
                    }
                    lock (ThisLock)
                    {
                        if (manualResetEvent == null)
                        {
                            manualResetEvent = new ManualResetEvent(isCompleted);
                        }
                    }
                    return manualResetEvent;
                }
            }

            public bool CompletedSynchronously
            {
                get { return completedSynchronously; }
            }

            public bool IsCompleted
            {
                get { return isCompleted; }
            }

            object ThisLock
            {
                get { return this.thisLock; }
            }

            protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
                where TAsyncResult : AsyncResult
            {
                if (result == null)
                {
                    throw new ArgumentNullException("result");
                }

                TAsyncResult asyncResult = result as TAsyncResult;

                if (asyncResult == null)
                {
                    throw new ArgumentException("Invalid async result.", "result");
                }

                if (asyncResult.endCalled)
                {
                    throw new InvalidOperationException("Async object already ended.");
                }

                asyncResult.endCalled = true;

                if (!asyncResult.isCompleted)
                {
                    asyncResult.AsyncWaitHandle.WaitOne();
                }

                if (asyncResult.manualResetEvent != null)
                {
                    asyncResult.manualResetEvent.Close();
                }

                if (asyncResult.exception != null)
                {
                    throw asyncResult.exception;
                }

                return asyncResult;
            }

            protected void Complete(bool completedSynchronously)
            {
                if (isCompleted)
                {
                    throw new InvalidOperationException("This async result is already completed.");
                }

                this.completedSynchronously = completedSynchronously;

                if (completedSynchronously)
                {
                    this.isCompleted = true;
                }
                else
                {
                    lock (ThisLock)
                    {
                        this.isCompleted = true;
                        if (this.manualResetEvent != null)
                        {
                            this.manualResetEvent.Set();
                        }
                    }
                }

                if (callback != null)
                {
                    callback(this);
                }
            }

            protected void Complete(bool completedSynchronously, Exception exception)
            {
                this.exception = exception;
                Complete(completedSynchronously);
            }
        }
    }

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;
    using Phare.Service;

    namespace Phare.Service
    {
        internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
        {
            public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
        {
            public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnFindAsyncResult : AsyncResult
        {
            public OnFindAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnFindAsyncResult>(result);
            }
        }

        sealed class OnResolveAsyncResult : AsyncResult
        {
            EndpointDiscoveryMetadata matchingEndpoint;

            public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.matchingEndpoint = matchingEndpoint;
                this.Complete(true);
            }

            public static EndpointDiscoveryMetadata End(IAsyncResult result)
            {
                OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
                return thisPtr.matchingEndpoint;
            }
        }
    }

    Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can use ServiceHost in a console application, a Windows service, or in IIS as usual. The following code hosts the discovery service we have just created in a console application.

    static void Main(string[] args)
    {
        using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
        {
            host.Opened += (sender, e) =>
            {
                host.Description.Endpoints.All((ep) =>
                {
                    Console.WriteLine(ep.ListenUri);
                    return true;
                });
            };

            try
            {
                // retrieve the announcement, probe endpoint and binding from configuration
                var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
                var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
                var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
                var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
                var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
                probeEndpoint.IsSystemEndpoint = false;
                // append the service endpoints for announcement and probe
                host.AddServiceEndpoint(announcementEndpoint);
                host.AddServiceEndpoint(probeEndpoint);

                host.Open();

                Console.WriteLine("Press any key to exit.");
                Console.ReadKey();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }

        Console.WriteLine("Done.");
        Console.ReadKey();
    }

    What we need to notice is that the discovery service needs two endpoints, one for announcement and one for probe. In this example I just retrieve them from the configuration file, and I specified the binding of these two endpoints in the configuration file as well.
    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

    And this is the console screen when I ran my discovery service. As you can see, there are two endpoints listening, one for announcement messages and one for probe messages.

    Discoverable Service and Client
    Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to have the service send the online announcement message to the discovery service, as well as the offline message before it shuts down. Just create a simple service which converts the incoming string to upper case. The service contract and implementation would be like this:

    [ServiceContract]
    public interface IStringService
    {
        [OperationContract]
        string ToUpper(string content);
    }

    public class StringService : IStringService
    {
        public string ToUpper(string content)
        {
            return content.ToUpper();
        }
    }

    Then host this service in a console application. In order to make the discovery service easy to test, the service address will change each time it’s started.

    static void Main(string[] args)
    {
        var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

        using (var host = new ServiceHost(typeof(StringService), baseAddress))
        {
            host.Opened += (sender, e) =>
            {
                Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
            };

            host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

            host.Open();

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }

    Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. Once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable.

    var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
    var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
    var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
    var discoveryBehavior = new ServiceDiscoveryBehavior();
    discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
    host.Description.Behaviors.Add(discoveryBehavior);

    The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.

    On the client side, when we have the discovery service, a client can invoke a service without knowing its endpoint.
    The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing criteria. In the code below I initialize the DiscoveryClient and specify the discovery service probe endpoint address. Then I create the find criteria by specifying the service contract I want to use and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, which means the result of the find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

    static EndpointAddress FindServiceEndpoint()
    {
        var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
        var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
        var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

        EndpointAddress address = null;
        FindResponse result = null;
        using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
        {
            result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
        }

        if (result != null && result.Endpoints.Any())
        {
            var endpointMetadata = result.Endpoints.First();
            address = endpointMetadata.Address;
        }
        return address;
    }

    Once we have probed the discovery service we receive the endpoint, so in the client code we can create the channel factory from the endpoint and binding and invoke the service. When creating the client-side channel factory we need to make sure that the client-side binding is the same as the service side. A WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included; this is because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information to the discovery service. At that point the client won't need to create the binding by itself; instead it will use the binding received from the discovery service.

    static void Main(string[] args)
    {
        Console.WriteLine("Say something...");
        var content = Console.ReadLine();
        while (!string.IsNullOrWhiteSpace(content))
        {
            Console.WriteLine("Finding the service endpoint...");
            var address = FindServiceEndpoint();
            if (address == null)
            {
                Console.WriteLine("There is no endpoint that matches the criteria.");
            }
            else
            {
                Console.WriteLine("Found the endpoint {0}", address.Uri);

                var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
                factory.Opened += (sender, e) =>
                {
                    Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
                };
                var proxy = factory.CreateChannel();
                using (proxy as IDisposable)
                {
                    Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
                }
            }

            Console.WriteLine("Say something...");
            content = Console.ReadLine();
        }
    }

    Similarly, the discovery service probe endpoint and binding are defined in the configuration file.
    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

    OK, now let’s have a test. First start the discovery service, and then start our discoverable service. When it starts it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes a connection to the discoverable service. More interestingly, do NOT close the client console, but terminate the discoverable service by pressing the enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted at another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

    Summary
    In this post I discussed the benefit of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I also just returned the first matched endpoint. I also hard-coded the bindings between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
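    A quick footnote on the client shown above (my addition, not from Shaun's post): FindCriteria also exposes MaxResults and Duration, so the probe can be bounded rather than waiting for every match to come back. A variation of FindServiceEndpoint along these lines might look like this:

    static EndpointAddress FindServiceEndpointQuickly(DiscoveryEndpoint discoveryEndpoint)
    {
        var criteria = new FindCriteria(typeof(IStringService))
        {
            MaxResults = 1,                      // return as soon as one endpoint matches
            Duration = TimeSpan.FromSeconds(5)   // give up if nothing answers in time
        };

        using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
        {
            FindResponse response = discoveryClient.Find(criteria);
            return response.Endpoints.Count > 0 ? response.Endpoints[0].Address : null;
        }
    }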

    Read the article

  • CodePlex Daily Summary for Tuesday, March 09, 2010

    New Projects

    .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: .NET Excel Wrapper encapsulates the complexity of working with multiple Excel objects, giving you one central point to do all your processing. It h...
    Advancement Voyage: Advancement Voyage is a high quality RPG experience that provides all the advancement and voyaging that a player could hope for.
    ASP.Net Routing configuration: ASP.NET routing configuration enables you to configure the routes in the web.config
    bbinjest: bbinjest
    BuildUp: BuildUp is a build number increment tool for C# .net projects. It is run as a post build step in Visual Studio.
    Controlled Vocabulary: This project is devoted to creating tools to assist with Controlling Vocabulary in communication. The initial delivery is an Outlook 2010 Add-in w...
    CycleList: A replacement for the WPF ListBox Control. Displays only a single item and allows the user to change the selected item by clicking on it once. Very...
    Forensic Suite: A suite of security software
    FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat is a live chat solution, and its DotNetNuke Chat Module helps to embed a live chat room into a website with DotNetNuke (DNN) integrated ...
    HouseFly experimental controls: Experimental controls for use in HouseFly.
    ICatalogAll: junk
    MidiStylus: MidiStylus allows you to control MIDI-enabled hardware or software using your pressure-sensitive pen tablet. The program maps the X position, Y po...
    myTunes: Search for your favorite artists
    NColony - Pluggable Socialism?: NColony will maximize the use of MEF to create flexible application architectures through a suite of plug-in solutions. If MEF is an outlet for plu...
    Network Monitor Decryption Expert: NmDecrypt is a Network Monitor Expert which, when given a trace with encrypted frames, a security certificate, and a passkey, will create a new trace...
    occulo: occulo is a free steganography program, meant to embed files within images with optional encryption.
    Open Ant: An implementation of an open source Ant created to show what is possible in the serious game AntMe! The first implementation of that project.
    Programming Patterns by example: Design patterns provide solutions to common software design problems. This project will contain samples, written in C# and Ruby, of each design pat...
    project4k: Developing a bulk mail system storing email information
    Quail - Selenium Remote Control Made Easy: Quail makes it easy for Quality Assurance departments to write automated tests against web applications. Both HTML and Silverlight applications can b...
    RedBulb for XNA Framework: RedBulb is a collection of utility functions and classes that make writing games with XNA a lot easier. Key features: Console, GUI (Labels, Buttons,...
    RegExpress: RegExpress is a WPF application that combines interactive demos of regular expressions with slide content. This was designed for a user group prese...
    RemoveFolder: Small utility program to remove empty folders
    ScrumTFS: ScrumTFS
    SharePoint - Open internal link in new window list definition: A simple SharePoint list definition to render SharePoint internal links with the option to open them in a new window.
    SqlSiteMap4MVC: SqlSiteMapProvider for ASP.Net MVC.
    T Sina .NET Client: t.sina.com.cn API (the Sina Weibo API)
    Test-Lint-Extensions: Test Lint is a free Typemock VS 2010 Extension that finds common problems in your unit tests as you type them. This project will host extensions ...
    ThinkGearNET: ThinkGearNET is a library for easy usage of the Neurosky Mindset headset from .NET.
    Wiki to Maml: This project enables you to write wiki syntax and have it converted into MAML syntax for Sandcastle documentation projects.
    WPF Undo/Redo Framework: This project attempts to solve the age-old programmer problem of supporting unlimited undo/redo in an application, in an easily reusable manner. Th...
    WPFValidators: WPF Validators (field validation for WPF)
    WSP Listener: The WSP listener is a windows service application which waits for new WSC and WSP files in a specific folder. If a new WSC and WSP file are added, ...

    New Releases

    .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: First Release: This is the first release, which includes the main library.
    .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: Updated Version: New features: SetRangeValue using a multidimensional array; print current worksheet; print all worksheets; format ranges background, color, alig...
    ArkSwitch: ArkSwitch v1.1.2: This release removes all memory reporting information, and is focused on stability.
    BattLineSvc: V2.1: Fixed a bug where on system start-up it would pop up a notification box to let you know the service started. Annoying! And fixed! Fixed the ...
    BuildUp: BuildUp 1.0 Alpha 1: Use at your own risk! Not yet feature complete. Basic build incrementing and attribute overriding works. Still working on cascading build incremen...
    Controlled Vocabulary: 1.0.0.1: Initial Alpha Release. System requirements: Outlook 2010, .Net Framework 3.5. Installation: 1. Close Outlook (use Task Manager to ensure no running i...
    CycleList: CycleList: The binaries contain the .NET 3.5 DLL ONLY. Please download the source for usage examples.
    FluentNHibernate.Search: 0.3 Beta: 0.3 Beta includes the following changes. Mappings: Field Mapping without specifying "Name"; Id Mapping without specifying "Field"; Builtin Anal...
    FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat DNN Chat Module: With the FREE DotNetNuke Chat Module of 123 Flash Chat, webmasters can add a chat room to a DotNetNuke site instantly, helping to attract more ...
    GameStore League Manager: League Manager 1.0 release 3: This release includes a full installer so that you can get your league running faster and generate interest quicker.
    iExporter - iTunes playlist exporting: iExporter gui v2.3.1.0 - console v1.2.1.0: Paypal donate! Solved a big bug for iExporter (Gui & Console): when a track isn't located under the main iTunes library, iExporter would crash! ...
    jQuery.cssLess: jQuery.cssLess 0.3: New: removed the dependency on XRegExp; added comment support (both CSS style and C style); optimised it for speed; added speed test. TOD...
    jQuery.cssLess: jQuery.cssLess 0.4: New: @import directive; preserving of comments in the resulting CSS; code refactoring; more class-oriented approach. TODO: implement operation...
    MapWindow GIS: MapWindow 6.0 msi (March 8): Rewrote the shapefile saving code in the indexed case so that it uses the shape indices rather than trying to create features. This should allow s...
    MidiStylus: MidiStylus 0.5.1: MidiStylus Beta 0.5.1. This release contains basic functionality for transmitting MIDI data based on X position, Y position, and pressure value rea...
    MiniTwitter: 1.09.1: MiniTwitter 1.09.1 changes: fixed a bug where shortened URLs would break when the URL contained an "&".
    Mosaictor: first executable: .exe file of the app in its current state. Mind you that this will likely be highly unstable due to heaps of uncaught errors.
    MvcContrib a Codeplex Foundation project: T4MVC: T4MVC is a T4 template that generates strongly typed helpers for ASP.NET MVC. You can download it below, and check out the documentation here.
    N2 CMS: 2.0 beta: Major changes: ASP.NET MVC 2 templates; refreshed management UI; LINQ support; performance improvements; auto image resize; upgrade; make a comp...
    NotesForGallery: ASP.NET AJAX Photo Gallery Control: NotesForGallery 2.0: NotesForGallery is an open source control on top of the Microsoft ASP.NET AJAX framework for easily displaying image galleries in the as...
    occulo: occulo 0.1 binaries: Windows binaries. Tested on Windows XP SP2.
    occulo: occulo 0.1 source: Initial source release.
    Open NFe: DANFE 1.9.5: Layout adjustments and fixes to the ISS fields.
    patterns & practices Web Client Developer Guidance: Web Application Guidance -- March 8th Drop: This iteration we focused on documentation and bug fixes.
    PoshConsole: PoshConsole 2.0 Beta: With this release, I am refocusing PoshConsole... It will be a PowerShell 2 host, without support for PowerShell 1.0. I have used some of the new P...
    Quick Performance Monitor: QPerfmon 1.1: Now you can specify different updating frequencies.
    RedBulb for XNA Framework: Cipher Puzzle (Sample) Creators Club Package: RedBulb sample game: Cipher Puzzle http://bayimg.com/image/galgfaacb.jpg
    RedBulb for XNA Framework: RedBulbStarter (Base Code): This is the code you need to start with. Quick start guide: download the latest version of RedBulb: http://redbulb.codeplex.com/releases/view/415...
    RoTwee: RoTwee 7.0.0.0 (Alpha): This version's code structure is being improved, so it may be buggy. However, the rotation movement is quite good in this version thanks to clea...
    SCSI Interface for Multimedia and Block Devices: Release 9 - Improvements and Bug Fixes: Changes in this version: fixed an INQUIRY command timeout problem; lowered ISOBurn's memory usage significantly by not explicitly setting...
    SharePoint - Open internal link in new window list definition: Open link in new window list definition: First release, with English and Italian localization support.
    SharePoint Outlook Connector: Version 1.2.3.2: A few bug fixes and some UI enhancements.
    SysI: sysi, release build: Better than ever -- now allows for escalation to admin.
    The Silverlight Hyper Video Player [http://slhvp.com]: Beta 1: Beta (1.1). The code is ready for intensive testing. I will update the code at least every second day until we are ready to freeze for V1, which wi...
    Truecrafting: Truecrafting 0.52: Fixed several trinkets that broke just before 0.51 was released, sorry. Fixed the water elemental not doing anything while summoned if not using glyph o...
    Truecrafting: Truecrafting 0.53: Fixed mp5 calculations when gear contained mp5 and made the formulas more efficient. No need to rebuild profiles with this release if placed in th...
    umbracoSamplePackageCreator (beta): Working Beta: For Visual Studio 2008, creating packages for Umbraco 4.0.3.
    VCC: Latest build, v2.1.30307.0: Automatic drop of latest build
    VCC: Latest build, v2.1.30308.0: Automatic drop of latest build
    VOB2MKV: vob2mkv-1.0.3: This is a maintenance update of the VOB2MKV utility. The MKVMUX filter now describes the cluster locations using a separate SeekHead element at th...
    WPFValidators: WPFValidators 1.0 Beta: First version of the component, still in Beta; it can be used in production since it is working well, and future changes should not be greatly im...
    WSDLGenerator: WSDLGenerator 0.0.06: Added an option to generate a SharePoint-compatible *disco.aspx file; changed the command-line options.
    WSP Listener: WSP Listener version 1.0.0.0: The first version of the WSP Listener includes: easy copy/paste installation of WSP solutions; extended logging; e-mail when installation is finish...
    Yet another pali text reader: Pali Text Reader App v1.1: New features/updates: search history is now a tab; format codes in dictionary; add/edit terms in the dictionary; pali keyboard inserts symbols...

    Most Popular Projects

    MetaSharp
    i4o - Indexed LINQ
    ResEx
    Braintree Client Library
    Geek's Library
    Sharepoint Feature Manager
    Configuration Management
    Oragon Architecture SqlBuilder
    Terrain Independant Navigating Automaton v2.0
    WBFS Manager

    Most Active Projects

    Umbraco CMS
    Rawr
    SDS: Scientific DataSet library and tools
    BlogEngine.NET
    jQuery Library for SharePoint Web Services
    Fasterflect - A Fast and Simple Reflection API
    Farseer Physics Engine
    patterns & practices – Enterprise Library
    Team FTW - Software Project
    Ionics Isapi Rewrite Filter

    Read the article

  • Dual Boot issues with Windows 7 and Ubuntu

    - by Michael
I'm finding myself in a rather unique situation. I've read through just about every resource I can find about this and while things have helped me understand some background, I haven't yet been able to find a solution. So I'm asking here. I originally had just a Windows 7 64-bit OS installation on my desktop. Learning that I couldn't do anything with Apache, PHP and MySql from within a 64-bit system, I did some research and found out that I could use Ubuntu. I've installed the latest version: 11.04. I created a CD to install Ubuntu from and the install went just fine. I installed it side-by-side with Windows 7. I can boot into Ubuntu just fine through the dual-boot option. When I reboot to load Windows though, the Grub2 list shows Windows 7 (loader) and when I select this option the Windows System Recovery loads instead of the actual OS. I haven't made it past there because I didn't know what to do. I just shut the computer down and rebooted into Ubuntu. I've been working for the last hour and a half to try to figure out how to boot into the Windows 7 OS and I haven't got a clue. While I'm somewhat proficient with Windows 7, I'm totally new to Ubuntu, so if you do know what needs to happen, please keep it simple enough that I'll be able to understand. Thanks for all your help in advance. Here are the results after using the Boot Info Script:

Boot Info Script 0.55 dated February 15th, 2010

============================= Boot Info Summary: ==============================

=> Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #5 for cbh.
=> Windows is installed in the MBR of /dev/sdb
=> Grub 2 is installed in the MBR of /dev/mapper/pdc_bdadcfbdif and looks on the same drive in partition #5 for cbh.

sda1: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy

sda2: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy

sda3: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy

sdb1: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files/dirs: /bootmgr /Boot/BCD

sdb2: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files/dirs:

sdb3: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files/dirs: /bootmgr /boot/BCD

sdb4: _________________________________________________________________________
File system: Extended Partition
Boot sector type: -
Boot sector info:

sdb5: _________________________________________________________________________
File system: ext4
Boot sector type: -
Boot sector info:
Operating System: Ubuntu 11.04
Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

sdb6: _________________________________________________________________________
File system: swap
Boot sector type: -
Boot sector info:

pdc_bdadcfbdif1: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files/dirs: /bootmgr /Boot/BCD

pdc_bdadcfbdif2: _________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7
Boot sector info: No errors found in the Boot Parameter Block.
Operating System: Windows 7
Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe

pdc_bdadcfbdif3: _________________________________________________________________________
File system:
Boot sector type: Unknown
Boot sector info:
Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy
mount: unknown filesystem type ''

=========================== Drive/Partition Info: =============================

Drive: sda _____________________________________________________
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Partition     Boot    Start          End            Size           Id  System
/dev/sda1     *       2,048          206,847        204,800        7   HPFS/NTFS
/dev/sda2             206,911        1,440,372,735  1,440,165,825  7   HPFS/NTFS
/dev/sda3             1,440,372,736  1,464,856,575  24,483,840     7   HPFS/NTFS

Drive: sdb _____________________________________________________
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Partition     Boot    Start          End            Size           Id  System
/dev/sdb1     *       2,048          206,847        204,800        7   HPFS/NTFS
/dev/sdb2             206,911        1,342,554,688  1,342,347,778  7   HPFS/NTFS
/dev/sdb3             1,930,344,448  1,953,521,663  23,177,216     7   HPFS/NTFS
/dev/sdb4             1,342,556,158  1,930,344,447  587,788,290    5   Extended
/dev/sdb5             1,342,556,160  1,896,806,399  554,250,240    83  Linux
/dev/sdb6             1,896,808,448  1,930,344,447  33,536,000     82  Linux swap / Solaris

Drive: pdc_bdadcfbdif _____________________________________________________
Disk /dev/mapper/pdc_bdadcfbdif: 750.0 GB, 749999947776 bytes
255 heads, 63 sectors/track, 91182 cylinders, total 1464843648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Partition                      Boot  Start          End            Size           Id  System
/dev/mapper/pdc_bdadcfbdif1    *     2,048          206,847        204,800        7   HPFS/NTFS
/dev/mapper/pdc_bdadcfbdif2          206,911        1,440,372,735  1,440,165,825  7   HPFS/NTFS
/dev/mapper/pdc_bdadcfbdif3          1,440,372,736  1,464,856,575  24,483,840     7   HPFS/NTFS

/dev/mapper/pdc_bdadcfbdif3 ends after the last sector of /dev/mapper/pdc_bdadcfbdif

blkid -c /dev/null: ____________________________________________________________

Device                       UUID                                  TYPE  LABEL
/dev/mapper/pdc_bdadcfbdif1  888E54CC8E54B482                      ntfs  SYSTEM
/dev/mapper/pdc_bdadcfbdif2  C2766BF6766BEA1D                      ntfs  OS
/dev/mapper/pdc_bdadcfbdif: PTTYPE="dos"
/dev/sda1                    888E54CC8E54B482                      ntfs  SYSTEM
/dev/sda2                    C2766BF6766BEA1D                      ntfs  OS
/dev/sda3                    BE6CA31D6CA2CF87                      ntfs  HP_RECOVERY
/dev/sda                     promise_fasttrack_raid_member
/dev/sdb1                    20B65685B6565B7C                      ntfs  SYSTEM
/dev/sdb2                    B4467A314679F508                      ntfs  HP
/dev/sdb3                    6E10B7A410B77227                      ntfs  FACTORY_IMAGE
/dev/sdb4: PTTYPE="dos"
/dev/sdb5                    266f9801-cf4f-4acc-affa-2092be035f0c  ext4
/dev/sdb6                    1df35749-a887-45ff-a3de-edd52239847d  swap
/dev/sdb: PTTYPE="dos"
error: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
error: /dev/sdc: No medium found
error: /dev/sdd: No medium found
error: /dev/sde: No medium found
error: /dev/sdf: No medium found
error: /dev/sdg: No medium found

============================ "mount | grep ^/dev" output: ===========================

Device     Mount_Point  Type  Options
/dev/sdb5  /            ext4  (rw,errors=remount-ro,commit=0)

=========================== sdb5/boot/grub/grub.cfg: ===========================

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
set default="0"
if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi
function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
  insmod vbe
  insmod vga
  insmod video_bochs
  insmod video_cirrus
}
insmod part_msdos
insmod ext2
set root='(/dev/sdb,msdos5)'
search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
if loadfont /usr/share/grub/unicode.pf2 ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
fi
terminal_output gfxterm
insmod part_msdos
insmod ext2
set root='(/dev/sdb,msdos5)'
search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
set locale_dir=($root)/boot/grub/locale
set lang=en_US
insmod gettext
if [ "${recordfail}" = 1 ]; then
  set timeout=-1
else
  set timeout=10
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
  clear
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
if [ ${recordfail} != 1 ]; then
  if [ -e ${prefix}/gfxblacklist.txt ]; then
    if hwmatch ${prefix}/gfxblacklist.txt 3; then
      if [ ${match} = 0 ]; then
        set linux_gfx_mode=keep
      else
        set linux_gfx_mode=text
      fi
    else
      set linux_gfx_mode=text
    fi
  else
    set linux_gfx_mode=keep
  fi
else
  set linux_gfx_mode=text
fi
export linux_gfx_mode
if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  set gfxpayload=$linux_gfx_mode
  insmod part_msdos
  insmod ext2
  set root='(/dev/sdb,msdos5)'
  search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
  linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro quiet splash vt.handoff=7
  initrd /boot/initrd.img-2.6.38-8-generic-pae
}
menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
  recordfail
  set gfxpayload=$linux_gfx_mode
  insmod part_msdos
  insmod ext2
  set root='(/dev/sdb,msdos5)'
  search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
  echo 'Loading Linux 2.6.38-8-generic-pae ...'
  linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro single
  echo 'Loading initial ramdisk ...'
  initrd /boot/initrd.img-2.6.38-8-generic-pae
}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
  insmod part_msdos
  insmod ext2
  set root='(/dev/sdb,msdos5)'
  search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
  linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
  insmod part_msdos
  insmod ext2
  set root='(/dev/sdb,msdos5)'
  search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
  linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os {
  insmod part_msdos
  insmod ntfs
  set root='(/dev/sdb,msdos1)'
  search --no-floppy --fs-uuid --set=root 20B65685B6565B7C
  chainloader +1
}
menuentry "Windows Recovery Environment (loader) (on /dev/sdb3)" --class windows --class os {
  insmod part_msdos
  insmod ntfs
  set root='(/dev/sdb,msdos3)'
  search --no-floppy --fs-uuid --set=root 6E10B7A410B77227
  drivemap -s (hd0) ${root}
  chainloader +1
}
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi
### END /etc/grub.d/41_custom ###

=============================== sdb5/etc/fstab: ===============================

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sdb5 during installation
UUID=266f9801-cf4f-4acc-affa-2092be035f0c / ext4 errors=remount-ro 0 1
# swap was on /dev/sdb6 during installation
UUID=1df35749-a887-45ff-a3de-edd52239847d none swap sw 0 0

=================== sdb5: Location of files loaded by Grub: ===================

900.1GB: boot/grub/core.img
825.0GB: boot/grub/grub.cfg
688.7GB: boot/initrd.img-2.6.38-8-generic-pae
688.0GB: boot/vmlinuz-2.6.38-8-generic-pae
688.7GB: initrd.img
688.0GB: vmlinuz

=========================== Unknown MBRs/Boot Sectors/etc =======================

Unknown BootLoader on pdc_bdadcfbdif3

=======Devices which don't seem to have a corresponding hard drive==============

sdc sdd sde sdf sdg

=============================== StdErr Messages: ===============================

ERROR: dos: partition address past end of RAID device
hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
ERROR: dos: partition address past end of RAID device
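One detail worth noting in the output above: the os-prober entry "Windows 7 (loader)" chainloads /dev/sdb1 (UUID 20B65685B6565B7C), a SYSTEM partition whose boot configuration apparently leads to the recovery environment, while the Windows copy that actually contains /Windows/System32/winload.exe sits on the RAID-mapped device (pdc_bdadcfbdif2, paired with the SYSTEM partition of UUID 888E54CC8E54B482). A minimal sketch of a custom entry one could experiment with, appended to /etc/grub.d/40_custom and activated with sudo update-grub; the UUID choice is an assumption read off the blkid table above, and whether GRUB can chainload the fakeraid member this way is not guaranteed:

menuentry "Windows 7 (RAID SYSTEM partition)" {
  insmod part_msdos
  insmod ntfs
  # Target the SYSTEM partition paired with the Windows OS partition
  # (assumption: UUID taken from the blkid listing above).
  search --no-floppy --fs-uuid --set=root 888E54CC8E54B482
  chainloader +1
}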

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
SQL and Dynamic Memory Blog Post Series

Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.

Part 1: Dynamic Memory Introduction

In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature, "Dynamic Memory", to improve VM densities on Hyper-V hosts. Dynamic Memory increases memory utilization in virtualized environments by enabling VM memory to be changed dynamically while the VM is running.

This brings up the question of how to utilize this feature with SQL Server VMs, as SQL Server performance is very sensitive to the memory being used. In the next three posts we'll discuss the internals of Dynamic Memory, SQL Server Memory Management, and how to use Dynamic Memory with SQL Server VMs.

Memory Utilization Efficiency in Virtualized Environments

The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we've heard from customers:

· I assign 4 GB of memory to my VMs. I don't know if all of it is being used by the applications but no one complains.
· I take the minimum system requirements and add 50% more.
· I go with the recommendations provided by my software vendor.

In reality, correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.

How does Dynamic Memory help?

Dynamic Memory improves memory utilization by removing the requirement to determine the memory needs of an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and be assigned more memory dynamically based on the workload of the applications running inside.

Overview of Dynamic Memory Concepts

· Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VM by default.
· Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory.
· Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM.
· Memory Buffer: Memory Buffer is the amount of memory assigned to the VM in addition to its memory demand, to satisfy immediate memory requirements and file cache needs.

Once Dynamic Memory is enabled for a VM, it will start with the "Startup Memory". After the boot process Dynamic Memory will determine the "Memory Demand" of the VM. Based on this memory demand it will determine the amount of "Memory Buffer" that needs to be assigned to the VM. Dynamic Memory will assign the total of "Memory Demand" and "Memory Buffer" to the VM as long as this value is less than "Maximum Memory" and as long as physical memory is available on the host.

What happens when there is not enough physical memory available on the host?

Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than the needed amount of memory to the VMs based on their importance. A concept known as "Memory Weight" is used to determine how much VMs should be penalized based on their needed amount of memory. "Memory Weight" is a configuration setting on the VM. It can be configured to be higher for VMs with high performance requirements. Under high memory pressure on the host, the "Memory Weight" of the VMs is evaluated in a relative manner, and the VMs with lower relative "Memory Weight" will be penalized more than the ones with higher "Memory Weight".

Dynamic Memory Configuration

Based on these concepts, "Startup Memory", "Maximum Memory", "Memory Buffer" and "Memory Weight" can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.

Dynamic Memory Monitoring

In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:

· Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of "Memory Demand" and "Memory Buffer" assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM.
· Memory Demand displays the current "Memory Demand" determined for the VM.
· Memory Status displays the current memory status of the VM. This column can show three values for a VM:
o OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs.
o Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs.
o Warning: In this condition the VM is assigned less memory than its Memory Demand. When VMs are running in this condition, it's likely that they will exhibit performance problems due to internal paging happening in the VM.

So far so good! But how does it work with SQL Server?

SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: how do SQL Server and Dynamic Memory work together? To understand the full story, we'll first need to understand how SQL Server Memory Management works. This will be covered in our second post in the "SQL and Dynamic Memory" series.
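To make the assignment arithmetic above concrete, here is a small worked example (the numbers are illustrative, not from the post): suppose a VM is configured with Startup Memory of 1024 MB, Maximum Memory of 8192 MB and a Memory Buffer of 20%. If the guest's committed memory settles at 3000 MB, the Memory Demand is 3000 MB and the target assignment is 3000 MB + (20% × 3000 MB) = 3600 MB, which Hyper-V grants as long as the host has free physical memory. If demand later grows to 7000 MB, the target of 8400 MB exceeds the 8192 MB maximum, so the VM is capped at 8192 MB even if the host has memory to spare.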
Meanwhile, if you want to dive deeper into Dynamic Memory you can check the posts below from the Windows Virtualization Team Blog:

http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx
http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx
http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx
http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx

- Serdar Sutay

Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Envista: Coordinating Utilities with Oracle Spatial 11g

    - by stephen.garth
It's annoying when the same streets seem to be perpetually dug up for utility construction or maintenance by your water or sewer department, electric utility, gas company or telephone company. Can't they do a better job of coordinating these activities? In this podcast, Marc Fagan, Executive VP of Product Management at Envista, describes a Software-as-a-Service solution that Envista provides for utilities and public works departments to coordinate upcoming construction work, using Oracle Database 11g with Oracle Spatial. Each participating utility enters key data into the Web-based application, including when and where their work is to take place, and who to contact for more information. The data is then available on a common base map, enabling all participants to coordinate their activities, save money, and minimize inconvenience to their customers.
Listen to the podcast
Find out more about Oracle Spatial 11g

    Read the article

  • GWB | First Blogger Suggestion - More Customization To Skins

    - by Geekswithblogs Administrator
The first suggested item (and the only one at this point) was a higher level of skin customization on Geekswithblogs.net. Currently we have over 10 skins to pick from and the ability to change the CSS for those skins. In the skins list, you will find one labeled Naked that doesn't add any formatting, but allows you to define what items look like and what gets displayed. I understand our skin approach is not good by any means and needs a dramatic update. Here is what we came up with in our conversation with the blogger:
- Better tools for editing CSS from the server
- Preview of changes before publishing
- Retrievable CSS configurations for different custom themes
- Better image upload for use of custom themes
- More skins to select from
- Updates to the current skins to fix formatting issues with newer browsers and Firefox/Opera/Chrome
I would really like to gauge your feedback on these items to see what is important. I know one of the biggest complaints I get is about customizable themes, but typically users don't know about the options, so we have done a bad job of explaining what is possible and showing every user how to do so. We have a public tutorial system we once tried; we wanted it to be member-managed as well, with the GWB staff contributing, but that went downhill at some point. Maybe it is time to kick that back into gear for the new bloggers we have added over the past couple of years.
What do you think? What else is on your mind?

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
Originally posted in the Oracle Profit Magazine, November 2010 Edition.

When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual.

Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now.

When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities.

Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term."

For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.

Game Changer #1: New Standard for Innovation

Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.

For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine.

Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.

Game Changer #2: New Standard for Work

Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand.

"We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote.

The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand. "Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain."

Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval.

Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access.

A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps.

Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers."

Game Changer #3: New Standard for Technology Adoption

As IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features.

Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application.

Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before."

For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems.

And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs.

It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."

For More Information
User Input Key to the Success of Oracle Fusion Applications
Transforming Coexistence into Strategic Value
Under the Hood
Oracle Fusion Applications
Oracle Service-Oriented Architecture

    Read the article

  • NorthWest Arkansas TechFest

    - by dmccollough
David Walker is taking Tulsa TechFest on the road to NorthWest Arkansas.
When: Thursday, July 8th 2010
Where: Center for Nonprofits @ St. Mary's, 1200 West Walnut Street, Rogers, AR 72756, 479-936-8218. Map it with Bing!
What is NorthWest Arkansas TechFest? It is a technical conference whose primary focus is to provide training/teaching sessions that are immediately beneficial to the broadest range of IT professionals in their day-to-day jobs. We accomplish this with numerous national and international speakers delivering 75-minute sessions. It is a charitable non-profit event organized by local area volunteers. Even though it is a free event, we ask that you support the community and PLEASE bring TWO CANS or TWO BUCKS. All canned food will be donated to the NWA Food Bank and all proceeds will be donated to The Jones Center. Since our first event in the Tulsa area back in 1996, many other communities have been following our example by hosting their own TechFest events: Vancouver TechFest, Houston TechFest, Dallas TechFest, Alberta TechFest and Indy TechFest. We are very PROUD to now bring the event to NorthWest Arkansas!
Who should attend?
- Every IT Professional
- IT Job seekers and IT Recruiters and Hiring Managers
- Developers of all languages
- Graphic and Web Designers
- Infrastructure, IT and System Administrators
- eMarketing Professionals
- Project Managers
- Compliance Managers
- IT Directors and Managers
- Chief Compliance Officers
- Chief Security Officers
- CIOs/CTOs
- CEOs/Executive Officers
With this many hours of training, anyone in the IT industry, or wanting to get into it, will definitely find interesting and instructional presentations by professional speakers.
Want to keep informed? More information can be found here.

    Read the article

  • Video Recording Not Working in ICS

    - by Nirav Ranpara
I have implemented code to record video on an Android phone. This code works on 2.2 and 2.3, but not on ICS. Why does it fail when I run it on ICS? Here are the code and XML file.

videorecord.java

import java.io.File;
import java.io.IOException;
import android.app.Activity;
import android.app.AlertDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.SharedPreferences;
import android.hardware.Camera;
import android.media.CamcorderProfile;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.os.CountDownTimer;
import android.os.Environment;
import android.util.Log;
import android.view.Display;
import android.view.KeyEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.widget.EditText;
import android.widget.FrameLayout;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.widget.Toast;

public class videorecord extends Activity {

    SharedPreferences.Editor pre;
    String filename;
    CountDownTimer t;
    private Camera myCamera;
    private MyCameraSurfaceView myCameraSurfaceView;
    private MediaRecorder mediaRecorder;
    Integer cnt = 0;
    LinearLayout myButton;
    TextView myButton1;
    SurfaceHolder surfaceHolder;
    boolean recording;
    private TextView txtcount;
    private ImageView btnplay;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        recording = false;
        setContentView(R.layout.videorecord);
        init();
        myCamera = getCameraInstance();
        if (myCamera == null) {
        }
        myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera);
        FrameLayout myCameraPreview = (FrameLayout) findViewById(R.id.videoview);
        Display display = getWindowManager().getDefaultDisplay();
        int width = display.getWidth();
        int height = display.getHeight();
        myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height - 60));
        myCameraPreview.addView(myCameraSurfaceView);
        myButton = (LinearLayout) findViewById(R.id.mybutton);
        btnplay.setOnClickListener(myButtonOnClickListener);
    }

    private void init() {
        txtcount = (TextView) findViewById(R.id.txtcounter);
        //myButton1 = (TextView) findViewById(R.id.mybutton1);
        btnplay = (ImageView) findViewById(R.id.btnplay);
        t = new CountDownTimer(Long.MAX_VALUE, 1000) {
            @Override
            public void onTick(long millisUntilFinished) {
                cnt++;
                String time = new Integer(cnt).toString();
                long millis = cnt;
                int seconds = (int) (millis / 60);
                int minutes = seconds / 60;
                seconds = seconds % 60;
                txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds, millis));
            }

            @Override
            public void onFinish() {
            }
        };
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if ((keyCode == KeyEvent.KEYCODE_BACK)) {
            if (recording) {
                new AlertDialog.Builder(videorecord.this)
                    .setTitle("Do you want to save Video ?")
                    .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                        public void onClick(DialogInterface dialog, int which) {
                            filename();
                            //finish();
                        }
                    })
                    .setNegativeButton("Cancle", new DialogInterface.OnClickListener() {
                        public void onClick(DialogInterface dialog, int which) {
                            // TODO Auto-generated method stub
                        }
                    }).show();
            } else {
                if ((keyCode == KeyEvent.KEYCODE_BACK)) {
                    //Intent homeIntent = new Intent(Intent.ACTION_MAIN);
                    //homeIntent.addCategory(Intent.CATEGORY_HOME);
                    //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                    //startActivity(homeIntent);
                    //this.finishActivity(1);
                    finish();
                }
                //moveTaskToBack(true);
                //finish();
                return super.onKeyDown(keyCode, event);
            }
        } else {
            //Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show();
            android.os.Process.killProcess(android.os.Process.myPid());
        }
        return super.onKeyDown(keyCode, event);
    }

    ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener() {
        public void onClick(View v) {
            if (recording) {
                Log.e("Record error", "error in recording .");
                mediaRecorder.stop();
                t.cancel();
                filename();
                releaseMediaRecorder();
            } else {
                releaseCamera();
                Log.e("Record Stop error", "error in recording .");
                //if(!prepareMediaRecorder()){
                prepareMediaRecorder();
                //    finish();
                //}
                mediaRecorder.start();
                recording = true;
                //myButton1.setText("STOP Recording");
                //btnplay.setImageResource(android.R.drawable.ic_media_pause);
                btnplay.setImageResource(R.drawable.stoprec);
                t.start();
            }
        }
    };

    private Camera getCameraInstance() {
        Camera c = null;
        try {
            c = Camera.open();
        } catch (Exception e) {
        }
        return c;
    }

    private void filename() {
        AlertDialog.Builder alert = new AlertDialog.Builder(this);
        alert.setTitle("Save Video");
        alert.setMessage("Enter File Name");
        final EditText input = new EditText(this);
        alert.setView(input);
        alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int whichButton) {
                if (input.getText().length() >= 1) {
                    filename = input.getText().toString();
                    File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord");
                    File from = new File(sdcard, "null.mp4");
                    File to = new File(sdcard, filename + ".mp4");
                    from.renameTo(to);
                    SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE);
                    pre = sp.edit();
                    pre.clear();
                    pre.commit();
                    pre.putString("lastvideo", filename + ".mp4");
                    pre.commit();
                    //btnplay.setImageResource(android.R.drawable.ic_media_play);
                    btnplay.setImageResource(R.drawable.startrec);
                    //Intent intent = new Intent(videorecord.this, StopVidoWatch_Activity.class);
                    //startActivity(intent);
                    Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class)
                            .setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                    startActivity(myIntent);
                } else {
                    filename();
                }
            }
        });
        alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int whichButton) {
                //Intent intent = new Intent(videorecord.this, StopVidoWatch_Activity.class);
                //startActivity(intent);
                File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4");
                //boolean deleted = file.delete();
                file.delete();
                finish();
            }
        });
        alert.show();
    }

    private boolean prepareMediaRecorder() {
        myCamera = getCameraInstance();
        mediaRecorder = new MediaRecorder();
        myCamera.unlock();
        mediaRecorder.setCamera(myCamera);
        mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
        File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord");
        boolean success = false;
        if (!folder.exists()) {
            success = folder.mkdir();
        }
        if (!success) {
        } else {
        }
        mediaRecorder.setOutputFile("/sdcard/VideoRecord/" + filename + ".mp4");
        mediaRecorder.setMaxDuration(60000);
        mediaRecorder.setMaxFileSize(5000000);
        Display display = getWindowManager().getDefaultDisplay();
        int width = display.getHeight();
        int height = display.getWidth();
        String s = new String();
        s = s.valueOf(width);
        String s1 = new String();
        s1 = s1.valueOf(height);
        //Toast.makeText(videorecord.this, "Width : " + s, Toast.LENGTH_LONG).show();
        //Toast.makeText(videorecord.this, "Height : " + s1, Toast.LENGTH_LONG).show();
        mediaRecorder.setVideoSize(height, width);
        mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface());
        try {
            mediaRecorder.prepare();
        } catch (IllegalStateException e) {
            releaseMediaRecorder();
            return false;
        } catch (IOException e) {
            releaseMediaRecorder();
            return false;
        }
        return true;
    }

    @Override
    protected void onPause() {
        super.onPause();
        releaseMediaRecorder();
        releaseCamera();
    }

    private void releaseMediaRecorder() {
        if (mediaRecorder != null) {
            mediaRecorder.reset();
            mediaRecorder.release();
            mediaRecorder = null;
            myCamera.lock();
        }
    }

    private void releaseCamera() {
        if (myCamera != null) {
            myCamera.release();
            myCamera = null;
        }
    }

    public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
        private SurfaceHolder mHolder;
        private Camera mCamera;

        public MyCameraSurfaceView(Context context, Camera camera) {
            super(context);
            mCamera = camera;
            mHolder = getHolder();
            mHolder.addCallback(this);
            mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) {
            if (mHolder.getSurface() == null) {
                return;
            }
            try {
                mCamera.stopPreview();
            } catch (Exception e) {
            }
            try {
                mCamera.setPreviewDisplay(mHolder);
                mCamera.startPreview();
            } catch (Exception e) {
            }
        }

        public void surfaceCreated(SurfaceHolder holder) {
            try {
                mCamera.setPreviewDisplay(holder);
                mCamera.startPreview();
            } catch (IOException e) {
            }
        }

        public void surfaceDestroyed(SurfaceHolder holder) {
        }
    }
}

videorecord.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >
    <FrameLayout
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" >
        <FrameLayout
            android:id="@+id/videoview"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"></FrameLayout>
        <LinearLayout
            android:id="@+id/mybutton"
            android:layout_width="fill_parent"
            android:layout_marginBottom="0dip"
            android:layout_height="wrap_content"
            android:orientation="horizontal"
            android:layout_weight="0" >
            <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1"
                 android:layout_height="wrap_content" android:layout_width="wrap_content"
                 style="@style/savestyle" android:layout_weight="1" android:gravity="left" >
                 </TextView> -->
            <ImageView
                android:layout_height="wrap_content"
                android:id="@+id/btnplay"
                android:padding="5dip"
                android:background="#A0000000"
                android:textColor="#ffffffff"
                android:layout_width="wrap_content"
                android:src="@drawable/startrec" />
        </LinearLayout>
        <TextView
            android:text="00:00:00"
            android:id="@+id/txtcounter"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="right|bottom"
            android:padding="5dip"
            android:background="#A0000000"
            android:textColor="#ffffffff" />
    </FrameLayout>
    <RelativeLayout
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:background="@color/bgcolor" >
        <LinearLayout
            android:layout_above="@+id/mybutton"
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
        </LinearLayout>
    </RelativeLayout>
</LinearLayout>
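For comparison, here is a minimal sketch of the MediaRecorder call order as documented by Android (the names RecorderHelper and startRecording are illustrative, not from the code above). One visible difference from prepareMediaRecorder() above is that the sketch does not call setVideoSize() after setProfile(); overriding the profile's resolution with a value the camera does not support is a plausible culprit for ICS-only failures, though that is an assumption, not a confirmed diagnosis.

import android.hardware.Camera;
import android.media.CamcorderProfile;
import android.media.MediaRecorder;
import android.view.SurfaceHolder;

public class RecorderHelper {
    // Returns a started MediaRecorder, following the call order from the Android docs.
    public static MediaRecorder startRecording(SurfaceHolder holder, String outputPath)
            throws java.io.IOException {
        Camera camera = Camera.open();
        camera.unlock();                                   // hand the camera over to MediaRecorder
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        // Let the profile pick a resolution the device supports;
        // do not override it with setVideoSize() afterwards.
        recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
        recorder.setOutputFile(outputPath);
        recorder.setPreviewDisplay(holder.getSurface());
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}

If this sequence records correctly on an ICS device, the difference likely lies in the extra calls the original prepareMediaRecorder() makes between setProfile() and prepare().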

    Read the article

  • Upgrade to 11.2.0.3 - OCM: ORA-12012 and ORA-29280

    - by Mike Dietrich
OCM is the Oracle Configuration Manager, a tool that proactively monitors your Oracle environment and provides this information to Oracle Software Support. As OCM is installed by default in many databases but is somewhat independent of the database's version, you wouldn't expect any issues during or after a database upgrade. But after the upgrade from Oracle 11.1.0.7 to Oracle 11.2.0.3 on Exadata X2-2, one of my customers found the following error in the alert.log every 24 hours:

Errors in file /opt/oracle/diag/rdbms/db/trace/db_j001_26027.trc:
ORA-12012: error on auto execute of job "ORACLE_OCM"."MGMT_CONFIG_JOB_2_1"
ORA-29280: invalid directory path
ORA-06512: at "ORACLE_OCM.MGMT_DB_LL_METRICS", line 2436
ORA-06512: at line 1

Why is this happening, and how do you solve the issue? OCM is trying to write to a local directory which does not exist. Besides that, the OCM version delivered with Oracle Database Patch Set 11.2.0.3 is older than the newest available OCM Collector 10.3.7, the one which has that issue fixed. So you'll either drop OCM completely if you won't be using it:

SQL> drop user ORACLE_OCM cascade;

or you'll disable the collector jobs:

SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_CONFIG_JOB');
SQL> exec dbms_scheduler.disable('ORACLE_OCM.MGMT_STATS_CONFIG_JOB');

or you'll have to reconfigure OCM - please see MOS Note:1453959.1 for a detailed description of how to do that. It's basically a matter of executing the script ORACLE_HOME/ccr/admin/scripts/installCCRSQL, but there may be other things to consider, especially in a RAC environment. - Mike
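Before choosing between dropping and disabling, it can help to confirm what the collector actually left behind. A quick sanity check, sketched against the standard dictionary views (job and directory names can vary slightly between OCM versions):

SQL> SELECT owner, job_name, enabled FROM dba_scheduler_jobs WHERE owner = 'ORACLE_OCM';
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name LIKE '%OCM%';

The second query shows whether the directory object OCM writes to still points to an existing path, which is the usual trigger for the ORA-29280 above.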

    Read the article

  • CodePlex Daily Summary for Friday, April 30, 2010

CodePlex Daily Summary for Friday, April 30, 2010

New Projects
Acres2: This is the new hotness
AsoraX.de - AirlineTycoon: Development phase of AsoraX.de
azshop: Ecommerce macros and user controls based on Commerce4Umbraco from Umbraco CMS Project.
BioPhotoAnalyzer: Some exercises with F#, WPF and mathematical methods for quantitative analysis of biological photos, such as those obtained from microscopy, PAGE etc
CECS622 - Simulations - South Computing Center: This simulation will be used to study the traffic flow of the South Computing Center (SCC) on the UofL campus.
Document.Viewer: Basic document viewer for Windows XP, Vista and Windows 7 that supports FlowDocument, Rich and Plain Text Formats
DotNetNuke 5 Thai Language Pack: DotNetNuke Thai Language Pack
DotNetNuke Skins Pack: We derive html/css templates from www.freecsstemplates.org and apply them to the DotNetNuke 4 & 5 Skins Pack.
Dynamics AX Business Intelligence: Sample code for creating and using Dynamics AX Business Intelligence
Easy Chat: Very simple kind of chat; it has a DOS interface. Just start the server, start the client, enter the IP, enter your name, and you're ready to go!
Html Source Transmitter Control: This web control allows getting the source of a web page, that will be displayed before submit. So, a developer can store a view of the html page, that wa...
HydroServer - CUAHSI Hydrologic Information System Server: HydroServer is a set of software applications for publishing hydrologic datasets on the Internet. HydroServer leverages commercial software such a...
Lisp.NET: Lisp.NET is a project that demonstrates how the Embedded Common Lisp (ECL) environment can be loaded and used from .NET. This enables Common Lisp ...
Live-Exchange Calendar Sync: Live - Exchange Calendar Sync synchronizes calendars of users between 2 Exchange servers (it can be Live Calendar also). It's developed in C#. It u...
MaLoRT : Mac Lovin' RayTracer: Raytracer
Mojo Workflow Web UI: Mojo Workflow Web UI
MoonRover - Enter Island: Simple Graphics consumer application that simulates Moon Rovers
MVC Music Store: MVC Music Store is a tutorial application built on ASP.NET MVC 2. It's a lightweight sample store which sells albums online, demonstrating ASP.NET ...
OpenPOS: OpenPOS, a completely open and free point-of-sale system
PgcDemuxCLI: A modified version of PgcDemux (http://download.videohelp.com/jsoto/dvdtools.htm) that has better CLI support and progress reporting.
Sample 01: basic summary
SharedEx: App dev in VS2010 RTM and Silverlight 4.0 for Windows Phone 7!
SharePoint 2010 Managed Metadata WebPart: This SharePoint 2010 webpart creates a navigation control from a Managed Metadata column assigned to a list/library. The Termset the column relates...
SharePoint 2010 Service Manager: The SharePoint Service Manager lets you start and stop all the SharePoint 2010 services on your workstation. This is useful if you are running S...
Sharp DOM: Sharp DOM is a view engine for the ASP.NET MVC platform allowing developers to design extendable and maintainable dynamic HTML layouts using C# 4.0 lang...
simple fx: simple framework written in javascript
SimpleGFC.NET: Simple GFC is a .NET library written in C# for simplifying the connection to Google Friend Connect. It is designed to be simple and concise, not a...
SPSS & PSPP file library: Library for importing SPSS (now called PASW) & PSPP .sav datafiles, developed in C#. Support for writing and SPSS portable-files will be implemente...
TailspinSpyworks - WebForms Sample Application: Tailspin Spyworks demonstrates how extraordinarily simple it is to create powerful, scalable applications for the .NET platform. It shows off how t...
VividBed: Business website.
卢融凯的个人主页 (Lu Rongkai's Personal Homepage): Lu Rongkai's personal homepage

New Releases
AssemblyVerifier: 4.0.0.0 Release: Updated to target .NET 4.0
Bricks' Bane: Bricks' Bane Editor v1.0.0.4: I've added functionality to the editor, after some personal use. Plus I fixed a bug that was occurring on file loading, mostly when the file was mod...
Dambach Linear Algebra Framework: Linear Algebra Framework - Sneak Peak: This is just to get the code out there into the hands of potential users and to elicit feedback.
DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 4a: DevTreks calculators, analyzers, and story tellers are being upgraded so that they can be extended by third parties using either add-ins (Managed ...
DirectQ: Release 1.8.3c: Fixes some more late-breaking bugs and adds some optimizations from the forthcoming 1.8.4 release. No Windows 98 build this time. I may add one a...
Document Assembly from a SharePoint Library using the Open XML SDK: Solution for March 2010 rel of the Open XML SDK: Microsoft changed some class names between the December 2009 CTP and the March 2010 release of the Open XML SDK, which means that the old solution will ...
Document.Viewer: 0.9.0: What's new: new icon set, bug fixes
DotNetNuke Skins Pack: DNN 80 Skins Pack: This release is the first for DNN 4 & 5 with Skin Token Design (legacy skin support on DNN 4 & 5)
DotNetWinService: Release 1.0.0.0: This release includes four different scheduled tasks: 1) TaskURL: Petition a specific URL: when working on an ASP.NET website, it's sometimes ...
Hammock for REST: Hammock v1.0.1: v1.0.1 changes: fixes to improve OAuth stability, fixes for asynchronous timeouts, fixes and simplification for retries. v1.0 features: simple, clean...
Home Access Plus+: v4.1.1: v4.1.1 change log: added Enable for Booking Resource to allow Resources to be disabled but not deleted; updated HAP.Config for above file changes...
HouseFly controls: HouseFly controls alpha 0.9.6.0: HouseFly controls release 0.9.6.0 alpha
HydroServer - CUAHSI Hydrologic Information System Server: ODM Data Loader 1.1.3: Version 1.1.3 of the ODM Data Loader is only compatible with version 1.1 of ODM. This version addresses memory issues in Version 1.1.2 that were ...
Jobping Url Shortener: Source Code v0.1: Source code for the 0.1 release.
LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.6: Fixed the crash after accepting the use of internet access.
mojoPortal: 2.3.4.3: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2343-released.aspx
MoonRover - Enter Island: Moon Rover: Simulates rover moves on the Moon
NanoPrompt: Release 0.4.1 - Beta 1.1: Now released: Beta 1.1 for .NET 4.0
ORAYLIS BI.SmartDiff: Release 1.0.0: see change log for details
QuickStart Engine (3D Game Engine for XNA): QuickStart Engine v0.22: Main features: clean engine architecture makes it easy to make your own game using the engine. Messaging system allows you to communicate between s...
SharePoint 2010 Managed Metadata WebPart: Taxonomy WebPart 0.0.1: Initial version released
SharePoint 2010 Service Manager: SharePoint Service Manager 2010 v1: Starts and stops all SharePoint services; works with both Microsoft SQL Server and SQL Server Express Edition (for SharePoint); supports disablin...
Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet RTM Release: Shweet has been updated to work with the RTM of SharePoint and the latest version of Pex Visual Studio 2010 Power Tools. Known issues: Continuous I...
Silverlight Testing Automation Tool: StatLight v1.0 - Beta 1: Things still to accomplish (before official v1 release): Item # 10716: Release build hangs on x64 machine; still awaiting UnitDriven change feedback...
SPSS & PSPP file library: 0.1 alpha: This first test release allows reading an .sav SPSS/PSPP/PASW file and retrieving the data, either in the form of a .NET DataReader or as a custom Sp...
SqlDiffFramework - A Visual Differencing Engine for Dissimilar Data Sources: SqlDiffFramework 1.0.0.0: Initial public release. Requirements: .NET Framework 2.0. SqlDiffFramework is designed for the enterprise! Check with your system administrator (or...
sqwarea: Sqwarea 0.0.210.0 (alpha): First public release. Deploy package contains: the documentation for classes; the azure package with the Web Role; the zip with assemblies wh...
Sweeper: Sweeper Alpha 2: Sweeper, a Visual Studio add-in for C# code formatting (Visual Studio 2008). Includes: a UI for options; enable or disable any specific task you want ...
TailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.8: ASP.NET 4 ECommerce Application
TweetSharp: TweetSharp Release Candidate: New features: complete core rewrite for stability and speed, introducing Hammock http://hammock.codeplex.com; Windows Phone 7, Silverlight 4 OOB (El...
Wicked Compression ASP.NET HTTP Module: Wicked Compression ASP.NET HTTP Module Beta: Now supports AJAX from .NET Framework 2.0 through 4.0! Bug fix for excluded paths! Binary release included for .NET Framework 2.0 through 4.0!

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
patterns & practices – Enterprise Library
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
iTuner - The iTunes Companion
ASP.NET
DotNetNuke® Community Edition

Most Active Projects
Rawr
patterns & practices – Enterprise Library
Ionics Isapi Rewrite Filter
HydroServer - CUAHSI Hydrologic Information System Server
GMap.NET - Great Maps for Windows Forms & Presentation
Particle Plot Pivot
patterns & practices: Azure Security Guidance
SqlDiffFramework - A Visual Differencing Engine for Dissimilar Data Sources
NB_Store - Free DotNetNuke Ecommerce Catalog Module
N2 CMS

    Read the article

  • Data Profiling without SSIS

Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I've encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage: they failed to install or just did not run. Others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website: DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL. As quoted above it claims to support profiling, validating and comparing data, but I didn't really get past the profiling functions, so won't comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use, and performance was good. I could see it struggling at times, but for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seemed a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn't do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I've written a quick start guide that details every step required. Steps 1-5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.

Quick Start Guide

Step 1 - Download DataCleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.

Step 2 - Download Java. If you try to run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there; normally you just click Download Java a couple of times and you're done.

Step 3 - Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won't have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer (we are in the Java world here), but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.

Step 4 - If you wish to use Windows Authentication to connect to your SQL Server, first we need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.

Step 5 - Now let's run DataCleaner: just run datacleaner.exe from the extract folder you chose in Step 1.

Step 6 - Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.

Step 7 - In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.

Step 8 - Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar

Step 9 - Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.

Step 10 - You should be back at the Settings dialog with a list of drivers that includes SQL Server. Just click Save Settings to persist all your hard work.

Step 11 - Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.

Step 12 - In the Profile window click Open Database.

Step 13 - Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, such as when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication too: just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn't complete Step 4 above you will need to do so now and restart DataCleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, at the bottom of the dialog it includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details. You'll see a sample connection in the file already; just add yours following the same pattern, e.g.

<!-- Darren's Named Connections -->
<bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
  <property name="name" value="SQLBits Local Connection" />
  <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
  <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
  <property name="tableTypes">
    <list>
      <value>TABLE</value>
      <value>VIEW</value>
    </list>
  </property>
</bean>

Step 14 - Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left-hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles; click the image for full size.

I hope this has given you a taster for DataCleaner, and it should help you get up and running pretty quickly.
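If DataCleaner is overkill for a one-off job, the "2 minutes writing some basic SQL queries" option mentioned above covers a surprising share of what a profiling tool reports. A minimal sketch, assuming a hypothetical Customers table with an Email column (LEN is SQL Server syntax; on Oracle use LENGTH):

SELECT COUNT(*)              AS row_count,
       COUNT(Email)          AS non_null_count,   -- COUNT(col) skips NULLs
       COUNT(DISTINCT Email) AS distinct_count,
       MIN(LEN(Email))       AS min_length,
       MAX(LEN(Email))       AS max_length
FROM   Customers;

Row counts, null counts, distinct counts and value lengths are exactly the kind of metrics you can use to cross-check any numbers from the tool that look strange.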

    Read the article

  • Method extension for safely type convert

    - by outcoldman
Recently I read a good Russian post with many interesting extension methods, and it reminded me that I too have one good extension method: "safely type convert". I got the idea for this method at my last job. We often write code like this:

int intValue;
if (obj == null || !int.TryParse(obj.ToString(), out intValue))
    intValue = 0;

This is how to safely parse an object to an int. Of course, it would be good to create a unified method for safe casting. I found that a better way is to create extension methods and use them as follows:

int i;
i = "1".To<int>(); // i == 1
i = "1a".To<int>(); // i == 0 (default value of int)
i = "1a".To(10); // i == 10 (set as default value 10)
i = "1".To(10); // i == 1

// ********** Nullable sample **************
int? j;
j = "1".To<int?>(); // j == 1
j = "1a".To<int?>(); // j == null
j = "1a".To<int?>(10); // j == 10
j = "1".To<int?>(10); // j == 1

Read more... (redirect to http://outcoldman.ru)
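The full implementation is behind the link, but the shape is easy to sketch. A minimal version, assuming Convert.ChangeType is acceptable for the target types you care about (it will not handle enums or types needing custom converters):

using System;
using System.Globalization;

public static class SafeConvertExtensions
{
    // Convert value to T; fall back to defaultValue when the value
    // is null or cannot be converted.
    public static T To<T>(this object value, T defaultValue = default(T))
    {
        if (value == null)
            return defaultValue;

        // Unwrap Nullable<T> so Convert.ChangeType receives the underlying
        // type; this is what lets "1a".To<int?>() come back as null.
        Type target = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);

        try
        {
            return (T)Convert.ChangeType(value, target, CultureInfo.InvariantCulture);
        }
        catch (FormatException) { return defaultValue; }
        catch (InvalidCastException) { return defaultValue; }
        catch (OverflowException) { return defaultValue; }
    }
}

Passing the fallback as an optional parameter is what makes both the "1a".To<int>() and "1a".To(10) shapes in the sample above work from a single method.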

    Read the article

  • push email / email server tutorial

    - by David A
Does anyone happen to know the current status of push email in the Linux world? From my searching so far I have seen Z-Push http://www.ifusio.com/blog/setup-your-own-push-mail-server-with-z-push-on-debian-linux and https://peterkieser.com/2011/03/25/androids-k-9-mail-battery-life-and-dovecots-push-imap/ Are there other solutions? Does anyone have any experience with these? They're somewhat different, in that Z-Push seems to work in conjunction with an existing IMAP server. Some time ago I did manage to compile and build Dovecot 2 (since only Dovecot 1 was available in the Ubuntu repos at the time). It would have been a real fluke because I had no idea what I was doing, but it seemed to work well with my mobile phone. That said, I can't say for sure that it was pushing, but it seemed like it. Anyway, I'm here again and looking to set up a mail server. I'm hoping to do a better job this time around, with virtual users and such. Without installing ispconfig3 (or something similar), does anyone have any recent email server tutorials (that cover all aspects: MTA, MDA...) that can supply push email on a Ubuntu 12.04 server? (I'm probably of slightly above newb status, but not far.) Thanks a bunch

    Read the article

  • SQL SERVER – Find Most Expensive Queries Using DMV

    - by pinaldave
The title of this post says it all for this quick blog post. I was asked in a recent query tuning consultation project if I could share the script I use to figure out the most expensive queries running on SQL Server. This script is very basic and very simple; there are many different versions available online. This basic script does the job I expect it to do – find out the most expensive queries on a SQL Server box.

SELECT TOP 10
SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads,
qs.last_logical_reads,
qs.total_logical_writes,
qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time

You can change the ORDER BY clause to order this result set by different parameters. I invite my readers to share their scripts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology Tagged: SQL DMV
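One variation worth trying (my hedged tweak, not part of the original script): normalise the totals by execution count, so a query run thousands of times doesn't hide a genuinely expensive one-off, or vice versa.

SELECT TOP 10
qs.execution_count,
qs.total_worker_time / qs.execution_count AS avg_worker_time,   -- average CPU per execution
qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
qt.TEXT
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY avg_worker_time DESC;

The same DMVs are used and only the measure changes; execution_count is always at least 1 for rows in sys.dm_exec_query_stats, so the division is safe.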

    Read the article

  • System can't sleep, can't shutdown (might be networking related)

    - by Kevin Q
I suspect it's networking related, since I can only shut down my system if I run /etc/init.d/networking stop first. If I try to put the system to sleep, it disconnects the network, then reconnects it automatically, and never goes to sleep. I uninstalled the network manager; now /etc/init.d/networking stop no longer helps shut down the system. When I boot into the OS, there are some warnings about Unknown job: S20acpid, S20network-interface, S20network-manager. I was advised to run update-rc.d -f to remove all those scripts, which I am not certain is the right thing to do. The messages are gone but the problem remains. When I tried to restart, I used to get this message (screenshot not shown). It doesn't seem to be the modem manager, since I later removed it. Once upon a time, the frozen shutdown/restart could be interrupted when I pressed Ctrl+Alt+Delete, and it said something like "networking is stopped externally". If I log in in single user mode and shut down or reboot, it shows this screen (screenshot not shown). I also tried GRUB_CMDLINE_LINUX_DEFAULT = "acpi=force". This is Ubuntu 12.04 running on a ThinkPad T520. It is not a new installation, and no such problems occurred before.

    Read the article

  • The Oracle Linux Advantage

    - by Monica Kumar
It has been a while since we've summed up the Oracle Linux advantage over other Linux products. Wim Coekaerts' new blog entries prompted me to write this article. Here are some highlights.

Best enterprise Linux - Since launching UEK almost 18 months ago, Oracle Linux has leap-frogged the competition in terms of the latest innovations, better performance, reliability, and scalability.

Complete enterprise Linux solution - Not only do we offer an enterprise Linux OS, it also comes with management and HA tools that are integrated and included for free. In addition, we offer the entire "apps to disk" solution for Linux if a customer wants a single source.

Comprehensive testing with enterprise workloads - Within Oracle, thousands of servers run an incredible amount of QA on Oracle Linux, amounting to 100,000 hours every day. This helps make Oracle Linux even better for running enterprise workloads.

Free binaries and errata - Oracle Linux is free to download, including patches and updates.

Highest quality enterprise support - Available 24/7 in 145 countries, Oracle has been offering affordable Linux support since 2006. The support team is a large group of dedicated professionals globally who are trained to support serious mission critical environments; not only do they know their products, they also understand the inter-dependencies with database, apps, storage, etc.

Best practices to accelerate database and apps deployment - With pre-installed, pre-configured Oracle VM Templates, we offer virtual machine images of Oracle's enterprise software so you can easily deploy them on Oracle Linux. In addition, Oracle Validated Configurations offer documented tips for configuring Linux systems to run Oracle database. We take the guesswork out and help you get to market faster.

More information on all of the above is available on the Oracle Linux Home Page. Wim Coekaerts did a great job of detailing these advantages in two recent blog posts he published last week.

Blog article: Oracle Linux components http://bit.ly/JufeCD
Blog article: More Oracle Linux options: http://bit.ly/LhY0fU

These are must reads!

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple of days ago by Red Hat. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating a middleware infrastructure, especially if they are considering JBoss for some reason. I suggest you keep the following question in mind while you read these points: "Why should I choose JBoss instead of WebLogic?"

1) Multi Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools at higher prices. When you consider a middleware solution to host your business critical application, you should consider every architectural aspect related to the solution. Fail-over support is one small aspect of a truly reliable solution; if you do not worry about D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have this extra cost that will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate technology that, until now, does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends the administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology; whole server instances must be managed individually.

- WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: securing the administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server"). Using OTD, WebLogic instances are load-balanced by highly capable software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support for Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6 on the other hand has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and admin people are encouraged to analyse thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has a classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-party tools to do something similar; again, only log searching is offered to find out what is going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, Websphere and IIS transparently.

- WebLogic Server 12c with its JRockit support offers a tool called JRockit Flight Recorder, which can give developers complete visibility into a given period of application production monitoring with zero extra overhead. This automatic recording allows you to deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles, and to observe in real time stop-the-world pauses, generational, reference count and parallel collections, and mutator thread behavior. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides XML-centric administration. JBoss, after ten years, has started developing a rudimentary centralized administration that still leaves a lot of administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This leads to error-prone and risky deployments. Even using JBoss ON, JBoss EAP is not able to offer decent administration features, and admin people must be highly skilled in JBoss internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. Until now, JBoss ON does not support JBoss EAP 6, so even this minimal support is not available for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.

- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, the administration skills required are different.

- WebLogic Server 12c has no single-point-of-failure processes and does not need any specialized server to be defined. The domain model in WebLogic has been available for years (at least ten years or more) and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity: the PC ("Process-Controller") and the HC ("Host-Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it starts doing the things WebLogic's domain model does.

- WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (an EAR of 120 MB, for instance) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management; each JBoss EAP instance must be patched manually. To get such a feature, you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of task. But until now, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability across JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.

- WebLogic Server 12c has patch prescription based on the customer's configuration. JBoss EAP 6 on the other hand has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support staff to get a patch prescription from them.

- Oracle WebLogic Server, independent of the version, has 8 years of support for new patches and lifetime availability of existing patches beyond that. JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will struggle to answer is: "what happens when you find issues after year 5"?

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast-Application-Notification, Transaction Affinity, Fast-Connection-Failover, etc); the Oracle JDBC thin driver is also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, and manual intervention is necessary in case of planned or unplanned RAC downtime; in JBoss EAP 6, the situation does not reestablish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand sees no performance gains at all, even when admin people implement some kind of connection-pool tuning.

- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution that is distributed not only across a LAN cluster but also across different data centers. JBoss EAP 6 on the other hand has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6 on the other hand became fully compatible with Java EE 6 only in the community version three months later, and production ready only a few days ago, considering that this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle, and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as the JVM, which means that without Oracle involvement their support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which lets developers get the most out of the Java platform's productivity when writing code. This feature differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle) because JDK 7 introduces such remarkable productivity features as the "try-with-resources" enhancement, catching multiple exceptions with one try block, strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here. JBoss EAP 6 on the other hand does not support JDK 7 officially; they note for their community version that "Java SE 7 can be used with JBoss 7", which does not give you any guarantee of enterprise support for JDK 7.

- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use the WebLogic transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all the WebLogic monitoring and administration capabilities. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offer any special integration; it is just a facility for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, in order to offer you a truly reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. Users of JBoss suffer from critical issues at deployment time, including: changing the libraries and dependencies of the application; patching the DTD or XSD deployment descriptors; refactoring the application layers due to classloading issues and anomalies; and rebuilding persistence, business and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If you have the culture or enterprise IT directive of developing Java EE applications using community middleware in order to transition, at some future point, to enterprise middleware supported by a vendor, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you have another 100 MB, resulting in a higher download footprint. This is particularly relevant if you plan to use automated setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6 on the other hand offers limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving enabled only the web container with the Java EE 6 web profile. JBoss EAP 6 on the other hand has no such feature; you need to manually disable the containers that you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application. This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support; even JBoss EAP 5 does not support this to date.

8) JMS and Messaging

- WebLogic Server 12c has a proven and highly scalable JMS implementation dating from its initial release in 1995. JBoss EAP 6 on the other hand has a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything that was implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism relying on Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims the performance problem is due to the database, they will not support you on the performance issue; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics and foreign JMS provider support that leverage JMS implementations without compromising developer code, making things completely transparent. JBoss EAP 6 on the other hand does not come close to supporting such features.

9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, their caching implementation, just as they did with the messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache which uses Coherence to replicate, without any code changes, HTTP sessions from both WebLogic and other application servers like JBoss, Tomcat, Websphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand does not have such support, and even if they add it in the future, they will probably support only their own application server.

- Coherence can be used to manage both the L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand support only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 cache level support using Hibernate, which leads us to question its viability.

10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. This approach lets customers benefit from Exalogic optimizations in both the kernel and JVM layers, boosting performance on the order of 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment in engineered systems: customers do not have the option to deploy on an ultra-fast Java system if their project becomes relevant and performance issues are detected.

- WebLogic Server 12c maintains a performance gain across each new release: starting with WebLogic 5.1, the overall performance gain has been close to 4X, which is close to a 20% gain release by release. JBoss on the other hand does not provide SPECjAppServer or SPECjEnterprise performance benchmarks; their so-called "performance gains" remain hidden in their customers' environments, which leads us to wonder whether they are real, since we will never get access to those environments.

- WebLogic Server 12c has industry performance benchmarks with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss, again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on CPU utilization of critical resources. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no comparable feature and probably never will. Lacking a feature like work managers, JBoss EAP 6 forces admin people and especially developers to uncover performance gains in an intrusive way, rewriting the code and doing performance refactorings.

11) Professional Services Support

- WebLogic Server 12c and any other technology sold by Oracle give customers the possibility of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist with the deployment of new applications, and provide highly skilled consultancy on architecture and best practices, with consultants allocated alongside customer teams. All OCS services are available without any restrictions, whether the customer has bought software from Oracle or is just starting their implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow their professional services consultants to manage environments that use community-based software; they will probably force you to first buy a subscription, download their "enterprise" version, and then, optionally, hire their consultants.

- Oracle offers its university to educate your team in our technologies, including of course specialized training on the WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustworthy knowledge tailored to your specific needs. Certifications for the products are also available if your technical people wish to differentiate themselves as professionals. Red Hat on the other hand has a limited pool of resources to train your team in their technologies. Basically they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you demand more specialized training in JBoss middleware, they will probably connect you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce their success with RHEL education in their middleware division, since they need to sell you the subscriptions first before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means that the courses will be quite outdated. There are reports of developers who took official training from Red Hat this year (2012), and in a certain JBoss advanced course Red Hat supposedly covered JBossMQ as the messaging subsystem; even the printed material provided was based on JBossMQ, since the training was created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the usage of "specific versions" of our software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6 on the other hand is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you have to download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us want to start developing software on insecure, unreliable and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and get access to our price list; in the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make the investment in our technology possible, but you are not required to do so, only if you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available on Red Hat's website or anywhere else; at least, it is not easy to find such information. The only link you will likely find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open. In these situations, customers expect to see the software prices publicly available, so they have the chance to decide, based on the existing features of the software, whether the cost is fair or not.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, and it has a proven record of success around the globe to prove its maturity. Don't lose the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not your strategy is completely based on the cloud.

    Read the article

  • Building a regex builder

    - by i.h4d35
I am a beginner in programming in general and web development in particular, and I am especially bad at regular expressions. Recently I was involved in building a couple of cPanel plugins (Perl-CGI), and that's when I realized how bad I am at regex. As a result, I have decided to build an online regex builder - this will help me learn regex and help others struggling with it. I have checked out GSkinner, Rubular and a couple of others like RegexPal. They seemed a little difficult to use, hence I thought of writing another one. I do not know which tool is best suited for the job: should I write it in Perl or Python? My skill level is between beginner and intermediate in both those languages. What would be a good starting point - building it for the CLI or for the browser? I plan to get a string as input, ask whether the user wants to search or search-and-replace, have them enter the search string (and the replace string where applicable), and then generate a regex. Would this be the right way to go?
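For what it's worth, the flow described in that last paragraph is easy to prototype on the CLI first. A minimal Python sketch (the prompts and names are hypothetical, and re.escape stands in for the "generate a regex" step until something smarter replaces it):

import re

def main():
    text = input("Text to search in: ")
    target = input("String to find: ")
    # Naive "generator": escape the literal so regex metacharacters are safe.
    pattern = re.escape(target)
    print("Generated regex:", pattern)

    mode = input("search or replace? (s/r): ").strip().lower()
    if mode == "r":
        replacement = input("Replacement: ")
        print(re.sub(pattern, replacement, text))
    else:
        for match in re.finditer(pattern, text):
            print("match at index", match.start(), ":", match.group())

if __name__ == "__main__":
    main()

Getting this loop right first makes the browser version mostly a UI exercise on top of the same logic.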

    Read the article

  • How Star Wars Changed the World [Infographic]

    - by ETC
The Star Wars film franchise has had an enormous impact on the worlds of film, gaming, and special effects. Check out this interesting infographic to see how Star Wars has impacted the world. Created by Michelle Devereau, the "How Star Wars Changed the World" infographic is a massive undertaking of charting and cross-referencing. It does an excellent job highlighting the impact the Star Wars films have had on film, television, gaming, and the surrounding technologies. At minimum you'll nail down some new trivia (I learned, for example, that famed puppeteer and voice actor Frank Oz was the man behind Yoda); even better, you'll gain an appreciation for what a sweeping effect Star Wars has had. For readers behind finicky firewalls, click here to view a local mirror of the image. How Star Wars Changed the World [Daily Infographic]

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting up all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2 there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online documentation. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in a Process Flow, and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take a tour of using the User Defined activity. Let's start.

    Enable execution of User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED, where NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner and executed by the default operating system user configured by the DBA, and DISABLED prevents execution via these operators. We enable the execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    Then run the Process Flow again. We will see that the Process Flow completes successfully, and the execution of /tmp/test.sh generates the file /tmp/test.txt containing the line Hello World! (the whole enablement sequence is consolidated in the sketch below).
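
    Here is a consolidated sketch of the enablement steps above, assuming a Linux host with GNU sed, ORACLE_HOME exported, and the OWBSYS password supplied via an OWBSYS_PASSWORD environment variable; the sed edit is shorthand for the manual property change described above, not an OWB-documented procedure:

        #!/usr/bin/env bash
        # enable_shell_operator.sh - hypothetical helper consolidating the steps above.
        set -e

        PROPS="$ORACLE_HOME/owb/bin/admin/Runtime.properties"

        # Flip the Shell security constraint from DISABLED to NATIVE_JAVA,
        # keeping a backup of the original file (.bak suffix; GNU sed syntax).
        sed -i.bak \
          's/^\(property\.RuntimePlatform\.0\.NativeExecution\.Shell\.security_constraint=\).*/\1NATIVE_JAVA/' \
          "$PROPS"

        # Restart the Control Center service so the change takes effect.
        cd "$ORACLE_HOME/owb/rtp/sql"
        sqlplus OWBSYS/"$OWBSYS_PASSWORD" @stop_service.sql
        sqlplus OWBSYS/"$OWBSYS_PASSWORD" @start_service.sql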
    Pass parameters to User Defined Activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That is to say, it lacks interaction. Sometimes such a Process Flow can still fulfill the business requirement, but most of the time we need the User Defined activity to execute according to information produced by earlier steps. In this section, we will see how to pass parameters to the User Defined activity and on into the shell script it executes.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of a parameter, it must be preceded with '\', for example '\\'. If '\' is the token character, then '/' becomes the escape character.

    Let's set PARAMETER_LIST to ?Bob?Alice? and modify the shell script /tmp/test.sh as below:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line:

        Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not very useful because, instead of passing the parameters, we could simply write their values directly in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as follows: the Mapping activity MAPPING_1 has two output parameters, FROM_USER and TO_USER, and the User Defined activity has two input parameters, FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables, VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1; we achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. We also need to change the PARAMETER_LIST of the User Defined activity so that it references the two input parameters. Now the shell script gets its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run it. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    'USER B' and 'USER A' are the two outputs of the Mapping execution (a sketch of how the token-delimited parameters arrive in the script follows below).
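
    To make the token rules concrete, here is a small sketch: /tmp/test.sh extended with an argument-count guard, plus comments showing how the PARAMETER_LIST values land in the positional arguments. The guard is an addition for illustration, not part of the original flow:

        #!/bin/bash
        # /tmp/test.sh - a PARAMETER_LIST of ?Bob?Alice? (or !Bob!Alice!, or |Bob|Alice|)
        # arrives here as two positional arguments: $1=Bob, $2=Alice.
        # A parameter containing the token itself must be escaped with '\', e.g.
        # ?who\?me?Alice? would deliver $1='who?me' and $2='Alice'.

        if [ "$#" -ne 2 ]; then
            echo "expected 2 parameters, got $#" > /tmp/test.txt
            exit 1
        fi

        echo "$1 is saying hello to $2!" > /tmp/test.txt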
    Write the shell script within Oracle Warehouse Builder

    In the previous sections, the shell script lives in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring three parameters of the User Defined activity properly:

    COMMAND: Set the path of the interpreter by which the shell script will be interpreted.
    PARAMETER_LIST: Leave it blank.
    SCRIPT: Enter the shell script content. Note that on Linux the script content is passed to the interpreter as standard input at runtime.

    To actually pass parameters into such a script, we can use variable substitutions: ${FROM_USER} in the SCRIPT text will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and likewise for ${TO_USER}. Besides these custom substitution variables, OWB also provides some pre-defined system substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it, and we see that the generated /tmp/test.txt again contains the following line:

        USER B is saying hello to USER A!

    Leverage the return value of User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity. To simplify the scenario, we connect the User Defined activity to three End activities: if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise it ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can attach more complex and meaningful subsequent business logic.

    2. Or we can use complex conditions to work with the different results of the User Defined activity. Previously, our script only had this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that, we can leverage the result by checking RESULT_CODE in the condition expressions of the outgoing transitions. Suppose we have a Process Flow in which SUB_PROCESS_n stands for different further processes: we can then set a complex condition on the transition from USER_DEFINED to SUB_PROCESS_1 (for example, RESULT_CODE = 0), and the other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1: on a Linux system, if the shell script hits a system error such as an IO error, the return value will be 1, and we may want to handle such a return value explicitly. A runnable version of this skeleton appears after the summary.

    Summary

    Let's summarize what has been discussed in this article:
    - How to create a Process Flow with a User Defined activity in it
    - How to pass parameters from the prior activity to the User Defined activity and on into the shell script
    - How to write the shell script within Oracle Warehouse Builder
    - How to do variable substitutions
    - How to let the User Defined activity return different values, and how to leverage those values in transition conditions
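
    Closing the loop on the return-value discussion, here is a minimal runnable version of the skeleton above. The concrete tests (checking that the output file was written and is non-empty) are hypothetical stand-ins for CONDITION_1..3, not from the original scenario; ${FROM_USER} and ${TO_USER} are the OWB substitution variables described earlier:

        #!/bin/bash
        # /tmp/test.sh - returns distinct exit codes for the Process Flow to branch on.
        # The RESULT_CODE checked in the outgoing transition conditions sees these values.

        echo "${FROM_USER} is saying hello to ${TO_USER}!" > /tmp/test.txt

        # Hypothetical CONDITION_1: the file was written and is non-empty -> success path
        if [ -s /tmp/test.txt ]; then
            exit 0
        fi

        # Hypothetical CONDITION_2: the file exists but is empty -> warning path
        if [ -e /tmp/test.txt ]; then
            exit 2
        fi

        # Hypothetical CONDITION_3: the file is missing entirely -> error path
        exit 3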

    Read the article

< Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >