Search Results

Search found 10463 results on 419 pages for 'required'.


  • Introducing AutoVue Document Print Service

    - by celine.beck
    We recently announced the availability of our new AutoVue Document Print Service products. For more information, please read the article entitled Print Any Document Type with AutoVue Document Print Services that was posted on our blog. The AutoVue Document Print Service products help address a trivial, yet very common challenge: printing and batch printing documents. The AutoVue Document Print Service is a Web-Services based interface, which allows developers to complement their print server solutions by leveraging AutoVue's printing capabilities within broader enterprise applications like Asset Lifecycle Management, Product Lifecycle Management, Enterprise Content Management solutions, etc. This means that you can leverage the AutoVue Document Print Service products as part of your printing solution to automate the printing of virtually any document type required in any business process. Clients that consume AutoVue's Document Print Service can be written in any language (for example Java or .NET) as long as they understand Web Services Description Language (WSDL) and communicate using Simple Object Access Protocol (SOAP).

    The print solution consists of three main components, as described in the diagram below: a print server (not included in the AutoVue Document Print Service offering) that will interact with your application to identify the files that need to be printed, the printer to send each file to, as well as the print options needed for each file (paper size, page orientation, etc.), and collate the print job requests; the AutoVue Document Print Service; and the printers. The print server will also take care of calling the AutoVue Document Print Service to perform the actual printing. The AutoVue Document Print Service sends files to a printer for printing. The AutoVue Document Print Service products leverage AutoVue's format- and platform-agnostic technology to let you print or batch-print virtually any type of file, without requiring the authoring application to be installed on your machine.

    As shown above, you can trigger printing from your application either programmatically through automated business processes or manually through human interaction. If documents that need to be printed from your application are stored inside a content repository/Document Management System (DMS) such as Oracle Universal Content Management System (UCM), then the print server will need to identify the list of documents and pass the ID of each document to the AutoVue DPS to print. In this case, AutoVue DPS leverages the AutoVue VueLink integration (note: AutoVue VueLink integrations are pre-packaged AutoVue integrations with most common enterprise systems; check our Website for more information on the subject) to fetch documents out of the document management system for printing. In lieu of the AutoVue VueLink integration, you can also leverage the AutoVue Integration Software Development Kit (iSDK) to build your own connector. If the documents you need to print from your application are not stored in a content management system, the print server will need to ensure that files are made available to the AutoVue Document Print Service. The print server could, for example, fetch the files out of your application, or an extension to the application could be developed to fetch the files and make them available to the AutoVue DPS. More information on methods to pass file information to the AutoVue Document Print Service products can be found in the AutoVue Document Print Service Overview documentation available on the Oracle Technology Network.
Related article: Print Any Document Type with AutoVue Document Print Services
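    As a purely illustrative note on the "any language that speaks SOAP" point above, a client simply POSTs a SOAP envelope to the service endpoint. The sketch below is hypothetical: the endpoint URL, SOAPAction value and envelope body are placeholders, and the real request structure comes from the AutoVue Document Print Service WSDL, not from this example.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SoapClientSketch
    {
        static async Task Main()
        {
            const string endpoint = "http://dps-host:5099/PrintService";   // placeholder endpoint, not the real DPS URL

            // Placeholder SOAP 1.1 envelope; the actual request element is defined by the DPS WSDL.
            const string envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><!-- request element defined by the DPS WSDL goes here --></soap:Body>" +
                "</soap:Envelope>";

            using var http = new HttpClient();
            var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
            {
                Content = new StringContent(envelope, Encoding.UTF8, "text/xml")
            };
            request.Headers.Add("SOAPAction", "\"urn:example:SubmitPrintJob\"");   // placeholder action name

            HttpResponseMessage response = await http.SendAsync(request);
            Console.WriteLine(await response.Content.ReadAsStringAsync());         // raw SOAP response
        }
    }

    In practice, most teams would instead generate a typed proxy from the WSDL (Add Service Reference or svcutil) rather than hand-building envelopes; the point is only that nothing more than WSDL and SOAP is required of the client.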

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    *****************************************
    Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
    Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
    Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
    The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
    *****************************************

    The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended

    Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :) Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD>
    with
    <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">
    and
    </HEAD> <body> <form id="theForm" method="post" runat="server">
    with
    </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">
    and finally
    </form> </body> </HTML>
    with
    </asp:Content>
    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You…Or Not

    Watch for Use of Session or Request in OnInit

    In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

    Case Sensitivity

    .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use with the duplicate rule. For example:
    <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>
    may need to be duplicated as:
    <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>
    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can’t Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

    Error 155: The name 'form1' does not exist in the current context  C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs  (line 51, column 52)  legacywebsite.module

    Much hair-pulling research later, it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:
    ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));
    becomes this:
    ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));
    Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:
    public static Control FindControlRecursive(Control Root, string Id)
    {
        if (Root.ID == Id)
            return Root;
        foreach (Control Ctl in Root.Controls)
        {
            Control FoundCtl = FindControlRecursive(Ctl, Id);
            if (FoundCtl != null)
                return FoundCtl;
        }
        return null;
    }
    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
    Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:
    Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");
    to this:
    Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

    The Master That Won’t Step Aside

    In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:
    …
    <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
    </head>
    <body leftMargin="0" topMargin="0">
    <form id="TheForm" runat="server">
    <input type="hidden" name="key" id="key" value="" />
    <input type="hidden" name="formactionurl" id="formactionurl" value="" />
    <input type="hidden" name="handle" id="handle" value="" />
    <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
    </asp:ScriptManager>
    This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-masters to the WSRP Master Page.

    Moving On

    In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

    Migration Planning

    Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cutover should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

    Location, Location, Location

    Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

    Manually Creating a WSRP Producer Site

    Instructions (note: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):
    1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
    2. Bring up a command window as an administrator.
    3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
    4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
    5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
    6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
    7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

    Tests:
    1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
    2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
    3. Register the producer in WLP and test the portlet.

    Changing the Port used by WSRP Producer

    The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

    Do You Need to Migrate?

    Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
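    To close with a small illustration of two items covered earlier (enabling the WSRP Producer logging, and keeping Session access out of OnInit), here is a minimal, hypothetical sketch. Only the log4net.Config.XmlConfigurator.Configure() call and the standard ASP.NET page lifecycle come from the article; the page, control and session key names are invented for illustration.

    using System;
    using System.Web.UI.WebControls;

    // Global.asax.cs: enable WSRP Producer logging, as recommended above.
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            log4net.Config.XmlConfigurator.Configure();   // reads the log4net configuration from Web.Config
        }
    }

    // A migrated portlet page: Session work is deferred out of OnInit so the page also behaves
    // correctly when rendered through the WSRP Producer, where Session may not yet be available.
    public class ReportPage : System.Web.UI.Page          // hypothetical page name
    {
        protected Label lblUser;                           // hypothetical control declared in the markup

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            // Intentionally no Session or Request access here.
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            InitializeFromSession();                       // Session is safely available by now
        }

        private void InitializeFromSession()
        {
            var userName = (string)Session["UserName"];    // hypothetical session key
            lblUser.Text = userName ?? "anonymous";
        }
    }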

    Read the article

  • Learn about MySQL with the Authentic MySQL for Beginners course

    - by Antoinette O'Sullivan
    Learn about the MySQL Server and other MySQL products by taking the authentic MySQL for Beginners course. This course covers all the basics from MySQL download and installation, to relational database concepts and database design. This course is your first step to becoming a MySQL administrator. You can take this course through one of the following delivery types:

    Training-on-Demand: Start the class from your desk, at your base and within 24 hrs of registering. Read Ben Krug on Day 3 of his experience taking the MySQL for Beginners course Training-on-Demand option.
    Live-Virtual Class: Attend this live class from your own office - no travel required. Choose from a selection of events on the schedule to suit different timezones. Delivery languages include English and German.
    In-Class event: Attend this class in an education center. Events already on the schedule include:

    Location | Date | Delivery Language
    Mechelen, Belgium | 14 January 2013 | English
    London, England | 5 March 2013 | English
    Hamburg, Germany | 25 March 2013 | German
    Munich, Germany | 3 June 2013 | German
    Budapest, Hungary | 5 February 2013 | Hungarian
    Milan, Italy | 11 February 2013 | Italian
    Rome, Italy | 4 March 2013 | Italian
    Riga, Latvia | 18 February 2013 | Latvian
    Amsterdam, Netherlands | 21 May 2013 | Dutch
    Nieuwegein, Netherlands | 18 February 2013 | Dutch
    Warsaw, Poland | 18 February 2013 | Polish
    Lisbon, Portugal | 25 March 2013 | European Portuguese
    Porto, Portugal | 25 March 2013 | European Portuguese
    Barcelona, Spain | 11 February 2013 | Spanish
    Madrid, Spain | 22 April 2013 | Spanish
    Nairobi, Kenya | 14 January 2013 | English
    Capetown, South Africa | 22 July 2013 | English
    Pretoria, South Africa | 22 April 2013 | English
    Petaling Jaya, Malaysia | 28 January 2013 | English
    Ottawa, Canada | 25 March 2013 | English
    Toronto, Canada | 25 March 2013 | English
    Montreal, Canada | 25 March 2013 | English
    Mexico City, Mexico | 14 January 2013 | Spanish
    San Pedro Garza Garcia, Mexico | 5 February 2013 | Spanish
    Sao Paolo, Brazil | 29 January 2013 | Brazilian Portuguese

    For more information on this or other courses on the authentic MySQL Curriculum, go to http://oracle.com/education/mysql. Note, many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path with growing job demand.

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc. - all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations, some of the issues and frustrations with working with SharePoint's built-in web services. Ultimately, for the tasks I was given, SharePoint's built-in web services did not suffice. My evaluation of SharePoint's built-in services was compared against creating my own WCF services to do what I needed.

    The current project I'm working on involved several "integration points". A remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team and they would simply call the methods exposed by this DLL and voila! it would do some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same; it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project I created a class for every web service call. So for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service, and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as, "the DLL is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project and this is the class that contacts it:

    class DocumentRetrieval
    {
        private ListsSoapClient _service;
        private bool _impersonation;

        public DocumentRetrieval(bool impersonation, string endpt)
        {
            _service = new ListsSoapClient();
            this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
            _impersonation = impersonation;
            if (_impersonation)
            {
                _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                    System.Security.Principal.TokenImpersonationLevel.Impersonation;
            }
        }

        private void SetEndPoint(string p)
        {
            _service.Endpoint.Address = new EndpointAddress(p);
        }

        /// <summary>
        /// Creates a document library with a specific name and template ID
        /// </summary>
        /// <param name="listName">New list name</param>
        /// <param name="templateID">Template ID</param>
        /// <returns></returns>
        public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
        {
            XmlDocument sample = new XmlDocument();
            XmlElement viewCol = sample.CreateElement("Empty");
            try
            {
                _service.Open();
                viewCol = _service.AddList(listName, "", templateID);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            finally
            {
                _service.Close();
            }
            return viewCol;
        }
    }

    There was a lot more in this class (that I am not including) because I was reusing the grunt work and making other operations with Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

    public bool DocLibExists(string listName, ref ExceptionContract exContract)
    {
        try
        {
            var allLists = _service.GetListCollection();
            return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
        }
        catch (Exception ex)
        {
            exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
        }
        return false;
    }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to: now instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find if the attribute "Title" existed and had a value of the list name; if so it would return true, if not false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc. - they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project" or document library should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc.). So what seemed to be a simple implementation of creating a document library as a repository for a project turned out to be quite involved.

    The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc.; the only catch is, you have to be working quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made with CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML statement syntax error, or even a CAML error. I was not sure if it was a security, performance or code-based issue. It was quite tough to work with, at times because of the way SharePoint handles metadata. There are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc. It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that using the Template Builder, it did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int, but it only has access to the built-in library int definitions; any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way.

    I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.

    After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.

    public ServiceResult CreateProject(string name, string templateName, string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        using (SPSite site = new SPSite(siteurl))
        {
            using (SPWeb rootweb = site.RootWeb)
            {
                SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
            } //SPWeb
        } //SPSite
        return _globalResult;
    }

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
        }
    }

    private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
    {
        using (SPSite sitecollection = new SPSite(siteurl))
        {
            using (SPWeb web = sitecollection.AllWebs[webguid])
            {
                action(web);
            }
        }
    }

    This code is actually some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library with a template, and it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the web part properties.
    The following code enhances and replaces the "Act_CreateProject" method above:

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            SPLimitedWebPartManager wpmgr = null;
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                SPFolder rootFolder = newList.RootFolder;
                rootFolder.Properties.Add(KEY, projectId);
                rootFolder.Update();
                if (rootFolder.ParentWeb != targetsite)
                    rootFolder.ParentWeb.Dispose();
                if (!templateName.Contains("Natural"))
                {
                    SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                    SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                    wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                    ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                    alldocfile.Update();
                }
                if (newList.ParentWeb != targetsite)
                    newList.ParentWeb.Dispose();
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
            finally
            {
                if (wpmgr != null)
                {
                    wpmgr.Web.Dispose();
                    wpmgr.Dispose();
                }
            }
        }
    }

    private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
    {
        var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
        if (wp != null)
        {
            (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
            mgr.SaveChanges(wp);
        }
    }

    This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList does not have a property bag that I can add a key\value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by IDs and will not care about names. My "DocLibExists" method will now need to be changed, because a web service is not set up to look at property bags; I had to write another method on the service to do the equivalent but with IDs instead of names. The second thing you will notice about the code is the use of the web part manager. I have seen several examples online, and have also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.
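    The post does not show the ID-based existence check it mentions, so here is a minimal, hypothetical sketch of what such a WCF-side method could look like using the object model. The method name, the KEY constant value and the document-library filter are illustrative assumptions, not the author's actual code; it simply reuses the same property-bag idea as Act_CreateProject above.

    // Assumed to live in the same WCF service class as the methods above.
    // using System; using System.Linq; using Microsoft.SharePoint;
    private const string KEY = "ProjectID";   // assumed property-bag key name

    public bool ProjectLibExists(string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;

        using (SPSite site = new SPSite(siteurl))
        using (SPWeb web = site.AllWebs[webguid])
        {
            foreach (SPList list in web.Lists)
            {
                // Only document libraries carry the project marker written at creation time.
                if (list.BaseType != SPBaseType.DocumentLibrary)
                    continue;

                object value = list.RootFolder.Properties[KEY];   // property bag lives on the root folder
                if (value != null && value.ToString() == projectId)
                    return true;
            }
        }
        return false;
    }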
    CONCLUSION

    This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
    SOA, Cloud and Service Technology Symposium in London was a huge success, with more than 600 international attendees. Our SOA & BPM Community had a great presence there. At a joint booth with the specialized partners link consulting, eProseed and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team (Demed, Tim and Manas) for coming over right before OOW. Also a very big thank you to Matthias Ziegler from Accenture for presenting our joint presentation individually! If you missed the conference, here are links to the key presentations for your reference:

    Big Data and its impact on SOA - Demed L'Her [View PDF]
    Building 21st Century Service-Oriented Airports - Shyam Kumar [View PDF]
    Building Cloudy Services - Anne Thomas Manes [View PDF]
    Community Management: The Next Wave of SOA Governance and API Management - Tim E. Hall [View PDF]
    Elastic SOA in the Cloud - Steve Millidge [View PDF]
    Governing Shared Services: On-Premise & In the Cloud - Thomas Erl [View Video]
    Introducing the Cloud Computing Design Patterns Catalogue - Thomas Erl and Amin Naserpour [CloudPatterns.org]
    Lost in Translation - Common Mistakes Interpreting Patterns - Mark Simpson [View PDF]
    Moving Applications to the Cloud: Migration Options - Anne Thomas Manes [View PDF]
    New Paradigms for Application Architecture: From Applications to IT Services - Anne Thomas Manes [View PDF]
    NoSQL for Data Services, Data Virtualization & Big Data - Guido Schmutz [View PDF]
    A Pragmatic Approach to Cloud Computing - Andrea Morena [View PDF]
    The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples - Clemens Utschig and Manas Deb [View PDF]
    Service Modeling & BPM Business Value Patterns - Matthias Ziegler [View PDF] [Podcast]
    SOA Adoption in the Brazilian Ministry of Health - Case Study - Ricardo Puttini and Andre Toffanello [PDF Coming Soon]
    SOA Environments are a Big Data Problem - Markus Zirn, Splunk and Maciej Barcz [View PDF]
    SOA Governance at EDP: A Global Energy Company - Manuel Rosa [View PDF]

    For all presentations please visit the SOA, Cloud and Service Technology Symposium website.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • 64-bit 13.10 shows 1GB less RAM than 64-bit 13.04 did

    - by kiloseven
    Multiple 64-bit versions (Kubuntu, Lubuntu and Xubuntu) once installed on my ThinkPad R60 show 3GB of RAM, not the correct 4GB of RAM. Last week with 13.04, I had 4GB of RAM (which matches the BIOS) and this week I have 3GB available. Inquiring minds want to know. Details follow:

    Linux R60 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    free -m reports:
                 total   used   free   shared   buffers   cached
    Mem:          3001    854   2146        0        22      486
    -/+ buffers/cache:    346   2655
    Swap:            0      0      0

    lshw shows:
    description: Notebook
    product: 9459AT8 ()
    vendor: LENOVO
    version: ThinkPad R60/R60i
    serial: redacted
    width: 64 bits
    capabilities: smbios-2.4 dmi-2.4 vsyscall32
    configuration: administrator_password=disabled boot=normal chassis=notebook family=ThinkPad R60/R60i frontpanel_password=unknown keyboard_password=disabled power-on_password=disabled uuid=126E4001-48CA-11CB-9D53-B982AE0D1ABB
    *-core
      description: Motherboard
      product: 9459AT8
      vendor: LENOVO
      physical id: 0
      version: Not Available
    *-firmware
      description: BIOS
      vendor: LENOVO
      physical id: 0
      version: 7CETC1WW (2.11 )
      date: 01/09/2007
      size: 144KiB
      capacity: 1984KiB
      capabilities: pci pcmcia pnp upgrade shadowing escd cdboot bootselect socketedrom edd acpi usb biosbootspecification
    {snip}
    *-memory
      description: System Memory
      physical id: 29
      slot: System board or motherboard
      size: 4GiB
    *-bank:0
      description: SODIMM DDR2 Synchronous
      physical id: 0
      slot: DIMM 1
      size: 2GiB
      width: 64 bits
    *-bank:1
      description: SODIMM DDR2 Synchronous
      physical id: 1
      slot: DIMM 2
      size: 2GiB
      width: 64 bits

    dpkg -l linux-* returns:
    Desired=Unknown/Install/Remove/Purge/Hold
    | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
    |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
    ||/ Name  Version  Description
    un  linux-doc-3.2.0  (no description available)
    ii  linux-firmware  1.79.6  Firmware for Linux kernel drivers
    ii  linux-generic  3.2.0.52.62  Complete Generic Linux kernel
    un  linux-headers  (no description available)
    un  linux-headers-3  (no description available)
    un  linux-headers-3.0  (no description available)
    un  linux-headers-3.2.0-23  (no description available)
    un  linux-headers-3.2.0-23-generic  (no description available)
    ii  linux-headers-3.2.0-52  3.2.0-52.78  Header files related to Linux kernel version 3.2.0
    ii  linux-headers-3.2.0-52-generic  3.2.0-52.78  Linux kernel headers for version 3.2.0 on 64 bit x86 SMP
    ii  linux-headers-generic  3.2.0.52.62  Generic Linux kernel headers
    un  linux-image  (no description available)
    un  linux-image-3.0  (no description available)
    ii  linux-image-3.2.0-52-generic  3.2.0-52.78  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    ii  linux-image-generic  3.2.0.52.62  Generic Linux kernel image
    un  linux-initramfs-tool  (no description available)
    un  linux-kernel-headers  (no description available)
    un  linux-kernel-log-daemon  (no description available)
    ii  linux-libc-dev  3.2.0-52.78  Linux Kernel Headers for development
    un  linux-restricted-common  (no description available)
    ii  linux-sound-base  1.0.25+dfsg-0ubuntu1.1  base package for ALSA and OSS sound systems
    un  linux-source-3.2.0  (no description available)
    un  linux-tools  (no description available)

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to have the two organizations use technology between them. An organization may choose to retain both systems, and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, and cutting to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import it into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet. Implementation: A distributed computing paradigm can be a good strategic solution to most of these strategies. Retaining both systems is made more simple by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connections than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the service into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here. References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
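    As a small, hypothetical sketch of the "series of services" integration idea mentioned above (the contract and type names here are invented for illustration and are not tied to any specific Azure SDK): a plain WCF contract like this could be hosted in an Azure role and exposed through an endpoint reachable by both organizations, and equally consumed from inside either network.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IEmployeeDirectoryService
    {
        [OperationContract]
        EmployeeRecord GetEmployee(string employeeId);   // queried by either company's HR system

        [OperationContract]
        void ProvisionAccount(string employeeId);        // lets the acquired company's users be added quickly
    }

    [DataContract]
    public class EmployeeRecord
    {
        [DataMember] public string EmployeeId { get; set; }
        [DataMember] public string DisplayName { get; set; }
        [DataMember] public string CostCenter { get; set; }
    }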

    Read the article

  • Speaking at SPTechCon SF 2011 and SPSNOLA 2011

    - by Brian Jackett
    From Feb 7th-9th I’ll be presenting two sessions at SPTechCon San Francisco 2011.  My first presentation is a new session called “The Expanding Developer Toolbox for SharePoint 2010” which covers many of the new tools and functionality available to SharePoint 2010 developers.  My second session is called “Real World Deployment of SharePoint 2007 Solutions” (presented at the last SPTechCon Boston) which covers tips, tricks, and advice on deploying SharePoint 2007 solutions.  If you hurry you may still be able to register for this SPTechCon.  Click here for registration information.  Hope to see you there.

    In addition to SPTechCon, I’ll also be speaking at SharePoint Saturday New Orleans 2011 on Feb 26th.  My presentation is called “Managing SharePoint 2010 Farms with PowerShell”.  I’ve given this presentation at a number of recent conferences and it has been popular.  I’m excited for this weekend as well since it will be my first time visiting New Orleans.  Click here for registration information.

    Sessions

    Where: SPTechCon San Francisco 2011
    Title: The Expanding Developer Toolbox for SharePoint 2010
    Audience and Level: Developer, Beginner/Intermediate
    Abstract: LINQ to SharePoint, native Visual Studio 2010 support, easier access to logging, Business Connectivity Services… The list of new features and tools available to developers rapidly grew between SharePoint 2007 and 2010.  In this session we will cover these and many of the other newest features added for SharePoint developers to utilize.  This session is targeted to SharePoint 2007 developers upgrading their skills to SharePoint 2010 or developers new to SharePoint 2010.

    Where: SPTechCon San Francisco 2011
    Title: Real World Deployment of SharePoint 2007 Solutions
    Audience and Level: Admin/Developer, Intermediate
    Abstract: “All I have to do is run some STSADM commands to deploy my SharePoint solutions, right?”  If you are saying that to yourself then you are missing out on some of the more advanced processes you can employ to deploy and maintain your SharePoint solutions and farm.  In this session we will cover lessons learned from 3 years of deploying and automating SharePoint solutions.  This will include using a combination of STSADM, PowerShell, the SharePoint API and a number of other tools in a real world situation to deploy an entire suite of custom SharePoint solutions.  This session is targeted to farm administrators and developers.  Prior experience with SharePoint solutions, STSADM and minimal PowerShell experience is suggested.

    Where: SharePoint Saturday New Orleans
    Title: Managing SharePoint 2010 Farms with PowerShell
    Audience and Level: Admin, Beginner
    Abstract: Have you been using STSADM (or worse, hand-editing processes) to manage your SharePoint 2007 farms? Are you hearing about needing to learn PowerShell to manage SharePoint 2010 farms? This session will serve as part introduction to PowerShell and part overview of how you can use PowerShell to more efficiently and effectively manage your SharePoint 2010 farm. This session is targeted to farm administrators and IT pros and no previous experience with PowerShell is required.

    -Frog Out

    Read the article

  • How to upgrade boost lib using apt-get?

    - by sam
    I use ubuntu 11.04. My boost version: sam@sam:~/code/ros/pcl$ apt-cache showpkg libboost-all-dev Package: libboost-all-dev Versions: 1.42.0.1ubuntu1 (/var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages MD5: 72efad05a3c79394c125b79e1d4eb3a7 Reverse Depends: libvtk5-dev,libboost-all-dev libfeel++-dev,libboost-all-dev Dependencies: 1.42.0.1ubuntu1 - libboost-dev (0 (null)) libboost-date-time-dev (0 (null)) libboost-filesystem-dev (0 (null)) libboost-graph-dev (0 (null)) libboost-iostreams-dev (0 (null)) libboost-math-dev (0 (null)) libboost-program-options-dev (0 (null)) libboost-python-dev (0 (null)) libboost-regex-dev (0 (null)) libboost-serialization-dev (0 (null)) libboost-signals-dev (0 (null)) libboost-system-dev (0 (null)) libboost-test-dev (0 (null)) libboost-thread-dev (0 (null)) libboost-wave-dev (0 (null)) Provides: 1.42.0.1ubuntu1 - Reverse Provides: sam@sam:~/code/ros/pcl$ How to upgrade boost to 1.44+ by using apt tools? Thank you~ When I run apt-add-repository,it shows: sam@sam:~/code/ros/pcl$ sudo apt-add-repository ppa:timklingt/ppa Error reading https://launchpad.net/api/1.0/~timklingt/+archive/ppa: GnuTLS recv error (-9): A TLS packet with unexpected length was received. sam@sam:~/code/ros/pcl$ How to fix it? Thank you~ I try to install libboost1.46-all-dev: sam@sam:~/code/ros/pcl$ sudo apt-get install libboost1.46-all-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libboost1.46-all-dev : Depends: libboost1.46-dev but it is not going to be installed Depends: libboost-date-time1.46-dev but it is not going to be installed Depends: libboost-filesystem1.46-dev but it is not going to be installed Depends: libboost-graph1.46-dev but it is not going to be installed Depends: libboost-iostreams1.46-dev but it is not going to be installed Depends: libboost-math1.46-dev but it is not going to be installed Depends: libboost-program-options1.46-dev but it is not going to be installed Depends: libboost-python1.46-dev but it is not going to be installed Depends: libboost-regex1.46-dev but it is not going to be installed Depends: libboost-serialization1.46-dev but it is not going to be installed Depends: libboost-signals1.46-dev but it is not going to be installed Depends: libboost-system1.46-dev but it is not going to be installed Depends: libboost-test1.46-dev but it is not going to be installed Depends: libboost-thread1.46-dev but it is not going to be installed Depends: libboost-wave1.46-dev but it is not going to be installed E: Broken packages sam@sam:~/code/ros/pcl$ What's these error means? And how to solve it? Thank you~

    Read the article

  • Disable IE 8 Thumbnail Previews on Windows 7 Taskbar

    - by Asian Angel
    The Aero thumbnail previews are a great new feature, but if you are not a fan of the flashy eye-candy, you can get rid of them with a simple tweak. Here is how to do it. Before Here we are…Internet Explorer 8 with a lot of How-To Geek Network goodness ready to go. The Taskbar Thumbnail Previews look very nice, but perhaps they take up too much room for those of you who like to keep things simple. The Taskbar Icon has the classic “fanned edge” look just like any other software with Taskbar Thumbnail Previews active. Disabling the Thumbnail Previews If you want to deactivate the Taskbar Thumbnail Previews for Internet Explorer, it is quite easy and will only take you a few moments to complete. Open IE and go to Tools \ Internet Options. When the Internet Options Window opens you will already be on the General Tab. Under the Tabs Section, click on the Settings button. The Tabbed Browsing Settings window opens. Uncheck Show previews for individual tabs in the taskbar and click OK. When you are returned to the Internet Options Window, click OK once again to totally exit out. Note: A browser restart will be required for the changes to take effect. After you have restarted Internet Explorer, you will see the simple default Taskbar Thumbnail Preview and standard icon look. Conclusion If you have been looking to disable the Taskbar Thumbnail Previews for Internet Explorer, then you are only a few clicks away from satisfaction. If you want to change it back, it is as simple as re-enabling the Show previews for individual tabs in the taskbar setting.

    Read the article

  • Asp.net session on browser close

    - by budugu
    Note: Cross posted from Vijay Kodali's Blog. Permalink How to capture the logoff time when the user closes the browser? Or how to end the user session when the browser is closed? These are some of the frequently asked questions in ASP.NET forums. In this post I'll show you how to do this when you're building an ASP.NET web application. Before we start, one fact: there is no fool-proof technique to catch the browser close event 100% of the time. The trouble lies in the stateless nature of HTTP. The Web server is out of the picture as soon as it finishes sending the page content to the client. After that, all you can rely on is a client-side script. Unfortunately, there is no reliable client-side event for browser close. Solution: The first thing you need to do is create the web service. I've added a web service and named it AsynchronousSave.asmx.    Make this web service accessible from script by qualifying the class with the ScriptServiceAttribute attribute.  Add a method (SaveLogOffTime) marked with the [WebMethod] attribute. This method simply accepts UserId as a string variable and writes that value and the logoff time to a text file. But you can pass as many variables as required. You can then use this information for many purposes. To end the user session, you can just call Session.Abandon() in the above web method. To enable the web service to be called from the page’s client-side code, add a ScriptManager to the page. Here I am adding it to the SessionTest.aspx page. When the user closes the browser, the onbeforeunload event fires on the client side. Our final step is adding a JavaScript function to that event, which makes the web service call. The code is simple but effective. My Code HTML: (SessionTest.aspx) C#: (SessionTest.aspx.cs) That’s it. Run the application and, after the browser closes, open the text file to see the log off time. The above code works well in IE 7/8. If you have any questions, leave a comment.
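    The post's code screenshots do not reproduce in this excerpt, so the following is a minimal sketch of the pattern described above. The class name and web method follow the post (AsynchronousSave.asmx, SaveLogOffTime); the file path and log format are assumptions for illustration, not the author's exact code.

    using System;
    using System.IO;
    using System.Web.Services;
    using System.Web.Script.Services;

    // Code-behind for AsynchronousSave.asmx - a web service callable from client script.
    [WebService(Namespace = "http://tempuri.org/")]
    [ScriptService]
    public class AsynchronousSave : WebService
    {
        // Invoked from the browser's onbeforeunload handler when the window closes.
        [WebMethod(EnableSession = true)]
        public void SaveLogOffTime(string userId)
        {
            // Record the user id and log-off time (any store would do here).
            string line = string.Format("{0} logged off at {1}{2}", userId, DateTime.Now, Environment.NewLine);
            File.AppendAllText(Server.MapPath("~/App_Data/LogOff.txt"), line);

            // End the server-side session for this user.
            Session.Abandon();
        }
    }

    On the page, a ScriptManager with a ServiceReference to AsynchronousSave.asmx generates a JavaScript proxy, and a handler wired to window.onbeforeunload calls AsynchronousSave.SaveLogOffTime(userId) as the browser closes.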

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
    We need to understand the main difference between portlet workflow and servlet workflow.The main difference between portlet workflow and servlet workflow is that, the request to the portlet can have two distinct phases: 1) Action phase 2) Render phase. The Action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The Render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.  Developing portlet.xml Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below: <?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app> DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling this request, and then it calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet. 
If it does so, then each of these portlets operates its own namespace, loading its application context and handler mapping. The DispatcherPortlet is also responsible for loading application context (Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file, we did not specify the configLocation initialization parameter, so let's create SpringPortletName-portlet.xml file in the next section. Developing SpringPortletName-portlet.xml Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean> <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean> <bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean> <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean> <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean> <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" /> <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean> 
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. It has a couple of bean definitions: viewController. At this point, remember that the viewController bean definition points to the com.ibm.developerworks.springmvc.ViewController.java class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses Portlet mode for the current request to find a suitable Controller class to use for handling the request. The portletModeMap property of portletModeHandlerMapping bean is the place where we map the Portlet mode name against the Controller class. In the sample code, we show that viewController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the viewController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web. portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the ViewController class by extending it from AbstractController. Because we don't want to do any action processing in the HelloSpringPortletMVC portlet, we override only the handleRenderRequest() method of AbstractController. Now, the only thing that HelloWorldPortletMVC should do is render the markup of View.jsp to the user when it receives a user request to do so. To do that, return the object of ModelAndView with a value of view equal to View. Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml). 
Let’s create the web.xml file in the /WEB-INF/ folder as shown in listing 4. Follow these steps: Open the existing web.xml file located at /WebContent/WEB-INF/web.xml. Replace the contents of this file with the code as shown below: <servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> The web.xml file for the sample portlet declares two things: ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps PortletRequest into ServletRequest and forwards control to ViewRendererServlet for actual rendering. This process allows Spring Portlet MVC Framework to use the same View infrastructure as that of its servlet version, that is, Spring Web MVC Framework. ContextLoaderListener. The ContextLoaderListener class takes care of loading Web application context at the time of the Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find out the location of the context file. If the contextConfigLocation parameter is not set, then it uses the default value, which is /WEB-INF/applicationContext.xml, to load the context file. The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need – most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller - handling an action request, handling a render request, and returning a model and a view. How rendering works As you know, when the user tries to access a page with PointSystemPortletMVC portlet on it or when the user performs some action on any other portlet on that page or tries to refresh that page, a render request is sent to the PointSystemPortletMVC portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRender() method. After the doRenderService() method gets control, first it tries to find out the locale of the request by calling the PortletRequest.getLocale() method. This locale is used while making all the locale-related decisions for choices such as which resource bundle should be loaded or which JSP should be displayed to the user based on the locale. 
After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. In the sample code, we have configured only PortletModeHandlerMapping as a HandlerMapping class. The PortletModeHandlerMapping class reads the value of the current portlet mode, and based on that, it finds out, the Controller class that should be used to handle this request. In the sample code, ViewController is configured to handle the View mode request so that the PortletModeHandlerMapping class returns the object of ViewController. After the object of ViewController is returned, the doRenderService() method calls its handleRenderRequestInternal() method. Implementation of the handleRenderRequestInternal() method in ViewController.java is very simple. It logs a message saying that it got control, and then it creates an instance of ModelAndView with a value equal to View and returns it to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render View. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with viewName, it tries to add /WEB-INF/jsp as a prefix to the view name and to add JSP as a suffix. And it checks if /WEB-INF/jsp/View.jsp exists. If it does exist, it returns the object of JstlView wrapping View.jsp. After control is returned to the doRenderService() method, it creates the object PortletRequestDispatcher, which points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the object of JstlView in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute and creates another RequestDispatcher pointing to the /WEB-INF/jsp/View.jsp URL and passes control to it for actual markup generation. The markup generated by View.jsp is returned to user. At this point, you may question the need for ViewRendererServlet. Why can't DispatcherPortlet directly forward control to View.jsp? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project in to your OEPE Workspace. SpringPortlet_Jars.zip contains jar files required for the application. Project is written on Spring 2.5.  The same JSR 168 portlet should work on Webcenter Portal as well.  Downloads: Download WeblogicPotal Project which consists of Spring Portlet. Download Spring Jars In-addition to above you need to download Spring.jar (Spring2.5)

    Read the article

  • Part 2: The Customization Lifecycle

    - by volker.eckardt(at)oracle.com
    To understand the challenges when working with customizations better, please allow me to explain my understanding of the Customization Lifecycle.  The starting point is the functional GAP list. Any GAP can lead to a customization (but does not have to). The decision is driven by priority, gain, costs, future functionality, accepted workarounds, etc. Let's assume the customization has been accepted as such, including estimation. (Otherwise this blog would not have any value.) Now the customization lifecycle starts and could look like this:
-    Functional specification
-    Technical specification
-    Technical development
-    Functional setup
-    Module Test
-    System Test
-    Integration Test (if required)
-    Acceptance Test
-    Production mode
-    Usage
-    10 x Rework
-    10 x Retest
-    2 x Upgrade
-    2 x Upgrade Test
-    Usage
-    10 x Rework
-    10 x Retest
-    1 x Upgrade
-    1 x Upgrade Test
-    Usage
-    Review for Retirement
-    Accepted Retirement
-    De-installation
What I would like to highlight here is that any material and documentation you create upfront or during the first phases will usually be used multiple times, in part or in full, and will be enhanced, reviewed and retested. The better the quality is right from the beginning, the better we can perform the next steps. What I see very often is the wish to remove a customization: our customers are upgrading and would like to get at least some of the customizations replaced with standard functionality. To be able to support this process best, the customization documentation should contain at least the following key information:
What is/are the business process(es) where this customization is used or linked to?
Who was involved in the different customization phases?
What are the objects comprising the customization?
What is the setup necessary for the customization?
What setup comes with the customization, and what has to be done via other tools or manually?
What are the test steps and test results (in all test areas)?
What are linked customizations?
What is the customization complexity?
How is this customization classified?
Which technologies were used?
How many days were needed to create/test/upgrade the customization?
Etc.
If all this is available, a replacement/retirement can be done much more efficiently and precisely, and an estimation and the upgrade itself can be executed with much better support. In the following blog entries I will explain in more detail why we suggest tracking such information, by whom this task should be done, and how. Volker Eckardt

    Read the article

  • Retrieving only the first record or record at a certain index in LINQ

    - by vik20000in
    While working with data it’s not always required that we fetch all the records. Many times we only need to fetch the first record, or the record at a certain index, in the record set. With LINQ we can get the desired record very easily with the help of the provided element operators. Getting the first record is simple: if you want only the first record in the record set we can use the First() method [note that this can also be done easily with the help of the Take() method by providing the value one].     List<Product> products = GetProductList();      Product product12 = (         from prod in products         where prod.ProductID == 12         select prod)         .First();   We can also very easily put a condition on which first record should be fetched.     string[] strings = { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" };     string startsWithO = strings.First(s => s[0] == 'o');  In the above example the result would be “one” because that is the first record starting with “o”.  There is also the chance that no value is returned in the result set. When we know of such a possibility we can use the FirstOrDefault() method to return the first record or, in case there are no records, get the default value.        int[] numbers = {};     int firstNumOrDefault = numbers.FirstOrDefault();  If we do not want the first record but the second, the third, or any other later record, we can use the ElementAt() method. We pass the ElementAt() method the index of the record we want and we will receive the result for that element.      int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };      int fourthLowNum = (         from num in numbers         where num > 5         select num )         .ElementAt(1); Vikram
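    Pulling the three operators together, a small self-contained example (the sample data is made up for illustration) might look like this:

    using System;
    using System.Linq;

    class ElementOperatorDemo
    {
        static void Main()
        {
            int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

            // First: the first element matching a condition (throws if nothing matches).
            int firstOverFive = numbers.First(n => n > 5);               // 9

            // FirstOrDefault: the type's default value (0 for int) when nothing matches.
            int firstOverFifty = numbers.FirstOrDefault(n => n > 50);    // 0

            // ElementAt: the element at a given zero-based index of the filtered sequence.
            int secondOverFive = numbers.Where(n => n > 5).ElementAt(1); // 8

            Console.WriteLine("{0} {1} {2}", firstOverFive, firstOverFifty, secondOverFive);
        }
    }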

    Read the article

  • A Small Utility to Delete Files recursively by Date

    - by Rick Strahl
    It's funny, but for me the following seems to be a recurring theme: Every few months or years I end up with a host of files on my server that need pruning selectively and often under program control. Today I realized that my SQL Server logs on my server were really piling up and nearly ran my backup drive out of drive space. So occasionally I need to check on that server drive and clean out files. Now with a bit of work this can be done with PowerShell or even a complicated DOS batch file, but heck, to me it's always easier to just create a small Console application that handles this sort of thing with a full command line parser and a few extra options, plus in the end I end up with code that I can actually modify and add features to as is invariably the case. No more searching for a script each time :-) So for my typical copy needs the requirements are: Need to recursively delete files Need to be able to specify a filespec (ie. *.bak) Be able to specify a cut off date before which to delete files And it'd be nice to have an option to send files to the Recycle bin just in case of operator error :-) (and yes that came in handy as I blew away my entire database backup folder by accident - oops!) The end result is a small Console file copy utility that I popped up on Github: https://github.com/RickStrahl/DeleteFiles The source code is up there along with the binary file you can just run. Creating DeleteFiles It's pretty easy to create a simple utility like DeleteFiles of course, so I'm not going to spend any time talking about how it works. You can check it out in the repository or download and compile it. The nice thing about using a full programming language like C# over something like PowerShell or batch files is that you can make short work of the recursive tree walking that's required to make this work. There's very little code, but there's also a very small, self-contained command line parser in there that might be useful and that can be plugged into any project - I've been using it quite a bit for just about any Console application I've been building. If you're like me and don't have the patience or the persistence (that funky syntax requires some 'sticking with it' that I simply can't get over) to get into PowerShell coding, having an executable file that I can just copy around or keep in my Utility directory is the only way I'll ever get to reuse this functionality without going on a wild search each time :-) Anyway, hope some of you might find this useful. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, CSharp
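    The actual utility (with its command line parser and optional send-to-Recycle-Bin mode) lives in the GitHub repository above. As a rough sketch of the core idea only, the recursive date-based delete might look something like this; the paths and options here are illustrative, not Rick's implementation:

    using System;
    using System.IO;

    class DeleteFilesSketch
    {
        // Recursively deletes files matching a filespec that are older than a cutoff date.
        static int DeleteOldFiles(string root, string fileSpec, DateTime olderThan)
        {
            int deleted = 0;

            foreach (string file in Directory.GetFiles(root, fileSpec))
            {
                if (File.GetLastWriteTime(file) < olderThan)
                {
                    File.Delete(file);
                    deleted++;
                }
            }

            // Recurse into subdirectories.
            foreach (string dir in Directory.GetDirectories(root))
                deleted += DeleteOldFiles(dir, fileSpec, olderThan);

            return deleted;
        }

        static void Main()
        {
            // Example: remove *.bak files older than 30 days under D:\Backups.
            int count = DeleteOldFiles(@"D:\Backups", "*.bak", DateTime.Now.AddDays(-30));
            Console.WriteLine("Deleted {0} file(s)", count);
        }
    }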

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is “yes”. Connecting on-premise and cloud applications “in minutes” is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled “Simplifying Integration – The Cloud & Mobile Pre-requisite”, Oracle’s Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide that describes how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: Increased complexity puts increased demand on the integration platform As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor as shown in the image below: Incomplete and overlapping offerings from a patchwork of niche vendors Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy on how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta’s slides: Interfaces are dictated by SaaS vendors; wide variety (SOAP, REST, Socket, HTTP/POX, SFTP); flexibility of Oracle Service Bus/SOA Suite helps to support. Every vendor has their own way to handle security (WS-Security, custom header); support in Oracle Service Bus helps to adhere to disparate requirements. At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements as mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle’s strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity including orchestration, analytics (BAM, events…), service virtualization and more in a single unified interface. Oracle SOA Suite – Unified and comprehensive To summarize, yes you can achieve “cloud integration in minutes” when considering the connectivity subset of integration but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity such as service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications…many of which are unknown to you now.

    Read the article

  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra: require 'rubygems' require 'sinatra' get '/hi' do "Hello World!" end A haml view is rendered by: get '/' do haml :name_of_your_view end Haml is also new to me. It is a ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like: %html %head %title Hello World %body %div Hello World You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read. Based on my syntax highlighter for Gherkin I have started to build a Sinatra web application that publishes syntax highlighted gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc. The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my Sinatra handler: get '/' do feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new) @features = feature_service.features(settings.feature_path, settings.feature_extensions) haml :index end The handler and the view are in the same scope so the @features variable will be available in the view. This is the same way that rails passes data between actions and views. The view to render the result is: %h2 Features %ul - @features.each do |feature| %li %a{:href => "/feature/#{feature.name}"}= feature.name Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like: When the user clicks on one of the links I want to display the contents of that feature file. The required handler is: get '/feature/:feature' do @feature_name = params[:feature] feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new) # TODO replace with feature_service.feature(name) @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature| feature.name == @feature_name end haml :feature end and the view: %h2= @feature.name %pre{:class => "brush: gherkin"}= @feature.description %div= partial :_back_to_index %script{:type => "text/javascript", :src => "/scripts/shCore.js"} %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"} %script{:type => "text/javascript" } SyntaxHighlighter.all(); Now when I click on the Search link I get a nicely formatted feature file: If you would like to see the full source it is available on bitbucket.

    Read the article

  • Addin Central Windows Home Server

    - by Mysticgeek
    If you’re a Windows Home Server user, you’ve probably come across a lot of cool addins that you can use to enhance its functionality. Today we take a look at Addin Central… which gives you easy access to the many addin possibilities out there. Addin Central from HomeServerLand is a free well…addin…that brings you information and easy access to several other addins that are available for your Windows Home Server. It essentially creates an “app store” of sorts for your Windows Home Server that you can view in the console. Install Addin Central Browse to your shared folders on the server and open the Add-Ins folder and copy the AddinCentral.msi installer (link below). Next open WHS Console from one of the computers connected to your network, and click Settings then Add-ins. Under Available Add-ins click the Available tab and you’ll see the Addin Central installer file we just copied over. Click the Install button. Installation kicks off and when it’s complete, you’ll need to close out of the console and reconnect. Using Addin Central When you reconnect to WHS Console, you’ll see Addin Central in the menu with a list of the newest updated addins. You can go through the list of addins and check out their rating, version, author, amount of downloads, and if they’re free or you need to purchase a license. You can sort through the addins by popularity, author, categories, or look at them all.   When you select an addin you can read more detail about it in the pane on the right side…from here you can go directly to the addin website as well. Hover over the thumbnail of the addin to get a larger screenshot of what it looks like. The toolbox gives you different options for controlling Addin Central, and notice you can easily search for addins too. In the Addin Central Toolbox you can change settings such as the startup view, showing tooltips, and more.   Conclusion If you’re a Windows Home Server user, you’ll definitely want to give Addin Central a go. It lets you easily find and download various addins for essentially anything you might want to use to customize your home server. HomeServerLand calls it “The Mother of All Addins” and for a good reason. Download Addin Central (registration required)

    Read the article

  • Silverlight Cream for June 15, 2010 -- #882

    - by Dave Campbell
    In this Issue: Colin Eberhardt Zoltan Arvai, Marcel du Preez, Mark Tucker, John Papa, Phil Middlemiss, Andy Beaulieu, and Chad Campbell. From SilverlightCream.com: Throttling Silverlight Mouse Events to Keep the UI Responsive Colin Eberhardt sent me this link to his latest at Scott Logic... about how to throttle Silverlight -- no not that, you'd have to go to one of the *other* blogs for that :) ... this is throttling the mouse, particularly the mouse wheel to keep the UI from freezing up ... check out the demos, you'll want to read the code Data Driven Applications with MVVM Part I: The Basics Zoltan Arvai started a series of tutorials on Data-Driven Applications with MVVM at SilverlightShow... this is number 1, and it looks like it's going to be a good series to read. Red-To-Green scale using an IValueConverter Marcel du Preez has an interesting post up at SilverlightShow using an IValueConverter to do a red/yellow/green progress bar ... this is pretty cool. Infragistics XamWebOutlookBar & Caliburn With assistance from Rob Eisenburg, Mark Tucker was able to build a Caliburn sample including the Infragistics XamWebOutlookBar, and he's sharing his experience (and code) with all of us. Printing Tip – Handling User Initiated Dialogs Exceptions John Papa responded to a common printing problem by writing it up in his blog. Note this problem quite often appears during debug, so check it out... John also has a quick tip on an update to the PrintAPI in Silverlight 4. Automatic Rectangle Radius X and Y Phil Middlemiss has another great Blend post up -- this one on rounding off buttons... they look great to me, but he's looking for advice -- how about that Phil? They look great to me :) WP7 Back Button in Games Planning on selling 'stuff' in the Windows Phone Marketplace? Are you familiar with the required use of the Back Button? How about in a game? ... Andy Beaulieu discusses all this and has some code you'll want to use. Windows Phone 7 – Call Phone Number from HyperlinkButton Chad Campbell [no relation :) ] is discussing dialing a number from a hyperlink in WP7 - oh yeah, it's a phone as well :) -- I think I've only seen a number attempt to be called -- hmm... and we're not yet either because we all have emulators, but this is a good intro to the functionality for when we may actually have devices! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Sending Parameters with the BizTalk HTTP Adapter

    - by Christopher House
    I've never had occasion to use the BizTalk HTTP adapter since I've always needed SOAP rather than just POX (plain old XML).  Yesterday we decided that we're going to expose some data via a Java servlet that will accept an HTTP post and respond with POX.  I knew BizTalk had an HTTP adapter but I had no idea what its capabilities were. After a quick read through the BizTalk docs, it was apparent that the HTTP send adapter does in fact do posts.  The concern I had though was how we were going to supply parameters to the servlet.  The examples I had seen using the HTTP adapter all involved posting an XML message to some HTTP location.  Our Java guy, however, didn't want to take that approach.  He wanted us to provide a query string via post, much like you'd expect to see on an HTTP get.  I decided to put together a little test scenario and see what I could come up with.  We didn't have a test servlet I could go against and my Java experience is virtually nil, so I decided to put together an ASP.Net project to act as the servlet.  It didn't need to be fancy, just one HttpHandler that accepts a post, reads a parameter and returns XML.  With the HttpHandler done, I put together a simple orchestration to send a message to the handler.  I started by having the orch send a message of type System.String to see what it would look like when the handler received it. I set a breakpoint in my handler and kicked off the orchestration.  Below is what I saw: As I suspected, because of BizTalk's XML serialization, System.String was not going to work.  I thought back to my BizTalk 2004 days and a project I worked on that required sending HTML formatted emails via the SMTP adapter.  To accomplish that, I had used a .Net class with a custom serialization formatter that I got from a Microsoft sample.  The code for the class, RawString, can be found here. I created a new class library with the RawString class as well as a static factory class, referenced that in my orchestration project and changed my message type from System.String to RawString.  Below is what the code in my message construction looks like: After deploying the updated orchestration, I fired it off again and checked the breakpoint in my HttpHandler.  This is what I saw: And there you have it.  The RawString message type allowed me to pass a query string in the HTTP post without wrapping it in XML.

    Read the article

  • Fix: SqlDeploy Task Fails with NullReferenceException at ExtractPassword

    Still working on getting a TeamCity build working (see my last post).  Latest exception is: C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(120, 5): error MSB4018: The "SqlDeployTask" task failed unexpectedly. System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Data.Schema.Common.ConnectionStringPersistence.ExtractPassword(String partialConnection, String dbProvider) at Microsoft.Data.Schema.Common.ConnectionStringPersistence.RetrieveFullConnection(String partialConnection, String provider, Boolean presentUI, String password) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.ConfigureConnectionString(String connectionString, String databaseName) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.OnBuildConnectionString(String partialConnectionString, String databaseName) at Microsoft.Data.Schema.Build.Deployment.FinishInitialize(String targetConnectionString) at Microsoft.Data.Schema.Build.Deployment.Initialize(FileInfo sourceDbSchemaFile, ErrorManager errors, String targetConnectionString) at Microsoft.Data.Schema.Build.DeploymentConstructor.ConstructServiceImplementation() at Microsoft.Data.Schema.Extensibility.ServiceConstructor'1.ConstructService() at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)   This time searching yielded some good stuff, including this thread that talks about how to resolve this via permissions.  The short answer is that the account that your build server runs under needs to have the necessary permissions in SQL Server.  You'll need to create a Login and then ensure at least the minimum rights are configured as described here: Required Permissions in Database Edition Alternately, you can just make your build server account an admin on the database (which is probably running on the same machine anyway) and at that point it should be able to do whatever it needs to. If you're certain the account has the necessary permissions, but you're still getting the error, the problem may be that the account has never logged into the build server.  In this case, there won't be any entry in the HKCU hive in the registry, which the system is checking for permissions (see this thread).  The solution in this case is quite simple: log into the machine (once is enough) with the build server account.  Then, open Visual Studio (thanks Brendan for the answer in this thread). Summary: Make sure the build service account has the necessary database permissions. Make sure the account has logged into the server so it has the necessary registry hive info. Make sure the account has run Visual Studio at least once so its settings are established. In my case I went through all 3 of these steps before I resolved the problem.

    Read the article

  • Running Windows Phone Developers Tools CTP under VMWare Player - Yes you can! - But do you want to?

    - by Liam Westley
    This blog is the result of a quick investigation of running the Windows Phone Developer Tools CTP under VMWare Player.  The release notes for the Windows Phone Developer Tools CTP mention that it is not supported under VirtualPC or Hyper-V.  Some developers have policies where 'no non-production code' can be installed on their development workstation and so the only way they can use a CTP like this is in a virtual machine. The dilemma here is that the emulator for Windows Phone itself is a virtual machine, and running a virtual machine within another virtual machine is normally frowned upon.  Even worse, previous Windows Mobile emulators detected they were in a virtual machine and refused to run.  Why VMWare? I selected VMWare as a possible solution as it is possible to run VMWare ESXi under VMWare Workstation by manually setting configuration options in the VMX configuration file so that it does not detect the presence of a virtual environment. I actually found that I could use VMWare Player (the free version, which can now create VM images) and that there was no need for any editing of the configuration file (I tried various switches, none of which made any difference to performance). So you can run the CTP under VMWare Player, that's the good news. The bad news is that it is incredibly slow, bordering on unusable.  However, if it's the only way you can use the CTP, at least this is an option. VMWare Player configuration I used the latest VMWare Player, 3.0, running under Windows x64 on my HP 6910p laptop with an Intel T7500 Dual Core CPU running at 2.2GHz, 4Gb of memory and using a separate drive for the virtual machines. I created a machine in VMWare Player with a single CPU, 1536 Mb memory and installed Windows 7 x64 from an ISO image.  I then performed a Windows Update, installed VMWare Tools, and finally the Windows Phone Developer Tools CTP. After a few warnings about performance, I configured Windows 7 to run with the Windows 7 Basic theme rather than use Aero (which is available under VMWare Player as it has a WDDM driver). Timings As a test I first launched Microsoft Visual Studio 2010 Express for Windows Phone, and created a default Windows Phone Application project.  I then clicked the run button, which starts the emulator and then loads the default application onto the emulator. For the second test I left the emulator running, stopped the default application, added a single button to change the page title and redeployed to the already running emulator by clicking the run button. The timings were: Test 1 (first run): VMWare Player 10 minutes, Windows x64 native 1 minute. Test 2 (emulator already running): VMWare Player 1 minute, Windows x64 native under 10 seconds. Conclusion You can run the Windows Phone Developer Tools CTP under VMWare Player, but it's really, really slow and you would have to have very good reasons to try this approach. If you need to keep a development system free of non-production code, and the two systems aren't required to run simultaneously, then I'd consider a boot from VHD option.  Then you can completely isolate the Windows Phone Developer Tools CTP and development environment into a single VHD separate from your main development system.

    Read the article

  • Using MAC Authentication for simple Web API’s consumption

    - by cibrax
    For simple scenarios of Web API consumption where identity delegation is not required, traditional HTTP authentication schemas such as basic, certificates or digest are the most used nowadays. All these schemas rely on sending the caller credentials, or some representation of them, in every request message as part of the Authorization header, so they are prone to phishing attacks if they are not correctly secured at the transport level with https. In addition, most client applications typically authenticate two different things: the caller application and the user consuming the API on behalf of that application. For most cases, the schema is simplified by using a single set of username and password for authenticating both, making it necessary to store those credentials temporarily somewhere in memory. The truth is that you can use two different identities: one for the user running the application, which you might authenticate just once during the first call when the application is initialized, and another identity for the application itself that you use on every call. Some cloud vendors like Windows Azure or Amazon Web Services have adopted a schema to authenticate the caller application based on a Message Authentication Code (MAC) generated with a symmetric algorithm using a key known by the two parties, the caller and the Web API. The caller must include a MAC as part of the Authorization header, created from different pieces of information in the request message such as the address, the host, and some other headers. The Web API can authenticate the caller by using the key associated with it and validating the attached MAC in the request message. In that way, no credentials are sent as part of the request message, so there is no way for an attacker to intercept the message and get access to those credentials. Still, this schema also suffers from some deficiencies that can lead to attacks. For example, brute force can still be used to infer the key used for generating the MAC and impersonate the original caller. This can be mitigated by renewing keys in a relatively short period of time. This schema, like any other, can be complemented with transport security. Eran Hammer, one of the brains behind OAuth, has recently published a specification of a protocol based on MAC for HTTP authentication called Hawk. The initial version of the spec is available here. A curious fact is that a written specification per se does not exist; the specification itself is the code that Eran initially wrote using node.js. In that implementation, you can associate a key with a user, so once the MAC has been verified on the Web API, the user can be inferred from that key. Also, a timestamp is used to avoid replay attacks. As a pet project, I decided to port that code to .NET using ASP.NET Web API, which is also available on GitHub at https://github.com/pcibraro/hawknet Enjoy!
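    To make the MAC idea concrete, here is a minimal sketch of how a caller might compute an HMAC over a few request parts and attach it to the Authorization header. This is not the Hawk protocol itself; the normalized string, header layout and parameter names are simplified assumptions for illustration.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class MacAuthSketch
    {
        // Computes a simplified MAC over a handful of request parts.
        // A real scheme (such as Hawk) defines the exact normalized string, adds a nonce, etc.
        public static string ComputeMac(string key, string method, string host, string path, string timestamp)
        {
            string normalized = string.Join("\n", method, host, path, timestamp);

            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
            {
                byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(normalized));
                return Convert.ToBase64String(hash);
            }
        }

        public static string BuildAuthorizationHeader(string keyId, string key, string method, string host, string path)
        {
            // Seconds since the Unix epoch; the server rejects stale timestamps to limit replays.
            long timestamp = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
            string mac = ComputeMac(key, method, host, path, timestamp.ToString());

            // The server looks up the shared key by id, recomputes the MAC from the same
            // request parts and compares; the key itself never travels with the request.
            return string.Format("MAC id=\"{0}\", ts=\"{1}\", mac=\"{2}\"", keyId, timestamp, mac);
        }
    }

    A server-side handler would parse the header, look up the key for the given id, recompute the MAC over the same request parts and compare the two values before letting the request through.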

    Read the article

  • Haskell's cabal dependency problem with happy

    - by wirrbel
    I have problems installing ghc-mod on my linux machine. cabal worries about "happy" not being available in versione = 1.17: $ cabal install ghc-mod Resolving dependencies... [1 of 1] Compiling Main ( /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/Setup.hs, /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/Main.o ) Linking /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/setup ... Configuring haskell-src-exts-1.14.0... setup: The program happy version =1.17 is required but it could not be found. Failed to install haskell-src-exts-1.14.0 cabal: Error: some packages failed to install: ghc-mod-3.1.3 depends on haskell-src-exts-1.14.0 which failed to install. haskell-src-exts-1.14.0 failed during the configure step. The exception was: ExitFailure 1 hlint-1.8.53 depends on haskell-src-exts-1.14.0 which failed to install. However, it even is installed in v. 1.19, as you can see here: $ cabal install happy Resolving dependencies... [1 of 1] Compiling Main ( /tmp/happy-1.19.0-1124/happy-1.19.0/Setup.lhs, /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/Main.o ) Linking /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/setup ... Configuring happy-1.19.0... Building happy-1.19.0... Preprocessing executable 'happy' for happy-1.19.0... [ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o ) [ 2 of 18] Compiling Target ( src/Target.lhs, dist/build/happy/happy-tmp/Target.o ) [ 3 of 18] Compiling AbsSyn ( src/AbsSyn.lhs, dist/build/happy/happy-tmp/AbsSyn.o ) [ 4 of 18] Compiling ParamRules ( src/ParamRules.hs, dist/build/happy/happy-tmp/ParamRules.o ) [ 5 of 18] Compiling GenUtils ( src/GenUtils.lhs, dist/build/happy/happy-tmp/GenUtils.o ) [ 6 of 18] Compiling ParseMonad ( src/ParseMonad.lhs, dist/build/happy/happy-tmp/ParseMonad.o ) [ 7 of 18] Compiling Lexer ( src/Lexer.lhs, dist/build/happy/happy-tmp/Lexer.o ) [ 8 of 18] Compiling Parser ( dist/build/happy/happy-tmp/Parser.hs, dist/build/happy/happy-tmp/Parser.o ) [ 9 of 18] Compiling AttrGrammar ( src/AttrGrammar.lhs, dist/build/happy/happy-tmp/AttrGrammar.o ) [10 of 18] Compiling AttrGrammarParser ( dist/build/happy/happy-tmp/AttrGrammarParser.hs, dist/build/happy/happy-tmp/AttrGrammarParser.o ) [11 of 18] Compiling Grammar ( src/Grammar.lhs, dist/build/happy/happy-tmp/Grammar.o ) [12 of 18] Compiling First ( src/First.lhs, dist/build/happy/happy-tmp/First.o ) [13 of 18] Compiling LALR ( src/LALR.lhs, dist/build/happy/happy-tmp/LALR.o ) [14 of 18] Compiling Paths_happy ( dist/build/autogen/Paths_happy.hs, dist/build/happy/happy-tmp/Paths_happy.o ) [15 of 18] Compiling ProduceCode ( src/ProduceCode.lhs, dist/build/happy/happy-tmp/ProduceCode.o ) [16 of 18] Compiling ProduceGLRCode ( src/ProduceGLRCode.lhs, dist/build/happy/happy-tmp/ProduceGLRCode.o ) [17 of 18] Compiling Info ( src/Info.lhs, dist/build/happy/happy-tmp/Info.o ) [18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o ) Linking dist/build/happy/happy ... Installing executable(s) in /home/hope/.cabal/bin Installed happy-1.19.0 Any ideas? cabal-install version 1.16.0.2 using version 1.16.0 of the Cabal library

    Read the article

< Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >