Search Results

Search found 71108 results on 2845 pages for 'application project deplo'.

Page 424/2845

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part… Little did I know that my fun was just beginning…

    The Problem

    I'll be honest, I had never really secured a site with SSL before. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I understood a bit about SSL already and the mechanisms of how it works – the secure handshake, CAs, chains, etc… What I didn't realize was the importance of the CSR in the whole process. Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request. When the certificate comes back and you import it into IIS (assuming you used IIS to generate the CSR), all of the information is combined and the SSL certificate is added to your store.

    Since the online interface had been used to generate the CSR when the certificate was ordered for our site, the certificate came back to us in 5 separate files:

    - A root certificate (*.crt file)
    - An intermediate certificate (*.crt file)
    - Another intermediate certificate (*.crt file)
    - The SSL certificate for our site (*.crt file)
    - The private key for our certificate (*.key file)

    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package, securable by a password. Also, in the case of our SSL certificate, you need to include the private key with the file. As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple to one that borders on a circle of hell… probably between the fifth and seventh somewhere…

    The Solution

    The solution is to take the files that make up the certificate chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure. I can't take credit for this information, as I simply researched for a while before finding out how to do this.

    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command: openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command you will get a file, outcert.pfx, containing the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
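    This isn't part of the original post, but once the PFX has been exported it can be handy to confirm the password and the embedded private key from .NET before uploading it. A minimal sketch, with the file name and password as placeholders:

        // Minimal sketch (not from the article): load the exported PFX to confirm the
        // password works and that the private key made it into the package before uploading.
        // "outcert.pfx" and the password below are placeholders for your own values.
        using System;
        using System.Security.Cryptography.X509Certificates;

        class PfxCheck
        {
            static void Main()
            {
                var cert = new X509Certificate2(
                    "outcert.pfx",              // file produced by the openssl command above
                    "your-export-password",     // password chosen during -export
                    X509KeyStorageFlags.Exportable);

                Console.WriteLine("Subject:         " + cert.Subject);
                Console.WriteLine("Has private key: " + cert.HasPrivateKey); // must be True for Azure SSL
                Console.WriteLine("Expires:         " + cert.NotAfter);
            }
        }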
    You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion

    When I first looked around for a solution to this problem, there were not many places online that had the information I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!

    Read the article

  • creating objects in list

    - by prince23
    List myList = new List { new Users{ Name="Kumar", Gender="M", Age=75, Parent="All"}, new Users{ Name="Suresh",Gender="M", Age=50, Parent="Kumar"}, new Users{ Name="Bennette", Gender="F",Age=45, Parent="Kumar"}, new Users{ Name="kian", Gender="M",Age=20, Parent="Suresh"}, new Users{ Name="Nathani", Gender="M",Age=15, Parent="Suresh"}, new Users{ Name="Peter", Gender="M",Age=90, Parent="All"}, new Users{ Name="Mica", Age=56, Gender="M",Parent="Peter"}, new Users{ Name="Linderman", Gender="M",Age=51, Parent="Peter"}, new Users{ Name="john", Age=25, Gender="M",Parent="Mica"}, new Users{ Name="tom", Gender="M",Age=21, Parent="Mica"}, new Users{ Name="Ando", Age=64, Gender="M",Parent="All"}, new Users{ Name="Maya", Age=24, Gender="M",Parent="Ando"}, new Users{ Name="Niki Sanders", Gender="F",Age=2, Parent="Maya"}, new Users{ Name="Angela Patrelli", Gender="F",Age=3, Parent="Maya"}, }; now i need to format the data like here i need to check the parent based on the parent i will be creating objects under the parent objects now { Name="Kumar", Gender="M", Age=75, Parent="All" } this is the top level as a property Parent ="all" now under parent object kumar here we again create a new object to store these information under object(kumar who is the parent) new Users{ Name="Suresh",Gender="M", Age=50, Parent="Kumar"}, new Users{ Name="Bennette", Gender="F",Age=45, Parent="Kumar"}, what i need to do here is i need to check the parent based on the parent create further objects under it ex: i need to achive like this. looking for the syntax how i can do it. public class SampleProjectData { public static ObservableCollection<Product> GetSampleData() { DateTime dtS = DateTime.Now; ObservableCollection<Product> teams = new ObservableCollection<Product>(); teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3), }); Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) }; emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" }); emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" }); teams[0].Projects.Add(emp); emp = new Project() { PName = "Project2", OverallStartTime = dtS + TimeSpan.FromDays(1.5), OverallEndTime = dtS + TimeSpan.FromDays(5.5) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Victor's Task" }); teams[0].Projects.Add(emp); emp = new Project() { PName = "Project3", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(5) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Jason's Task 1" }); emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(7), EndTime = dtS + TimeSpan.FromDays(9), TaskName = "Jason's Task 2" }); teams[0].Projects.Add(emp); teams.Add(new Product() { PDName = "Product2", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) }); emp = new Project() { PName = "Project4", OverallStartTime = dtS + TimeSpan.FromDays(0.5), OverallEndTime = dtS + TimeSpan.FromDays(3.5) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Vicky's Task" }); teams[1].Projects.Add(emp); emp = new Project() { PName = "Project5", OverallStartTime = dtS + 
TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2.2), EndTime = dtS + TimeSpan.FromDays(3.8), TaskName = "Oleg's Task 1" }); emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(6), TaskName = "Oleg's Task 2" }); emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(8), EndTime = dtS + TimeSpan.FromDays(9.6), TaskName = "Oleg's Task 3" }); teams[1].Projects.Add(emp); emp = new Project() { PName = "Project6", OverallStartTime = dtS + TimeSpan.FromDays(2.5), OverallEndTime = dtS + TimeSpan.FromDays(4.5) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(0.8), EndTime = dtS + TimeSpan.FromDays(2), TaskName = "Kim's Task" }); teams[1].Projects.Add(emp); teams.Add(new Product() { PDName = "Product3", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) }); emp = new Project() { PName = "Project7", OverallStartTime = dtS + TimeSpan.FromDays(5), OverallEndTime = dtS + TimeSpan.FromDays(7.5) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Balaji's Task 1" }); emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(8), TaskName = "Balaji's Task 2" }); teams[2].Projects.Add(emp); emp = new Project() { PName = "Project8", OverallStartTime = dtS + TimeSpan.FromDays(3), OverallEndTime = dtS + TimeSpan.FromDays(6.3) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.75), EndTime = dtS + TimeSpan.FromDays(2.25), TaskName = "Li's Task" }); teams[2].Projects.Add(emp); emp = new Project() { PName = "Project9", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) }; emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2), EndTime = dtS + TimeSpan.FromDays(3), TaskName = "Stacy's Task" }); teams[2].Projects.Add(emp); return teams; } } above is an sample data where i am grouping them with static data in the same way i need **to for teh data which is cmg from DB and i need to store them list** all these three data are comig from different services. and i am storing them in a list now i have three tables data Product , Project, Task. all the data are coming from webservies. i have created three list where i am storing the data in list. List<Project>objpro= new List<Project>(); List<Product>objproduct= new List<Product>(); LIst<Task>objTask= new List<Task>(); **now what i need to do is i need to do the mapping between these tables. if you see above. i have object of Product under Product i have added object of Project and then under project object i have added task object.** now from the above data which is stored in the list i need to do the same mapping between class. and group the data. 
public class Product : INotifyPropertyChanged { public Product() { this.Projects = new ObservableCollection<Project>(); } public string PDName { get; set; } public ObservableCollection<Project> Projects { get; set; } private DateTime _st; public DateTime OverallStartTime { get { return _st; } set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("OverallStartTime"); this.OverallEndTime = value + dur; } } } private DateTime _et; public DateTime OverallEndTime { get { return _et; } set { if (this._et != value) { this._et = value; this.OnPropertyChanged("OverallEndTime"); } } } #region INotifyPropertyChanged Members protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); } public event PropertyChangedEventHandler PropertyChanged; #endregion } public class Project : INotifyPropertyChanged { public Project() { this.Tasks = new ObservableCollection<Task>(); } public string PName { get; set; } public ObservableCollection<Task> Tasks { get; set; } DateTime _st; public DateTime OverallStartTime { get { return _st; } set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("OverallStartTime"); this.OverallEndTime = value + dur; } } } DateTime _et; public DateTime OverallEndTime { get { return _et; } set { if (this._et != value) { this._et = value; this.OnPropertyChanged("OverallEndTime"); } } } #region INotifyPropertyChanged Members protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); } public event PropertyChangedEventHandler PropertyChanged; #endregion } public class Task : INotifyPropertyChanged { public string TaskName { get; set; } DateTime _st; public DateTime StartTime { get { return _st; } set { if (this._st != value) { TimeSpan dur = this._et - this._st; this._st = value; this.OnPropertyChanged("StartTime"); this.EndTime = value + dur; } } } private DateTime _et; public DateTime EndTime { get { return _et; } set { if (this._et != value) { this._et = value; this.OnPropertyChanged("EndTime"); } } } #region INotifyPropertyChanged Members protected void OnPropertyChanged(string name) { if (this.PropertyChanged != null) this.PropertyChanged(this, new PropertyChangedEventArgs(name)); } public event PropertyChangedEventHandler PropertyChanged; #endregion } hope my question is clear
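    Not part of the original question, but here is a minimal sketch of one way to build the parent/child grouping described above from the flat Users list. The UserNode wrapper and its Children list are hypothetical additions introduced only for this illustration; the same dictionary-lookup idea can be applied to link the Product, Project and Task lists on whatever key fields relate them in the database.

        using System.Collections.Generic;
        using System.Linq;

        // Inferred from the sample data in the question.
        public class Users
        {
            public string Name { get; set; }
            public string Gender { get; set; }
            public int Age { get; set; }
            public string Parent { get; set; }
        }

        // Hypothetical wrapper type used only for this sketch.
        public class UserNode
        {
            public UserNode() { Children = new List<UserNode>(); }
            public Users Value { get; set; }
            public List<UserNode> Children { get; private set; }
        }

        public static class UserTreeBuilder
        {
            public static List<UserNode> Build(IEnumerable<Users> flatList)
            {
                // Wrap every user in a node and index the nodes by name for quick parent lookup.
                var nodesByName = flatList.ToDictionary(u => u.Name, u => new UserNode { Value = u });

                var roots = new List<UserNode>();
                foreach (var node in nodesByName.Values)
                {
                    UserNode parent;
                    if (node.Value.Parent == "All")
                    {
                        // "All" marks a top-level user.
                        roots.Add(node);
                    }
                    else if (nodesByName.TryGetValue(node.Value.Parent, out parent))
                    {
                        // Otherwise attach the node under its parent's object.
                        parent.Children.Add(node);
                    }
                }
                return roots;
            }
        }

    With the sample data above, calling UserTreeBuilder.Build(myList) would return Kumar, Peter and Ando as roots, each with their children (and grandchildren) nested beneath them.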

    Read the article

  • How to use EJB 3.1 DI in Servlet ? (Could not inject session bean by @EJB from web application)

    - by kislo_metal
    I am tying to merging web application(gwt, jpa) to an separate 2 application(business login in ejb/jpa and web client in gwt). Currently i can`t inject my beans from web application (simple servlet) I am using glassfish v3. module limbo(ejb jar) is in dependency of module lust (war). If I use lust with compiler output of limbo everything work perfect (if ejb in web application and the are deploying together as one application). Have I messed some container configuration ? Here is my steps: I have some limbo.jar (ejb-jar) deployed to ejb container. I do not use any ejb-jar.xml, only annotations. package ua.co.inferno.limbo.persistence.beans; import javax.ejb.Remote; @Remote public interface IPersistentServiceRemote { ArrayList<String> getTerminalACPList(); ArrayList<String> getBoxACPList(); ArrayList<String> getCNPList(); ArrayList<String> getCNSList(); String getProductNamebyID(int boxid); ArrayList<String> getRegionsList(String lang); long getSequence(); void persistEntity (Object ent); } package ua.co.inferno.limbo.persistence.beans; import ua.co.inferno.limbo.persistence.entitis.EsetChSchemaEntity; import ua.co.inferno.limbo.persistence.entitis.EsetKeyActionsEntity; @Local public interface IPersistentService { ArrayList<String> getTerminalACPList(); ArrayList<String> getBoxACPList(); ArrayList<String> getCNPList(); ArrayList<String> getCNSList(); String getProductNamebyID(int boxid); ArrayList<String> getRegionsList(String lang); long getSequence(); long persistPurchaseBox(EsetRegPurchaserEntity rp); void removePurchaseTempBox(EsetRegPurchaserTempEntity rpt); EsetRegionsEntity getRegionsById(long rid); void persistEntity (Object ent); } package ua.co.inferno.limbo.persistence.beans; import ua.co.inferno.limbo.persistence.entitis.EsetChSchemaEntity; import ua.co.inferno.limbo.persistence.entitis.EsetKeyActionsEntity; import ua.co.inferno.limbo.persistence.entitis.EsetRegBoxesEntity; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; @Stateless(name = "PersistentService") public class PersistentServiceEJB implements IPersistentService, IPersistentServiceRemote{ @PersistenceContext(unitName = "Limbo") EntityManager em; public PersistentServiceEJB() { } ......... 
} Than i trying to use PersistentService session bean(included in limbo.jar) from web application in lust.war (the limbo.jar & lust.war is not in ear) package ua.co.lust; import ua.co.inferno.limbo.persistence.beans.IPersistentService; import javax.ejb.EJB; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; @WebServlet(name = "ServletTest", urlPatterns = {"/"}) public class ServletTest extends HttpServlet { protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { service(request, response); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { service(request, response); } @EJB private IPersistentService pService; public void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String hi = pService.getCNPList().toString(); System.out.println("testBean.hello method returned: " + hi); System.out.println("In MyServlet::init()"); System.out.println("all regions" + pService.getRegionsList("ua")); System.out.println("all regions" + pService.getBoxACPList()); } } web.xm <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0"> <servlet> <servlet-name>ServletTest</servlet-name> <servlet-class>ua.co.lust.ServletTest</servlet-class> </servlet> </web-app> When servelt is loading i ge 404 eror (The requested resource () is not available.) And errors in logs : global Log Level SEVERE Logger global Name-Value Pairs {_ThreadName=Thread-1, _ThreadID=31} Record Number 1421 Message ID Complete Message Class [ Lua/co/inferno/limbo/persistence/beans/IPersistentService; ] not found. Error while loading [ class ua.co.lust.ServletTest ] javax.enterprise.system.tools.deployment.org.glassfish.deployment.common Log Level WARNING Logger javax.enterprise.system.tools.deployment.org.glassfish.deployment.common Name-Value Pairs {_ThreadName=Thread-1, _ThreadID=31} Record Number 1422 Message ID Error in annotation processing Complete Message java.lang.NoClassDefFoundError: Lua/co/inferno/limbo/persistence/beans/IPersistentService; ejb jar was deployed with this info log : Log Level INFO Logger javax.enterprise.system.container.ejb.com.sun.ejb.containers Name-Value Pairs {_ThreadName=Thread-1, _ThreadID=26} Record Number 1436 Message ID Glassfish-specific (Non-portable) JNDI names for EJB PersistentService Complete Message [ua.co.inferno.limbo.persistence.beans.IPersistentServiceRemote#ua.co.inferno.limbo.persistence.beans.IPersistentServiceRemote, ua.co.inferno.limbo.persistence.beans.IPersistentServiceRemote] Log Level INFO Logger javax.enterprise.system.tools.admin.org.glassfish.deployment.admin Name-Value Pairs {_ThreadName=Thread-1, _ThreadID=26} Record Number 1445 Message ID Complete Message limbo was successfully deployed in 610 milliseconds. Do i nee to add some additional configuration in a case of injections from others application? Some ideas?

    Read the article

  • After exiting, the home folder cannot be visited?

    - by Keating Wang
    I ssh to an Ubuntu server and start a web project (Rails), and I can then visit the project. When I close the ssh terminal, the project reports that it cannot find files (view pages, CSS files and so on). The project is in my home folder (/home/byht). Why can the user's home folder no longer be accessed after the ssh terminal is closed? When I put the project in another folder (/usr/local), everything works fine.

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before it can be challenging at first until you know a few tricks of the trade.  Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content.  In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications. Displaying HTML Overlays If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <param name="windowless" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object> By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control: <style type="text/css"> a {text-decoration:none;font-weight:bold;font-size:14pt;} </style> <div style="margin-top:10px; margin-left:10px;margin-right:5px;"> <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource"> <ItemTemplate> <a href='<%# XPath("link") %>'><%# XPath("title") %></a> <br /> <%# XPath("description") %> <br /> </ItemTemplate> </asp:DataList> <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx" XPath="rss/channel/item" CacheDuration="60" runat="server" /> </div> The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control. 
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be display on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Siverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:   Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop-up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and when done right can make the user feel like they never left the Silverlight application. 
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application: <div id="JobPlanDiv"> <div style="vertical-align:middle"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" /> </div> <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div> </div> The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads due to the following ID selector added into the page: #JobPlanDiv { position:absolute; background-color:#484848; overflow:hidden; left:0; top:0; height:100%; width:100%; display:none; } When the HTML content needs to be shown or hidden the JavaScript functions shown next can be used: var jobPlanIFrameID = 'JobPlan_IFrame'; var slHost = null; var jobPlanContainer = null; var jobPlanIFrameContainer = null; var rssDiv = null; $(document).ready(function () { slHost = $('#silverlightControlHost'); jobPlanContainer = $('#JobPlanDiv'); jobPlanIFrameContainer = $('#JobPlan_IFrame_Container'); rssDiv = $('#RSSDiv'); }); function ShowJobPlanIFrame(url) { jobPlanContainer.css('display', 'block'); $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />') .appendTo(jobPlanIFrameContainer); slHost.css('width', '0%'); } function HideJobPlanIFrame() { jobPlanContainer.css('display', 'none'); $('#' + jobPlanIFrameID).remove(); slHost.css('width', '100%'); } ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div’s display or visibility properties. Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report” the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task: private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e) { ShowBrowser(_BaseUrl + "/Report.aspx"); } public void ShowBrowser(string url) { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } Any URL can be passed into the ShowBrowser() method which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com) which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:   Silverlight 4’s WebBrowser Control Both techniques shown to this point work well when Silverlight is running in-browser but not so well when it’s running out-of-browser since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser. 
Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens. <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed"> <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Border Background="#484848" HorizontalAlignment="Stretch" Height="40"> <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right" Source="/HTMLAndSilverlight;component/Assets/Images/Close.png" MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" /> </Border> <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </StackPanel> </Grid> Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event. void MainPage_SizeChanged(object sender, SizeChangedEventArgs e) { WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight; WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth; } When the user wants to view HTML content they click a button which executes the code shown in next: public void ShowBrowser(string url) { if (Application.Current.IsRunningOutOfBrowser) { JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url + "' style='width:100%;height:97%;' /></body></html>"); WebBrowserGrid.Visibility = Visibility.Visible; } else { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } } private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { WebBrowserGrid.Visibility = Visibility.Collapsed; }   Looking through the code you’ll see that it checks to see if the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application. Here’s an example of viewing  PDF file inside of an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton.  The second image shows the PDF being displayed.   While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.   
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • Visual Studio 2010 SP1 Beta supports IIS Express

    - by DigiMortal
    Visual Studio 2010 SP1 Beta and ASP.NET MVC 3 RC2 were both announced today. I ran a little test on one of my web applications to see how Visual Studio 2010 works with IIS Express. In this posting I will show you how to make your ASP.NET MVC 3 application work with IIS Express.

    Installing new stuff

    You can install IIS Express using the Web Platform Installer. It is not part of WebMatrix anymore, so you can install IIS Express without WebMatrix. NB! You have to install IIS Express using the Web Platform Installer because IIS Express is not installed by SP1. After installing Visual Studio 2010 SP1 Beta on my machine (it took a long, long time to install) I also installed ASP.NET MVC 3 RC2. If you have the Async CTP installed on your machine you have to uninstall it to get ASP.NET MVC 3 RC2 installed and running without problems. The screenshot on the right shows what kind of horrors my old laptop had to survive to get all the new stuff installed.

    Setting IIS Express as the server for a web application

    Now, when you right-click on a web project you should see a new item in the context menu – Use IIS Express…. If you click on it you are asked for confirmation, and if you say Yes your web application is reconfigured to use IIS Express. After configuration you will see a dialog box like this. And you are done. You can run your application now.

    Running the web application

    When you run your application it runs on IIS Express. You can see the IIS Express icon on the taskbar, and when you click it you can open the IIS Express settings. If you closed your application in the browser you can open it again from the IIS Express icon.

    Modifying IIS Express settings for a web application

    You can modify the IIS Express settings for your application. Just open your project properties and move to the Web tab. IIS and IIS Express use the same settings; the difference is whether the Use IIS Express checkbox is checked or not.

    Switching back to Visual Studio Development Server

    If you don't want to, or can't, use IIS Express for some reason you can easily switch back to the Visual Studio Development Server. Just right-click on your web application project and select Use Visual Studio Development Server from the context menu.

    Conclusion

    IIS Express is more independent than the full version of IIS and it can also be installed and run on machines with very strict rules (some corporate and academic environments, for example). IIS Express was previously part of the WebMatrix package, but it is now a separate product and Visual Studio 2010 has very nice support for it thanks to SP1. You can easily make your web applications use IIS Express, and if you want to switch back to the development server that is also very easy.

    Read the article

  • How to embed evince in firefox 4?

    - by Alaukik
    I installed mozplugger and created the mozpluggerrc file with the following content, according to this post. But whenever I open a .pdf it opens in a separate evince window. Is there a way I can truly embed it in Firefox, like the Chrome PDF reader?

        application/pdf: pdf: PDF file
        application/x-pdf: pdf: PDF file
        text/pdf: pdf: PDF file
        text/x-pdf: pdf: PDF file
        application/x-postscript: ps: PostScript file
        application/postscript: ps: PostScript file
        application/x-dvi: dvi: DVI file
            : evince $file

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient, as it was introduced in 1997 when Internet Explorer 4 was introduced. HTML Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer that internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in a locking down of the Web Browser control in Windows and also the Help Engine, which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files, CHM files still seem to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files, and there is still a ton of CHM help out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

    Image is Everything and you ain't got it!

    One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3 you might have an HTML Help topic like the following, shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS3. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+, etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone; although the page display and functionality still work, all the extra styling features are gone. This even though I am running this on a Windows 7 machine with IE9 that should be able to render these CSS features. Bummer.

    Web Browser Control - perpetually stuck in IE 7 Mode

    The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default for engine compatibility reasons. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component. If the application is older than the specified version it falls back to the default version (IE 7 rendering).

    Forcing CHM files to display with IE9 (or later) Rendering

    Knowing that we can force the IE usage for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways that can launch the help engine.

    - hh.exe - The standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
    - YourApplication.exe - If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is your host. You should add your application's exe to the registry during application startup.
    - foxhhelp9.exe - If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

    What to set

    You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

    32 bit only or 64 bit:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    32 bit on 64 bit machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    Note that it's best to always set both values when you install your application so it works regardless of which platform you run on. The value specified is a DWORD value, and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features.
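    This isn't from the post itself, but for the YourApplication.exe case described above, a minimal C# sketch of writing these keys at startup might look like the following. Note that writing to HKLM requires elevated rights, which is one reason doing it from an installer (as shown next with Tarma Installmate) is often more practical.

        using System.Diagnostics;
        using Microsoft.Win32;

        static class BrowserEmulation
        {
            // Call once at application startup (or run the equivalent from your installer).
            public static void Register()
            {
                // Name of the executable hosting the web browser / CHM engine, e.g. "hh.exe" or your app's exe.
                string exeName = Process.GetCurrentProcess().MainModule.ModuleName;

                // 9000 = IE9 rendering that still respects !DOCTYPE; 9999 would force IE9 standards mode.
                const int ie9Emulation = 9000;

                string[] keys =
                {
                    @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION",
                    @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION"
                };

                // Set both the native and the Wow6432Node locations, as recommended above.
                foreach (string key in keys)
                {
                    Registry.SetValue(key, exeName, ie9Emulation, RegistryValueKind.DWord);
                }
            }
        }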
    Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, now that I've set the registry key for hh.exe I can launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display:

    Summary

    It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it's possible to make it work after all. Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source) you can now side step the issue of your customers asking: Why does my help file look so much shittier than the online help… No more!

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally Node.js should retrieve the endpoint itself, but I can tell you that will not work with this approach. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can read the cloud service configuration from Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js through the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • TFS, G.I. Joe and Under-doing

    If I were to rank the most consistently irritating parts of my work day, using TFS would come in first by a wide margin. Even repeated network outages this week seem like a pleasant reprieve from this monolithic beast. This is not a reflexive anti-Microsoft feeling; that attitude just wouldn't work for a consultant who does .NET development. It is also not an utter dismissal of TFS as worthless; I've seen people use it effectively on several projects. So why? I'll start with a laundry list of shortcomings: an out-of-the-box UI for work items that is insultingly bad, a source control system that is confoundingly fragile when handling merges, folder renames and long file names, the arcane XML wizardry necessary to customize a template, and a build system that adds an extra layer of oddness on top of msbuild. I'm sure my legion of readers will soon point out to me how I can work around all these issues, how this is fixed in TFS 2010 or with this add-in, and how once you have everything set up, you're fine. And they'd be right; any one of these problems could be worked around. If not dirty laundry, what else? I thought about it for a while, and came to the conclusion that TFS is so irritating to me because it represents a vision of software development that I find unappealing. To expand upon this, let's start with some wisdom from those great PSAs at the end of the G.I. Joe cartoons of the 80s: "Now you know, and knowing is half the battle." In software development, I'd go further and say knowing is more than half the battle. Understanding the dimensions of the problem you are trying to solve, the needs of the users, and the value that your software can provide are more than half the battle. Implementation of this understanding is not easy, but it is not even possible without this knowledge. Assuming we have a fixed amount of time and mental energy for any project, why does this spell trouble for TFS? If you think about what TFS is doing, it's offering you a huge array of options to track the day-to-day implementation of your project, from tasks, to code churn, to test coverage. All valuable metrics, but only in exchange for valuable time to get it all working. In addition, when you have a shiny toy like TFS, the temptation is to feel obligated to use it. So the push from TFS is to encourage a project manager and team to focus on process and metrics around process. You can get great visibility, and graphs to show your project stakeholders, but none of that is important if you are not implementing the right product. Not just unimportant: these activities can be harmful as they drain your time and sap your creativity away from the rest of the project. To be more concrete, let's suppose your organization has invested the time to create a template for your projects and trained people in how to use it, so there is no longer a big investment of time for each project to get up and running. First, I'd challenge whether that template could be specific enough to be full featured and still applicable for any project. Second, the very existence of this template would be an indication to a project manager that the success of their project was somehow directly related to fitting management of that project into this format. Again, while the capabilities are wonderful, the mirage is there: just get everything into TFS and your project will run smoothly. I'll close the loop on this first topic by proposing a thought experiment. Think of the projects you've worked on.

How many times have you been chagrined to discover you've implemented the wrong feature, misunderstood how a feature should work, or just plain spent too much time on a screen that nobody uses? That sounds like a really worthwhile area to invest time in improving. How about going back to these projects and thinking about how many times you wished you had optimized the state change flow of your tasks, or been embarrassed to not have a code churn report linked back to the latest changeset? With thanks to the Real American Heroes, I'll move on to a more current influence, that of the developers at 37signals, and their philosophy towards software development. This philosophy, fully detailed in the books Getting Real and Rework, is a vision of software that under-does the competition. This is software that is deliberately limited in functionality in order to concentrate fully on making sure every feature that is there is awesome and needed. Why is this relevant? Well, in one of those fun-seeming paradoxes in life, constraints can be a spark for creativity. Think Twitter, the small screen of an iPhone, the limitations of HTML for applications, the low memory limits of older or embedded systems. As long as there is some freedom within those constraints, amazing things emerge. For project management, some of the most respected people in the industry recommend using just index cards, pens and tape. They argue that with change the constant in software development, your process should be as limited (yet rigorous) as possible. Looking at TFS, this is not a system designed to under-do anybody. It is a big jumble of components and options, with every feature you could think of. Predictably this means many basic functions are hard to use. For task management, many people just use an Excel spreadsheet linked up to TFS. Not a stirring endorsement of the tooling there. TFS as a whole would be far more appealing to me if there was less of it, but better. I'd cut 50% of the features to make the other half really amaze and inspire me. And that's really the heart of the matter. TFS has great promise and I want to believe it can work better. But ultimately it focuses your attention on a lot of stuff that doesn't really matter and then clamps down your creativity in a mess of forms and dialogs, obscuring what does.

--- Relevant Links ---

All those great G.I. Joe PSAs are on YouTube, including lots of mashed-up versions. A simple Google search will get you on the right track.

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX has probably managed to capture the eye of developers even better, and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers. To sum it up, with APEX from the Oracle Cloud Database Service:

- you get the development environment up and running in minutes
- no involvement from the internal IT department is required (not for infrastructure, platform, or development)
- superior availability and scalability are offered by Oracle
- users from anywhere in the world can be invited to access the application
- developers from anywhere in the world can participate in creating and maintaining the application

In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on-premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended, equipment is needed, power is consumed, the database needs to be kept running, and if Oracle Database XE does not suffice, software licenses are required as well. And this setup always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same.
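As mentioned above, the REST-ful services exposed by an APEX application allow programmatic interaction with the underlying database from any REST-capable client. As a rough, hedged illustration only (the service URL, resource path, and the choice of WebClient below are assumptions made for this sketch, not details from the article), consuming such a service from a .NET client could look roughly like this:

using System;
using System.Net;

class ApexRestClientSample
{
    static void Main()
    {
        // Hypothetical RESTful service URI exposed by an APEX application;
        // the host and resource path are placeholders, not real endpoints.
        var uri = "https://myservice.db.cloud.oracle.com/apex/hr/employees/";

        using (var client = new WebClient())
        {
            // Ask for JSON, which is what APEX RESTful services typically return.
            client.Headers[HttpRequestHeader.Accept] = "application/json";

            // Download the payload and simply dump it to the console.
            string json = client.DownloadString(uri);
            Console.WriteLine(json);
        }
    }
}

The returned JSON could then be deserialized with any JSON library and, for instance, pushed into an on-premise database as part of the synchronization scenario described above.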
Getting tailored applications or applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment. My misunderstanding For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys- I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the  business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn into action right away. The business does not have to build the application - it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades and life cycle management and integration. To get applications up and running quickly and start turning ideas into action and results rightaway. That is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP  development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (that is not including several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous life times. They fit into a trend of agile development and rapid life cycle management, with fairly light weight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced. APEX and ADF – a match made in heaven?! (or at least in the sky) Note that using APEX only for cloud based database with REST-ful Services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application for example that runs aginst REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the Cloud based APEX powered REST-ful services running against the Oracle Cloud Database Service! 
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in – allowing for a cloud based mobile app: Want to learn more about Oracle Database Cloud Service or Oracle Cloud, just visit cloud.oracle.com  or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure “Web Roles” I wanted an easier way to deploy new versions to the Azure Staging environment as well as a reliable process to rollback deployments to a certain “known good” source control commit checkpoint.  By configuring our JetBrains’ TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I’ll show you how to take control of your Azure publish process. Step 0: Configuring your Azure Project in Visual Studio Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly.  Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows: Install the prerequisite Windows Azure SDK Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type) Configure your Role and Service Configuration/Definition as desired Right-click on your azure project and choose “Publish,” create a publish profile, and push to your web role You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly.  Once your Windows Azure project is setup correctly, we are ready to move on to understanding the Azure Publish process. Understanding the Azure Publish Process The actual Windows Azure project is fairly simple at its core—it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files: ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information. ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files. When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/.  If you want to build your Azure Project from the command line, it’s as simple as calling MSBuild on the “Publish” target: msbuild.exe /target:Publish Windows Azure PowerShell Cmdlets The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).  These cmdlets are what will let us create deployments without Visual Studio or other user intervention. Preparing TeamCity for Azure Deployments Now we are ready to get our TeamCity server setup so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough, just go to https://www.windowsazure.com/en-us/develop/net/ and click “Install” Once this SDK is installed, I recommend running a test build to make sure your project is building correctly.  You’ll want to setup your build step using MSBuild with the “Publish” target against your solution file.  Mine looks like this: Assuming the build was successful, you will now have the two *.cspkg and *cscfg files within your build directory.  
If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell Cmdlets: Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell After installing the Cmdlets, you’ll need to get your Azure Subscription Info using the Get-AzurePublishSettingsFile command. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script. Scripting the CI Deploy Process Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner.  Before we look at any code, here’s a breakdown of our deployment algorithm: Setup your variables, including the location of the *.cspkg and *cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/ Import the Windows Azure PowerShell Cmdlets Import and set your Azure Subscription information (this is basically your authentication/authorization step, so protect your settings file Now look for a current deployment, and if you find one Upgrade it, else Create a new deployment Pretty simple and straightforward.  Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398): $subscription = "[Your Subscription Name]" $service = "[Your Azure Service Name]" $slot = "staging" #staging or production $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg" $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg" $timeStampFormat = "g" $deploymentLabel = "ContinuousDeploy to $service v%build.number%"   Write-Output "Running Azure Imports" Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1" Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings" Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription   function Publish(){ $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue   if ($a[0] -ne $null) { Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. " } if ($deployment.Name -ne $null) { #Update deployment inplace (usually faster, cheaper, won't destroy VIP) Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename. Upgrading deployment." 
UpgradeDeployment } else { CreateNewDeployment } }   function CreateNewDeployment() { write-progress -id 3 -activity "Creating New Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"   $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID" }   function UpgradeDeployment() { write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"   # perform Update-Deployment $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID" }   Write-Output "Create Azure Deployment" Publish   Creating the TeamCity Build Step The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell”.  Then set your script to “Source Code,” the script execution mode to “Put script into PowerShell stdin with “-Command” arguments” and then copy/paste in the above script (replacing the placeholder sections with your values).  This should look like the following:   Wrap Up After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity.  You can configure this step to run whenever you’d like using build triggers – for example, you could even deploy whenever a new master branch deploy comes in and passes all required tests. In the script I’ve hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even setup a deployment configuration variable and set it as desired. After your TeamCity Build Configuration is complete, you’ll see something that looks like this: Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run.”  This will bring up a dialog like the one below, where you can select the last change to use for your deployment.  Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.   Enjoy!

    Read the article

  • How to use the Netduino Go Piezo Buzzer Module

    - by Chris Hammond
    Originally posted on ChrisHammond.com Over the next couple of days people should be receiving their Netduino Go Piezo Buzzer Modules , at least if they have ordered them from Amazon. I was lucky enough to get mine very quickly from Amazon and put together a sample project the other night. This is by no means a complex project, and most of it is code from the public domain for projects based on the original Netduino. Project Overview So what does the project do? Essentially it plays 3 “tunes” that...(read more)

    Read the article

  • Creating an SMF service for mercurial web server

    - by Chris W Beal
    I'm working on a project at the moment, which has a number of contributers. We're managing the project gate (which is stand alone) with mercurial. We want to have an easy way of seeing the changelog, so we can show management what is going on.  Luckily mercurial provides a basic web server which allows you to see the changes, and drill in to change sets. This can be run as a daemon, but as it was running on our build server, every time it was rebooted, someone needed to remember to start the process again. This is of course a classic usage of SMF. Now I'm not an experienced person at writing SMF services, so it took me 1/2 an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively. Taking a step back, the command to start the mercurial web server is $ hg serve -p <port number> -d So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components. The manifest Usually lives in /var/svc/manifest somewhere Can be imported from any location The method Usually live in /lib/svc/method I simply put the script straight in that directory. Not very repeatable, but it worked Can take an argument of start, stop, or refresh Lets start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be by instance, that is to say I could have multiple hg serve processes handling different mercurial projects, on different ports simultaneously Here is the manifest I wrote. I stole extensively from the examples in the Documentation. So my manifest looks like this $ cat hg-serve.xml <?xml version="1.0"?> <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type='manifest' name='hg-serve'> <service name='application/network/hg-serve' type='service' version='1'> <dependency name='network' grouping='require_all' restart_on='none' type='service'> <service_fmri value='svc:/milestone/network:default' /> </dependency> <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' /> <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'> </exec_method> <instance name='project-gate' enabled='true'> <method_context> <method_credential user='root' group='root' /> </method_context> <property_group name='hg-serve' type='application'> <propval name='path' type='astring' value='/src/project-gate'/> <propval name='port' type='astring' value='9998' /> </property_group> </instance> <stability value='Evolving' /> <template> <common_name> <loctext xml:lang='C'>hg-serve</loctext> </common_name> <documentation> <manpage title='hg' section='1' /> </documentation> </template> </service> </service_bundle> So the only things I had to decide on in this are the service name "application/network/hg-serve" the start and stop methods (more of which later) and the properties. This is the information I need to pass to the start method script. In my case the port I want to start the web server on "9998", and the path to the source gate "/src/project-gate". These can be read in to the start method. So now lets look at the method scripts $ cat /lib/svc/method/hg-serve #!/sbin/sh # # # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. # # Standard prolog # . /lib/svc/share/smf_include.sh if [ -z $SMF_FMRI ]; then echo "SMF framework variables are not initialized." 
exit $SMF_EXIT_ERR fi # # Build the command line flags # # Get the port and directory from the SMF properties port=`svcprop -c -p hg-serve/port $SMF_FMRI` dir=`svcprop -c -p hg-serve/path $SMF_FMRI` echo "$1" case "$1" in 'start') cd $dir /usr/bin/hg serve -d -p $port ;; *) echo "Usage: $0 {start|refresh|stop}" exit 1 ;; esac exit $SMF_EXIT_OK This is all pretty self explanatory, we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use "exec=':kill'for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest # svccfg verify /path/to/hg-serve.xml If that doesn't give an error try importing it # svccfg import /path/to/hg-serve.xml If like me you originally put the hg-serve.xml file in /var/svc/manifest somewhere you'll get an error and told to restart the import service svccfg: Restarting svc:/system/manifest-import The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import # svcadm restart svc:/system/manifest-import and you're nearly done. You can look at the service using svcs -l # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled false state disabled next_state none state_time Thu May 31 16:11:47 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15749 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) And look at the interesting properties # svcprop hg-serve hg-serve/path astring /src/project-gate hg-serve/port astring 9998 ...stuff deleted.... Then simply enable the service and if every things gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity. # svcadm enable hg-serve # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled true state online next_state none state_time Thu May 31 16:18:11 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15858 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) None of this is rocket science, but a bit fiddly. Hence I thought I'd blog it. It might just be you see this in google and it clicks with you more than one of the many other blogs or how tos about it. Plus I can always refer back to it myself in 3 weeks, when I want to add another project to the server, and I've forgotten how to do it.

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We here at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle; however, there are many things you can do to keep the project from going off track. A specific timeline is a very important statistic for time management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the aspects of a project, and the better you get at developing future estimates and timelines.

Make a Structure
When estimating the time required to accomplish each task, consider which team members will be involved and assign the amount of time each individual must put into the project. Define scope and dependencies, and set deadlines for accomplishing them. Sometimes working in phases or modules helps you do more in less time. Use a project management tool to systematize collaboration between team members.

Realistic Goal Setting
One approach is to keep a buffer of a few days to deal with the delays, errors, and incorrect-coding issues you are likely to encounter along the way. It is entirely realistic to keep the delivery date quoted to the client different from the internal delivery timeline. If a resource is having a hard time finishing a task in the time specified, keep some room to give him or her a day or two extra to accomplish it. This does not upset client delivery and is the safe way of running projects.

Keep an Insightful Approach
Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand status reports. Learning from past experience is very important: keep track of the actual time spent on all aspects of your projects, and this will help you create better future estimates and timelines.

    Read the article

  • How to set up secure cookie on weblogic server

    - by adejuanc
    WebLogic Server allows a user to securely access HTTPS resources in a session that was initiated using HTTP, without loss of session data. To enable this feature, add AuthCookieEnabled="true" to the WebServer element in config.xml: <WebServer Name="myserver" AuthCookieEnabled="true"/>. Setting AuthCookieEnabled to true, which is the default setting, causes the WebLogic Server instance to send a new secure cookie, _WL_AUTHCOOKIE_JSESSIONID, to the browser when authenticating via an HTTPS connection. Once the secure cookie is set, the session is allowed to access other security-constrained HTTPS resources only if the cookie is sent from the browser.

Thus, WebLogic Server uses two cookies: the JSESSIONID cookie and the _WL_AUTHCOOKIE_JSESSIONID cookie. By default, the JSESSIONID cookie is never secure, but the _WL_AUTHCOOKIE_JSESSIONID cookie is always secure. A secure cookie is only sent when an encrypted communication channel is in use. Assuming a standard HTTPS login (HTTPS is an encrypted HTTP connection), your browser gets both cookies. For subsequent HTTP access, you are considered authenticated if you have a valid JSESSIONID cookie, but for HTTPS access, you must have both cookies to be considered authenticated. If you only have the JSESSIONID cookie, you must re-authenticate.

To configure this in the Admin Console:
- Log into the WebLogic Admin Console.
- Under Domain Structure, click on <domainname>.
- Select the "Web Applications" tab.
- Select "Lock and Edit" in the Change Center.
- Check the "Auth Cookie Enabled" checkbox.
- Restart to confirm the changes.
- Test an application and view the cookie, which is stored as "JSESSIONID".

To configure the web application's weblogic-application.xml file:
- Run the following to extract weblogic-application.xml from the web application's archive: $PATH_JDK_HOME\bin\jar -xvf easy-web-examples.ear META-INF/weblogic-application.xml
- Add <cookie-secure>true</cookie-secure> between <session-descriptor> and </session-descriptor> in weblogic-application.xml.
- Run the following to repackage the file into the application: $PATH_JDK_HOME\bin\jar -uvf easy-web-examples.ear META-INF/weblogic-application.xml
- Deploy the application into WebLogic.

For further information, please read the documentation on "Using Secure Cookies to Prevent Session Stealing": http://download.oracle.com/docs/cd/E12840_01/wls/docs103/security/thin_client.html#wp1053780

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental Blog series.  Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success.  Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle.  Today, I will discuss project planning and how to effectively plan your iterations. Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones.  In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan.   In a project setting, the Implementation Plan is a high level plan that focuses on milestones, objectives, and the number of iterations.  It is the plan that is typically developed at the start of an engagement identifying the project phases and milestones.  When the iterative and incremental principle is applied, the Implementation Plan also identified the number of iterations planned for each phase.  The implementation plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration start during Iteration Planning.  An individual iteration plan is created for each project iteration. For my fitness regime, I also created an “Implementation Plan” for my weekly exercise.   My high level plan included exercising 6 days a week, and since I cross train, trying not to repeat the same exercise two days in a row.  Because running on the hills outside is the most difficult and consequently, the most effective exercise, my implementation plan includes running outside at least 2 times a week.   Regardless of the exercise selected, I always apply a series of 6-minute interval sets.  I never plan what I will do each day in advance because there are too many changing factors that need to be considered before that level of detail is determined.  If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level.  It is unrealistic to plan each day of the week without considering the unique circumstances at that time.  For example, what is the weather?  Are there are conflicting schedule commitments?  Are there injuries that need to be considered?  Likewise, in a project setting, it is best to plan for the iteration details prior to its start. Join me for tomorrow’s blog where I will discuss when and how to plan the details of your iterations.

    Read the article

  • Work Item Traceability in TFS 2010

    - by Sam Patrick
    I have created a Windows Form project (VS solution) under a TFS 2010 project. I may eventually add more solutions to the TFS project. My question: Can we create a Use Case WIT for a specific solution within a TFS project? Furthermore, is it possible to create a "traceability matrix" that starts at the Use Case level and goes down to the the code level (at least the namespace level) of that particular VS solution?

    Read the article

  • How to handle lookup data in a C# ASP.Net MVC4 application?

    - by Jim
    I am writing an MVC4 application to track documents we have on file for our clients. I'm using code first, and have created models for my objects (Company, Document, etc...). I am now faced with the topic of document expiration. Business logic dictates certain documents will expire a set number of days past the document date. For example, Document A might expire in 180 days, Document 2 in 365 days, etc... I have a class for my documents as shown below (simplified for this example). What is the best way for me to create a lookup for expiration values? I want to specify documents of type DocumentA expire in 30 days, type DocumentB expire in 75 days, etc... I can think of a few ways to do this: Lookup table in the database I can query New property in my class (DaysValidFor) which has a custom getter that returns different values based on the DocumentType A method that takes in the document type and returns the number of days and I'm sure there are other ways I'm not even thinking of. My main concern is a) not violating any best practices and b) maintainability. Are there any pros/cons I need to be aware of for the above options, or is this a case of "just pick one and run with it"? One last thought, right now the number of days is a value that does not need to be stored anywhere on a per-document basis -- however, it is possible that business logic will change this (i.e., DocumentA's are 30 days expiration by default, but this DocumentA associated with Company XYZ will be 60 days because we like them). In that case, is a property in the Document class the best way to go, seeing as I need to add that field to the DB? namespace Models { // Types of documents to track public enum DocumentType { DocumentA, DocumentB, DocumentC // etc... } // Document model public class Document { public int DocumentID { get; set; } // Foreign key to companies public int CompanyID { get; set; } public DocumentType DocumentType { get; set; } // Helper to translate enum's value to an integer for DB storage [Column("DocumentType")] public int DocumentTypeInt { get { return (int)this.DocumentType; } set { this.DocumentType = (DocumentType)value; } } [DataType(DataType.Date)] [DisplayFormat(DataFormatString = "{0:MM-dd-yyyy}", ApplyFormatInEditMode = true)] public DateTime DocumentDate { get; set; } // Navigation properties public virtual Company Company { get; set; } } }
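To make the options in the question concrete, here is a minimal sketch of option 2 (a property with a custom getter), backed by a small lookup dictionary. The day counts and the DocumentExpiration helper are invented for illustration, and the Document class is shown as partial purely so the snippet stays self-contained; none of this comes from the original post.

using System;
using System.Collections.Generic;

namespace Models
{
    // Hypothetical helper that maps each DocumentType to a default expiration period (sample values only).
    public static class DocumentExpiration
    {
        private static readonly Dictionary<DocumentType, int> DefaultDays =
            new Dictionary<DocumentType, int>
            {
                { DocumentType.DocumentA, 30 },
                { DocumentType.DocumentB, 75 },
                { DocumentType.DocumentC, 365 }
            };

        public static int GetDays(DocumentType type)
        {
            return DefaultDays[type];
        }
    }

    // Assumes the original Document class is declared partial (or that these members are added to it directly).
    public partial class Document
    {
        // Option 2 from the question: a read-only computed property.
        // Because it has no setter, EF code-first will not map it to a database column.
        public int DaysValidFor
        {
            get { return DocumentExpiration.GetDays(this.DocumentType); }
        }

        public DateTime ExpirationDate
        {
            get { return this.DocumentDate.AddDays(this.DaysValidFor); }
        }
    }
}

If the per-company override mentioned at the end of the question becomes a real requirement, a nullable column (for example, a hypothetical DaysValidForOverride) could be added to the model, with the getter falling back to the lookup only when no override is stored.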

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget A higher level of granularity for managing references When you have solutions of many projects that depend on solutions of many projects etc à escape from Solution Hell. Links · Using A GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages · Creating a Nuspec File - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx · consuming a Nuget Package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3 · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference · updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages · versioning - http://docs.nuget.org/docs/reference/versioning POC Folder Structure POC Setup Steps · Install package explorer · Source o Create a source solution – configure output directory for projects (Project > Properties > Build > Output Path) · Package o Add assemblies to package from output directory (D&D)- add net folder o File > Export – save .nuspec files and lib contents <?xml version="1.0" encoding="utf-16"?> <package > <metadata> <id>MyPackage</id> <version>1.0.0.3</version> <title /> <authors>josh-r</authors> <owners /> <requireLicenseAcceptance>false</requireLicenseAcceptance> <description>My package description.</description> <summary /> </metadata> </package> o File > Save – saves .nupkg file · Create Target Solution o In Tools > Options: Configure package source & Add package Select projects: Output from package manager (powershell console) ------- Installing...MyPackage 1.0.0 ------- Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'. Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'. Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'. Successfully installed 'MyPackage 1.0.0'. Added reference 'NugetSource.AssemblyA' to project 'AssemblyX' Added reference 'NugetSource.AssemblyB' to project 'AssemblyX' Added file 'packages.config'. Added file 'packages.config' to project 'AssemblyX' Added file 'repositories.config'. Successfully added 'MyPackage 1.0.0' to AssemblyX. 
============================== o Packages folder created at solution level o Packages.config file generated in each project: <?xml version="1.0" encoding="utf-8"?> <packages>   <package id="MyPackage" version="1.0.0" targetFramework="net40" /> </packages> A local Packages folder is created for package versions installed: Each folder contains the downloaded .nupkg file and its unpacked contents – eg of dlls that the project references Note: this folder is not checked in UpdatePackages o Configure Package Manager to automatically check for updates o Browse packages - It automatically picked up the updates Update Procedure · Modify source · Change source version in assembly info · Build source · Open last package in package explorer · Increment package version number and re-add assemblies · Save package with new version number and export its definition · In target solution – Tools > Manage Nuget Packages – click on All to trigger refresh , then click on recent packages to see updates · If problematic, delete packages folder Versioning uninstall-package mypackage install-package mypackage –version 1.0.0.3 uninstall-package mypackage install-package mypackage –version 1.0.0.4 Dependencies · <?xml version="1.0" encoding="utf-16"?> <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd"> <metadata> <id>MyDependentPackage</id> <version>1.0.0</version> <title /> <authors>josh-r</authors> <owners /> <requireLicenseAcceptance>false</requireLicenseAcceptance> <description>My package description.</description> <dependencies> <group targetFramework=".NETFramework4.0"> <dependency id="MyPackage" version="1.0.0.4" /> </group> </dependencies> </metadata> </package> Using NuGet without committing packages to source control http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages Right click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. — Recall that packages folder is not part of solution If you get downloading package ‘Nuget.build’ failed, config proxy to support certificate for https://nuget.org/api/v2/ & allow unrestricted access to packages.nuget.org To test connectivity: get-package –listavailable To test Nuget Package Restore – delete packages folder and open vs as admin. In nugget msbuild: <Import Project="$(SolutionDir)\.nuget\nuget.targets" /> TFSBuild Integration Modify Nuget.Targets file <RestorePackages Condition="  '$(RestorePackages)' == '' "> True </RestorePackages> … <PackageSource Include="\\IL-CV-004-W7D\Packages" /> Add System Environment variable EnableNuGetPackageRestore=true & restart the “visual studio team foundation build service host” service. 
Important: Ensure Network Service has access to Packages folder Nugetter TFS Build integration Add Nugetter build process templates to TFS source control For Build Controller - Specify location of custom assemblies Generate .nuspec file from Package Explorer: File > Export Edit the file elements – remove path info from src and target attributes <?xml version="1.0" encoding="utf-16"?> <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">     <metadata>         <id>Common</id>         <version>1.0.0</version>         <title />         <authors>josh-r</authors>         <owners />         <requireLicenseAcceptance>false</requireLicenseAcceptance>         <description>My package description.</description>         <dependencies>             <group targetFramework=".NETFramework3.5" />         </dependencies>     </metadata>     <files>         <file src="CommonTypes.dll" target="CommonTypes.dll" />         <file src="CommonTypes.pdb" target="CommonTypes.pdb" /> … Add .nuspec file to solution so that it is available for build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec Add a Build Process Definition based on the Nugetter build process template: Configure the build process – specify: · .sln to build · Base path (output directory) · Nuget.exe file path · .nuspec file path Copy DLLs to a binary folder 1) Set copy local for an assembly reference to false 2)  MSBuild Copy Task – modify .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx <ItemGroup>     <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />   </ItemGroup>     <Target Name="BeforeBuild">     <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />   </Target> 3) Set Probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx -                 <?xml version="1.0" encoding="utf-8" ?> <configuration>   <runtime>     <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">       <probing privatePath="SourceAssemblies"/>     </assemblyBinding>   </runtime> </configuration> Forcing 'copy local = false' The following generic powershell script was added to the packages install.ps1: param($installPath, $toolsPath, $package, $project) if( $project.Object.Project.Name -ne "CopyPackages") { $asms = $package.AssemblyReferences | %{$_.Name} foreach ($reference in $project.Object.References) { if ($asms -contains $reference.Name + ".dll") { $reference.CopyLocal = $false; } } } An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • Adding PostSharp to new projects, when it's installed for some projects in solution.

    - by Michael Freidgeim
    Recently I've posted my experience with the installation of PostSharp. Once PostSharp is installed in the solution's packages folder for some project(s), I often need to add PostSharp to another project in the same solution. The section "Adding PostSharp to your project using PostSharp HQ" of the documentation describes the process quite well. I only want to add that the actual location of PostSharp HQ (if it was installed from NuGet) is [solution root]\packages\PostSharp.2.1.7.15\tools\Release\PostSharp.HQ.exe. Also, you need to ensure that the project is checked out, i.e. not read-only.

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.   Mike Chandler, Qualcomm Q: Did you run into any issues when integrating all of the different applications together?A: Definitely, our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UI’s in sync. While everyone was given the same digital material to build too, each team interpreted and implemented it their own way. Initially as a user navigated, if you were looking for it, you could slight variations in color or font or width , stuff like that. So we had to pull all the developers responsible for the UI together and get pixel level agreement on a lot of things so we could ensure seamless transitions across applications. Q: What has been the biggest benefit your end users have seen?A: Wow, there have been several. An SSO enabled environment was huge a win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, it was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really help put down any resistance that you would typically see while implementing a new solution. Q: Given the performance, what do you estimate to be the top end capacity of the system? A:I think our top end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use less JVM’s than we had planned. In addition, ADF is doing a very good job with his connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs db connections, without impacting performace. Q: What's the overview or summary of feedback from the users interacting with the site?A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment , the new LAF, and the performance of the site. 
Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy. Q: Can you describe the impressions about the site before and after the project within Qualcomm?A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit tempermental, for example a user would perform a transaction and the system would throw and unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t work the first time?. From a UI perspective, we’d hear comments like it looked like it was built by a high school student.  Vince Casarez & Gourav Goyal, Keste Q: Did you run into any obstacles when implementing the solution?A: It's interesting some people call them "obstacles" on this project we just called them "dependencies".  There were both technical and business related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience.  There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used.  With a branching into a new market and trying to match a simple user experience as many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler.  But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team.  In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum. Q: Was there a lot of custom work that needed to be done for this particular solution?A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically for the process flow, we had to build those. The framework and tooling we used made things easier so we didn’t have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes.  Task flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time. Q: What do you think were the keys to success for rolling out WebCenter?A:  The 5 main keys to success were: 1) Sponsorship from the whole organization around this project from senior executive agreement, business owners driving functionality, and IT development alignment; 2) Upfront design planning and use case definition to clearly define the project scope and requirements; 3) Focussed development and project management aligned with the top level goals and drivers; 4) User acceptance and usability testing along the way to identify potential issues and direct resolution of the issues;  and 5) Constant prioritization of the issues for development to fix by the business.  
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Static teams or dynamic teams?

    - by Richard DesLonde
    Is it better to assemble permanent teams of developers within the company that always work together, from project to project, or is it better to have dynamic teams that assemble for a project and then disassemble afterwards? My inclination is to treat the entire company as a "platoon" and to assemble "fireteams" for individual projects, choosing from the "platoon" those developers best suited for the project.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values. For example, jobid and setid may be keys for one type of report, and userid and groupId for another type, etc. The type of keys that can be referenced from the document is determined by the namespaces used in the XML doc. These keys are stored on the basis of the namespace used in the XML document. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. The application fetches those keys from that table for this namespace, looks for their values in the XML doc, and stores them in another table along with a pointer to the XML document (the ID of the record storing the complete XML document in a cell).

Use cases:
- When the user queries for a key and value, I return the document or set of documents having those key/value pairs.
- When the user queries for a certain key and provides the name of a (pre-stored) XSLT, I fetch the set of documents fulfilling that criteria and convert that XML to HTML with the specified XSLT.
- When the user asks for a particular fragment of a doc, the application can also fetch a subset from a particular document.
- When the user queries for the top x values of a certain key, I return the set of documents having the top 10 values of that key.

I am using a DB2 database for its XML support along with relational capabilities. That makes it easier for me to run XPath expressions, fetch values of keys, and aggregate a set of documents fulfilling a criteria, all on the database side.

Problems: DB2 stores XML docs of up to 2 GB in size. Retrieval is very slow. If something involves many documents, then it takes significant time for things to show up in the browser, and the user has to wait.

Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the DB side? Or is it OK to use both in such a case?

    Read the article

< Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >