Search Results

Search found 1755 results on 71 pages for 'publish'.


  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that impressed most people is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).   Live Debugging is a Nightmare for Cloud Applications When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage as soon as the Windows Azure platform was announced. Using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and debug it easily and quickly. But once we deploy our application to Azure, we have to debug through logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug", which allows any workstation under any user to attach to the remote process. This is less secure than authenticated remote debug, but much easier and simpler to use. Now with Windows Azure SDK 2.2 we can attach to our application on Windows Azure from our local machine, and it's very easy.   How to Use the Remote Debugger First, let's create a new Windows Azure Cloud Project in Visual Studio, select the ASP.NET Web Role, and create an ASP.NET WebForms application. Then right-click on the cloud project and select "Publish". In the publish dialog we need to make sure the application is built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop because I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. On the "advanced settings" tab, make sure "Enable Remote Debugger for all roles" is checked. In WACS a cloud service can have one or more roles, and each role can have one or more instances. If we check this option, the remote debugger will be enabled for all roles and all instances; currently there is no way to specify which role(s) and instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy: open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role and instance we have just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is) Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint, and the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we find the page rendered successfully.   What's Under the Hood The remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them as below.
    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application, created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies and starts the remote debug component matching the version of Visual Studio we are using. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.   Summary In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2, the remote debugger. It allows us to attach to our application on a Windows Azure virtual machine from the local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk, and since it is only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixing (publish a beta version to the Azure staging slot), rather than in production.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Drag and drop feature for a website

    - by gpuguy
    I have to design a website with drag-and-drop features for creating an e-card: you select items from a toolbox and drag and drop them onto the card area. Once you have completed the design you can publish the e-card on the web by clicking the "Save and publish" button. What are the possible technologies for implementing this feature? The requirement is that the feature should not degrade the performance of the website, and should not take much time to publish once the user clicks the "Save and publish" button.
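    One hedged way to keep the publish step fast, sketched below under assumptions: let the browser do the drag and drop entirely client side, and make "Save and publish" a single small POST of the card layout as JSON. The controller name, action, field name and App_Data storage are all illustrative assumptions, not a prescribed design.

    // Hypothetical ASP.NET MVC endpoint for the "Save and publish" step.
    // The client-side drag-and-drop layer (e.g. HTML5 drag/drop or a JS
    // library) would POST the card layout as JSON to this action.
    using System;
    using System.IO;
    using System.Web.Mvc;

    public class CardController : Controller
    {
        [HttpPost]
        public ActionResult Publish(string cardJson)
        {
            if (string.IsNullOrWhiteSpace(cardJson))
                return new HttpStatusCodeResult(400, "Empty card");

            // Persist the layout; a real app would likely use a database
            // instead of the App_Data folder used here for illustration.
            var id = Guid.NewGuid().ToString("N");
            var path = Path.Combine(Server.MapPath("~/App_Data/cards"), id + ".json");
            Directory.CreateDirectory(Path.GetDirectoryName(path));
            System.IO.File.WriteAllText(path, cardJson);

            // Return the public URL of the freshly published e-card.
            return Json(new { url = Url.Action("View", new { id }) });
        }
    }

    Because publishing is just one small write, the user-visible publish time stays low regardless of how elaborate the client-side editor is.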

    Read the article

  • Error launching application after ClickOnce deployment - "An application for this deployment is already installed with a different application identity"

    - by digiduck
    As part of my continuous integration build, the application is deployed as a ClickOnce application. This works great the first time, but when I try to launch the app after an update has been deployed I get the following error: An application for this deployment is already installed with a different application identity. If I run mage.exe -cc to clear the application cache for all ClickOnce apps then I can launch the application just fine. Has anyone run into this before? How can I fix it? Here are the steps in my build script that publish the ClickOnce application:

    ./tools/windows_sdk/mage.exe -New Application -Processor msil -ToFile "C:\temp\build\RoadrunnerTrap.exe.manifest" -Name "Roadrunner Trap" -Version 1.0.0.1 -FromDirectory "C:\temp\build"
    # artifacts from C:\temp\build\ are copied to \\server\publish\v1.0.0.1\
    ./tools/windows_sdk/mage.exe -New Deployment -Processor msil -Install false -Publisher "Acme, Inc." -ProviderUrl "\\server\publish\RoadrunnerTrap.application" -Name "Roadrunner Trap" -AppManifest "\\server\publish\v1.0.0.1\RoadrunnerTrap.exe.manifest" -ToFile "\\server\publish\RoadrunnerTrap.application"

    Note that the version number does change with every deployment.

    Read the article

  • ClickOnce Deployment of an Application with more than one executable file

    - by Ahron Levi
    Hi, I am trying to deploy an application with two executable files, one of which is the application itself. I used the Publish tab in VS 2008 and also tried to publish manually using MageUI.exe. In both cases I get the "Reference in the manifest does not match the identity of the downloaded assembly" error for the second executable file. Does anyone know how to publish an application with two executable files? Thanks, Ahron

    Read the article

  • allowDefinition='MachineToApplication' error when publishing from VS2010 (but only after a previous build)

    - by burnt_hand
    I can run my ASP.NET MVC 2 application without an issue on my local computer: just Run/Debug. But if I have already built it, I can't publish it! I have to clean the solution and publish it again. I know this is not system critical, but it's really annoying: "one-click publish" is not "clean solution and then one-click publish". The exact error is as follows: Error 11: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. I suspect it's something to do with the Web.config in the Views folder, but then why only after I have built once previously? And just to note, the app works fine once published.

    Read the article

  • How to go about converting this classic ASP to ASP.NET

    - by Phil
    I have some classic ASP code that needs converting to ASP.NET. So far I have tried to achieve this using data readers and repeaters and had no luck, as the menu loops through 4 different recordsets, passing along the menu id before moving to the next record. Please can you tell me what method you would use to convert this code, i.e. data readers? dataset? etc? Thanks. (See the sketch after the code.)

    <%
    set RSMenuLevel0 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
        "from T where (DepartmentID = 0 and GroupingID = 0 and Publish <> 0) order by OrderID")
    %>
    <% if session("JavaScriptEnabled") = "False" Then %>
    <%
    while not RSMenuLevel0.EOF
        if RSMenuLevel0("Publish") <> 0 then
            Menu0heading = RSMenuLevel0("Heading")
            Menu0id = RSMenuLevel0("id")
    %>
    <%if RSMenuLevel0("url") > "" and RSMenuLevel0("moduleid") = 0 then%>
        &nbsp;<a href="http://<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>"><%=Menu0heading%></a>
    <%else%>
        &nbsp;<a href="/default.asp?id=<%=Menu0id%>"><%=Menu0heading%></a>
    <%end if%>
    <%
        end if
        RSMenuLevel0.MoveNext
    wend
    %>
    <% else %>
    <ul id="Menu1" class="MM">
    <%if home <> 1 then%>
        <!-- <li><a href="/default.asp"><span class="item">Home</span></a> -->
    <%end if%>
    <%
    numone=0
    while not RSMenuLevel0.EOF
        ' numone = numone + 1
        Menu0heading = RSMenuLevel0("Heading")
        'itemID = lcase(replace(Menu0heading," ",""))
        Menu0id = RSMenuLevel0("id")
        if RSMenuLevel0("url") > "" and RSMenuLevel0("moduleid") = 0 then
            url = RSMenuLevel0("url")
            if instr(url,"file:///") > 0 then
    %>
        <li><a href="<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%else%>
        <li><a href="http://<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%end if%>
        <%else%>
        <li><a href="/default.asp?id=<%=RSMenuLevel0("id")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%end if%>
    <%
        set RSMenuLevel1 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
            "from T where (DepartmentID = 0 and GroupingID = " & Menu0id & " and Publish <> 0) order by OrderID")
        if not RSMenuLevel1.EOF then
    %>
        <ul>
        <%
        while not RSMenuLevel1.EOF
            Menu1heading = RSMenuLevel1("Heading")
            Menu1id = RSMenuLevel1("id")
            if RSMenuLevel1("url") > "" and RSMenuLevel1("moduleid") = 0 then
                url = RSMenuLevel1("url")
                if instr(url,"file:///") > 0 then
        %>
            <li><a href="<%=RSMenuLevel1("url")%>" target="<%=RSMenuLevel1("urltarget")%>"><%=Menu1heading%></a>
            <%else%>
            <li><a href="http://<%=RSMenuLevel1("url")%>" target="<%=RSMenuLevel1("urltarget")%>"><%=Menu1heading%></a>
            <%end if%>
            <%else%>
            <li><a href="/default.asp?id=<%=RSMenuLevel1("id")%>"><%=Menu1heading%></a>
            <%end if%>
        <%
            set RSMenuLevel2 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
                "from T where (DepartmentID = 0 and GroupingID = " & Menu1id & " and Publish <> 0) order by OrderID")
            if not RSMenuLevel2.EOF then
        %>
            <ul>
            <%
            while not RSMenuLevel2.EOF
                Menu2heading = RSMenuLevel2("Heading")
                Menu2id = RSMenuLevel2("id")
                if RSMenuLevel2("url") > "" and RSMenuLevel2("moduleid") = 0 then
            %>
                <li><a href="http://<%=RSMenuLevel2("url")%>" target="<%=RSMenuLevel2("urltarget")%>"><%=Menu2heading%></a>
                <%else%>
                <li><a href="/default.asp?id=<%=RSMenuLevel2("id")%>"><%=Menu2heading%></a>
                <%end if%>
            <%
                set RSMenuLevel3 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
                    "from T where (DepartmentID = 0 and GroupingID = " & Menu2id & " and Publish <> 0) order by OrderID")
                if not RSMenuLevel3.EOF then
            %>
                <ul>
                <%
                while not RSMenuLevel3.EOF
                    Menu3heading = RSMenuLevel3("Heading")
                    Menu3id = RSMenuLevel3("id")
                    if RSMenuLevel3("url") > "" and RSMenuLevel3("moduleid") = 0 then
                %>
                    <li><a href="http://<%=RSMenuLevel3("url")%>" target="<%=RSMenuLevel3("urltarget")%>"><%=Menu3heading%></a></li>
                    <%else%>
                    <li><a href="/default.asp?id=<%=RSMenuLevel3("id")%>"><%=Menu3heading%></a></li>
                    <%end if%>
                <%
                    RSMenuLevel3.MoveNext
                wend
                %>
                </ul>
            <%
                end if
                RSMenuLevel2.MoveNext
            %>
                </li>
            <%
            wend
            %>
            </ul>
        <%
            end if
            RSMenuLevel1.MoveNext
        %>
            </li>
        <%
        wend
        %>
        </ul>
    <%
        end if
        RSMenuLevel0.MoveNext
    %>
        </li>
    <%
    wend
    %>
    </ul>
    <% end if %>
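    One hedged way to approach the conversion, a sketch rather than a definitive answer: skip repeaters entirely and build the nested UL recursively in code, issuing one parameterized query per level just as the original does. The table and column names below come from the snippet above; the class name, method names and connection string are illustrative assumptions.

    // Hedged sketch: recursive menu rendering in C#, mirroring the
    // query-per-level logic of the classic ASP above.
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Text;
    using System.Web;

    public static class MenuBuilder
    {
        public static string Build(string connectionString)
        {
            var sb = new StringBuilder("<ul id=\"Menu1\" class=\"MM\">");
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                AppendLevel(conn, 0, sb);
            }
            sb.Append("</ul>");
            return sb.ToString();
        }

        private static void AppendLevel(SqlConnection conn, int parentId, StringBuilder sb)
        {
            // One parameterized query per level, like the original recordsets.
            const string sql =
                "select id, Heading, url, urltarget, moduleid from T " +
                "where DepartmentID = 0 and GroupingID = @gid and Publish <> 0 " +
                "order by OrderID";

            // Buffer the rows so the reader is closed before we recurse
            // on the same connection.
            var rows = new List<(int Id, string Heading, string Url, string Target, int ModuleId)>();
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@gid", parentId);
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        rows.Add((rdr.GetInt32(0), rdr.GetString(1),
                                  rdr.IsDBNull(2) ? "" : rdr.GetString(2),
                                  rdr.IsDBNull(3) ? "" : rdr.GetString(3),
                                  rdr.GetInt32(4)));
                    }
                }
            }

            foreach (var r in rows)
            {
                // Same link rules as the original: external url vs default.asp.
                string href = (r.Url.Length > 0 && r.ModuleId == 0)
                    ? (r.Url.StartsWith("file:///") ? r.Url : "http://" + r.Url)
                    : "/default.asp?id=" + r.Id;

                sb.AppendFormat("<li><a href=\"{0}\" target=\"{1}\">{2}</a>",
                    HttpUtility.HtmlAttributeEncode(href),
                    HttpUtility.HtmlAttributeEncode(r.Target),
                    HttpUtility.HtmlEncode(r.Heading));

                var child = new StringBuilder();
                AppendLevel(conn, r.Id, child);   // recurse for sub-menus
                if (child.Length > 0)
                    sb.Append("<ul>").Append(child).Append("</ul>");

                sb.Append("</li>");
            }
        }
    }

    Passing the id down through the recursion replaces the Menu0id/Menu1id/Menu2id hand-off in the original, and the HTML encoding is a free correctness improvement over the raw string concatenation.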

    Read the article

  • Is it possible to convert this ASP to ASP.NET?

    - by Phil
    I have been tasked with sifting through the worst classic ASP spaghetti I've ever come across. The script runs a series of recordsets in sequence, getting one record at a time. As each record is built it takes the id and passes it to the next loop, which gets data and passes the id on to the loop after that. It continues in this manner and builds an unordered list, kicking out the required HTML as it goes. Here are my efforts so far: a class delivering data via SqlDataReaders, output to nested repeaters (this failed because I could not loop and get the id); a DataTable populated with all the required data, then DataTable.Select to filter it; four data readers looping and building the UL ArrayLists (I couldn't get the ids to match up). Please can you suggest the best method (with a bit of sample code if possible) to go about this conversion? (A sketch follows the code below.) Here is the code (sorry, it's long / horrible / spaghetti!!!):

    <%
    set RSMenuLevel0 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
        "from Grouping where (DepartmentID = 0 and GroupingID = 0 and Publish <> 0) order by OrderID")
    %>
    <% if session("JavaScriptEnabled") = "False" Then %>
    <%
    while not RSMenuLevel0.EOF
        if RSMenuLevel0("Publish") <> 0 then
            Menu0heading = RSMenuLevel0("Heading")
            Menu0id = RSMenuLevel0("id")
    %>
    <%if RSMenuLevel0("url") > "" and RSMenuLevel0("moduleid") = 0 then%>
        &nbsp;<a href="http://<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>"><%=Menu0heading%></a>
    <%else%>
        &nbsp;<a href="/default.asp?id=<%=Menu0id%>"><%=Menu0heading%></a>
    <%end if%>
    <%
        end if
        RSMenuLevel0.MoveNext
    wend
    %>
    <% else %>
    <ul id="Menu1" class="MM">
    <%if home <> 1 then%>
        <!-- <li><a href="/default.asp"><span class="item">Home</span></a> -->
    <%end if%>
    <%
    numone=0
    while not RSMenuLevel0.EOF
        ' numone = numone + 1
        Menu0heading = RSMenuLevel0("Heading")
        'itemID = lcase(replace(Menu0heading," ",""))
        Menu0id = RSMenuLevel0("id")
        if RSMenuLevel0("url") > "" and RSMenuLevel0("moduleid") = 0 then
            url = RSMenuLevel0("url")
            if instr(url,"file:///") > 0 then
    %>
        <li><a href="<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%else%>
        <li><a href="http://<%=RSMenuLevel0("url")%>" target="<%=RSMenuLevel0("urltarget")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%end if%>
        <%else%>
        <li><a href="/default.asp?id=<%=RSMenuLevel0("id")%>" <%if numone=1 then%>class="CURRENT"<%end if%>><span class="item"><%=Menu0heading%></span></a>
        <%end if%>
    <%
        set RSMenuLevel1 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
            "from Grouping where (DepartmentID = 0 and GroupingID = " & Menu0id & " and Publish <> 0) order by OrderID")
        if not RSMenuLevel1.EOF then
    %>
        <ul>
        <%
        while not RSMenuLevel1.EOF
            Menu1heading = RSMenuLevel1("Heading")
            Menu1id = RSMenuLevel1("id")
            if RSMenuLevel1("url") > "" and RSMenuLevel1("moduleid") = 0 then
                url = RSMenuLevel1("url")
                if instr(url,"file:///") > 0 then
        %>
            <li><a href="<%=RSMenuLevel1("url")%>" target="<%=RSMenuLevel1("urltarget")%>"><%=Menu1heading%></a>
            <%else%>
            <li><a href="http://<%=RSMenuLevel1("url")%>" target="<%=RSMenuLevel1("urltarget")%>"><%=Menu1heading%></a>
            <%end if%>
            <%else%>
            <li><a href="/default.asp?id=<%=RSMenuLevel1("id")%>"><%=Menu1heading%></a>
            <%end if%>
        <%
            set RSMenuLevel2 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
                "from Grouping where (DepartmentID = 0 and GroupingID = " & Menu1id & " and Publish <> 0) order by OrderID")
            if not RSMenuLevel2.EOF then
        %>
            <ul>
            <%
            while not RSMenuLevel2.EOF
                Menu2heading = RSMenuLevel2("Heading")
                Menu2id = RSMenuLevel2("id")
                if RSMenuLevel2("url") > "" and RSMenuLevel2("moduleid") = 0 then
            %>
                <li><a href="http://<%=RSMenuLevel2("url")%>" target="<%=RSMenuLevel2("urltarget")%>"><%=Menu2heading%></a>
                <%else%>
                <li><a href="/default.asp?id=<%=RSMenuLevel2("id")%>"><%=Menu2heading%></a>
                <%end if%>
            <%
                set RSMenuLevel3 = conn.execute("select id, DepartmentID, GroupingID, Heading, OrderID, Publish, moduleid, url, urltarget " &_
                    "from Grouping where (DepartmentID = 0 and GroupingID = " & Menu2id & " and Publish <> 0) order by OrderID")
                if not RSMenuLevel3.EOF then
            %>
                <ul>
                <%
                while not RSMenuLevel3.EOF
                    Menu3heading = RSMenuLevel3("Heading")
                    Menu3id = RSMenuLevel3("id")
                    if RSMenuLevel3("url") > "" and RSMenuLevel3("moduleid") = 0 then
                %>
                    <li><a href="http://<%=RSMenuLevel3("url")%>" target="<%=RSMenuLevel3("urltarget")%>"><%=Menu3heading%></a></li>
                    <%else%>
                    <li><a href="/default.asp?id=<%=RSMenuLevel3("id")%>"><%=Menu3heading%></a></li>
                    <%end if%>
                <%
                    RSMenuLevel3.MoveNext
                wend
                %>
                </ul>
            <%
                end if
                RSMenuLevel2.MoveNext
            %>
                </li>
            <%
            wend
            %>
            </ul>
        <%
            end if
            RSMenuLevel1.MoveNext
        %>
            </li>
        <%
        wend
        %>
        </ul>
    <%
        end if
        RSMenuLevel0.MoveNext
    %>
        </li>
    <%
    wend
    %>
    </ul>
    <% end if %>
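    Since the DataTable route was one of the attempts described above, here is a hedged sketch of how it can be made to work: load the whole Grouping table once, then filter per level with DataTable.Select, recursing on the parent id so the ids always match up. The method names and connection handling are illustrative assumptions; the column names come from the query in the post.

    // Hedged sketch: one round-trip, then in-memory filtering with
    // DataTable.Select instead of a query per menu level.
    using System.Data;
    using System.Data.SqlClient;
    using System.Text;
    using System.Web;

    public static class MenuBuilder
    {
        public static string Build(string connectionString)
        {
            var menu = new DataTable();
            const string sql =
                "select id, DepartmentID, GroupingID, Heading, OrderID, " +
                "Publish, moduleid, url, urltarget from Grouping " +
                "where DepartmentID = 0 and Publish <> 0";
            using (var da = new SqlDataAdapter(sql, connectionString))
                da.Fill(menu);   // one query covers every level of the menu

            var sb = new StringBuilder("<ul id=\"Menu1\" class=\"MM\">");
            AppendLevel(menu, 0, sb);
            sb.Append("</ul>");
            return sb.ToString();
        }

        private static void AppendLevel(DataTable menu, int parentId, StringBuilder sb)
        {
            // Filter in memory; GroupingID holds the parent id.
            foreach (DataRow row in menu.Select("GroupingID = " + parentId, "OrderID"))
            {
                int id = (int)row["id"];
                string heading = (string)row["Heading"];
                string url = row["url"] as string ?? "";
                int moduleId = (int)row["moduleid"];

                string href = (url.Length > 0 && moduleId == 0)
                    ? (url.StartsWith("file:///") ? url : "http://" + url)
                    : "/default.asp?id=" + id;

                sb.AppendFormat("<li><a href=\"{0}\">{1}</a>",
                    HttpUtility.HtmlAttributeEncode(href),
                    HttpUtility.HtmlEncode(heading));

                var child = new StringBuilder();
                AppendLevel(menu, id, child);   // recurse for sub-menus
                if (child.Length > 0)
                    sb.Append("<ul>").Append(child).Append("</ul>");
                sb.Append("</li>");
            }
        }
    }

    The recursion removes the need for four parallel readers or four nested repeaters, which is where the earlier attempts lost track of the ids.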

    Read the article

  • Outlook 2007 VSTO add-in deployed by ClickOnce doesn't detect published updates

    - by Matt
    I have created an Outlook 2007 add-in project in VS2008, targeting .NET 3.5, then migrated the project to VS2010. I then published the project from VS2010 to a web site, and installed the add-in using ClickOnce on a virtual machine running XP, .NET 3.5 SP1, and Outlook 2007. This all works great and I can see my add-in within Outlook. The publish update settings are set to update the add-in at startup rather than every 7 days. However, when I make a simple change to the add-in, update the AssemblyVersion and AssemblyFileVersion of the add-in project, and then publish the updates, Outlook doesn't detect that there is a new version and just runs the currently installed one. I can see that the publish has generated a new setup.exe and added a new folder to the 'Application Files' folder with the current (autogenerated) publish version. Can anyone suggest how I can get the update deployed to the client?
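    For comparison, here is a hedged sketch of how a plain ClickOnce application can force an update check programmatically. Loud caveat: VSTO add-ins are updated by the VSTO runtime rather than by System.Deployment, so this is adjacent background for the ClickOnce side of the question, not a verified fix for the add-in scenario.

    // Hedged sketch for a *standard* ClickOnce app (not a verified VSTO fix):
    // ask the deployment infrastructure whether a newer version is published.
    using System.Deployment.Application;

    public static class Updater
    {
        public static bool TryUpdate()
        {
            // Only true when the process was launched via ClickOnce.
            if (!ApplicationDeployment.IsNetworkDeployed)
                return false;

            var deployment = ApplicationDeployment.CurrentDeployment;
            UpdateCheckInfo info = deployment.CheckForDetailedUpdate();
            if (!info.UpdateAvailable)
                return false;

            deployment.Update();   // download and apply the new version
            return true;           // caller should restart the application
        }
    }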

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.  What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.  Azure Compute Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

    Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
    Extra Small          | Shared    | 768 MB   | $0.04
    Small                | 1         | 1.75 GB  | $0.12
    Medium               | 2         | 3.50 GB  | $0.24
    Large                | 4         | 7.00 GB  | $0.48
    Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ. Let's get started building the Test Rig… Configuration:

    Machine | Role                               | Comments
    VM – 1  | Domain Controller for Playpit.com  | On Premise
    VM – 2  | TFS, Test Controller               | On Premise
    VM – 3  | Test Agent                         | Cloud

    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore. I. Let's Start Building VM – 3: The Test Agent Download the Windows Azure SDK and Tools, open Visual Studio and create a new Windows Azure Project using the Cloud template, and choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
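    For reference, a minimal sketch of the worker role the Cloud project template generates (assuming the standard Windows Azure SDK template; the loop body is illustrative):

    // Minimal worker role sketch, matching the generated template shape.
    using System.Net;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // Keep the role instance alive; real work (e.g. installing the
            // test agent) can be scripted in startup tasks instead.
            while (true)
            {
                Thread.Sleep(10000);
            }
        }

        public override bool OnStart()
        {
            // Allow plenty of concurrent outbound connections.
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }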
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
    Note – later in the blog I'll show you how to install Connect on the on-premise machine.       EnableDomainJoin – set the value to true; of course we want to join the Windows Azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – this information is specific to your domain. I extracted this information from 'Server Manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here, or hold fire: I'll be covering that when I come to certificates and encryption in the coming section.       Once you have filled all this information in, the configuration file should look something like this:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
      <Role name="WorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
          <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Next we will enable the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or let a beautiful wizard help us; I prefer the second option, so right-click on the Windows Azure project and choose Publish.       Once you get the publish wizard (if you haven't already, you will be asked to import your Windows Azure subscription, which is simply the MSDN subscription activation key XML), click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.       As soon as you do that you get another pop-up asking for the details of the user you will log in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section. See the screenshot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this. Log on to VM – 2, running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network; if you already have a subscription you should see the screenshot below, if not, you will be asked to complete the subscription first.        Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier: in theory, the token you get here should match the token you added to the Test Agent config file.        Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works from IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.                         Right-click the Azure Connect icon, choose Diagnostics, and refer to this link for diagnostic terminology. NOTE – unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of Googling revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled; you can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)   Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection. III. Importing the Certificate into the Windows Azure Management Portal But before that, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates, as shown in the screenshot below.        Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot).        Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.        IV. Publish the Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information.         Click Next and click Publish! 
    From the View menu select the Windows Azure Activity Log window; now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.       Once the deployment is complete you should be able to RDP into the Test Agent Worker Role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc, and enter the machine name in the pop-up). In case you are unable to log in with the domain admin user account, the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!         Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click on Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and test agents, and between test agents themselves. Click Save.      Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing the Test Agent and Associating It with the Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you extracted it. In my case, I extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure it: right-click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise).        In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose. 
    But for this blog post we are using the most powerful user, so that no policies or restrictions block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding; in my experience you may run into either a permission issue or a firewall blocking communication.        And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller.       Mission accomplished! You should be able to see the Test Agent that you have just configured here.         VI. Creating and Running Load Tests on Your Brand New Azure-ed Test Rig I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below. Blog posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution', and select the test controller you configured the Worker Role Test Agent against, in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed test rig. Review and What's Next? A quick recap of the benefits of running the test rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback! Advantages: utilizing the power of Azure compute to run a heavy virtual user load; benefiting from Azure's flexibility, destroying Test Agents when not in use, since it takes less than 25 minutes to spin up a new one; and, most importantly, testing network latency (network latency and connection speed are two different things, and latency is usually very hard to test) – by placing Test Agents in Microsoft data centres around the globe, one can actually measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe. Next Steps: the process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Open-source, noncommercial license?

    - by nick-russler
    Hey, I want to publish my software under an open-source license with the following conditions. You are allowed to: share – copy, distribute and transmit the work; use a modified version of the code in your application. You are not allowed to: publish modified versions of the code; use the code in anything commercial. Is there a software license out there that fits my needs? (crosspost: http://stackoverflow.com/questions/4558546/opensource-noncommercial-license)

    Read the article

  • eZ Systems publishes three security patches for flaws affecting versions 4.1 and 4.2 of the CMS

    eZ Systems, publisher of the eZ Publish content management system, has just released a series of three security patches. These patches address flaws affecting versions 4.1 and 4.2 of the CMS, and applying them is strongly recommended. -> The patches can be found here: http://ez.no/developer/security/secu...y_in_ez_search -> Official announcement: http://share.ez.no/blogs/ez/security...lish-instances...

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Sites (a.k.a. WAWS). Four major features were added to WAWS: a free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and I would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is great news to me.   It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS; in its dashboard page click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS published via TFS or local Git then you have to create a new WAWS and set up Git publishing. Then in the deployment page we can see that WAWS now supports three Git publishing modes: - Push my local files to Windows Azure: in this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or a GUI. - Deploy from my GitHub project: in this mode we have a Git repository on GitHub; once we publish our code to GitHub, Windows Azure downloads the code and triggers a new deployment. - Deploy from my CodePlex project: similar to the previous one, but our code lives in a CodePlex repository.   Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool, so I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website, and in GitHub for Windows we can also find the local repository by selecting the "local" category.   Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" in the deployment page and click the "Authorize Windows Azure" link. It will bring up a new window on GitHub asking you to allow the Windows Azure application to access your repositories. After we click "Allow", Windows Azure retrieves all our GitHub public repositories and lets us select which one to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. And that's all; we have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right-click on the local repository and click "open in explorer". Then I added a simple HTML file:

    <html>
    <head>
    </head>
    <body>
      <h1>
        I came from GitHub, WOW!
      </h1>
    </body>
    </html>

    Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure detects the update and triggers a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. 
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real life project a couple of things have dawned on me: As soon as your SSIS Catalog gets a significant amount of data in it the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within a SSRS report. SSIS professionals are data guys at heart and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway) Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it but if you would rather just download the bits yourself and start to play you can download v1.0.0.0 from DB v1.0.0.0. Usage Scenarios Most Recent Execution I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get. EXEC [dbo].[sp_ssiscatalog] This will return up to 5 resultsets: EXECUTION - Summary information about the execution including status, start time & end time EVENTS - All events that occurred during the execution OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed OnWarning - All events where event_name is OnWarning EXECUTABLE_STATS - Duration and execution result of every executable in the execution All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution @exec_events @exec_errors @exec_warnings @exec_executable_stats Any Execution As just explained the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should return data for simply supply the execution_id as a parameter: EXEC [dbo].[sp_ssiscatalog] 6 All Executions sp_ssiscatalog can also return information about all executions: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs' The most recent execution will appear at the top. 
    sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name, @execs_project_name, @execs_package_name, @execs_executed_as_name, @execs_status_desc. Some typical usages might be:

    -- Return all failed executions
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed'
    -- Return all executions for a specified folder
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder'
    -- Return all executions of a specified package in a specified project
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it. Download the source code You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer. Download a dacpac Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard, however in this case there is a limitation. Because sp_ssiscatalog refers to objects in the SSIS Catalog (which it has to do, of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don't worry if reverting to the command line sounds a little daunting; I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to "(local)"). If everything works successfully you will see this: And you're done! You'll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.   Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, that I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet
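    If you would rather consume the resultsets from code than from SSMS, here is a hedged sketch of calling sp_ssiscatalog from C# with the parameters documented above; the connection string and database name are assumptions based on the deployment described in this post.

    // Hedged sketch: querying failed executions via sp_ssiscatalog from C#.
    // Parameter names (@operation_type, @execs_status_desc) are those
    // documented above; the connection details are illustrative.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class SsisCatalogQuery
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=(local);Database=SsisReportingPack;Integrated Security=true"))
            using (var cmd = new SqlCommand("dbo.sp_ssiscatalog", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@operation_type", "execs");
                cmd.Parameters.AddWithValue("@execs_status_desc", "failed");

                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    // sp_ssiscatalog can return several resultsets; walk them all.
                    do
                    {
                        while (reader.Read())
                            Console.WriteLine(reader[0]);
                    } while (reader.NextResult());
                }
            }
        }
    }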

    Read the article

  • Using a TS Gateway through an Apache reverse proxy

    - by Helder
    Hey all, I've set up a Windows 2008 server as a Terminal Services Gateway to funnel RDP access to a bunch of backend servers. However, since I only need to publish SSL to the "outside", I've tried to publish it through our Apache reverse proxy, but it's not working: the Apache box times out while trying to reach the TS Gateway, even though a ping from that same box gets through fine. I've read a bit, and with ISA 2006 you can publish TS Gateways on the Internet, so I was wondering if anyone ever got it working with an Apache reverse proxy instead :)
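    This is roughly the vhost I have been trying, as a minimal sketch (hostnames are placeholders, and it assumes mod_proxy, mod_proxy_http and mod_ssl are loaded):
        <VirtualHost *:443>
            ServerName rdgateway.example.com
            SSLEngine on
            # Required so Apache will proxy to an https:// backend
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / https://tsgateway.internal/
            ProxyPassReverse / https://tsgateway.internal/
        </VirtualHost>
    One thing I suspect: TS Gateway tunnels RDP inside RPC-over-HTTPS, which uses non-standard request methods (RPC_IN_DATA/RPC_OUT_DATA) and very long-running requests, and that may be what trips up a plain HTTP reverse proxy.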

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx
    Say what? I'm seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially drive A/B or MVT testing). That's a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front-end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit?
    I've been looking at that and it's a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you'd build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it's simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.
    Enter Umbraco
    Umbraco is an open source .NET CMS that's been around for a while. It has very good adoption, a lively community and a good release cycle. It's easy to use, has all the functionality you need for a CMS-driven API, and it's scalable (although you won't necessarily put much scale on the CMS layer). In the rest of this post, I'll build out a simple app config API using Umbraco. We'll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we'll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we'll look at how you could build this into a wider solution.
    If you want to try this for yourself, it's ultra easy – there's an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They're standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).
    1. Create Configuration Document Type
    In Umbraco you manage content by creating and modifying documents, and every document has a known type defining what properties it holds. We'll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we'll use – like the Page Title, which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you'll be able to edit and return for the resource. Here I've added a text string where I'll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It's likely to change during the life of the product, but not very often, so it's good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.
    2. Define the response template
    Now that we've defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it's easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here's a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:
        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{ Layout = null; }
        @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }
    That's a rough-and-ready example of what you can do. You could make it completely generic and just render all the document's properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.
    3. Create the content
    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I've set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it'll be available to access.
    4. Access the resource
    Now if you open that URL in the browser, you'll see the HTML version rendered – complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the exact same resource rendered as JSON – with a managed UI to publish it, accessed as HTML or JSON with a tiny amount of effort.
    5. The wider landscape
    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn't need it to. When you have additional requirements, like logging API access requests – but doing it out-of-band so clients aren't impacted – you can put a very thin API layer on top of Umbraco and cache the CMS responses in your API layer. Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support thousands of requests per second, you're scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This approach also gives you a place for logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.
    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I'll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.
    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.
    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day's effort. Worth considering if you're publishing long-lived resources through your API.
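    As a concrete illustration of that thin caching layer, here is a minimal sketch in C# (the class name and the one-minute cache duration are illustrative assumptions, not part of the spiked solution):
        // Minimal pass-through cache in front of the CMS: serve from an
        // in-memory cache, and only hit Umbraco when the entry has expired.
        using System;
        using System.Net.Http;
        using System.Runtime.Caching;

        public class CachedConfigClient
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;
            private static readonly HttpClient Http = new HttpClient();

            public string GetConfigJson(string resourceUrl)
            {
                var cached = Cache.Get(resourceUrl) as string;
                if (cached != null) return cached;   // CMS not touched at all

                // Ask the CMS for the JSON rendering of the resource.
                var request = new HttpRequestMessage(HttpMethod.Get, resourceUrl);
                request.Headers.Add("Accept", "application/json");
                var json = Http.SendAsync(request).Result
                               .Content.ReadAsStringAsync().Result;

                // Cache for one minute, so Umbraco sees at most one request
                // per minute per API instance for this resource.
                Cache.Set(resourceUrl, json, DateTimeOffset.UtcNow.AddMinutes(1));
                return json;
            }
        }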

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus, and in a stage in the request pipeline you are using a Publish action to publish the incoming message to a JMS queue via a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the request pipeline in our simple test case, with an Error Handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion, so any exception that occurs in the outbound transport will go unnoticed, as shown in the resulting invocation trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters and Message Priority. Add the Routing Options action to the Request Action, click the Routing Options to display its properties in the Properties View, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Running the same test case as before now generates an invocation trace showing that the Error Handler is triggered.

    Read the article

  • Today's Links (6/21/2011)

    - by Bob Rhubart
    Keeping your process clean: Hiding technology complexity behind a service | Izaak de Hullu - Izaak de Hullu offers a solution to "technology pollution like exception handling, technology adapters and correlation."
    WebLogic Weekly for June 20th, 2011 | James Bayer - James Bayer presents "a round-up what has been going on in WebLogic over the past week."
    Publish to EDN from Java & OSB with JMS | Edwin Biemond - Busy blogger and Oracle ACE Edwin Biemond shows "how you can publish events from Java and OSB."
    How is HTML 5 changing web development? | Audrey Watters - O'Reilly Radar - In this interview, OSCON speaker Remy Sharp discusses HTML5's current usage and how it could influence the future of web apps and browsers.
    SOA Governance Book | SOA Partner Community Blog - Information on how those in EMEA can win a free copy of SOA Governance: Governing Shared Services On-Premise and in the Cloud by Thomas Erl, et al.
    Keeping The Faith on 11i | Floyd Teter - "The iceberg is melting, the curtain is coming down, the lights are dimming, the fat lady is singing," says Oracle ACE Director Floyd Teter.
    Configure and test JMS based EDN in SOA Suite 11g | Edwin Biemond - Oracle ACE Edwin Biemond shows you "how to configure EDN-JMS and how to publish an Event to this JMS Queue."
    Choosing the best way for SOA Suite and Oracle Service Bus to interact with the Oracle Database | Lucas Jellema - Oracle ACE Director Lucas Jellema illustrates "over 20 different interaction channels" covering "a fairly wild variation of attributes, required skills, productivity and performance characteristics."
    Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets | Alex Kotopoulis - ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology.
    Java Spotlight Podcast Episode 35: JVM Performance and Quality - Featuring an interview with Vladimir Ivanov, Ivan Krylov, and Sergey Kuksenko on JDK 7 Java Virtual Machine performance and quality. Also includes the Java All Star Developer Panel featuring Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate.

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background
    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to do due to bandwidth issues. Each tick, the server sends the list of commands on that tick (possibly empty) to each client. The server and all clients then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple thanks to code sharing between server and client: I just put the deterministic simulator in the /shared folder, where it can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.
    Problem
    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of client-only things when it dies, like adding announcements) and isMyUnit (a quick check for whether the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it calls if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty well so far in terms of functionality, but as the codebase grows bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet it is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which can run into problems when the server cannot instantiate the variable (it doesn't have the Image class that browsers do). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it has its own problems.
    Possible workaround (with problems)
    I could use Javascript mixins that are only added when in a browser. Thus, at the end of /shared/unit.js I would have this code:
        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);
    Then /client/localunit.js would store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't.
    Of course, it is possible to create two event subscription systems - one client-specific and one shared - but then the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
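    For clarity, the mixin helper I have in mind is nothing fancy - a minimal sketch (the helper itself is assumed, not existing code):
        // Copy client-only methods onto the shared constructor's prototype.
        // In the browser there is no module system here, so Unit and
        // LocalUnit are expected to be globals loaded via <script> tags.
        function mixin(Target, methods) {
            for (var name in methods) {
                if (methods.hasOwnProperty(name)) {
                    Target.prototype[name] = methods[name];
                }
            }
        }

        // /client/localunit.js would then look something like:
        var LocalUnit = {
            getSprite: function () { return this.sprite; },
            onDeathInClient: function () { /* announcements, effects... */ }
        };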

    Read the article

  • Why are pieces of my HTML showing up on the page and breaking it? Is it PHP related?

    - by Jason Rhodes
    I've been building a site in PHP, HTML, CSS, and using a healthy dose of jQuery javascript. The site looks absolutely fine on my Mac browsers, but for some reason, when my client uses PC Safari, she's seeing strange bits of my HTML show up on the page. Here are some (small) screenshot examples: Figure 1: This one is just a closing </li> tag that should've been on the Media li element. Not much harm done, but strange. Figure 2: Here this was part of <div class='submenu'> and since the div tag didn't render properly, the entire contents of that div don't get styled correctly by CSS. Figure 3: This last example shows what should have been <a class='top current' href=... but for some reason half of the HTML tag stops being rendered and just gets printed out. So the rest of that list menu is completely broken. Here's the code from the header.php file itself. The main navigation section (seen in the screenshots) is further down, marked by a line of asterisks if you want to skip there. <?php // Setting up location variables if(isset($_GET['page'])) { $page = Page::find_by_slug($_GET['page']); } elseif(isset($_GET['post'])) { $page = Page::find_by_id(4); } else { $page = Page::find_by_id(1); } $post = isset($_GET['post']) ? Blogpost::find_by_slug($_GET['post']) : false; $front = $page->id == 1 ? true : false; $buildblog = $page->id == 4 ? true : false; $eventpage = $page->id == 42 ? true : false; // Setting up content edit variables $edit = isset($_GET['edit']) ? true : false; $preview = isset($_GET['preview']) ? true : false; // Finding page slug value $pageslug = $page->get_slug($loggedIn); ?> <!DOCTYPE html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title> <?php if(!$post) { if($page->id != 1) { echo $page->title." 
| "; } echo $database->site_name(); } elseif($post) { echo "BuildBlog | ".$post->title; } ?> </title> <link href="<?php echo SITE_URL; ?>/styles/style.css" media="all" rel="stylesheet" /> <?php include(SITE_ROOT."/scripts/myJS.php"); ?> </head> <body class=" <?php if($loggedIn) { echo "logged"; } else { echo "public"; } if($front) { echo " front"; } ?>"> <?php $previewslug = str_replace("&edit", "", $pageslug); ?> <?php if($edit) { echo "<form id='editPageForm' action='?page={$previewslug}&preview' method='post'>"; } ?> <?php if($edit && !$preview) : // Edit original ?> <div id="admin_meta_nav" class="admin_meta_nav"> <ul class="center nolist"> <li class="title">Edit</li> <li class="cancel"><a class="cancel" href="?page=<?php echo $pageslug; ?>&cancel">Cancel</a></li> <li class="save"><input style='position: relative; z-index: 500' class='save' type="submit" name="newpreview" value="Preview" /></li> <li class="publish"><input style='position: relative; z-index: 500' class='publish button' type="submit" name="publishPreview" value="Publish" /></li> </ul> </div> <?php elseif($preview && !$edit) : // Preview your edits ?> <div id="admin_meta_nav" class="admin_meta_nav"> <ul class="center nolist"> <li class="title">Preview</li> <li class="cancel"><a class="cancel" href="?page=<?php echo $pageslug; ?>&cancel">Cancel</a></li> <li class="save"><a class="newpreview" href="?page=<?php echo $pageslug; ?>&preview&edit">Continue Editing</a></li> <li class="publish"><a class="publish" href="?page=<?php echo $pageslug; ?>&publishLastPreview">Publish</a></li> </ul> </div> <?php elseif($preview && $edit) : // Return to preview and continue editing ?> <div id="admin_meta_nav" class="admin_meta_nav"> <ul class="center nolist"> <li class="title">Edit Again</li> <li class="cancel"><a class="cancel" href="?page=<?php echo $pageslug; ?>&cancel">Cancel</a></li> <li class="save"><input style='position: relative; z-index: 500' class='save button' type="submit" name="newpreview" value="Preview" /></li> <li class="publish"><input style='position: relative; z-index: 500' class='publish button' type="submit" name="publishPreview" value="Publish" /></li> </ul> </div> <?php else : ?> <div id="meta_nav" class="meta_nav"> <ul class="center nolist"> <li><a href="login.php?logout">Logout</a></li> <li><a href="<?php echo SITE_URL; ?>/admin">Admin</a></li> <li><a href="<?php if($front) { echo "admin/?admin=frontpage"; } elseif($event || $eventpage) { echo "admin/?admin=events"; } elseif($buildblog) { if($post) { echo "admin/editpost.php?post={$post->id}"; } else { echo "admin/?admin=blog"; } } else { echo "?page=".$pageslug."&edit"; } ?>">Edit Mode</a></li> <li><a href="<?php echo SITE_URL; ?>/?page=donate">Donate</a></li> <li><a href="<?php echo SITE_URL; ?>/?page=calendar">Calendar</a></li> </ul> <div class="clear"></div> </div> <?php endif; ?> <div id="public_meta_nav" class="public_meta_nav"> <div class="center"> <ul class="nolist"> <li><a href="<?php echo SITE_URL; ?>/?page=donate">Donate</a></li> <li><a href="<?php echo SITE_URL; ?>/?page=calendar">Calendar</a></li> </ul> <div class="clear"></div> </div> </div> * Main Navigation Section, as seen in screenshots above, starts here ** <div class="header"> <div class="center"> <a class="front_logo" href="<?php echo SITE_URL; ?>"><?php echo $database->site_name(); ?></a> <ul class="nolist main_nav"> <?php $tops = Page::get_top_pages(); $topcount = 1; foreach($tops as $top) { $current = $top->id == $topID ? true : false; $title = $top->title == "Front Page" ? 
"Home" : ucwords($top->title); $url = ($top->title == "Front Page" || !$top->get_slug($loggedIn)) ? SITE_URL : SITE_URL . "/?page=".$top->get_slug($loggedIn); if(isset($_GET['post']) && $top->id == 1) { $current = false; } if(isset($_GET['post']) && $top->id == 4) { $current = true; } echo "<li"; if($topcount > 3) { echo " class='right'"; } echo "><a class='top"; if($current) { echo " current"; } echo "' href='{$url}'>{$title}</a>"; if($children = Page::get_children($top->id)) { echo "<div class='submenu'>"; echo "<div class='corner-helper'></div>"; foreach($children as $child) { echo "<ul class='nolist level1"; if(!$subchildren = Page::get_children($child->id)) { echo " nochildren"; } echo "'>"; $title = ucwords($child->title); $url = !$child->get_slug($loggedIn) ? SITE_URL : SITE_URL . "/?page=".$child->get_slug($loggedIn); if($child->has_published() || $loggedIn) { echo "<li><a class='title' href='{$url}'>{$title}</a>"; if($subchildren = Page::get_children($child->id)) { echo "<ul class='nolist level2'>"; foreach($subchildren as $subchild) { if($subchild->has_published() || $loggedIn) { $title = ucwords($subchild->title); $url = !$subchild->get_slug($loggedIn) ? SITE_URL : SITE_URL . "/?page=".$subchild->get_slug($loggedIn); echo "<li><a href='{$url}'>{$title}</a>"; } } echo "</ul>"; } echo "</li>"; } echo "</ul>"; } echo "</div>"; } echo "</li>"; $topcount++; } ?> </ul> <div class="clear"></div> </div> </div> <div id="mediaLibraryPopup" class="mediaLibraryPopup"> <h3>Media Library</h3> <ul class="box nolist"></ul> <div class="clear"></div> <a href="#" class="cancel">Cancel</a> </div> <div class="main_content"> Does anyone have any idea why the PC Safari browser would be breaking things up like this? I'm assuming it's PHP related but I cannot figure out why it would do that.

    Read the article

  • WiX/Windows Installer: Re-install to a new folder

    - by vitalyval
    1. I am using WiX to create an installer and would like to implement the following behaviour: if a user launches the msi installer and the product is already installed, the wizard works like a pure (first-time) installation, with a few exceptions (e.g. the license agreement screen is omitted). The wizard should allow, for example, changing the installation folder and selecting whether to place a desktop shortcut. I tried to do:
        <Publish Event="ReinstallMode" Value="amus"><![CDATA[INSTALL_MODE = "Change"]]></Publish>
        <Publish Event="Reinstall" Value="ALL"><![CDATA[INSTALL_MODE = "Change"]]></Publish>
    But after installation completes, the product is in the same folder where it was installed the first time, and the desktop icon is in the same state as after the first-time install. MSDN says: "Do not attempt to change the target directory path if some components that use the path are already installed for the current user or for a different user". Is there a way to re-install into another folder and add/remove the desktop icon during re-install?
    2. Is it normal to use the same KeyPath for several components, for example the same registry values for the Desktop and Programs menu shortcuts? MSDN says: "Two components cannot share the same key path value", but compiling and verifying go OK, and I did not discover any problems using the same KeyPaths.
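    (For comparison, the behaviour I am after is what you get by driving a re-install from the command line with the standard Windows Installer properties - a minimal sketch, with the package name as a placeholder:
        msiexec /i MyProduct.msi REINSTALL=ALL REINSTALLMODE=amus
    The two Publish events above are meant to set those same two properties from within the UI sequence.)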

    Read the article

  • I am getting a null pointer exception while publishing a dynamic web project on JBoss using Eclipse

    - by Rozer
    I am getting a null pointer exception while publishing a dynamic web project on JBoss using Eclipse.
    Environment details: Eclipse Version 3.3.2 (Build id: M20080221-1800), JBoss 3.2.3, JDK 1.5.
    I have checked with Google, but nothing worked. Can anyone suggest what the root cause might be? Following is the log trace; maybe it will help in understanding the situation:
        !ENTRY org.eclipse.wst.server.core 4 0 2010-06-13 19:03:44.568
        !MESSAGE Error calling delegate restart() JBOSS 4.0
        !ENTRY org.eclipse.wst.server.core 4 0 2010-06-13 19:39:34.365
        !MESSAGE Could not publish to the server.
        !STACK 0 java.lang.NullPointerException
        at org.eclipse.wst.common.componentcore.internal.util.ComponentUtilities.getDeployUriOfComponent(ComponentUtilities.java:327)
        at org.eclipse.jst.j2ee.internal.deployables.J2EEFlexProjDeployable.getURI(J2EEFlexProjDeployable.java:429)
        at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.guessModuleName(AntPublisher.java:259)
        at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.getPublishProperties(AntPublisher.java:224)
        at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.publish(AntPublisher.java:110)
        at org.eclipse.jst.server.generic.core.internal.GenericServerBehaviour.publishModule(GenericServerBehaviour.java:84)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModule(ServerBehaviourDelegate.java:749)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModules(ServerBehaviourDelegate.java:835)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:669)
        at org.eclipse.wst.server.core.internal.Server.doPublish(Server.java:887)
        at org.eclipse.wst.server.core.internal.Server.publish(Server.java:874)
        at org.eclipse.wst.server.core.internal.PublishServerJob.run(PublishServerJob.java:72)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • CSS style to create an HTML page with an image on the left and input text boxes on the right that fill all remaining space

    - by dafi
    I need to create an HTML page (a jQuery UI dialog, but that isn't the problem) containing an image on the left (size is fixed at 75x75) and some input text boxes on the right; the input boxes must resize to fill all remaining space. You can see an example at http://img683.imageshack.us/img683/5412/19483174.jpg. The problem is that when I resize the dialog, the controls move under the image, as shown at http://img510.imageshack.us/img510/4817/44749696.jpg. How can I resolve this problem in all browsers, if possible using only CSS? Below I show my HTML code and the CSS I'm using.
    HTML code:
        <div id="dialog-form" title="Modify Post">
          <form action="">
            <fieldset>
              <img id="dialog-modify-thumb" src="http://dummyimage.com/75x75/000/fff" alt="" width="75" height="75"/>
              <div id="dialog-modify-controls">
                <label for="dialog-modify-caption">Caption</label>
                <input type="text" name="dialog-modify-caption" id="dialog-modify-caption"/>
                <br/>
                <label for="dialog-modify-tags">Tags</label>
                <input type="text" name="dialog-modify-tags" id="dialog-modify-tags"/>
                <br/>
                <label for="dialog-modify-publish-date">Publish Date</label>
                <input type="text" name="dialog-modify-publish-date" id="dialog-modify-publish-date"/>
              </div>
            </fieldset>
          </form>
        </div>
    The CSS:
        #dialog-modify-thumb { margin-right: 3px; border: 1px solid; display: block; float: left; }
        #dialog-form input[type='text'] { width: 100%; }
        #dialog-modify-controls { float: right; width: 100%; }
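    One common pure-CSS approach would be to stop floating the controls container and let it establish its own block formatting context beside the floated image - a minimal sketch, untested against the exact markup above:
        /* overflow: hidden creates a block formatting context, so this div
           takes up exactly the width remaining beside the floated image
           instead of wrapping under it when the dialog gets narrow. */
        #dialog-modify-controls {
            float: none;
            width: auto;
            overflow: hidden;
        }
        /* With the container sized correctly, width: 100% on the inputs
           then means "100% of the remaining space". */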

    Read the article

  • Communication between modules

    - by David Elentok
    I have an application that consists of the following three modules:
    Search (to search for objects)
    List (to display the search results)
    Painter (to allow me to edit objects) - this module isn't always loaded
    (Each object is a figure that I can edit in the painter.) When I open an object in the painter it's added to the objects that are already there, and I can move it and alter it. I'm using an object similar to the EventAggregator to communicate between the modules. For example, to show the search results I publish a "ShowList" event that is caught by the List module (I'm not sure this is the best way to do this; if anyone has a better idea please comment...). One of the features of the search module requires it to get the selected object in the painter (if the painter is available), and I'm not sure what would be the best way to do that. I thought of these two solutions (a rough sketch of the second follows below):
    1. Whenever the selected object in the painter changes, it publishes a "PainterSelectedObjectChanged" event, which is caught by the search module and stored for later use.
    2. When the selected object is needed, the search module publishes a "RequestingPainterSelectedObject" event, which is caught by the painter module. The painter module then sets the "SelectedObject" property on the EventArgs object, so when the publish completes and we're back in the search module, the painter's selected object is available in the EventArgs object.
    What do you think? What is the right way to do this?
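    A minimal, self-contained sketch of what I mean by the second option - the aggregator here is illustrative, my real EventAggregator-like object differs:
        using System;
        using System.Collections.Generic;

        // Toy synchronous aggregator: handlers run inside Publish(), so a
        // subscriber can fill in properties on the args before it returns.
        public class EventAggregator
        {
            private readonly Dictionary<string, List<Action<object>>> subs =
                new Dictionary<string, List<Action<object>>>();

            public void Subscribe(string name, Action<object> handler)
            {
                List<Action<object>> list;
                if (!subs.TryGetValue(name, out list))
                    subs[name] = list = new List<Action<object>>();
                list.Add(handler);
            }

            public void Publish(string name, object args)
            {
                List<Action<object>> list;
                if (subs.TryGetValue(name, out list))
                    foreach (var handler in list) handler(args);
            }
        }

        public class SelectedObjectArgs { public object SelectedObject { get; set; } }

        // Painter module, at load time:
        //   aggregator.Subscribe("RequestingPainterSelectedObject",
        //       a => ((SelectedObjectArgs)a).SelectedObject = currentSelection);
        //
        // Search module, when it needs the object:
        //   var args = new SelectedObjectArgs();
        //   aggregator.Publish("RequestingPainterSelectedObject", args);
        //   var selected = args.SelectedObject; // null if painter isn't loaded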

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >