Search Results



  • JSF 2 equivalent of tomahawk subform

    - by digitaljoel
    I have an existing JSF 1.2 app that was a portlet running on GlassFish v2. I'm converting it to a webapp running on GlassFish v3. The app uses Tomahawk subforms in several areas. Tomahawk has not been updated for JSF 2, which is what ships with GlassFish v3. We would like to update our app to JSF 2. Is there a JSF 2 equivalent to Tomahawk's subform? I know one option would be to change all of the subforms to be AJAX-enabled and use the execute attribute to specify which controls take part in the form submission, but I would like to make this as straight a conversion as I can without modifying existing functionality if I can. So, lacking a Tomahawk subform substitute, is there any way to specify partial form submission for non-AJAX events?
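    For reference, the AJAX route mentioned above looks roughly like this in JSF 2: the execute attribute of f:ajax limits processing to the named components, which is the closest built-in analogue to a subform's partial submit. Bean name and component ids here are made up for illustration:

        <h:form>
            <h:inputText id="city" value="#{addressBean.city}" />
            <h:inputText id="zip" value="#{addressBean.zip}" />
            <!-- Only "city" and "zip" are processed on submit; the rest of the form is skipped. -->
            <h:commandButton value="Save" action="#{addressBean.save}">
                <f:ajax execute="city zip" render="@form" />
            </h:commandButton>
        </h:form>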

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

    In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL,
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON  P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON  D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON  H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan produced for the test query: [query plan screenshot]

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post.

    In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.

    We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint results in this estimated plan: [query plan screenshot]

    That produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit – I’ll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.

    The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON  D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON  H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON  P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan: [query plan screenshot]

    This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing.

    The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course.)

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing.

    I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let’s look at the hash join plan again: [query plan screenshot]

    Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

    Ok, so now that we understand the problems, what can we do to fix them?
    We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON  H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON  D.SalesOrderID = H.SalesOrderID
            ON  P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It’s there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad’s original post for more details. My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • VB.NET WebBrowser control not firing DocumentComplete with an AJAX web site

    - by ajl
    The VB.NET desktop app uses the IE browser control to navigate the web. When a normal page loads, the DocumentComplete event fires and I can read the resulting page and go from there. The issue I am having is that the page I am driving is written with AJAX, so the DocumentComplete event never fires. Furthermore, when you view the source of the page after it has loaded a new portion via AJAX, it hasn't changed. How are people handling this? What are my options?

    Read the article

  • Cocoon lite / XML and XSLT publishing framework

    - by bambax
    What publishing frameworks (publishing only, NOT full-blown CMS) based on XML, XSLT sitemaps and pipelines exist, are stable, active, and simpler / lighter than Cocoon?

    I have glanced at:

    • mod_xslt (http://www.mod-xslt2.com/), which seemed to be exactly that, but looks all but dead, required a complex setup, and apparently supported only libxslt as an XSLT processor (I'd like to be able to use Saxon and XSLT 2.0, of course).
    • Apache Forrest (http://forrest.apache.org/), but I don't understand whether it is really simpler than Cocoon or is rather an additional thing on top of Cocoon.

    What I'm looking for is something that does just this:

    • receives an HTTP GET
    • "runs it" through a sitemap
    • finds a pipeline: source.xml -> xslt1.xsl -> xslt2.xsl -> ... -> xsltn.xsl -> serialize
    • runs the pipeline
    • serves the serialized result to the client

    and:

    • uses Saxon (or is "processor independent")
    • can be installed "lightly", that is, should not require much more configuration than the sitemap

    Maybe I'm describing an early version of Cocoon, or a future version of an XProc implementation... Anyway, does such a tool exist?
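    For the record, the pipeline shape described above is exactly Cocoon's sitemap vocabulary; a minimal sketch of such a match, with placeholder pattern and file names:

        <map:match pattern="report.html">
          <map:generate src="source.xml"/>
          <map:transform src="xslt1.xsl"/>
          <map:transform src="xslt2.xsl"/>
          <map:serialize type="html"/>
        </map:match>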

    Read the article

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time.

    For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source:

        __kernel void square(__global float* input,
                             __global float* output,
                             const unsigned int count)
        {
            int i = get_global_id(0);
            if (i < count)
                output[i] = input[i] * input[i];
        }

    If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!
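    To illustrate the mechanics being asked about, here is one way the kernel could be staged through local memory. This is a sketch for illustration only: an element-wise square reads each value exactly once, so local memory buys nothing here; it just shows the get_local_id/barrier pattern. The __local buffer's size is set from the host via clSetKernelArg(kernel, 2, bytes_per_group, NULL).

        __kernel void square(__global const float* input,
                             __global float* output,
                             __local float* tile,
                             const unsigned int count)
        {
            int gid = get_global_id(0);
            int lid = get_local_id(0);

            // Each work-group stages its chunk of the input into local memory.
            if (gid < count)
                tile[lid] = input[gid];

            // Make the staged values visible to the whole work-group.
            barrier(CLK_LOCAL_MEM_FENCE);

            if (gid < count)
                output[gid] = tile[lid] * tile[lid];
        }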

    Read the article

  • psexec failing with return code 122 when used from Windows service

    - by Jeremy McGee
    I've written a WCF service as a wrapper around a C# utility that uses the SysInternals psexec utility to run jobs on a remote system. psexec is invoked from C# with command-line parameters that specify the domain, user and password to use.

    All works fine when I invoke the C# utility from PowerShell locally. However, when I run the utility from the WCF service we see a return code of 122, which (I think) corresponds to "The data area passed to a system call is too small". psexec is running against Windows Server 2008. The credentials I'm passing are local administrator, in the same domain as the machine that hosts the service wrapping the utility.

    Read the article

  • What is the equivalent of OnRender in Silverlight?

    - by John Weldon
    I'm working on porting an app from WPF to Silverlight. The app uses custom types derived from FrameworkElement (in WPF) to describe shapes and text to be rendered on a Canvas. The WPF app's root node overrides OnRender() to iterate through a collection of 'child' nodes, calling Render on each child node to build the visual tree. Silverlight doesn't expose OnRender, but there are hints that the same effect can be achieved using ControlTemplate. Is this the way to go, and are there any good examples of using this method available? I've done some googling (binging?) and found nothing really conclusive.

    Read the article

  • How to mount userdata.img or userdata-qemu.img in OS X

    - by misbell
    Disk Utility in OS X easily mounts an SD card image as a device, but not so the other .img files. I want to get the database I just created in the Android emulator off the drive and into my OS X file system. I updated my system with qemu using MacPorts, but no combination I try succeeds. Has anyone figured out how to do this?

    Obviously one way I can do this is to run the app on my phone, then mount the phone as a USB drive. But I don't wanna. I wanna get it off the drive the emulator uses :-) Thanks in advance, folks. Michael
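    One way to sidestep mounting the image entirely: the emulator's filesystem is reachable over adb, so the database file can be copied straight out. A sketch, with placeholder package and database names:

        # With the emulator running, from the OS X host:
        adb pull /data/data/com.example.myapp/databases/mydb.db ~/Desktop/mydb.db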

    Read the article

  • VS2010 Beta 2 > Setup Project > Prerequisites > Missing .NET 3.0

    - by Adam Kane
    Hello, In Visual Studio 2010 Beta 2, in a Setup project, in Prerequisites, there's no option to include the .NET Framework 3.0 prerequisite. How do I change things so that it is an available option? My goal is to create an .msi installer that is launched by a Setup.exe. The Setup.exe should install .NET 3.0 if it's not there. The application that I'm installing uses .NET 3.0 Note: I've tried clicking the "Check Microsoft Update for more redistributable components", but .NET 3.0 wasn't there. Thanks! Adam

    Read the article

  • Good examples of MVVM Template

    - by jwarzech
    I am currently working with the Microsoft MVVM template and find the lack of detailed examples frustrating. The included ContactBook example shows very little Command handling, and the only other example I've found is from an MSDN Magazine article where the concepts are similar but the approach is slightly different, and it still lacks any complexity. Are there any decent MVVM examples that at least show basic CRUD operations and dialog/content switching?

    Everyone's suggestions were really useful and I will start compiling a list of good resources.

    Frameworks/Templates

    • WPF Model-View-ViewModel Toolkit
    • MVVM Light Toolkit
    • Prism
    • Caliburn
    • Cinch

    Useful Articles

    • Data Validation in .NET 3.5
    • Using a ViewModel to Provide Meaningful Validation Error Messages
    • Action based ViewModel and Model validation
    • Dialogs
    • Command Bindings in MVVM
    • More than just MVC for WPF
    • MVVM + Mediator Example Application

    Additional Libraries

    • WPF Disciples' improved Mediator Pattern implementation (I highly recommend this for applications that have more complex navigation)
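    Since much of the friction in these samples is Command plumbing, here is the RelayCommand shape that most of the toolkits listed above revolve around - a minimal WPF sketch, not taken from any one of them:

        using System;
        using System.Windows.Input;

        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Hook into WPF's CommandManager so CanExecute is re-queried automatically.
            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

    A ViewModel then exposes, say, an ICommand SaveCommand property returning new RelayCommand(o => Save(), o => CanSave()), and the view binds a Button's Command to it.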

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:

    • Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
    • Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
    • Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
    • Virtual Machines: Start and stop VMs with suspended billing directly from within Visual Studio
    • Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
    • Service Bus: New high availability options, Notification Hub support, improved VS tooling
    • PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

    All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio’s Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today’s SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

    Visual Studio 2013 Preview Support

    Today’s Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases.

    To install the Windows Azure SDK 2.1 on your local computer, choose the “install the sdk” link from the Windows Azure .NET Developer Center. Then, choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don’t already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

    Visual Studio 2013 VM Image Hosted in the Cloud

    One of the requests we’ve heard from several customers has been to have the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer).

    With today’s SDK update we’ve added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use.

    You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then by selecting the “Visual Studio Ultimate 2013 Preview” template: [screenshot]

    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

    With the Windows Azure SDK 2.1 release you’ll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.
    From the top level node you can now manage your Windows Azure credentials, import a subscription file or filter Server Explorer to only show services from particular subscriptions or regions.

    Note: the Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview.

    Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.

    Subscription and Region Filtering Support

    Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few subscriptions for some period of time, this is a handy way to hide the services from other subscriptions until they become relevant. You can do the same sort of filtering by region.

    To enable this, just select “Filter Services” from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by. In the example below, I’ve decided to show services from my pay-as-you-go subscription within the East US region: [screenshot]

    Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately.

    With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You’ll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

    Other Improvements

    We’ve also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane.

    We also now have full support for creating/deleting/updating storage tables, blobs and queues from directly within Server Explorer. Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio.

    Virtual Machines: Start/Stop within Visual Studio

    Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it.

    This is especially useful in Dev/Test scenarios where you can start a VM – such as a SQL Server – during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it).

    You can also now directly remote desktop into VMs using the “Connect using Remote Desktop” context menu command in VS Server Explorer.
    Cloud Services: Emulator Express with Run as Normal User Support

    You can now launch Visual Studio and run your cloud services locally as a normal user (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode – one instance per role – and it doesn’t require administrative permissions and uses 40% less resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles.

    To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project:

    • On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab.
    • Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS. Emulator Express is not compatible with full IIS.
    • On the Web tab, choose the option for Emulator Express.

    Service Bus: Notification Hubs

    With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android).

    Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here.

    Service Bus: Paired Namespaces

    One of the new features included with today’s Windows Azure SDK 2.1 release is support for Service Bus “Paired Namespaces”. Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example: due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic, which was sometimes tricky to get right.

    Service Bus now supports Paired Namespaces, which enables you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability.

    Service Bus: Tooling Improvements

    In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio’s Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management.

    Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven “message pump” programming model based on the QueueClient.OnMessage method.
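    For illustration, the message-pump shape that generated code follows looks roughly like this (connection string and queue name are placeholders, and the template's actual code differs):

        using System;
        using Microsoft.ServiceBus.Messaging;

        class MessagePump
        {
            static void Main()
            {
                // In a worker role this would come from the role's configuration.
                string connectionString = "Endpoint=sb://...";
                QueueClient client =
                    QueueClient.CreateFromConnectionString(connectionString, "myqueue");

                // OnMessage starts the pump: the client receives messages in a loop
                // and invokes this callback once per message.
                client.OnMessage(message =>
                {
                    Console.WriteLine("Received: " + message.GetBody<string>());
                });

                Console.ReadLine(); // keep the pump alive
            }
        }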
    PowerShell: Tons of New Automation Commands

    Since my last blog post on the previous Windows Azure SDK 2.0 release, we’ve updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We’ve added new cmdlets in the following areas:

    • China instance and Windows Azure Pack support
    • Environment Configuration
    • VMs
    • Cloud Services
    • Web Sites
    • Storage
    • SQL Azure
    • Service Bus

    China Instance and Windows Azure Pack

    We now support the following cmdlets for the China instance and Windows Azure Pack, respectively:

    • China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network
    • Windows Azure Pack: Web Sites, Service Bus

    We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

    Virtual Machines: Stop/Start Virtual Machines

    Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.

    Virtual Machines: VM endpoint ACLs

    We’ve added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint:

    • New-AzureAclConfig
    • Get-AzureAclConfig
    • Set-AzureAclConfig
    • Remove-AzureAclConfig
    • Add-AzureEndpoint -ACL
    • Set-AzureEndpoint -ACL

    The following example shows how to add an ACL rule to an existing endpoint of a VM.
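    A rough sketch of that pattern using the cmdlets above; treat the exact parameter names (-AddRule, -Action, -RemoteSubnet, -Order) as from-memory assumptions to verify against Get-Help, and the service/VM/endpoint names as placeholders:

        # Build an ACL that only permits traffic from one subnet...
        $acl = New-AzureAclConfig
        Set-AzureAclConfig -AddRule -ACL $acl -Action Permit `
            -RemoteSubnet "10.0.0.0/8" -Order 100 -Description "Internal only"

        # ...and apply it to an existing endpoint on a VM.
        Get-AzureVM -ServiceName "mysvc" -Name "myvm" |
            Set-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 -ACL $acl |
            Update-AzureVM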
    Other improvements for Virtual Machine management include:

    • Added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management
    • Added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return
    • Added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints

    Cloud Services: Remote Desktop and Diagnostics

    Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We’ve introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell.

    Windows Azure Cloud Services Remote Desktop extension:

    • New-AzureServiceRemoteDesktopExtensionConfig
    • Get-AzureServiceRemoteDesktopExtension
    • Set-AzureServiceRemoteDesktopExtension
    • Remove-AzureServiceRemoteDesktopExtension

    Windows Azure Cloud Services Diagnostics extension:

    • New-AzureServiceDiagnosticsExtensionConfig
    • Get-AzureServiceDiagnosticsExtension
    • Set-AzureServiceDiagnosticsExtension
    • Remove-AzureServiceDiagnosticsExtension

    The following example shows how to enable Remote Desktop for a Cloud Service: [screenshot]

    Web Sites: Diagnostics

    With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to stream the logs of your Web Sites. Recently, we’ve also added cmdlets to configure Web Site application diagnostics:

    • Enable-AzureWebsiteApplicationDiagnostic
    • Disable-AzureWebsiteApplicationDiagnostic

    The following two examples show how to enable application diagnostics to the file system and to a Windows Azure Storage Table: [screenshots]

    SQL Database

    Previously, you had to know the SQL Database server admin username and password if you wanted to manage databases on that SQL Database server. Recently, we’ve made the experience much easier by not requiring the admin credential if the database server is in your subscription: you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets:

    • Get-AzureSqlDatabase
    • New-AzureSqlDatabase
    • Remove-AzureSqlDatabase
    • Set-AzureSqlDatabase

    We’ve also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses.

    Besides the above experience improvements, we’ve also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details:

    • Get-AzureSqlDatabaseServerQuota
    • Get-AzureSqlDatabaseServiceObjective
    • Set-AzureSqlDatabase -ServiceObjective

    Storage and Service Bus

    Other new cmdlets include:

    • Storage: CRUD cmdlets for Azure Tables and Queues
    • Service Bus: cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub

    Summary

    Today’s release includes a bunch of great features that enable you to build even better cloud solutions. All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET.

    If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • INetCfgComponent::RaisePropertyUi arguments

    - by Soo Wei Tan
    I'm trying to do some COM interop and attempting to invoke the INetCfgComponent::RaisePropertyUi method. I've gotten to the point where I can enumerate devices and get a valid INetCfgComponent for the adapter that I want to display the UI for. However, I'm a COM newbie (let alone COM interop) so I have no idea what the third argument in RaisePropertyUi() is meant to be. I've tried passing in the INetCfgComponent object that I have, but that just results in a InvalidCastException. MSDN has the following to say about the argument: Pointer to the IUnknown interface. RaisePropertyUi retrieves from IUnknown the interface of the context in which to display a network component's property sheet. RaisePropertyUi uses this interface to restrict the display of the property sheet to the context of a connection.

    Read the article

  • Problem using SQLDMO/Vb6 against SQL2008

    - by E.J. Brennan
    I have a client that uses SQLDMO for a portion of a custom application that was written against SQL 2000, and they recently upgraded to SQL 2008. The majority of the app still runs fine (it doesn't use SQLDMO), but the admin functions which rely on SQLDMO stopped working. I installed the SQL 2005 backward compatibility pack, and now SQLDMO partially works, i.e. I can run "select" type queries, but any "update" queries fail with the error message: "To connect to the server you must use SQL Server Management Studio or SQL Server Management Objects (SMO)". Any thoughts? Should the backward compatibility pack give me ALL the functionality back, or is this a known issue? BTW: I realize SQLDMO has been deprecated and will go away next release; none-the-less I need to do what I can to solve the problem at hand.

    Read the article

  • WF4 workflow versioning using WorkflowServiceHost

    - by rwwilden
    Related to this question. I understand how to implement versioning of workflows using WorkflowApplication. If you keep the original XAML definition for older versions of your workflow around, you can load them using the right WorkflowApplication constructor. How could you ensure that WorkflowServiceHost uses the correct workflow definition when you want to host your workflows in IIS? There is a WorkflowServiceHost constructor that you can use to load a workflow definition, but when you are hosting inside IIS through a XAMLX file, you do not call WorkflowServiceHost yourself; this is handled somehow by IIS. So how do I ensure that the correct workflow definition is loaded for the right version of my workflow?

    Read the article

  • FNhibernate, GeneratedBy.HiLo, hibernate_unique_key etc.

    - by csetzkorn
    Hi, I have started using the S#arp Architecture, which uses Fluent NHibernate and GeneratedBy.HiLo to generate primary keys (there is also a hibernate_unique_key table). Apparently this is recommended practice, and I would like to stick with it.

    Now to my problem. I have used NHibernate and hbm mapping quite a bit and usually used identity columns for my primary keys. This allowed me to seed the database using SQL. Can I still do this with the aforementioned setup (hibernate_unique_key table etc.)? I need to do this as a SQL insert is much more efficient than using NHibernate + C# to seed the db with a million entities. Any feedback would be very much appreciated. Thanks. Christian
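    If the ids are assigned by hand during seeding, the main thing to guard is the hi value: after the bulk insert, push hibernate_unique_key past anything inserted so NHibernate's next id block cannot collide. A sketch, with a hypothetical Customers table - the safe next_hi depends on your max_lo setting, since HiLo ids are derived from next_hi and max_lo together:

        -- Seed outside NHibernate with explicit ids.
        INSERT INTO Customers (Id, Name) VALUES (1, 'Seed customer');
        -- ... a million more ...

        -- Then bump the hi value well past MAX(Id) / max_lo so the next
        -- block NHibernate reserves starts above the seeded range.
        UPDATE hibernate_unique_key SET next_hi = 2000;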

    Read the article

  • Security Level for WebBrowser control in C#

    - by jaywon
    I am trying to migrate an .hta application to a C# executable. Of course, since it's an .hta, the code is all HTML and JScript, with calls to local ActiveX objects. I created a C# executable project and am just using the WebBrowser control to display the HTML content. I simply renamed the .hta to an .html and took out the HTA declarations. Everything works great, except that when I make calls to the ActiveX objects, I get a security popup warning of running an ActiveX control on the page. I understand why this is happening, since the WebBrowser control is essentially IE and uses the Internet Options security settings, but is there any way to get the WebBrowser control to bypass security popups, or a way to register the executable or DLLs as being trusted without having to change settings in Internet Options? Even a way to do it in a deployment package would work as well.

    Read the article

  • jQuery Block UI: node is undefined

    - by stef
    I'm close to finishing an app that uses quite a bit of JS. Recently Firebug started throwing an error that says "node is undefined", referring to data.parent = node.parentNode; on line 209 of the jQuery blockUI plugin, version 2.31 (06-JAN-2010), which @requires jQuery v1.2.3 or later. I'm using jQuery 1.4.2.

    When I remove the code from my page that triggers the blockUI action, the error is still there. So it does not seem to be an issue in my code but an "error" in the file itself, or perhaps some kind of conflict with another file? For info, my code is below. My IDE is highlighting some syntax errors in here, but it does that even when there are none. Perhaps I'm missing it?

        $.blockUI({
            css: {
                border: 'none',
                padding: '25px',
                backgroundColor: '#fff',
                '-webkit-border-radius': '10px',
                '-moz-border-radius': '10px',
                opacity: 1,
                color: '#000',
                cursor: 'auto'
            },
            message: $('#block_ui_msg'),
        });

    EDIT: I just replaced the blockUI file with the latest version, 2.33 (29-MAR-2010); the error still occurs, but this time on line 210.

    Read the article

  • SQLXML with Windows 2008 and SQL Server 2008

    - by Rafa G. Argente
    Hi all, I have an application that uses SQLXML to access data in the database. We have it working on a Windows 2003 server and SQL Server 2005. Now the client wants to install it on Windows 2008 and SQL Server 2008, and we are getting errors like:

        Microsoft.Data.SqlXml.SqlXmlException: Class not registered
        ---> System.Runtime.InteropServices.COMException (0x80040154): Class not registered
           at Microsoft.Data.SqlXml.Common.UnsafeNativeMethods.ISQLXMLCommandManagedInterface.ExecuteToOutputStream()
           at Microsoft.Data.SqlXml.SqlXmlCommand.innerExecute(Stream strm)
           ... etc etc

    This is driving me crazy. SQLXML is quite an obsolete technology, and we are trying to use it with the latest OS. I can't find official information about SQLXML and Windows 2008; it seems it's not officially supported, but they don't say it's not supported either. The SQLXML 4.0 SP1 installation seems to work fine, but it seems to fail at runtime. Do you have any ideas? Has someone tried anything like this?

    Read the article

  • Intellij Community can't use http proxy for Maven

    - by MikeHoss
    I have IntelliJ IDEA Community installed on a Linux box that needs to use an authenticated proxy to get to the Internet. I have a system-wide proxy on the box that works, and I have the proxy configured in ~/.m2/settings.xml. Maven correctly uses the proxy when I try it from the command line. I also have the proxy configured within IntelliJ, and it gives me the plugins listing correctly.

    But when I try to sync with the Maven repository within IntelliJ I keep getting this:

        [WARNING] Unable to get resource 'org.codehaus.mojo:hibernate3-maven-plugin:pom:2.2' from repository restlet (http://maven.restlet.org): Authorization failed: Not authorized by proxy.

    I went to Settings -> Maven and put in the proxy info as properties, and that didn't work. I can see by looking at those settings that IntelliJ is reading my ~/.m2/settings.xml fine, because it knows where my local repo is (it's in a non-standard place). Does anyone know how I can get this working?
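    For comparison, the proxy block Maven itself reads from ~/.m2/settings.xml looks like this (host and credentials are placeholders); it is worth confirming that IntelliJ's Maven integration is pointed at this same settings file rather than a bundled default:

        <settings>
          <proxies>
            <proxy>
              <id>corp-http</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <username>proxyuser</username>
              <password>secret</password>
              <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
            </proxy>
          </proxies>
        </settings>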

    Read the article

  • How do I specify a WCF behaviorExtension class type without the assembly version number?

    - by Mr Bell
    I have a web app that uses a WCF service that relies on a behaviorExtension like so:

        <behaviorExtensions>
            <add name="clientCredentialsExtension"
                 type="Simon.Web.Giftcard.WCFSecurity.ClientCredentialsExtensionElement, Simon.Web.Giftcard, Version=1.0.3736.20411, Culture=neutral, PublicKeyToken=null"/>
        </behaviorExtensions>

    The problem is that this web app's version changes with every compile (I think), invalidating this entry. How can I avoid having to change the version number every time I compile? Can I specify the extension in code somewhere?
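    A version of the form 1.0.3736.20411 is the pattern produced by a wildcard AssemblyVersion, so if that is the cause here, pinning the version in AssemblyInfo.cs keeps the type string in the config valid across compiles - a sketch, assuming the wildcard is indeed in use:

        using System.Reflection;

        // Before: build and revision change on every compile.
        // [assembly: AssemblyVersion("1.0.*")]

        // After: fixed, so "Version=1.0.0.0" in the behaviorExtensions entry stays correct.
        [assembly: AssemblyVersion("1.0.0.0")]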

    Read the article

  • Enable PDO for PHP5 on CentOS5 where PHP is configured as '--disable-pdo'

    - by jakenoble
    Hi, I have been given access to a CentOS 5 machine by my client for their new site, which uses Zend Framework. phpinfo() states in Configure Command that PDO is disabled ('--disable-pdo'). How can I enable it? Do I need to recompile PHP 5 to enable it? I have tried adding 'extension=pdo.so' in php.ini and restarting Apache, but this didn't work.

    It would also be nice to understand what '--disable-pdo' actually means: does it mean PDO is not compiled into PHP, or does it mean it's just not enabled? Thanks for your time

    Read the article

  • .net WCF DateTime

    - by freddy smith
    I want to be able to safely send DateTimes over WCF without having to worry about any implicit or automatic time-zone conversions. The dates I want to send are "logical" dates: year, month, day; there should be no time component. I have various server processes and client processes that run on a variety of machines with different time-zone settings.

    What I would like to ensure is that when a user enters a date in a text field (or uses a date picker component) on any client, exactly what they see at data entry is what is sent over WCF, used by the server, and seen by other clients when they request the data. I am a little confused by the various questions and answers on this site concerning DateTime.Kind (Unspecified, Utc, Local).
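    One approach often used for "logical" dates: force DateTimeKind.Unspecified at the contract boundary. DataContractSerializer emits no UTC offset for Unspecified values, so nothing is converted on either side of the wire. A minimal sketch (type and member names are hypothetical):

        using System;
        using System.Runtime.Serialization;

        [DataContract]
        public class Booking
        {
            private DateTime _effectiveDate;

            [DataMember]
            public DateTime EffectiveDate
            {
                get { return _effectiveDate; }
                set
                {
                    // Strip the time component and any Kind so no time-zone
                    // conversion happens during (de)serialization.
                    _effectiveDate = DateTime.SpecifyKind(value.Date, DateTimeKind.Unspecified);
                }
            }
        }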

    Read the article

  • Chat application: PubSubHubbub vs XMPP

    - by sofia
    I'm unsure on the best stack to build a chat application. Currently I'm thinking of two main options:

    • Facebook Tornado
      - cons: does not use the main chat protocol, XMPP, but PubSubHubbub
      - pros: I really like its simplicity for development (web server + web framework); PubSubHubbub also seems simpler as a protocol than XMPP; and I know Python
    • XMPP + BOSH, Punjab, ejabberd
      - cons: I don't know Erlang; overall seems a bit harder to develop
      - pros: uses the XMPP protocol

    The chat app will need to have the following:

    • Private messages
    • Public rooms
    • Private rooms
    • Chat history for rooms (not forever, just the last n messages)
    • HTML embedding
    • URL to chat room

    Both options seem scalable, so that's not really my worry (we're thinking of running the app in Amazon's EC2 as well). I know there's a project that builds an XMPP server using Tornado, but it's not ready for production use and our deadline isn't that big.

    Basically my main worry is ease of development vs somehow regretting later using PubSubHubbub to develop a chat app - but I read somewhere that PubSubHubbub might eventually replace XMPP as REST replaced SOAP - so what do you think?

    Read the article

  • new[n] and deleting each element with delete instead of the whole chunk with delete[]

    - by pmr
    Is this valid C++ (e.g. not invoking UB), and does it achieve what I want without leaking memory? valgrind complains about mismatched free and delete but says "no leaks are possible" in the end.

        int main()
        {
            int* a = new int[5];
            for (int i = 0; i < 5; ++i)
                a[i] = i;
            for (int i = 0; i < 5; ++i)
                delete &a[i];
        }

    The reason I'm asking: I have a class that uses boost::intrusive::list, and I new every object that is added to that list. Sometimes I know how many objects I want to add to the list, and I was thinking about using new[] to allocate a chunk and still be able to delete every object on its own with the Disposer style of boost::intrusive.
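    For the "one chunk, per-object teardown" goal, the standard-conforming tool is placement new: allocate the block once, construct each element in place, destroy elements individually, and release the block exactly once. A sketch (Node stands in for the real list element type; a boost::intrusive Disposer would then call the destructor rather than delete):

        #include <cstddef>
        #include <new>

        struct Node { int value; };

        int main()
        {
            const std::size_t n = 5;

            // One raw allocation for the whole chunk.
            void* raw = ::operator new(n * sizeof(Node));
            Node* nodes = static_cast<Node*>(raw);

            // Construct each object in place.
            for (std::size_t i = 0; i < n; ++i)
                new (&nodes[i]) Node();

            // Objects may be destroyed individually, in any order...
            for (std::size_t i = 0; i < n; ++i)
                nodes[i].~Node();

            // ...but the chunk itself is released exactly once.
            ::operator delete(raw);
        }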

    Read the article

  • Access virtual hosts over LAN (also in XP Mode (Virtual PC))

    - by Pheter
    Hi, I am running WAMP on my computer (the host). I have set up several virtual hosts in Apache, and they are working fine when I access them from the same computer (the host).

    I have installed Windows XP Mode on my computer (which is running Windows 7). XP Mode (which uses Virtual PC) is set up to use a NAT network. The network in XP Mode is working fine, and I can access the host PC via the IP address 192.168.1.5, just as I would from any physical computer on the same network. I can view all the web pages at 192.168.1.5 and its subdirectories. However, I cannot access any of the subdomains that are configured in the virtual hosts of the host computer.

    How can I access the subdomains? I don't think the fact that I am using XP Mode and a virtualized OS has anything to do with it, but I thought it was worth mentioning.
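    Name-based virtual hosts are selected by the Host header, so the XP Mode guest has to be able to resolve each virtual host name to the host PC's address; the usual fix is a hosts-file entry inside the guest (the names below are placeholders for your ServerName values):

        # C:\WINDOWS\system32\drivers\etc\hosts  (inside the XP Mode VM)
        192.168.1.5    site1.local
        192.168.1.5    site2.local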

    Read the article
