Search Results

Search found 21548 results on 862 pages for 'url mapping'.

Page 265 of 862

  • Domain Forwarding | A Magento Store

    - by WaZ
    I have installed Magento inside a folder called magento. The URL of the site currently looks like this: http://gios.azamdevelopment.co.uk/magento/ We want our domain forwarded to the above URL, and moreover any relative links should work as well, e.g. http://gios.azamdevelopment.co.uk/magento/customer/account/login/ should ideally be www.giosconcept.com/customer/account/login and so forth. Thanks very much.
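
    One way to do this (a sketch, not from the original post) is to point www.giosconcept.com at the same server and silently rewrite requests into the magento folder, assuming Apache with mod_rewrite and an .htaccess file in the document root one level above /magento:

      # Hypothetical .htaccess sketch - the host name and folder are taken from the question
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?giosconcept\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/magento/
      RewriteRule ^(.*)$ /magento/$1 [L]

    Magento's configured base URLs would typically also need to be set to http://www.giosconcept.com/ so the links it generates no longer expose the /magento/ path.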

    Read the article

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

    Read the article

  • Exclude regular expression from virtual host

    - by Joao Trindade
    I have a virtual host in Apache which is redirecting requests to another web server:

      <VirtualHost *:80>
        DocumentRoot /var/www
        ServerName another.host
        ProxyPass / http://another.host2:8081/
        ProxyPassReverse / http://another.host2:8081/
      </VirtualHost>

    I need to exclude a URL pattern from being caught by this virtual host. Basically I don't want requests with the URL http://another.host:8081/~username to be forwarded to the other server. Can this be done?
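
    A possible answer (a sketch, not from the original question): Apache's ProxyPass supports an exclusion form using "!", and exclusions have to appear before the general rule, so something like the following should leave /~username requests to the local server, assuming all such paths start with /~:

      <VirtualHost *:80>
        DocumentRoot /var/www
        ServerName another.host
        # Exclusion must come before the catch-all ProxyPass
        ProxyPass /~ !
        ProxyPass / http://another.host2:8081/
        ProxyPassReverse / http://another.host2:8081/
      </VirtualHost>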

    Read the article

  • negative regexp in Squirm (for Squid). Possible?

    - by alex8657
    Has anyone managed to do a negative regexp (or part of one) with Squirm? I tried negative lookahead tricks and if-then-else regexps, but Squirm 1.26 fails to understand them. What I want to do is simply:
    * If the URL begins with 'http://' and contains 'account', then rewrite/redirect to 301:https://
    * If the URL begins with 'https://' and does NOT contain 'account', then rewrite/redirect to 301:http://
    So far, I do that using two lines of Perl, but a Squirm redirector would take less memory.

    Read the article

  • "Save as webpage, complete" problem ?

    - by Adelave
    I have a problem when I try to save a web page from the browser using "Save as Webpage, complete": some of the CSS/JavaScript/images are not saved, so when I reopen the saved page offline it can't display properly. How do I solve this? How do I make a web page that can be saved properly from the browser? I don't want to use the MHT extension, because my situation is that I need to give a URL to a web designer so they can open the URL, save the web page, and modify the CSS/HTML. Thanks

    Read the article

  • Can I keep Google from stealing my cursor? (Firefox)

    - by LinkTiger
    I have iGoogle as my home page. Every time that I start up Firefox with the intent to go to a specific page, I end up typing half the URL in the Google search box when iGoogle steals focus away from the URL bar. Is there any way to hack Firefox (or iGoogle) to keep the page from stealing my cursor on load? Thanks!

    Read the article

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but have the URL still show www.company.com. Is this possible in haproxy?

      backend new_marketing_server
        *** set default URL to /something/something ***
        mode http
        balance roundrobin
        timeout server 10m
        option httpclose
        server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
        server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
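
    One possible approach (a sketch, not from the original question), assuming a HAProxy release new enough to support http-request set-path (1.6 or later); older versions would need a reqrep rewrite instead:

      backend new_marketing_server
        mode http
        balance roundrobin
        timeout server 10m
        option httpclose
        # Internally rewrite the bare "/" request; the browser's address bar is untouched
        http-request set-path /something/something if { path / }
        server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
        server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000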

    Read the article

  • I need to move part of my website to another web server, how can I do this and keep the same Domain

    - by hamlin11
    I need to move a section of my website to another server because it is taxing our current web server. However, I cannot afford to lose page rank on any pages within the section of the site that must be moved. Furthermore, the URL of the pages must not be changed... visitors must still see the same URL, even though it would be served up by different hardware from a different data center. How can this be accomplished? Thanks
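
    One common way to do this (a sketch, not from the original question) is to keep the current server as the public entry point and reverse-proxy only the heavy section to the new hardware, so visitors and search engines keep seeing the original URLs. With Apache and mod_proxy, for example (the /heavy-section/ path and backend host are placeholders):

      ProxyPass        /heavy-section/ http://new-server.internal/heavy-section/
      ProxyPassReverse /heavy-section/ http://new-server.internal/heavy-section/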

    Read the article

  • JqGrid - AfterInsertRow, setCell: programmatically change the content of the cell

    - by oirfc
    Hello there, I am new to jqGrid, so please bear with me. I am having some problems with styling the cells when I use a showlink formatter. In my configuration I set up afterInsertRow and it works fine if I just display simple text:

      afterInsertRow: function(rowid, aData) {
        if (aData.Security == 'C') {
          jQuery('#list').setCell(rowid, 'Doc_Number', '', { color: 'red' });
        } else {
          jQuery('#list').setCell(rowid, 'Doc_Number', '', { color: 'green' });
        }
      }, ...

    This code works just fine, but as soon as I add a formatter {'Doc_Number', ..., formatter: 'showlink', formatoptions: {baseLinkUrl: 'url.aspx'}} the above code doesn't work, because a new element is added to the cell: <a href='url.aspx'>cellValue</a> Is it possible to programmatically access the new child element using something like the code above and change the style, e.g. <a href='url.aspx' style='color: red;'>cellValue</a>? Thanks in advance, oirfc

    Read the article

  • ASP.NET MVC - HttpPost to ReturnURL after redirect

    - by JP
    Hello, I am writing an ASP.NET MVC 2.0 application which requires users to log in before placing a bid on an item. I am using an action filter to ensure that the user is logged in and, if not, send them to a login page and set the return URL. Below is the code I use in my action filter:

      if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
      {
          filterContext.Result = new RedirectResult(String.Concat("~/Account/LogOn", "?ReturnUrl=", filterContext.HttpContext.Request.RawUrl));
          return;
      }

    In my logon controller I validate the user's credentials, then sign them in and redirect to the return URL:

      FormsAuth.SignIn(userName, rememberMe);
      if (!String.IsNullOrEmpty(returnUrl))
      {
          return Redirect(returnUrl);
      }

    My problem is that this will always use a GET (HttpGet) request, whereas my original submission was a POST (HttpPost) and should always be a POST. Can anyone suggest a way of passing this URL including the HttpMethod, or any workaround to ensure that the correct HttpMethod is used? Thanks in advance, JP
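
    One possible workaround (a sketch, not from the original question): a redirect can only ever produce a GET, so the POST body has to be carried across the login round trip yourself, for example by stashing Request.Form in TempData inside the filter and replaying it after sign-in. The key name "PendingPostBody" below is purely illustrative:

      // In the authorization filter, before redirecting to the login page
      if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
      {
          if (filterContext.HttpContext.Request.HttpMethod == "POST")
          {
              // Request.Form.ToString() yields the url-encoded body;
              // TempData keeps it for the next request only.
              filterContext.Controller.TempData["PendingPostBody"] =
                  filterContext.HttpContext.Request.Form.ToString();
          }
          filterContext.Result = new RedirectResult(String.Concat("~/Account/LogOn",
              "?ReturnUrl=", filterContext.HttpContext.Request.RawUrl));
          return;
      }

    After a successful sign-in, instead of Redirect(returnUrl), the logon action could render a small view containing an auto-submitting form that re-posts the saved fields to returnUrl, which preserves the original HttpMethod.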

    Read the article

  • WebView and Cookies on Android

    - by dave smith
    I have an application on appspot that works fine through a regular browser; however, when used through an Android WebView, it cannot set and read cookies. I am not trying to get cookies "outside" this web application, BTW; once the URL is visited by the WebView, all processing, ids, etc. can stay there. All I need is session management inside that application. The first screen also loads fine, so I know WebView + server interactivity is not broken. I looked at the WebSettings class, and there was no call like setEnableCookies. I load the URL like this:

      public class MyActivity extends Activity {
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              WebView webview = new WebView(this);
              setContentView(webview);
              webview.loadUrl([MY URL]);
          }
          ..
      }

    Any ideas?

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I need to construct a complex type from a source of a NameValueCollection.  A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract.  I realized today that as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly meaning you can query cross domain services.  My original implementation for POX and JSON used the POST method and this immediately rules out JSONP as from reading, JSONP only works with GET Requests. This then raised another issue.  The current operation contract of Agatha and one of its main benefits is that you can supply an array of Request objects in a single request, limiting the about of server requests you need to make.  Now, at the present time I am thinking that this will not be the case for the REST imlementation but will yield the benefits of the fact that : The same Request objects can be used for SOAP and RST (POX, JSON) The construct of the JavaScript functions will be simpler and more readable It will enable the use of JSONP for cross domain REST Services The current contract for the Agatha WcfRequestProcessor is at time of writing the following: [ServiceContract] public interface IWcfRequestProcessor { [OperationContract(Name = "ProcessRequests")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] Response[] Process(params Request[] requests); [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] void ProcessOneWayRequests(params OneWayRequest[] requests); }   My current proposed solution, and at the very early stages of my concept is as follows: [ServiceContract] public interface IWcfRestJsonRequestProcessor { [OperationContract(Name="process")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] [WebGet(UriTemplate = "process/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] Response[] Process(string name, NameValueCollection parameters); [OperationContract(Name="processoneway",IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] void ProcessOneWayRequests(string name, NameValueCollection parameters); }   Now this part I have not yet implemented, it is the preliminart step which I have developed which will allow me to take the name of the Request Type and the NameValueCollection and construct the complex type which is that of the Request which I can then supply to a nested instance of the original IWcfRequestProcessor  and work as it should normally.  To give an example of some of the urls which you I envisage with this method are: http://www.url.com/service.svc/json/process/getweather/?location=london http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1 http://www.url.om/service.svc/json/process/sayhello/?name=andy Another reason why my direction has gone to a single request for the REST implementation is because of restrictions which are imposed by browsers on the length of the url.  From what I have read this is on average 2000 characters.  
I think that this is a very acceptable usage limit in the context of using 1 request, but I do not think this is acceptable for accommodating multiple requests chained together.  I would love to be corrected on that one, I really would but unfortunately from what I have read I have come to the conclusion that this is not the case. The mapping function So, as I say this is just the first pass I have made at this, and I am not overly happy with the try catch for detecting types without default constructors.  I know there is a better way but for the minute, it escapes me.  I would also like to know the correct way for adding mapping functions and not using the anonymous way that I have used.  To achieve this I have used recursion which I am sure is what other mapping function use. As you do have to go as deep as the complex type is. public static object RecurseType(NameValueCollection collection, Type type, string prefix) { try { var returnObject = Activator.CreateInstance(type); foreach (var property in type.GetProperties()) { foreach (var key in collection.AllKeys) { if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length) { var propertyNameToMatch = String.IsNullOrEmpty(prefix) ? key : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1); if (property.Name == propertyNameToMatch) { property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null); } else if(property.GetValue(returnObject,null) == null) { property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null); } } } } return returnObject; } catch (MissingMethodException) { //Quite a blunt way of dealing with Types without default constructor return null; } }   Another thing is performance, I have not measured this in anyway, it is as I say the first pass, so I hope this can be the start of a more perfected implementation.  I tested this out with a complex type of three levels, there is no intended logical meaning to the properties, they are simply for the purposes of example.  You could call this a spiking session, as from here on in, now I know what I am building I would take a more TDD approach.  OK, purists, why did I not do this from the start, well I didn’t, this was a brain dump and now I know what I am building I can. The console test and how I used with AutoMapper is as follows: static void Main(string[] args) { var collection = new NameValueCollection(); collection.Add("Name", "Andrew Rea"); collection.Add("Number", "1"); collection.Add("AddressLine1", "123 Street"); collection.Add("AddressNumber", "2"); collection.Add("AddressPostCodeCountry", "United Kingdom"); collection.Add("AddressPostCodeNumber", "3"); AutoMapper.Mapper.CreateMap<NameValueCollection, Person>() .ConvertUsing(x => { return(Person) RecurseType(x, typeof(Person), null); }); var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection); Console.WriteLine(person.Name); Console.WriteLine(person.Number); Console.WriteLine(person.Address.Line1); Console.WriteLine(person.Address.Number); Console.WriteLine(person.Address.PostCode.Country); Console.WriteLine(person.Address.PostCode.Number); Console.ReadLine(); }   Notice the convention that I am using and that this method requires you do use.  Each property is prefixed with the constructed name of its parents combined.  This is the convention used by AutoMapper and it makes sense. 
I can also think of other uses for this, including using it with ASP.NET MVC ModelBinders for creating a complex type from the QueryString, which is itself a NameValueCollection. Hope this is of some help to people, and I would welcome any code reviews you could give me. References: Agatha : http://code.google.com/p/agatha-rrsl/ AutoMapper : http://automapper.codeplex.com/   Cheers for now, Andrew   P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • perl xml parser get xml content within xml

    - by user391986
    How can I use XMLParser to get the item-@url, item-@replace and item-"value inside" for the content as a string of the node where item-@cone="one"? <cstep> <item cone="one" url="http://google.com/{ccc}/cthree" replace="{ccc}"> <itemsub conesub="conesub"> <itemsubsub conesubsub="conesubsub" /> </itemsub> </item> <item cone="two" url="http://google.com/{ccc}/cthree" replace="{ccc}"> <itemsub conesub="conesub"> <itemsubsub conesubsub="conesubsub" /> </itemsub> </item> </cstep>

    Read the article

  • Reporting Services 2010 RDLC: Passing Querystring Parameters from an RDLC

    - by Brian MacKay
    I'm trying to build a simple RDLC report that shows some data, and has a 'select' link that sends the browser off to a certain URL with some data in the querystring (a key). In the VS2010 report designer, I can double-click on the column, then select Action, and there are a bunch of things that seem like they might work. But none of them do. Under 'Enable as a hyperlink' I can pick 'Go to URL', but there aren't any parameter options to pass. I also tried 'Go to report' on the off chance that I could trick it into doing what I want. Here there are parameter options, but it knows that my URL is not a report and the "select" link renders as text (not clickable). Any ideas? I'm pretty sure this used to work in VS2008, and it seems like something that must be doable. But I've been pulling out my hair for several hours on this one.
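
    A hedged note (not from the original question): the 'Go to URL' action does not take a separate parameter list because the whole target is an expression, so the querystring can be built directly in the expression. The page name and field below are placeholders:

      ="details.aspx?id=" & Fields!Key.Value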

    Read the article

  • WebClient The request was aborted: Could not create SSL/TLS secure channel

    - by Tomas
    I am using WebClient in an ASP.NET app to call a secured PayPal URL to create a payment button. While calling the secured PayPal URL I get the error below. How do I solve this problem? Do I need to purchase a certificate just to call a secured URL?

      The request was aborted: Could not create SSL/TLS secure channel.

    My code:

      ServicePointManager.Expect100Continue = true;
      ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
      using (var client = new WebClient())
      {
          var postBytes = Encoding.ASCII.GetBytes(param);
          client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
          responseBytes = client.UploadData(_paymentProcessorCredentials.PayPalApiUrl, "POST", postBytes);
      }
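
    One thing worth checking (a sketch, not from the original question): no purchased certificate is needed just to call an HTTPS URL as a client. This failure is often caused by forcing the obsolete SSL 3.0 protocol, which PayPal's endpoints reject; letting the channel negotiate TLS usually clears it. SecurityProtocolType.Tls12 requires .NET 4.5 or later; older frameworks can use SecurityProtocolType.Tls:

      ServicePointManager.Expect100Continue = true;
      // Negotiate TLS instead of forcing SSL 3.0
      ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
      using (var client = new WebClient())
      {
          var postBytes = Encoding.ASCII.GetBytes(param);
          client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
          responseBytes = client.UploadData(_paymentProcessorCredentials.PayPalApiUrl, "POST", postBytes);
      }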

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • FB.ui and setting popup size

    - by manuelpedrera
    I am using FB.ui with the display parameter set to popup. When the method is 'stream.publish', it autoresizes when the content is loaded. However, when using 'fbml.dialog' (in order to display a multi-friend selector) it shows a size that I'm not able to change (and the content is displayed cropped). I have tried with the following approaches, with no luck: FB.ui({ method: 'fbml.dialog', size: {width: 800, height: 500}, ... FB.ui({ method: 'fbml.dialog', width: 800, height: 500, ... Also I've been looking at the API source code, and it declares the method this way: Method declaration: 'fbml.dialog': { size : { width: 575, height: 300 }, url : 'render_fbml.php', loggedOutIframe : true }... Functions that executes the methods: // the basic call data var call = { cb : cb, id : id, size : method.size || {}, url : FB._domain.www + method.url, params : params }; Any help would be much appreciated...

    Read the article

  • MS CRM Register a plugin

    - by mwright
    I am trying to register a plugin for MS CRM; the situation is as follows. It's an IFD deployment, and every time I connect using the Microsoft-provided plugin registration tool I get the following error message:

      Unhandled Exception: System.Net.WebException: The request failed with the error message: -- <html><head><title>Object moved</title></head><body> <h2>Object moved to <a href="https://URL/signin.aspx?targeturl=https%3a%2f%2fURL%2fMSCrmServices%2f2007%2fIFD%2fcrmdiscoveryservice.asmx%2fmscrmservices%2f2007%2fad%2fcrmdiscoveryservice.asmx">here</a>.</h2> </body></html>

    The link that I'm using to connect looks like this: https://URL/MSCrmServices/2007/IFD/crmdiscoveryservice.asmx Can anyone give me some direction?

    Read the article

  • Escape apostrophe when passing parameter in onclick event

    - by RememberME
    I'm passing the company name to an onclick event. Some company names have apostrophes in them. I added .Replace("'", "&#39;") to the company_name field. This allows the onclick event to fire, but the confirm message displays as "Jane&#39;s Welding Company".

      <a href="#" onclick="return Actionclick('<%= Url.Action("Activate", new {id = item.company_id}) %>', '<%= Html.Encode(item.company1.company_name.Replace("'", "&#39;")) %>');" class="fg-button fg-button-icon-solo ui-state-default ui-corner-all"><span class="ui-icon ui-icon-refresh"></span></a>

      <script type="text/javascript">
      function Actionclick(url, companyName) {
          if (confirm('This action will activate this company\'s primary company (' + companyName + ') and all of its other subsidiaries. Continue?')) {
              location.href = url;
          };
      };
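
    A hedged alternative (a sketch, not from the original question): the value needs JavaScript string escaping rather than HTML entity encoding, so the apostrophe becomes \' inside the literal. On .NET 4 and later, HttpUtility.JavaScriptStringEncode does this; on earlier versions a manual Replace("'", "\\'") achieves the same:

      <a href="#" onclick="return Actionclick('<%= Url.Action("Activate", new {id = item.company_id}) %>',
          '<%= HttpUtility.JavaScriptStringEncode(item.company1.company_name) %>');"
          class="fg-button fg-button-icon-solo ui-state-default ui-corner-all"><span class="ui-icon ui-icon-refresh"></span></a>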

    Read the article

  • What is the meaning of NSXMLParserErrorDomain error 5?

    - by mobibob
    OK, I am back on this task. I have my XML properly downloading from my web server with a URL pointing to the server's file; however, when I detect the network is 'unreachable' I simply point the URL to my application's local XML and I get the following error (N.B. the file is a direct copy of the one on the server). I cannot find a detailed description, but I think it is saying that the URL is pointing to an inaccessible location. Am I storing this resource in the wrong location? I think I want it in the HomeDirectory / Library??

    Debug output:

      loadMyXml: /var/mobile/Applications/950569B0-6113-48FC-A184-4F1B67A0510F/MyApp.app/SampleHtml.xml
      2009-10-14 22:08:17.257 MyApp[288:207] Wah! It didn't work. Error Domain=NSXMLParserErrorDomain Code=5 "Operation could not be completed. (NSXMLParserErrorDomain error 5.)"
      2009-10-14 22:08:17.270 MyApp[288:207] Operation could not be completed. (NSXMLParserErrorDomain error 5.)

    Read the article

  • How to populate a drop down list in Spring MVC

    - by GigaPr
    Hi, would like to populate a drop down list on a jsp page i have my page that looks like <form:form method="POST" action="addRss.htm" commandName="addNewRss" cssClass="addUserForm"> <div class="floatL"> <div class="padding5"> <div class="fieldContainer"> <strong>Title:</strong>&nbsp; </div> <form:errors path="title" cssClass="error"/> <form:input path="title" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Description:</strong>&nbsp; </div> <form:errors path="description" cssClass="error"/> <form:input path="description" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Language:</strong>&nbsp; </div> <form:errors path="language" cssClass="error"/> <form:select path="language" cssClass="textArea" /> </div> </div> <div class="floatR"> <div class="padding5"> <div class="fieldContainer"> <strong>Link:</strong>&nbsp; </div> <form:errors path="link" cssClass="error"/> <form:input path="link" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url:</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> </div> <input type="submit" class="floatR" value="Add New Rss"> </form:form> and my controller public class AddRssController extends BaseController { private static final String[] LANGUAGES = { "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "DC", "FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT", "VA", "VT", "WA", "WV", "WI", "WY" }; public AddRssController() { setCommandClass(RSS.class); setCommandName("addNewRss"); } @Override protected Object formBackingObject(HttpServletRequest request) throws Exception { RSS rantForm = (RSS) super.formBackingObject(request); // rantForm.setVehicle(new Vehicle()); return rantForm; } @Override protected Map referenceData(HttpServletRequest request) throws Exception { Map referenceData = new HashMap(); referenceData.put("language", LANGUAGES); return referenceData; } @Override protected ModelAndView onSubmit(Object command, BindException bindException) throws Exception { RSS rss = (RSS) command; rssServiceImplementation.add(rss); return new ModelAndView(getSuccessView()); } } and my BaseController public class BaseController extends SimpleFormController implements Controller { public UserServiceImplementation userServiceImplementation; public UserServiceImplementation getUserServiceImplementation() { return userServiceImplementation; } public void setUserServiceImplementation(UserServiceImplementation userServiceImplementation) { this.userServiceImplementation = userServiceImplementation; } public RssServiceImplementation rssServiceImplementation; public RssServiceImplementation getRssServiceImplementation() { return rssServiceImplementation; } public void setRssServiceImplementation(RssServiceImplementation rssServiceImplementation) { this.rssServiceImplementation = rssServiceImplementation; } } But it doesn t work Any suggestion?

    Read the article

  • Using Unity – Part 2

    - by nmarun
    In the first part of this series, we created a simple project and learned how to implement IoC pattern using Unity. In this one, I’ll show how you can instantiate other types that implement our IProduct interface. One place where this one would want to use this feature is to create mock types for testing purposes. Alright, let’s dig in. I added another class – Product2.cs  to the ProductModel project. 1: public class Product2 : IProduct 2: { 3: public string Name { get; set;} 4: public Category Category { get; set; } 5: public DateTime MfgDate { get;set; } 6:  7: public Product2() 8: { 9: Name = "Canon Digital Rebel XTi"; 10: Category = new Category {Name = "Electronics", SubCategoryName = "Digital Cameras"}; 11: MfgDate = DateTime.Now; 12: } 13:  14: public string WriteProductDetails() 15: { 16: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}", 17: Name, Category, MfgDate.ToShortDateString()); 18: } 19: } Highlights of this class are that it implements IProduct interface and it has some different properties than the Product class. The Category class looks like below: 1: public class Category 2: { 3: public string Name { get; set; } 4: public string SubCategoryName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0} - {1}", Name, SubCategoryName); 9: } 10: } We’ll go to our web.config file to add the configuration information about this new class – Product2 that we created. Let’s first add a typeAlias element. 1: <typeAlias alias="Product2" type="ProductModel.Product2, ProductModel"/> That’s all that is needed for us to get an instance of Product2 in our application. I have a new button added to the .aspx page and the click event of this button is where all the magic happens: 1: private IUnityContainer unityContainer; 2: protected void Page_Load(object sender, EventArgs e) 3: { 4: unityContainer = Application["UnityContainer"] as IUnityContainer; 5: 6: if (unityContainer == null) 7: { 8: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 9: } 10: else 11: { 12: if (!IsPostBack) 13: { 14: IProduct productInstance = unityContainer.Resolve<IProduct>(); 15: productDetailsLabel.Text = productInstance.WriteProductDetails(); 16: } 17: } 18: } 19:  20: protected void Product2Button_Click(object sender, EventArgs e) 21: { 22: unityContainer.RegisterType<IProduct, Product2>(); 23: IProduct product2Instance = unityContainer.Resolve<IProduct>(); 24: productDetailsLabel.Text = product2Instance.WriteProductDetails(); 25: } The unityContainer instance is set in the Page_Load event. Line 22 in the click event of the Product2Button registers a type mapping in the container. In English, this means that when unityContainer tries to resolve for IProduct, it gets an instance of Product2. Once this code runs, following output is rendered: There’s another way of doing this. You can resolve an instance of the requested type with a name from the container. 
We’ll have to update the container element of our web.config file to include the following: 1: <container name="unityContainer"> 2: <types> 3: <type type="IProduct" mapTo="Product"/> 4: <!-- Named mapping for IProduct to Product --> 5: <type type="IProduct" mapTo="Product" name="LegacyProduct" /> 6: <!-- Named mapping for IProduct to Product2 --> 7: <type type="IProduct" mapTo="Product2" name="NewProduct" /> 8: </types> 9: </container> I’ve added a Dropdownlist and a button to the design page: 1: <asp:DropDownList ID="ModelTypesList" runat="server"> 2: <asp:ListItem Text="Legacy Product" Value="LegacyProduct" /> 3: <asp:ListItem Text="New Product" Value="NewProduct" /> 4: </asp:DropDownList> 5: <br /> 6: <asp:Button ID="SelectedModelButton" Text="Get Selected Instance" runat="server" 7: onclick="SelectedModelButton_Click" /> 1: protected void SelectedModelButton_Click(object sender, EventArgs e) 2: { 3: // get the selected value: LegacyProduct or NewProduct 4: string modelType = ModelTypesList.SelectedValue; 5: // pass the modelType to the Resolve method 6: IProduct customModel = unityContainer.Resolve<IProduct>(modelType); 7: productDetailsLabel.Text = customModel.WriteProductDetails(); 8: } Pretty straight forward right? The only thing to note here is that the values in the dropdownlist item need to match the name attribute of the type. Depending on what you select, you’ll get an instance of either the Product class or the Product2 class and the corresponding WriteProductDetails() method is called. Now you see, how either of these methods can be used to create mock objects your the test project. See the code here. I’ll continue to share more of Unity in the next blog.

    Read the article
