Search Results

Search found 19115 results on 765 pages for 'region specific'.

  • Installing Visual Studio 2003 on Windows 7 64-bit

    - by Cole Shelton
    My team is currently supporting a 1.1 app and we are installing VS.NET 2003 on Windows 7. We haven't had any issues on the 32-bit machines, but FrontPage Server Extensions are failing to install on my 64-bit machine. Others on the Interwebs say that they have done this successfully, so I wanted to know if anyone here has, and whether they know of a solution. The specific issue is that FPSE (to clarify, I'm installing "FrontPage 2002 Server Extensions for IIS 7.0") fails to install correctly. In Event Viewer I get the error: Microsoft FrontPage Server Extensions: Error #3004f Message: Unable to read configuration information for Microsoft Internet Information Server: ImpersonateLoggedOnUser Error. I've looked for errors with ImpersonateLoggedOnUser on 64-bit and did find a case where it fails on 64-bit when UAC is turned off (and I did have UAC off). I turned UAC back on, ran a command prompt as administrator, and ran msiexec on the FPSE package. Still no dice. I have followed this tutorial (and the others it points to) for installing: http://frankbuchan.blogspot.com/2009/08/visual-studio-2003-under-windows-7.html

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search in your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment.

    The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profile data, you can still achieve the same result by recording the search facets used within the search sequence. Consider a faceted search on eBay using the search term "server": by creating a search profile from a click-through of Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

    This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text Indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server, to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, using the SolrNet C# library to interface to the Java platform. I am also not going to cover Solr index configuration here - in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to make the file conform to the Solr schema, and using a simple HTTP POST to send it to the search engine for indexing.
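    That indexing hand-off can be as small as a single request. The snippet below is only a rough sketch of the idea - the Solr URL and the pre-transformed file name are assumptions, not details from the original post:

      // Hypothetical sketch: push the XSLT-transformed catalog XML to Solr's update handler.
      // Assumes Solr is hosted on Tomcat at the default port and the transform has already run.
      using (var client = new System.Net.WebClient())
      {
          client.Headers[System.Net.HttpRequestHeader.ContentType] = "text/xml";
          byte[] reply = client.UploadFile(
              "http://localhost:8080/solr/update?commit=true", "POST", "catalog-solr.xml");
          System.Console.WriteLine(System.Text.Encoding.UTF8.GetString(reply));
      }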
    Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need a new library, as sequence components must be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method.

    As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object on which we return the result of our search. Inside this method is simply where we are going to inject the logic for our third-party search platform. I am not going to explain the inner workings of making a Solr call; I would highly recommend looking at the SolrNet wiki, as it has some great explanations of how the API works.

    What you will find, however, is that there are some further extensions required when integrating a custom search provider. Firstly, out of the box the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within it detail the properties you need to submit as parameters to the Solr search API. My example looks like this:

      [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
      public class CommerceCatalogSolrSearch : CommerceSearchCriteria
      {
          private Dictionary<string, string> _facetQueries;

          public CommerceCatalogSolrSearch()
          {
              _facetQueries = new Dictionary<String, String>();
          }

          public Dictionary<String, String> FacetQueries
          {
              get { return _facetQueries; }
              set { _facetQueries = value; }
          }

          public String SearchPhrase { get; set; }
          public int PageIndex { get; set; }
          public int PageSize { get; set; }
          public IEnumerable<String> Facets { get; set; }
          public string Sort { get; set; }

          public new int FirstItemIndex
          {
              get { return (PageIndex - 1) * PageSize; }
          }

          public int LastItemIndex
          {
              get { return FirstItemIndex + PageSize; }
          }
      }

    To allow you to construct a CommerceQueryOperation call within the API you will also need another class, derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set. My message builder looks like this:

      public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
      {
          private CommerceCatalogSolrSearch _solrSearch;

          public CommerceCatalogSolrSearchBuilder()
          {
              _solrSearch = new CommerceCatalogSolrSearch();
          }

          public String SearchPhrase
          {
              get { return _solrSearch.SearchPhrase; }
              set { _solrSearch.SearchPhrase = value; }
          }

          public int PageIndex
          {
              get { return _solrSearch.PageIndex; }
              set { _solrSearch.PageIndex = value; }
          }

          public int PageSize
          {
              get { return _solrSearch.PageSize; }
              set { _solrSearch.PageSize = value; }
          }

          public Dictionary<String, String> FacetQueries
          {
              get { return _solrSearch.FacetQueries; }
              set { _solrSearch.FacetQueries = value; }
          }

          public String[] Facets
          {
              get { return _solrSearch.Facets.ToArray(); }
              set { _solrSearch.Facets = value; }
          }

          public override CommerceSearchCriteria ToSearchCriteria()
          {
              return _solrSearch;
          }
      }

    Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g.

      public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
      {
          var searchCriteria = operation as CommerceQueryOperation;
          if (searchCriteria == null)
              throw new Exception("No search criteria present");

          var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch;
          if (local == null)
              throw new Exception("Unexpected search criteria in operation");

          return local;
      }

    Now that you have all of your search parameters present, you can go off and call the external search platform API.
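    (The call itself was not spelled out in the original; the following is a hedged sketch of what it might look like with SolrNet. The SearchProduct document type, the facet field usage and the service-locator registration are all assumptions.)

      using System.Linq;
      using Microsoft.Practices.ServiceLocation;
      using SolrNet;
      using SolrNet.Commands.Parameters;

      // Hypothetical SolrNet call built from the criteria object above.
      var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();
      var criteria = TryGetSearchCriteria(operation);

      SolrQueryResults<SearchProduct> matchingProducts = solr.Query(
          new SolrQuery(criteria.SearchPhrase),
          new QueryOptions
          {
              Start = criteria.FirstItemIndex,   // paging offset
              Rows = criteria.PageSize,          // page size
              Facet = new FacetParameters
              {
                  // one facet query per requested facet field
                  Queries = criteria.Facets
                      .Select(f => (ISolrFacetQuery)new SolrFacetFieldQuery(f))
                      .ToList()
              }
          });
      // matchingProducts holds the documents; matchingProducts.FacetFields holds the facet counts.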
    You will of course get proprietary objects returned, so the next step in the process is to convert the results into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method of the interface you get a single Translate method, which takes a source object, a destination CommerceEntity, and a collection of properties as its arguments. For simplicity's sake I have hard-coded the mappings in this example; best practice, however, would dictate that you map the objects using your MetadataDefinitions.xml file.
    Once complete, your translator will look something like the following:

      public class SolrEntityTranslator : IToCommerceEntityTranslator
      {
          #region IToCommerceEntityTranslator Members

          public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
          {
              if (source.GetType().Equals(typeof (SearchProduct)))
              {
                  var searchResult = (SearchProduct) source;

                  destinationCommerceEntity.Id = searchResult.ProductId;
                  destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
                  destinationCommerceEntity.ModelName = "Product";
              }
          }

          #endregion
      }

    Once you have a translator in place you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object, in a fashion similar to this:

      foreach (SearchProduct result in matchingProducts)
      {
          var destinationEntity = new CommerceEntity(_returnModelName);

          Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
          response.CommerceEntities.Add(destinationEntity);
      }

    From Solr I actually get two objects returned - a product, and a collection of facets - so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately.

    When all of this is pieced together you have successfully completed the extensibility-point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next, put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component; you will also need to add your translators to the Translators node of your Channel.Config, and lastly add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file.

    Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first). Now to call your code you simply request it as per any other CommerceQuery operation, taking into account that you may receive multiple types of CommerceEntity back:

      public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
      {
          var products = new List<Product>();
          var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

          query.SearchCriteria.PageIndex = recordIndex;
          query.SearchCriteria.PageSize = recordsPerPage;
          query.SearchCriteria.SearchPhrase = searchPhrase;
          query.SearchCriteria.FacetQueries = facetQueries;

          totalItemCount = 0;
          CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
          var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

          // No results. Return the empty pair.
          if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
              return new KeyValuePair<FacetCollection, List<Product>>();

          totalItemCount = (int)queryResponse.TotalItemCount;

          // Prepare a multi-operation to retrieve the product variants.
          var multiOperation = new CommerceMultiOperation();

          // Add products to the results.
          foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
          {
              var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
              productQuery.SearchCriteria.Model.Id = product.Id;
              productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

              var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
              productQuery.RelatedOperations.Add(variantQuery);

              multiOperation.Add(productQuery);
          }

          CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
          foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
          {
              if (queryOpResponse.CommerceEntities.Count() > 0)
                  products.Add(queryOpResponse.CommerceEntities[0]);
          }

          // Get the facet collection.
          FacetCollection facetCollection = queryResponse.CommerceEntities
              .Where(x => x.ModelName == "FacetCollection")
              .FirstOrDefault();

          return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
      }

    ..And that is it - simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. The same approach stands for any extensibility within Commerce Server that requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail me and I'll help out where I can!

  • Error Running MVC2 application in IIS on .NET 4.0

    - by Matt Wrock
    I recently installed the RTM version of .NET 4.0. I now receive an error when running MVC 2 websites in a .NET 4 app pool. The error is "User is not available in this context." All works fine in .NET 2.0 app pools, or if I run the app within the VS2010 web server; the error only occurs in IIS on .NET 4.0. To verify that it was not something specific to my app, I created a new MVC test app from the VS template, and even that app encounters this error. My next step is to reinstall .NET 4.0. Has anyone else seen this error?

  • jQuery Datepicker with XML file

    - by matt
    An extension of my last question, http://stackoverflow.com/questions/2562986/getdate-with-jquery-datepicker : I am trying to use the jQuery datepicker to load specific info from an XML file depending on the date selected by the user. It is similar code, but I am trying to load and parse an XML file to read the contents of the file for the particular date. In a perfect world, the user would tap a date and, below the datepicker, the HTML output would give the user specific times for the selected date (instead of an image, as in my last project). My problem is that nothing loads, so my question is: what am I doing wrong? My code is as follows:

      <!DOCTYPE html>
      <link type="text/css" href="css/ui-darkness/jquery-ui-1.8.custom.css" rel="stylesheet" />
      <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script>
      <script type="text/javascript" src="js/jquery-ui-1.8.custom.min.js"></script>
      <script type="text/javascript">
      $(function() {
          // Datepicker
          $('#datepicker').datepicker({
              dateFormat: 'yy-mm-dd',
              inline: true,
              minDate: new Date(2010, 1 - 1, 1),
              maxDate: new Date(2010, 12 - 1, 31),
              altField: '#datepicker_value',
              onSelect: function() {
                  var day1 = $("#datepicker").datepicker('getDate').getDate();
                  var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
                  var year1 = $("#datepicker").datepicker('getDate').getFullYear();
                  var fullDate = year1 + "" + month1 + "" + day1;
                  //var str_output = "<img src=\"http://69.89.20.27/images/a" + fullDate + ".png\" width=\"100%\"/>";
                  //$('#page_output').html(str_output);
                  var doc = loadXMLDoc('date.xml');              // loading the XML file
                  var el = doc.getElementsByTagName('_' + date); // retrieving the elements corresponding to a date, eg: _20100103
                  var page_output = document.getElementById('page_output');
                  if (el.length >= 1) {
                      // matched XML data found for the specified date
                      var dt = el[0].getElementsByTagName('date');
                      var great_times = el[0].getElementsByTagName('great_times');
                      var good_times = el[0].getElementsByTagName('good_times');
                      var str_output = "<h1><center>" + dt[0].childNodes[0].nodeValue + "</center></h1><br/><br>";
                      str_output += "<b>Excellent Times:</b><br> " + great_times[0].childNodes[0].nodeValue + "<br/><br>";
                      str_output += "<b>Good Times:</b><br> " + good_times[0].childNodes[0].nodeValue + "<br/><br>";
                      $('#page_output').html(str_output); // writing the results to the div element (page_output)
                  } else {
                      alert("Sorry", "Action not allowed on this page");
                      page_output.innerHTML = ''; // No XML data found for the selected date
                      reloadmainwDate();
                      return false;
                  }
                  return true;
              }
          });

          // hover states on the static widgets
          $('#dialog_link, ul#icons li').hover(
              function() { $(this).addClass('ui-state-hover'); },
              function() { $(this).removeClass('ui-state-hover'); }
          );
      });
      </script>
      <script>
      function loadXMLDoc(dname) {
          var xmlDoc;
          if (typeof ActiveXObject != 'undefined') { // IE 5 and IE 6
              xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
              xmlDoc.async = false;
              xmlDoc.load(dname);
              return xmlDoc;
          } else if (window.XMLHttpRequest) { // Firefox
              xmlDoc = new window.XMLHttpRequest();
              xmlDoc.open("GET", dname, false);
              xmlDoc.send("");
              return xmlDoc.responseXML;
          }
          alert("Error loading document");
          return null;
      }
      </script>

      <!-- Datepicker -->
      <div id="datepicker"></div>

      <!-- Highlight / Error -->
      <div id="page_output"></div>
      </body>

  • Set PageIndex of DataPager

    - by Barbosa
    I have a ListView that I am paging with a DataPager. I would like to set the initial page of the pager on Page_Load. I have tried the DataPager.SetPageProperties method, but it's not doing what I need. Here's how I'm calling this method:

      dataPager.SetPageProperties(3, dataPager.TotalRowCount, false);

    The line above trims the datasource to start at the third item, and paging still starts at page 1. This is not what I want. I want to keep the entire list of items and just jump to a specific page in the list. Is there another property and/or method of a DataPager and/or ListView that I should use? Any help will be greatly appreciated. Thanks!
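    For reference, SetPageProperties(startRowIndex, maximumRows, databind) takes a row offset and a page size rather than a page number and a total count, so a hedged sketch of jumping to page 3 while keeping normal paging might be:

      // Hypothetical sketch: pass a row offset and the page size, not a page
      // number and the total row count.
      int pageIndex = 2; // zero-based, i.e. page 3
      dataPager.SetPageProperties(pageIndex * dataPager.PageSize, dataPager.PageSize, true);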

  • How to get a folder from CAML Query?

    - by Vijay
    I have a list which has a two-level hierarchy of folders. Something like this:

      List
        Folder_1
          SubFolder_1
            Item 1_1_1
            Item 1_1_2
          SubFolder_2
            Item 1_2_1
            Item 1_2_2
            Item 1_2_3
        Folder_2
          SubFolder_1
            Item 2_1_1
            Item 2_1_2
            Item 2_1_3
          SubFolder_2
            Item 2_2_1
            Item 2_2_2

    I want to add a list item to a folder depending on some criteria. I don't want to loop through all folders, as the number of folders is large. So I thought of running a CAML query to get the folder. The CAML query below gives me all folders in the list:

      <Where>
        <Eq>
          <FieldRef Name='FSObjType' />
          <Value Type='int'>0</Value>
        </Eq>
      </Where>

    How can I add another condition to the above query so that I can get a specific folder when I know the exact folder name?
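    A hedged sketch of one way to combine the two conditions - wrapping them in an <And> and matching the folder's leaf name via the FileLeafRef field (the folder name shown is illustrative):

      <Where>
        <And>
          <Eq>
            <FieldRef Name='FSObjType' />
            <Value Type='int'>0</Value>
          </Eq>
          <Eq>
            <FieldRef Name='FileLeafRef' />
            <Value Type='Text'>SubFolder_1</Value>
          </Eq>
        </And>
      </Where>

    With nested folders you would typically also set the view scope to RecursiveAll on the SPQuery so that subfolders are returned at all.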

  • PHP Frameworks (CodeIgniter, Yii, CakePHP) vs. Django

    - by niting
    I have to develop a site which has to accommodate around 2000 users a day, and speed is a criterion for it. Moreover, the site is a user-oriented one where users will be able to log in, check their profile, and register for specific events they want to participate in. The site is to be hosted on a VPS server. Although I have pretty good experience with Python and PHP, I have no idea how to use any of these frameworks. We have plenty of time to experiment and learn one of them. Could you please specify which one would be preferred for such a scenario, considering the speed, features, and security of the site? Thanks, niting

  • GCC vs Microsoft : Undefined reference to `_chkstk'?

    - by SethCoder
    I am using Code::Blocks and the MinGW toolchain, which is essentially GCC. I was using Visual Studio, but I want to get away from it to do cross-platform development. There seem to be some Microsoft-specific references in some libraries that I am linking, specifically in the CXImage SDK (_chkstk). I presume the library was put together using VS. From my searches I have learned that GCC uses _alloca rather than _chkstk. I still want to use CXImage for some of the stuff I am doing. My question: is there a way around this problem, or am I stuck with ditching libs such as this if I want to use GCC?

  • Fortran arrays and subroutines (sub arrays)

    - by ccook
    I'm going through a Fortran code, and one bit has me a little puzzled. There is a subroutine, say

      SUBROUTINE SSUB(X,...)
      REAL*8 X(0:N1,1:N2,0:N3-1),...
      ...
      RETURN
      END

    which is called in another subroutine by:

      CALL SSUB(W(0,1,0,1),...)

    where W is a 'working array'. It appears that a specific value from W is passed to X; however, X is dimensioned as an array. What's going on?

  • LaTeX "\indent" creating paragraph indentation / tabbing package requirement?

    - by Pareshkumar C. Brahmbhatt
    The LaTeX code provided below shows the usage of the command "\indent" as it appears in the document, but it does not produce the desired indentation. Is there a specific package associated with the command "\indent" or "\="? I am asking for a step-by-step method of producing an indentation within a document for only one paragraph, regardless of its location within the document.

      \documentclass[12pt]{article}
      \usepackage{graphicx}
      \topmargin -3.5cm
      \oddsidemargin -0.04cm
      \evensidemargin -0.04cm
      \textwidth 16.59cm
      \textheight 21.94cm
      \parskip 7.2pt
      \parindent 8pt
      \title{Physics}
      \author{Pareshkumar Brahmbhatt}
      \date{March 17, 2010}
      \begin{document}
      \maketitle
      \indent Now we are engaged in a great civil war.
      \end{document}

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS 2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache: the more data you have, the longer it will take to charge, and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups (a worked illustration follows the list):

    • Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    • Only pick the columns that you need; ignore everything else.
    • Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column - a big waste if the actual values are significantly less than 20 characters in length.
    • Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    • Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
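    Putting a few of those tips together, the statement you charge the cache with might look like the following - a hypothetical example only, with made-up table and column names:

      -- Charge the lookup cache with a narrow, filtered projection rather than the whole table.
      SELECT  CustomerKey,              -- the key the Lookup joins on
              CustomerAlternateKey      -- the one attribute actually needed downstream
      FROM    dbo.DimCustomer
      WHERE   Region = 'Europe';        -- only the partition this dataflow run cares about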
    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    • There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    • Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together: the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    • Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS 2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow, simply looping over it using a ForEach loop.
    • Taking the previous data-partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    • Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    • Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

    Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS 2008 from SSIS 2005 then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

  • Force page reload with HTML anchors (#) - HTML & JS

    - by yuval
    Say I'm on a page called /example#myanchor1, where myanchor1 is an anchor in the page. I'd like to link to /example#myanchor2, but force the page to reload while doing so. The reason is that I run JS to detect the anchor in the URL at page load. The problem [normally expected behavior] here, though, is that the browser just sends me to that specific anchor on the page without reloading it. How would I go about forcing a reload (JS is OK)? Thanks!

  • Announcing Entity Framework Code-First (CTP5 release)

    - by ScottGu
    This week the data team released the CTP5 build of the new Entity Framework Code-First library. EF Code-First enables a pretty sweet code-centric development workflow for working with data. It enables you to:

    • Develop without ever having to open a designer or define an XML mapping file
    • Define model objects by simply writing "plain old classes" with no base classes required
    • Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
    • Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping

    I'm a big fan of the EF Code-First approach, and wrote several blog posts about it this summer:

    • Code-First Development with Entity Framework 4 (July 16th)
    • EF Code-First: Custom Database Schema Mapping (July 23rd)
    • Using EF Code-First with an Existing Database (August 3rd)

    Today's new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before the final release. We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011). It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects).

    Installing EF Code First

    You can install and use EF Code First CTP5 in one of two ways:

    Approach 1) By downloading and running a setup program. Once installed you can reference the EntityFramework.dll assembly it provides within your projects.

    Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project. To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type "Install-Package EFCodeFirst". This causes NuGet to download the EF Code First package, add it to your current project, and automatically add a reference to the EntityFramework.dll assembly.

    NuGet enables you to have EF Code First set up and ready to use within seconds. When the final release of EF Code First ships, you'll also be able to just type "Update-Package EFCodeFirst" to update your existing projects to use the final release.

    EF Code First Assembly and Namespace

    The CTP5 release of EF Code First has an updated assembly name and a new .NET namespace:

      Assembly Name: EntityFramework.dll
      Namespace: System.Data.Entity

    These names match what we plan to use for the final release of the library.

    Nice New CTP5 Improvements

    The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include:

    • Better support for Existing Databases
    • Built-in Model-Level Validation and DataAnnotation Support
    • Fluent API Improvements
    • Pluggable Conventions Support
    • New Change Tracking API
    • Improved Concurrency Conflict Resolution
    • Raw SQL Query/Command Support

    The rest of this blog post contains some more details about a few of the above changes.

    Better Support for Existing Databases

    EF Code First makes it really easy to create model layers that work against existing databases. CTP5 includes some refinements that further streamline the developer workflow for this scenario. Below are the steps to use EF Code First to create a model layer for the Northwind sample database.

    Step 1: Create Model Classes and a DbContext class

    Below is all of the code necessary to implement a simple model layer using EF Code First that goes against the Northwind database.
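    (The original post showed this code as a screenshot; what follows is a hedged reconstruction, with the property lists trimmed and names assumed from the standard Northwind schema.)

      using System.Collections.Generic;
      using System.Data.Entity;

      // Hedged reconstruction - assumes standard Northwind table/column names.
      public class Product
      {
          public int ProductID { get; set; }
          public string ProductName { get; set; }
          public decimal? UnitPrice { get; set; }
          public short? UnitsInStock { get; set; }
          public int? CategoryID { get; set; }
          public virtual Category Category { get; set; }
      }

      public class Category
      {
          public int CategoryID { get; set; }
          public string CategoryName { get; set; }
          public virtual ICollection<Product> Products { get; set; }
      }

      public class Northwind : DbContext
      {
          public DbSet<Product> Products { get; set; }
          public DbSet<Category> Categories { get; set; }
      }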
    EF Code First enables you to use "POCO" - Plain Old CLR Objects - to represent entities within a database. This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them. This enables the model classes to be kept clean, easily testable, and "persistence ignorant". The Product and Category classes above are examples of POCO model classes.

    EF Code First enables you to easily connect your POCO model classes to a database by creating a "DbContext" class that exposes public properties that map to the tables within a database. The Northwind class above illustrates how this can be done. It maps our Product and Category classes to the "Products" and "Categories" tables within the database. The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables, and each instance of a Product/Category object maps to a row within the tables.

    The above code is all of the code required to create our model and data access layer! Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First not to create the database); this step is no longer required with the CTP5 release.

    Step 2: Configure the Database Connection String

    We've written all of the code we need to write to define our model layer. Our last step before we use it will be to set up a connection string that connects it with our database. To do this we'll add a "Northwind" connection string to our web.config file (or app.config for client apps) like so:

      <connectionStrings>
        <add name="Northwind"
             connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true"
             providerName="System.Data.SqlClient" />
      </connectionStrings>

    EF "Code First" uses a convention where DbContext classes by default look for a connection string that has the same name as the context class. Because our DbContext class is called "Northwind", it by default looks for a "Northwind" connection string to use. Above, our Northwind connection string is configured to use a local SQL Express database (stored within the \App_Data directory of our project). You can alternatively point it at a remote SQL Server.

    Step 3: Using our Northwind Model Layer

    We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First. The original post demonstrated two operations: using LINQ to query for products within a specific product category (returning a sequence of strongly-typed Product objects that match the search criteria), and retrieving a specific Product object, updating two of its properties, and then saving the changes back to the database.
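    (Those two examples were screenshots in the original; a hedged sketch of both, using the model sketched above - the category and product names are made up, and using System.Linq is assumed:)

      using (var db = new Northwind())
      {
          // LINQ query: all products within a specific category
          var beverages = (from p in db.Products
                           where p.Category.CategoryName == "Beverages"
                           select p).ToList();

          // Retrieve one product, update two of its properties, and save
          var product = db.Products.Single(p => p.ProductName == "Chai");
          product.UnitPrice = 19.95m;
          product.UnitsInStock = 39;
          db.SaveChanges();
      }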
    EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing.

    Built-in Model Validation

    EF Code First allows you to use any validation approach you want when implementing business rules with your model layer. This enables a great deal of flexibility and power. Starting with this week's CTP5 release, EF Code First also includes built-in support for both the DataAnnotation and IValidatableObject validation support built into .NET 4. This enables you to easily implement validation rules on your models and have these rules automatically enforced by EF Code First whenever you save your model layer. It provides a very convenient "out of the box" way to enable validation within your applications.

    Applying DataAnnotations to our Northwind Model

    The code example below demonstrates how we could add some declarative validation rules to two of the properties of our "Product" model.
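    (Again a hedged sketch - the original showed a screenshot, and the exact properties and messages here are assumptions:)

      using System.ComponentModel.DataAnnotations;

      public class Product
      {
          public int ProductID { get; set; }

          [Required(ErrorMessage = "The product name is required")]
          public string ProductName { get; set; }

          [Range(typeof(decimal), "0", "10000",
                 ErrorMessage = "The unit price must be between 0 and 10000")]
          public decimal? UnitPrice { get; set; }

          public int? CategoryID { get; set; }
          public virtual Category Category { get; set; }
      }

    And catching a violation at save time might look like this (DbEntityValidationException lives in System.Data.Entity.Validation):

      try
      {
          db.SaveChanges();
      }
      catch (DbEntityValidationException ex)
      {
          // Walk every validation failure in the aborted save
          foreach (var result in ex.EntityValidationErrors)
              foreach (var error in result.ValidationErrors)
                  System.Console.WriteLine("{0}: {1}", error.PropertyName, error.ErrorMessage);
      }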
    We are using the [Required] and [Range] attributes above. These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built into .NET 4, and can be used independently of EF. The error messages specified on them can either be explicitly defined (like above) or retrieved from resource files (which makes localizing applications easy).

    Validation Enforcement on SaveChanges()

    EF Code First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved. You do not need to write any code to enforce this; the support is enabled by default. This means that code which violates our rules above will automatically throw an exception when we call the SaveChanges() method on our Northwind DbContext. The DbEntityValidationException that is raised contains an "EntityValidationErrors" property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save, as sketched above. This enables you to easily guide the user on how to fix them. Note that EF Code First will abort the entire transaction of changes if a validation rule is violated, ensuring that our database is always kept in a valid, consistent state. EF Code First's validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc.), and for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class.

    UI Validation Support

    A lot of our UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honors the DataAnnotation rules applied to model objects. Using the default "Add-View" scaffold template within an ASP.NET MVC 3 application, for instance, will cause appropriate validation error messages to be displayed if appropriate values are not provided. ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules. The error messages displayed are automatically picked up from the declarative validation attributes, eliminating the need for you to write any custom code to display them.

    Keeping things DRY

    The "DRY principle" stands for "Don't Repeat Yourself", and is a best practice that recommends you avoid duplicating logic/configuration/code in multiple places across your application, instead specifying it only once and having it apply everywhere. EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (specifying them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all application scenarios, including within controllers, views, client-side scripts, and any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve.

    Other EF Code First Improvements New to CTP5

    EF Code First CTP5 includes a bunch of other improvements as well. Below are a few short descriptions of some of them:

    • Fluent API Improvements: EF Code First allows you to override an "OnModelCreating()" method on the DbContext class to further refine/override the schema mapping rules used to map model classes to the underlying database schema. CTP5 includes some refinements to the ModelBuilder class that is passed to this method, which can make defining mapping rules cleaner and more concise. The ADO.NET team has blogged some samples of how to do this.
    • Pluggable Conventions Support: EF Code First CTP5 provides new support that allows you to override the "default conventions" that EF Code First honors, and optionally replace them with your own set of conventions.
    • New Change Tracking API: EF Code First CTP5 exposes a new set of change-tracking information that enables you to access Original, Current and Stored values, and State (e.g. Added, Unchanged, Modified, Deleted). This support is useful in a variety of scenarios.
    • Improved Concurrency Conflict Resolution: EF Code First CTP5 provides better exception messages that allow access to the affected object instance, and the ability to resolve conflicts using current, original and database values.
    • Raw SQL Query/Command Support: EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property. The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext. This is useful for a variety of advanced scenarios.
    • Full Data Annotations Support: EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation and to automatically create the appropriate database schema when EF Code First is used in a database-creation scenario.

    Summary

    EF Code First provides an elegant and powerful way to work with data. I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly. The code-only approach of the library means that model layers end up being flexible and easy to customize. This week's CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year. I recommend using NuGet to install it and give it a try today. I think you'll be pleasantly surprised by how awesome it is.

    Hope this helps,

    Scott

  • Generating a KML heatmap from a given data set of [lat, lon, density]

    - by Will Croft
    I am looking to build a static KML (Google Earth markup) file which displays a heatmap-style rendering of a few given data sets in the form of [lat, lon, density] tuples. A very straightforward data set I have is population density. My requirements are:

    • must be able to feed in data for a given lat, lon
    • must be able to specify the density of the data at that lat, lon
    • must export to KML

    The requirements are language-agnostic for this project, as I will be generating these files offline in order to build the KML used elsewhere. I have looked at a few projects, most notably heatmap.py, which is a port of gheat in Python with KML export. I have hit a brick wall in the sense that the projects I have found to date all rely on building the heatmap from the density of [lat, lon] points fed into the algorithm. If I am missing an obvious way to adapt my data set - feeding in just the [lat, lon] tuples but adjusting how I feed them using the density values I have - I would love to know!

  • Force Browser Mode=IE8 and document mode=IE8 Standards

    - by Dennis Cheung
    I have an internal website hosted on IIS. I added the following meta tag, and also added an HTTP header, so that the page should render in IE8 browser mode and IE8 document mode:

      <meta http-equiv="X-UA-Compatible" content="IE=8" >

    We tested it in Visual Studio and it works very well. However, after we published the code to another IIS server, one developer reported that the page renders in "IE8 Compatibility" browser mode, which causes some JavaScript to fail. There are more than 4 people working on the same Windows Server 2003 machine (RDP sessions). We use the same version of IE (the same IE, actually). Everyone gets "IE8" browser mode, but one person gets "IE8 Compatibility" browser mode. What else can make a specific user's IE load the page in a mode other than IE8 mode? PS. We checked the compatibility list in IE; it is empty.

  • Mouse Event BHO

    - by shaimagz
    I want my BHO to listen to the onmousedown event of some element in a certain webpage. I have all the code that finds the specific element, and MSDN says that I need to use the get_onmousedown method. I came up with this code:

      CComQIPtr<IHTMLElement> someElement;
      VARIANT mouse_eve;
      someElement->get_onmousedown(&mouse_eve);

    The question is, how do I tell it to run some function when this event occurs?

  • WCF/C# Unable to catch EndpointNotFoundException

    - by Paul Jones
    Hi all, I have created a WCF service and client, and all works until it comes to catching errors. Specifically, I am trying to catch the EndpointNotFoundException for when the server happens not to be there for whatever reason. I have tried a simple try/catch block catching that specific exception and the CommunicationException it derives from, and I've tried catching just Exception. None of these succeed in catching the exception; however, I do get "A first chance exception of type 'System.ServiceModel.EndpointNotFoundException' occurred in System.ServiceModel.dll" in the output window when the client tries to open the service. Any ideas as to what I'm doing wrong?

  • Programmatically set a TaxonomyField on a list item

    - by Mel Gerats
    The situation: I have a bunch of terms in the Term Store and a list that uses them. A lot of the terms have not been used yet and are not yet available in the TaxonomyHiddenList. If they are not there yet, they don't have an ID, and I cannot add them to a list item. There is a method GetWSSIdOfTerm on Microsoft.SharePoint.Taxonomy.TaxonomyField that's supposed to return the ID of a term for a specific site. It gives back IDs if the term has already been used and is present in the TaxonomyHiddenList, but if it's not, then 0 is returned. Is there any way to programmatically add terms to the TaxonomyHiddenList, or force that to happen?

  • Looking for search syntax documentation for SharePoint UserProfileManager.Search Method

    - by gregenslow
    Greetings - I have a SharePoint 2010 server running with the User Profile service set up to synchronize with Active Directory. I'd like to use the UserProfileManager.Search() method to return user profiles based on specific criteria. The MSDN documentation for this method is here. It states that the method will return user profiles that match the specified search pattern, which is exactly what I want. However, there is no documentation on what a valid search pattern is. I've made a few guesses like "Department = 'HR'" but haven't had any luck, and I can't find any other documentation or sample code. Can anyone provide samples of valid search patterns? Another way to return user profiles is to do a query using the FullTextSqlQuery object; we don't yet have Search set up on this server, so that isn't currently an option. Thanks, Greg

  • WPD MTP data stream hanging on Release

    - by Jonathan Potter
    I've come across a weird problem when reading data from an MTP-compatible mobile device using the WPD (Windows Portable Devices) API, under Windows 8 (I have not tried any other Windows versions yet). The symptom is that, when calling Release on an IStream interface obtained via the IPortableDeviceResources::GetStream function, occasionally the Release call will hang and not return until the device is disconnected from the PC. After some experimentation I've discovered that this never happens as long as the entire contents of the stream have been read. But if the stream has only been partially read (say, the first 256 KB of the file), it can happen seemingly at random (although quite frequently). This has been reproduced with an iPhone and a Windows Phone 8 mobile, so it does not seem to be device-specific. Has anyone come across this sort of issue before? And more importantly, does anyone know of a way to solve it other than by always reading the entire contents of the stream? Thanks!

  • How to suppress Flash migration warnings (1090)

    - by aaaidan
    I get "migration issue" warnings when I use mouse/keyboard input handler names such as onMouseDown, onKeyUp, etc. These names are perfectly legal, but sensibly, we are now warned that their use is no longer automatic in ActionScript 3. I want to suppress these warnings, but without suppressing all other warnings, which I find useful. E.g., when I use code like this:

      protected override function onMouseDown(e:MouseEvent):void {

    I get an annoying warning like this:

      Warning: 1090: Migration issue: The onMouseDown event handler is not triggered automatically by Flash Player at run time in ActionScript 3.0. You must first register this handler for the event using addEventListener ( 'mouseDown', callback_handler).

    There are Flex compiler (mxmlc) flags which can suppress ActionScript warnings, or all warnings, but I don't want that; that's too general. Ideally I could suppress a specific error/warning number (warning #1090). Halp?

  • Metro, Authentication, and the ASP.NET Web API

    - by Stephen.Walther
    Imagine that you want to create a Metro style app written with JavaScript which communicates with a remote web service. For example, you are creating a movie app which retrieves a list of movies from a movies service. In this situation, how do you authenticate your Metro app and the Metro user so that not just anyone can call the movies service? How can you identify the user making the request so you can return user-specific data from the service?

    The Windows Live SDK supports a feature named Single Sign-On. When a user logs into a Windows 8 machine using their Live ID, you can authenticate the user's identity automatically. Even better, when the Metro app performs a call to a remote web service, you can pass an authentication token to the remote service and prevent unauthorized access to the service. The documentation for Single Sign-On is located here: http://msdn.microsoft.com/en-us/library/live/hh826544.aspx

    In this blog entry, I describe the steps that you need to follow to use Single Sign-On with a (very) simple movie app. We build a Metro app which communicates with a web service created using the ASP.NET Web API.

    Creating the Visual Studio Solution

    Let's start by creating a Visual Studio solution which contains two projects: a Windows Metro style Blank App project and an ASP.NET MVC 4 Web Application project. Name the Metro app MovieApp and the ASP.NET MVC application MovieApp.Services. When you create the ASP.NET MVC application, select the Web API template.

    Configuring the Live SDK

    You need to get your hands on the Live SDK and register your Metro app. You can download the latest version of the SDK (version 5.2) from the following address: http://www.microsoft.com/en-us/download/details.aspx?id=29938

    After you download the Live SDK, you need to visit the following website to register your Metro app: https://manage.dev.live.com/build

    Don't let the title of the website - Windows Push Notifications & Live Connect - confuse you; this is the right place. Follow the instructions at the website to register your Metro app, and don't forget to follow the instructions in Step 3 for updating the information in your Metro app's manifest. After you register, your client secret is displayed. Record this client secret, because you will need it later (we use it with the web service).

    You need to configure one more thing. You must enter your Redirect Domain by visiting the following website: https://manage.dev.live.com/Applications/Index

    Click on your application name, click Edit Settings, click the API Settings tab, and enter a value for the Redirect Domain field. You can enter any domain that you please, just as long as the domain has not already been taken. For the Redirect Domain, I entered http://superexpertmovieapp.com.

    Create the Metro MovieApp

    Next, we need to create the MovieApp. The MovieApp will:

    1. Use Single Sign-On to log the current user into Live
    2. Call the MoviesService web service
    3. Display the results in a ListView control

    Because we use the Live SDK in the MovieApp, we need to add a reference to it.
    Right-click your References folder in the Solution Explorer window and add the reference. Here's the HTML page for the Metro app:

      <!DOCTYPE html>
      <html>
      <head>
          <meta charset="utf-8" />
          <title>MovieApp</title>

          <!-- WinJS references -->
          <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" />
          <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script>
          <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script>

          <!-- Live SDK -->
          <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script>

          <!-- WebServices references -->
          <link href="/css/default.css" rel="stylesheet" />
          <script src="/js/default.js"></script>
      </head>
      <body>
          <div id="tmplMovie" data-win-control="WinJS.Binding.Template">
              <div class="movieItem">
                  <span data-win-bind="innerText:title"></span>
                  <br /><span data-win-bind="innerText:director"></span>
              </div>
          </div>
          <div id="lvMovies"
               data-win-control="WinJS.UI.ListView"
               data-win-options="{ itemTemplate: select('#tmplMovie') }">
          </div>
      </body>
      </html>

    The HTML page above contains a Template and a ListView control. These controls are used to display the movies when they are returned from the movies service. Notice that the page includes a reference to the Live script that we registered earlier:

      <!-- Live SDK -->
      <script type="text/javascript" src="/LiveSDKHTML/js/wl.js"></script>

    The JavaScript code looks like this:

      (function () {
          "use strict";

          var REDIRECT_DOMAIN = "http://superexpertmovieapp.com";
          var WEBSERVICE_URL = "http://localhost:49743/api/movies";

          function init() {
              WinJS.UI.processAll().done(function () {
                  // Get element and control references
                  var lvMovies = document.getElementById("lvMovies").winControl;

                  // Login to Windows Live
                  var scopes = ["wl.signin"];
                  WL.init({
                      scope: scopes,
                      redirect_uri: REDIRECT_DOMAIN
                  });
                  WL.login().then(
                      function (response) {
                          // Get the authentication token
                          var authenticationToken = response.session.authentication_token;

                          // Call the web service
                          var options = {
                              url: WEBSERVICE_URL,
                              headers: { authenticationToken: authenticationToken }
                          };
                          WinJS.xhr(options).done(
                              function (xhr) {
                                  var movies = JSON.parse(xhr.response);
                                  var listMovies = new WinJS.Binding.List(movies);
                                  lvMovies.itemDataSource = listMovies.dataSource;
                              },
                              function (xhr) {
                                  console.log(xhr.statusText);
                              }
                          );
                      },
                      function (response) {
                          throw WinJS.ErrorFromName("Failed to login!");
                      }
                  );
              });
          }

          document.addEventListener("DOMContentLoaded", init);
      })();

    There are two constants which you need to set to get the code above to work: REDIRECT_DOMAIN and WEBSERVICE_URL. REDIRECT_DOMAIN is the domain that you entered when registering your app with Live. WEBSERVICE_URL is the path to your web service. You can get the correct value for WEBSERVICE_URL by opening the project properties for the MovieApp.Services project, clicking the Web tab, and reading off the URL; the port number is randomly generated. In my code, I used the URL "http://localhost:49743/api/movies".

    Assuming that the user is logged into Windows 8 with a Live account, when the user runs the MovieApp, the user is logged into Live automatically. The user is logged in with the following code:

      // Login to Windows Live
      var scopes = ["wl.signin"];
      WL.init({
          scope: scopes,
          redirect_uri: REDIRECT_DOMAIN
      });
      WL.login().then(function (response) {
          // Do something
      });

    The scopes setting determines what the user has permission to do. For example, access the user's SkyDrive, or access the user's calendar or contacts.
    The available scopes are listed here:

    http://msdn.microsoft.com/en-us/library/live/hh243646.aspx

    In our case, we only need the wl.signin scope, which enables Single Sign-On. After the user signs in, you can retrieve the user's Live authentication token. The authentication token is passed to the movies service to authenticate the user.

    Creating the Movies Service

    The Movies Service is implemented as an API controller in an ASP.NET MVC 4 Web API project. Here's what the MoviesController looks like:

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        using JWTSample;
        using MovieApp.Services.Models;

        namespace MovieApp.Services.Controllers
        {
            public class MoviesController : ApiController
            {
                const string CLIENT_SECRET = "NtxjF2wu7JeY1unvVN-lb0hoeWOMUFoR";

                // GET api/movies
                public HttpResponseMessage Get()
                {
                    // Authenticate: get the authentication token from the request header
                    var authenticationToken = Request.Headers
                        .GetValues("authenticationToken").FirstOrDefault();
                    if (authenticationToken == null)
                    {
                        return new HttpResponseMessage(HttpStatusCode.Unauthorized);
                    }

                    // Validate the token with the client secret
                    var d = new Dictionary<int, string>();
                    d.Add(0, CLIENT_SECRET);
                    try
                    {
                        var myJWT = new JsonWebToken(authenticationToken, d);
                    }
                    catch
                    {
                        return new HttpResponseMessage(HttpStatusCode.Unauthorized);
                    }

                    // Return results
                    return Request.CreateResponse(
                        HttpStatusCode.OK,
                        new List<Movie> {
                            new Movie { Title = "Star Wars", Director = "Lucas" },
                            new Movie { Title = "King Kong", Director = "Jackson" },
                            new Movie { Title = "Memento", Director = "Nolan" }
                        }
                    );
                }
            }
        }

    Because the Metro app performs an HTTP GET request, the MoviesController Get() action is invoked. This action returns a set of three movies when, and only when, the authentication token is validated. The Movie class looks like this:

        using Newtonsoft.Json;

        namespace MovieApp.Services.Models
        {
            public class Movie
            {
                [JsonProperty(PropertyName = "title")]
                public string Title { get; set; }

                [JsonProperty(PropertyName = "director")]
                public string Director { get; set; }
            }
        }

    Notice that the Movie class uses the JsonProperty attribute to change Title to title and Director to director, to make JavaScript developers happy.

    The Get() method validates the authentication token before returning the movies to the Metro app. To get authentication to work, you need to provide the client secret which you created at the Live management site. If you forgot to write down the secret, you can get it again here:

    https://manage.dev.live.com/Applications/Index

    The client secret is assigned to a constant at the top of the MoviesController class.

    The MoviesController class uses a helper class named JsonWebToken to validate the authentication token. This class was created by the Windows Live team. You can get the source code for the JsonWebToken class from the following GitHub repository:

    https://github.com/liveservices/LiveSDK/blob/master/Samples/Asp.net/AuthenticationTokenSample/JsonWebToken.cs

    You need to add one additional reference to your MVC project to use the JsonWebToken class: System.Runtime.Serialization.

    You can use the JsonWebToken class to get a unique and validated user ID like this:

        var user = myJWT.Claims.UserId;

    If you need to store user-specific information, you can use the UserId property to uniquely identify the user making the web service call.
    Running the MovieApp

    When you first run the Metro MovieApp, you get a screen which asks whether the app should have permission to use Single Sign-On. This screen never appears again after you give permission once.

    Actually, when I first ran the app, I got an error saying the app was blocked: "We detected some suspicious activity with your Online Id account. To help protect you, we've temporarily blocked your account." This appears to be a bug in the current preview release of the Live SDK, and there is more information about it here:

    http://social.msdn.microsoft.com/Forums/en-US/messengerconnect/thread/866c495f-2127-429d-ab07-842ef84f16ae/

    If you click Continue and keep running the app, the error message does not appear again.

    Summary

    The goal of this blog entry was to describe how you can validate Metro apps and Metro users when performing a call to a remote web service. First, I explained how you can create a Metro app which takes advantage of Single Sign-On to authenticate the current user against Live automatically. You learned how to register your Metro app with Live and how to include an authentication token in an Ajax call. Next, I explained how you can validate the authentication token, retrieved from the request header, in a web service. I discussed how you can use the JsonWebToken class to validate the authentication token and retrieve the unique user ID.

    Read the article

  • Copying a directory off a network drive using Objective-C/C

    - by Jeff
    Hi All, I'm trying to figure out a way for this to work:

        NSString *searchCommand = [[NSString alloc] initWithFormat:@"cp -R /Volumes/Public/DirectoryToCopy* Desktop/"];
        const char *system_call = [searchCommand UTF8String];
        system(system_call);

    The system() function does not seem to do anything with the string I am passing in. If I try something like:

        system("kextstat");

    there are no problems. Any ideas why the cp command I'm passing in is not working? All I get is "GDB: Running ....." in Xcode. I should mention that if I try the same exact command in Terminal, it works just fine.
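
    One likely culprit, though the question doesn't confirm it, is the relative destination Desktop/: when an app is launched from Xcode, its working directory is the build products folder rather than the user's home directory, so the copy lands somewhere unexpected even though the identical command works in Terminal, where the working directory is ~. A minimal C sketch, assuming the share is mounted at /Volumes/Public, that builds the command against an absolute destination:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* The destination must be absolute: a GUI app's working directory
               is usually not the user's home folder. */
            const char *home = getenv("HOME");   /* e.g. /Users/jeff */
            if (home == NULL) {
                return 1;
            }

            /* system() hands the string to /bin/sh, so the * glob is expanded
               by the shell just as it is in Terminal. */
            char command[1024];
            snprintf(command, sizeof(command),
                     "cp -R /Volumes/Public/DirectoryToCopy* \"%s/Desktop/\"", home);

            /* Returns the shell's exit status; non-zero means cp reported an error. */
            int status = system(command);
            printf("cp exited with status %d\n", status);
            return 0;
        }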

    Read the article

  • System calls on Windows

    - by b-gen-jack-o-neill
    Hi, I just want to ask: I know that standard system calls in Linux are made with an int instruction pointing into the interrupt vector table, and I assume it is similar on Windows. But how do you call higher-level, Windows-specific system routines? For example, how do you tell Windows to create a window? I know this is handled by code in a DLL, but what actually happens at the assembly-instruction level? Does the routine in the DLL issue a software interrupt with an int instruction, or is there a different approach? Thanks.
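
    In short: a call like CreateWindow is an ordinary function call into user32.dll, not a software interrupt; only when the kernel must be involved does ntdll.dll perform the real user-to-kernel transition (int 2Eh on older processors, sysenter or syscall on newer ones). Below is a minimal Win32 sketch in plain C, using the explicit ANSI entry points, of what creating a window looks like from user mode; the class and caption names are made up for illustration:

        #include <windows.h>

        /* Creating a window involves no int instruction in this code: these
           are ordinary stdcall calls into user32.dll. The kernel transition,
           when one is needed, happens inside ntdll.dll on our behalf. */
        int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
        {
            WNDCLASSA wc = {0};
            wc.lpfnWndProc   = DefWindowProcA;   /* default message handling suffices here */
            wc.hInstance     = hInst;
            wc.lpszClassName = "DemoWindowClass";
            RegisterClassA(&wc);

            HWND hwnd = CreateWindowExA(0, "DemoWindowClass", "Hello",
                                        WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                        CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                                        NULL, NULL, hInst, NULL);
            if (hwnd == NULL) {
                return 1;
            }

            /* Standard message loop; GetMessage blocks inside the OS until input arrives. */
            MSG msg = {0};
            while (GetMessageA(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessageA(&msg);
            }
            return (int)msg.wParam;
        }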

    Read the article

  • Call a web service and parse xml response in blackberry

    - by Nirmal
    Hello All... Currently I have a finished design for a BlackBerry application. Now I need to call a web service from my app, and that web service will give me an XML response, which I then need to parse into POJOs. For parsing the XML response, should I go with the basic DOM parser, or should I use some other J2ME-specific parsing approach? If anybody has a sample or tutorial link for this, it would be very useful to me. Thanks in advance....

    Read the article

< Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >