Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.


  • Windows system (bootloader) partition accidentally deleted during multiple installs

    - by S.Y.T.
    After successfully experimenting with dual-boot installs of several Ubuntu variants (Backtrack, XBMCbuntu), my Windows partition became unrecognizable to GRUB. I used my Windows boot CD to try to correct the problem. However, I deleted all partitions except the NTFS one that contains my old Windows install (and merged all the others into it, in hopes of getting back to the Windows boot loader and out of GRUB). Now all I get is a GRUB command prompt when I try to boot the system (how? I thought I deleted GRUB), and the Windows boot disc doesn't even recognize the install. I've tried TRK to try to resolve the problem, though I must admit ignorance in correctly using this utility. I've searched for other answers to this problem. Any help would be much appreciated. S.Y.


  • Multiple style sheets best practice

    - by user1145927
    I am currently working on a project that has one large style sheet covering about 20 pages. The style sheet contains some styles that are specific to certain pages. I'd like to break it up so there is one style sheet for each page, plus one master style sheet that handles everything generic; see the sketch below. The reason I want to do this is so people can work on multiple pages without having to worry about who has that large style sheet checked out (I'm using TFS). Is this good practice?
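
    A minimal sketch of the split being described, with hypothetical file names: each page links the shared master sheet first, then its own page-specific sheet, so page rules can override the generic ones.

        <!-- on every page: generic, site-wide rules -->
        <link rel="stylesheet" href="css/master.css" />
        <!-- only on the checkout page: rules specific to it -->
        <link rel="stylesheet" href="css/checkout.css" />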


  • Open Terminal with multiple tabs and execute application

    - by user172001
    I am new to Linux shell scripting. I want to write a shell script that opens a terminal with multiple tabs and runs an RTSP client app in each tab. I went through a question here on this forum and tried code like the one below:

        tab="--tab-with-profile=Default -e "
        cmd="java RunRTSPClient"
        for i in 1 2 3 4 5
        do
          foo="$foo $tab $cmd"
        done
        gnome-terminal $foo
        exit 0

    This runs and opens the terminal window with tabs, but then it suddenly closes. I am not getting any errors.
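
    For comparison, a hedged sketch of the same idea that quotes each command and keeps the tab's shell alive after the program exits. The profile name and command are carried over from the question, and this assumes the older gnome-terminal option syntax used there:

        #!/bin/bash
        # One --tab entry per instance; 'exec bash' keeps the tab open
        # so any error output from the java process stays visible.
        cmd="java RunRTSPClient"
        args=()
        for i in 1 2 3 4 5
        do
          args+=(--tab-with-profile=Default -e "bash -c '$cmd; exec bash'")
        done
        gnome-terminal "${args[@]}"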


  • Rhythmbox library set to multiple locations and no way to edit/change

    - by Pierre
    I previously set Rhythmbox to include a few different locations. Now, following changes to my directory structure, I'd like to reflect those changes in Rhythmbox. But when I go to "Library Location", I see "Multiple locations set", and I can only add to the list; there is no way to edit or remove entries. I googled this problem and the only relevant results date back to 2006, probably referring to a previous version of Rhythmbox, and they mention components/locations that I can't find on my system. I had a look at the documentation, which is minimal. Any clues? Ubuntu 12.04, Rhythmbox 2.96.
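
    As an assumption worth verifying: Rhythmbox of this vintage keeps its settings in GSettings, so the location list may be editable from the command line. The schema and key names below are guesses to check first, not confirmed values:

        # list every rhythmbox key and look for the library locations
        gsettings list-recursively | grep -i rhythmbox | grep -i location
        # then rewrite the list in one go (hypothetical schema/key):
        gsettings set org.gnome.rhythmbox.rhythmdb locations "['file:///home/me/Music']"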


  • Moving multiple objects on a map

    - by Dave
    I have multiple objects in my isometric game; for example, NPCs doing pathfinding automatically to walk around the map. There could be any number of them, from 0 to infinity (hypothetically, since no PC could handle that). My question is: is simply looping over each one individually the smartest way to animate them all (see the sketch below)? Surely as the number of units increases you will notice lag on units near the end of the loop, still "waiting" for their next animation movement. The alternative is a swarm algorithm to move all objects together. Is that a smarter idea, or do both apply depending on the circumstances of the game?
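
    For reference, a minimal sketch of the per-frame loop in question (C#, XNA-style; all names are hypothetical). One detail worth noting: every unit is updated within the same frame before anything is drawn, so loop order alone doesn't make late units visibly lag; the real limit is the total per-frame work exceeding the frame budget.

        using System.Collections.Generic;

        class Npc
        {
            public void FollowPath(float dt) { /* hypothetical pathfinding step */ }
        }

        class NpcSystem
        {
            private readonly List<Npc> npcs = new List<Npc>();

            // Called once per frame with elapsed seconds since the last frame.
            public void Update(float dt)
            {
                // Every NPC advances by the same dt before the frame is drawn,
                // so the whole pass is one O(n) loop per frame.
                foreach (var npc in npcs)
                    npc.FollowPath(dt);
            }
        }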


  • MVC Design Pattern to Combine Multiple Models for use

    - by roverred
    In my design, I have multiple models, and each model has a controller. I need all of the models together to process some operation. Most examples I see are pretty simple, with one view, one controller, and one model. How would you bring all these models together? The only ways I can think of are: 1) Have a top-level controller which has a reference to every controller; those controllers would have a getter/setter for their model (sketched below). Does this violate MVC, because every controller should have a model? 2) Have an intermediate class that combines every model into one model, then create a controller for that new super-model. Do you know of any better ideas? Thanks.
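
    A minimal sketch of option 1, with entirely hypothetical class names: a coordinating controller composes the per-model controllers and pulls each model when one operation needs all of them.

        class OrderModel { /* ... */ }
        class CustomerModel { /* ... */ }

        class OrderController    { public OrderModel Model { get; set; } }
        class CustomerController { public CustomerModel Model { get; set; } }

        class CheckoutCoordinator
        {
            private readonly OrderController orders;
            private readonly CustomerController customers;

            public CheckoutCoordinator(OrderController o, CustomerController c)
            {
                orders = o;
                customers = c;
            }

            public void ProcessCheckout()
            {
                // One operation reading data from several models.
                var order = orders.Model;
                var customer = customers.Model;
                // ... cross-model business rule goes here ...
            }
        }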


  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance and one dedicated background thread for loading all content. I wonder, would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask because this article makes the following statement: "If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures." I think what the author is saying is that if I access one ContentManager simultaneously on two threads, I'll get garbage. But what if I have a separate ContentManager instance for each thread? If no one knows the answer already from experience, I'll go ahead and try it and see what happens.


  • Cocos2d: Using single timer/scheduler for multiple sprites

    - by Shailesh_ios
    I want to know whether it is possible to use a single timer or scheduler method for multiple sprites. I am now working on a game where there could be any number of sprites, and I want to perform some actions on all of those sprites. Do I have to use as many timers or schedulers as there are sprites, or can the job be done using only a single timer or scheduler? What if I schedule one method and use it for, say, 10 sprites? Will it affect performance?
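
    One scheduled callback per layer is the usual shape of this. Here is a hedged cocos2d-x style C++ sketch (the question is presumably about cocos2d-iphone, but the pattern is the same, and all names below are hypothetical):

        #include "cocos2d.h"

        class GameLayer : public cocos2d::Layer
        {
        public:
            bool init() override
            {
                if (!Layer::init()) return false;
                scheduleUpdate();   // one scheduler entry for the whole layer
                return true;
            }

            // Runs once per frame and drives every sprite in the list,
            // instead of scheduling one callback per sprite.
            void update(float dt) override
            {
                for (auto* s : _sprites)
                    s->setPositionX(s->getPositionX() + 50.0f * dt);
            }

        private:
            cocos2d::Vector<cocos2d::Sprite*> _sprites;
        };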


  • Google Analytics account setup for multiple personal websites?

    - by User
    I have multiple personal websites that I develop, and I plan to develop more over time. The number of websites is currently greater than one but less than 50. Currently I have a single Google account with a single Analytics account that has a web property for each of my sites. My understanding is that you can have up to 25 Analytics accounts attached to a single Google account, and each of those 25 accounts can have up to 50 web properties in them, which would allow me to track up to 1,250 sites. I don't think I'll be hitting that number anytime soon; however, are there other reasons to structure accounts differently, such as using a separate Google account for each site and then adding myself as an administrator?


  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
    Original Problem

    In building our projects, I want the Mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implements a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (un-modified) changesets from the repository. 'hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfills the basic requirements, but there are a number of problems with it.

    Current Solution

    At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:

        cd $(ProjectDir)
        HgID

    I also add an HgID.bat file to the project directory:

        @echo off
        type HgId.pre > HgId.cs
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cs set /p = @"%%a"
        echo ; >> HgId.cs
        echo } >> HgId.cs
        echo } >> HgId.cs

    along with an HgId.pre file, which is defined as:

        namespace My.Namespace {
            /// <summary> Auto generated Mercurial ID class. </summary>
            internal class HgID {
                /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
                public const string Version =

    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution

    The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents.

    The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7, however, is anyone's guess, and as time goes on I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen.

    Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions

    How would you solve/how are you solving the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected solutions to these problems

    Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do. I also remember the problems that keyword substitution caused me in projects at previous companies (just the thought of ever having to use SourceSafe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example cygwin). Any other options people can think of would be appreciated.

    Update: Partial solution

    Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:

        @echo off
        type HgId.pre > HgId.cst
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
        echo ; >> HgId.cst
        echo } >> HgId.cst
        echo } >> HgId.cst
        fc HgId.cs HgId.cst >NUL
        if %errorlevel%==0 goto :ok
        copy HgId.cst HgId.cs
        :ok
        del HgId.cst

    Problems with this solution

    Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and there is now more to go wrong. If you can think of any better solutions, I would love to hear about them.


  • Setup sonar-runner for multiple java projects

    - by zetafish
    I am trying to run sonar-runner to analyze multiple Java projects in one go. According to the documentation http://docs.codehaus.org/display/SONAR/Analyse+with+a+simple+Java+Runner it is just a matter of creating a sonar-project.properties file for each project. But it is not clear to me where exactly I have to put these sonar-project.properties files. I tried to add multiple .properties files in the $SONAR_RUNNER_HOME/conf folder but the runner does not seem to pick them up. It only sees the sonar-project.properties file. Any suggestions on how to run sonar-runner for multiple projects?
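
    For reference, a hedged sketch of the multi-module layout the runner understands: a single sonar-project.properties at the root of the source checkout (rather than in $SONAR_RUNNER_HOME/conf), declaring each project as a module. The module names and paths below are assumptions:

        # sonar-project.properties in the directory containing both projects
        sonar.projectKey=org.example:parent
        sonar.projectName=Parent
        sonar.projectVersion=1.0

        # analyze the child projects as modules of this one
        sonar.modules=projectA,projectB
        projectA.sonar.projectBaseDir=project-a
        projectA.sonar.sources=src
        projectB.sonar.projectBaseDir=project-b
        projectB.sonar.sources=src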


  • Looking for USB Network Attached Storage (NAS) Adapter that supports multiple drives and NTFS/FAT32 filesystems

    - by braveterry
    I'm looking for a NAS adapter that supports attaching multiple USB devices to the network. Here's what I'd like to see in a NAS adapter:

    - Under $100.00.
    - Support for multiple devices, either through a USB hub or through multiple USB connectors on the device itself.
    - BitTorrent support would be nice, but this isn't a deal-breaker.
    - Filesystem support for at least NTFS or FAT32. I'd prefer not to have to reformat to use the device, but this is also not a deal-breaker.

    Here is what I am NOT looking for:

    - I'm NOT looking for a NAS enclosure. I already have a couple of spare external USB drives that I'd like to use.
    - I'm NOT looking for a networked USB hub like the one mentioned here. Network USB hubs only allow access to a drive from one PC at a time.
    - I'm NOT looking for a wireless router with a NAS built in. I already have a wireless router, and I'd rather not go through the hassle of replacing it if possible.

    What I've looked at so far:

    - PogoPlug: supports multiple devices via a USB hub, but there's no BitTorrent support. It's $99.00, so I may end up going with this and hope that they patch in BitTorrent support later.
    - Addonics NAS Adapter: supports only one device per adapter, so it's a non-starter.
    - SimpleNET NAS Head USB 2.0 Portable Dongle: I'm not 100% sure this supports multiple devices, plus there doesn't seem to be any BitTorrent support.

    I'll try to update this post as I explore other devices.


  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: one family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (e.g. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the mail transfer protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine. If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices but also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts. To get what I want (five accounts delivered into one shared mailbox), it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP (one possible build of this is sketched below). Is this workable? Is it a good approach? Is there an easier way?
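
    As an assumption rather than a recommendation, one common way to assemble the intervening server described above is fetchmail feeding a local IMAP server (e.g. Dovecot): fetchmail pulls each provider's POP3 account into one local mailbox, and every device then reads that mailbox over IMAP. A minimal hypothetical ~/.fetchmailrc:

        # Poll every 5 minutes; each POP3 account is delivered to the
        # same local user, whose mailbox the IMAP server then exposes.
        set daemon 300

        poll pop.provider-one.example proto pop3
          user "alice@provider-one.example" pass "secret1" is family here

        poll pop.provider-two.example proto pop3
          user "bob@provider-two.example" pass "secret2" is family here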


  • counting unique values based on multiple columns

    - by gooogalizer
    I am working in Google Spreadsheets and I am trying to do some counting that takes into consideration cell values across multiple columns in each row. Here's my table:

        AUTHOR    ARTICLE      VERSION   PRE-SELECTED
        ANDREW    GOLF STREAM  1         X
        ANDREW    GOLF STREAM  2         X
        ANDREW    HURRICANES   1
        JOHN      CAPE COD     1         X
        JOHN      GOLF STREAM  1

    (Google doc here)

    Each person can submit multiple articles as well as multiple versions of the same article. Sometimes different people submit different articles that happen to be identically named (Andrew and John both submitted different articles called "Golf Stream"). Multiple versions written by the same person do not count as unique, but articles with the same title written by different people do count as unique. So, I am looking to find a formula that:

    - Counts the number of unique articles that have been submitted [4] (without having to manually create extra columns for doing CONCATs, if possible)

    It would also be great to find formulas that:

    - Count the number of unique articles that have been pre-selected (marked "X" in the PRE-SELECTED column) [2]
    - Count the number of unique articles that have only 1 version [4]
    - Count the number of unique articles that have more than 1 of their versions pre-selected [1]

    Thank you so much! Nikita
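
    A hedged sketch of formulas for the first two counts, assuming the data sits in A2:D6 with AUTHOR in column A, ARTICLE in B and PRE-SELECTED in D (adjust the ranges to the real sheet; the "|" separator just keeps author and title from colliding):

        Unique submitted articles (distinct author+title pairs):
        =COUNTA(UNIQUE(ARRAYFORMULA(A2:A6 & "|" & B2:B6)))

        Unique pre-selected articles:
        =COUNTA(UNIQUE(FILTER(A2:A6 & "|" & B2:B6, D2:D6 = "X")))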


  • Can the Firefox password manager store and manage passwords for multiple sub-domains or different URLs in the same domain?

    - by Howiecamp
    Can the Firefox password manager store and manage passwords for multiple sub-domains, or for multiple URLs in the same domain? The default behavior of Firefox is that all requests for *.domain.com are treated as the same. I'd like to have Firefox do the following: Store and manage passwords separately for multiple sub-domains, e.g. mail.google.com and picasa.google.com Store and manage passwords separately for different URLs in the same domain, e.g. http://mail.google.com/a/company1.com and http://mail.google.com/a/company2.com


  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues with 20+ POP3 accounts. I'm moving over to IMAP, which will enable me to keep copies of the emails locally and on the server whilst keeping everything synchronised. However, I'm looking for the best way to manage multiple IMAP accounts in Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is that I go through my inbox daily and delete all emails that don't require any action, and I move any emails that do require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD, or Getting Things Done, philosophy. I also copy each email into separate local folders, just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server which means I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can still sometimes delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I also find the folder list in Thunderbird hard to manage; it has become a complicated list that is not easy to navigate (screenshot omitted here). What would be the best way of managing multiple IMAP accounts whilst allowing me to have copies put in a central folder and in local folders? Perhaps there is a better way entirely? How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox"; does it handle multiple IMAP accounts better?


  • Manage multiple wordpress blogs from one central location

    - by Abhishek
    I need a solution to manage multiple WordPress blogs from one central location. I tried WordPress MU, which is the obvious first choice, but it is too restrictive and has lots of bugs; no plugins work, etc. Is there any other way I can manage multiple WordPress blogs from one central location? The primary objective is to post an article to multiple blogs at once and manage comments from there.


  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments next to the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, starting with teams having their own domain controllers (in the same subnet), or additional OUs in our production AD that a team gets control over and can create/delete accounts within. We thought of four possible solutions:

    1. Setting up separate OUs in our production environment.
    2. Creating subdomains of our contoso.com domain, like test.contoso.com or something.contoso.com, and delegating control to the teams (would we need additional DCs for this, or would the two we already have be enough?).
    3. Setting up an additional test domain controller that has a trust to our main domain, which all teams can use as they please.
    4. Setting up a single domain controller for every team/project.

    We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords) and overall best practices for this scenario.


  • Merge executables to avoid multiple UAC

    - by petebob796
    Is there a program that can merge multiple Windows executables into one that runs them concurrently or in sequence? I realize this sounds like how viruses often work, but I have real needs: I am trying to avoid multiple UAC prompts in an installation process that runs multiple MS hotfixes. Any other advice on ways to avoid the UAC prompts when multiple exes are to be installed is appreciated.


  • Tool to run same key strokes on multiple unix machines

    - by virtualvoid
    I want to run the same commands on multiple machines. I know I can do this using ssh scripting or tools like clusterssh, but I don't want to install anything on the servers (I don't have the rights). What I want is to simply clone the keystrokes across multiple machines: e.g. run "cat /etc/oratab" in one window and have the same command run in multiple windows, e.g. in PuTTY. Is there a tool to do that from a Windows client?


  • Need to configure multiple default gateways for four separate physical network ports for a FreeBSD webserver

    - by user20010
    I need to configure default gateways for four separate physical network interfaces on a FreeBSD webserver. Basically, this is a web server that needs to be accessed via multiple WANs. I've been using various online resources and a combination of setfib, pf, and ipfw. This web server will be deployed in multiple sites where next-hop router info is not available, so we can't use static routes. We've used setfib to successfully create multiple routing tables and can ping beyond every default gateway we've created; using "setfib N ping some.ip.address" we can ping anything available on a WAN and beyond the router. The problem is that we can't get Apache web server (port 80) traffic to route out when external users access the server. Multiple people have examples of binding setfib to ipfw commands, but none of them seem to work.
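
    For orientation, a hedged sketch of the setfib/ipfw combination usually described for this. Interface names, addresses and FIB numbers are assumptions, it presumes the kernel was booted with net.fibs=4 (set in /boot/loader.conf), and whether replies honor the tagged FIB can vary by FreeBSD version, so treat this as a starting point rather than a working recipe:

        # give FIB 1 its own default route out of the second WAN
        setfib 1 route add default 192.168.2.1

        # tag traffic arriving on em1 so its replies consult FIB 1,
        # which routes them back through em1's gateway
        ipfw add 100 setfib 1 ip from any to any in recv em1
        ipfw add 110 allow ip from any to any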


  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

        <form>
            Name: <input type="name" name="name" value="Rick" />
            Value: <input type="value" name="value" value="12" />
            Entered: <input type="entered" name="entered" value="12/01/2011" />
            <input type="button" id="btnSend" value="Send" />
        </form>
        <script type="text/javascript">
            $("#btnSend").click(function () {
                $.post("samples/PostMultipleSimpleValues?action=kazam",
                       $("form").serialize(),
                       function (result) {
                           alert(result);
                       });
            });
        </script>

    or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

        $.post("samples/PostMultipleSimpleValues?action=kazam",
               { name: "Rick", value: 1, entered: "12/01/2012" },
               function (result) {
                   alert(result);
               });

    On the wire this generates a simple POST request with URL-encoded values in the content:

        POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: application/json
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost/AspNetWebApi/FormPostTest.html
        Content-Length: 41
        Pragma: no-cache
        Cache-Control: no-cache

        name=Rick&value=12&entered=12%2F10%2F2011

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

        [HttpPost]
        public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
        {
            return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                                 name, value, entered, action);
        }

    you'll find that you get an HTTP 404 error and:

        { "Message": "No HTTP resource was found that matches the request URI…" }

    Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work.

    Model binding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one. So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.

    Creating a Custom Parameter Binder

    Luckily Web API is greatly extensible and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

        public class SimplePostVariableParameterBinding : HttpParameterBinding
        {
            private const string MultipleBodyParameters = "MultipleBodyParameters";

            public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
                : base(descriptor)
            {
            }

            /// <summary>
            /// Check for simple binding parameters in POST data. Bind POST
            /// data as well as query string data.
            /// </summary>
            public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                     HttpActionContext actionContext,
                                                     CancellationToken cancellationToken)
            {
                // Body can only be read once, so read and cache it
                NameValueCollection col = TryReadBody(actionContext.Request);

                string stringValue = null;
                if (col != null)
                    stringValue = col[Descriptor.ParameterName];

                // try reading query string if we have no POST/PUT match
                if (stringValue == null)
                {
                    var query = actionContext.Request.GetQueryNameValuePairs();
                    if (query != null)
                    {
                        var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                        if (matches.Count() > 0)
                            stringValue = matches.First().Value;
                    }
                }

                object value = StringToType(stringValue);

                // Set the binding result here
                SetValue(actionContext, value);

                // now, we can return a completed task with no result
                TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
                tcs.SetResult(default(AsyncVoid));
                return tcs.Task;
            }

            private object StringToType(string stringValue)
            {
                object value = null;

                if (stringValue == null)
                    value = null;
                else if (Descriptor.ParameterType == typeof(string))
                    value = stringValue;
                else if (Descriptor.ParameterType == typeof(int))
                    value = int.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int32))
                    value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int64))
                    value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(decimal))
                    value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(double))
                    value = double.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(DateTime))
                    value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(bool))
                {
                    value = false;
                    if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                        value = true;
                }
                else
                    value = stringValue;

                return value;
            }

            /// <summary>
            /// Read and cache the request body.
            /// </summary>
            private NameValueCollection TryReadBody(HttpRequestMessage request)
            {
                object result = null;

                // try to read out of cache first
                if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
                {
                    // parses a string like firstname=Hongmei&lastname=Ge
                    result = request.Content.ReadAsFormDataAsync().Result;
                    request.Properties.Add(MultipleBodyParameters, result);
                }

                return result as NameValueCollection;
            }

            private struct AsyncVoid
            {
            }
        }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (which gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it; subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

    Once you have the parameter binding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

        GlobalConfiguration.Configuration.ParameterBindingRules
            .Insert(0, (HttpParameterDescriptor descriptor) =>
            {
                var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

                // Only apply this binder on POST and PUT operations
                if (supportedMethods.Contains(HttpMethod.Post) ||
                    supportedMethods.Contains(HttpMethod.Put))
                {
                    var supportedTypes = new Type[] { typeof(string),
                                                      typeof(int),
                                                      typeof(decimal),
                                                      typeof(double),
                                                      typeof(bool),
                                                      typeof(DateTime) };

                    if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                        return new SimplePostVariableParameterBinding(descriptor);
                }

                // let the default bindings do their work
                return null;
            });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook-up code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary

    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible and makes it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST-to-parameter mapping makes things much easier.

    I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can just be added. This is especially true since this binding doesn't affect existing bindings.

    Resources

    - SimplePostVariableParameterBinding source on GitHub
    - Global.asax hookup source
    - Mapping URL Encoded POST Values in ASP.NET Web API

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX.


  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation. I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer), to which it will create a reference at runtime (is this the best approach? a sketch follows below). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd-party accounting system depending on what the client is using, so I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, and they wouldn't need to switch between the two.

    Projects:

    - MyCompany.Common.dll - contains interfaces; all other projects have a reference to this one
    - MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    - MyCompany.Web.dll - website project, references MyCompany.Business.dll
    - MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
    - MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
    - MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; every other project would have a reference to this one:

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
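
    Since the question hinges on creating the data-layer reference at runtime, here is a minimal hedged sketch of that step using reflection. The config key and type names are assumptions for illustration, not a prescription:

        ' app.config (hypothetical):
        '   <add key="DataLayerAssembly" value="MyCompany.Data.AccountingSys1"/>
        Imports System.Configuration
        Imports System.Reflection

        Public Class FactoryLoader
            ' Loads whichever data-layer assembly the config names and
            ' returns its CompanyFactory through the shared interface.
            Public Shared Function LoadCompanyFactory() As ICompanyFactory
                Dim asmName As String = ConfigurationManager.AppSettings("DataLayerAssembly")
                Dim asm As Assembly = Assembly.Load(asmName)
                Dim factoryType As Type = asm.GetType(asmName & ".CompanyFactory")
                Return CType(Activator.CreateInstance(factoryType), ICompanyFactory)
            End Function
        End Class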


  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project, and then change the target platform from "Any CPU" to "x86". After this, new projects added don't get built by default, and their target platform doesn't follow the global setting. Why?! Looking at the Configuration Manager, new projects added are not checked to "Build", and they get the target platform "Any CPU" instead of the globally set x86. Why is this happening? I expect new projects to also get the globally set and defined x86 target platform. Some things I've tried:

    - Toggling the global platform back to Any CPU, and then to x86 again. No change.
    - Choosing the platform explicitly for the new project. x86 is not available in the list, and when I say <New..> and try adding it I'm not allowed, as "a solution platform with the same name already exists".
    - On the build properties for the new project I can't change the platform in the Configuration section, but I can set "Platform target" to x86 in the General section. It is, however, not clear whether this actually makes a difference, and it doesn't respond if I change the target platform globally later.

    Initially I thought this was a problem from converting my solution from VS2008 to VS2010, but the problem applies in both; i.e., when I create a solution in VS2008 and just stay in VS2008, I still get the problem.

