Search Results

Search found 16032 results on 642 pages for 'sync framework'.

  • Who could ask for more with LESS CSS? (Part 2 of 3 – Setup)

    - by ToStringTheory
    Welcome to part two in my series covering the LESS CSS language. In the first post, I covered the two major precompiled CSS languages - LESS and SASS - to a small extent, iterating over some of the features that you can expect to find in them. In this post, I will go a little further in depth into the setup and execution of using the LESS framework.

    Introduction
    It really doesn't take too much to get LESS working in your project. The basic workflow is including the necessary translator in your project, defining bundles for the LESS files, adding the necessary code to your _Layout.cshtml file, and finally adding all your styles to the LESS files! Let's get started…

    New Project
    Just like all great experiments in Visual Studio, start up a File > New Project, and create a new MVC 4 Web Application.

    The Base Package
    After you have the new project spun up, use the Nuget Package Manager to install the Bundle Transformer: LESS package. This will take care of installing the main translator that we will be using for LESS code (dotless, which is another Nuget package), as well as the core framework for the Bundle Transformer library. The installation comes with instructions in a readme file on how to modify your web.config to route all your *.less requests through the Bundle Transformer, which passes the translating on to dotless.

    Where To Put These LESS Files?!
    This step isn't really a requirement, however I find that I don't like how ASP.Net MVC just has a Content directory where it stores CSS, content images, CSS images… In my project, I went ahead and created a new directory just for styles - LESS files, CSS files, and images that are only referenced in LESS or CSS. Ignore the MVC directory, as this was my testbed for another project I was working on at the same time. As you can see here, I have:
    - A top level directory for images which contains only images used in a page
    - A top level directory for scripts
    - A top level directory for styles
    - A few directories for plugins I am using (Colrizr, JQueryUI, Farbtastic)
    - Multiple *.less files for different functions (I'll go over these in a minute)
    I find that this layout offers the best separation of content types.

    Bring Out Your Bundles!
    The next thing that we need to do is add in the necessary code for the bundling of these LESS files. Go ahead and open your BundleConfig.cs file, usually located in the /App_Start/ folder of the project. As you will see in a minute, instead of using the method Microsoft does in the base MVC 4 project, I change things up a bit.

    Define Constants
    The first thing I do is define constants for each of the virtual paths that will be used in the bundler. The main reason is that I hate magic strings in my program, so the fact that you first define a virtual path in the BundleConfig file, and then use that path in the _Layout.cshtml file, really irked me.

    Add Bundles to the BundleCollection
    Next, I am going to define the bundles for my styles in my AddStyleBundles method (a sketch of both steps follows below). That is all it takes to get all of my styles in play with LESS. The CssTransformer and NullOrderer types come from the Bundle Transformer we grabbed earlier. If we didn't use that package, we would have to write our own function (not too hard, but why do it if it's been done). I use the site.less file as my main hub for LESS - I will cover that more in the next section.
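    The code for these two steps isn't included in this excerpt; a minimal sketch of what they describe (the virtual path, bundle name, and file layout here are assumptions for illustration) might look like this:

        using System.Web.Optimization;
        using BundleTransformer.Core.Orderers;
        using BundleTransformer.Core.Transformers;

        public class BundleConfig
        {
            // Virtual path defined once, used by both the bundler and _Layout.cshtml
            public const string StylesBundle = "~/bundles/styles";

            public static void RegisterBundles(BundleCollection bundles)
            {
                AddStyleBundles(bundles);
            }

            private static void AddStyleBundles(BundleCollection bundles)
            {
                var styles = new Bundle(StylesBundle)
                {
                    // NullOrderer keeps files in exactly the order they were added
                    Orderer = new NullOrderer()
                };
                // CssTransformer hands *.less files off to dotless for translation
                styles.Transforms.Add(new CssTransformer());
                styles.Include("~/Styles/site.less");
                bundles.Add(styles);
            }
        }

    In _Layout.cshtml the bundle is then rendered from the same constant rather than a repeated magic string, e.g. @Styles.Render(BundleConfig.StylesBundle).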
    Add Bundles To _Layout.cshtml File
    With the constants in the BundleConfig file, instead of having to use the same magic string I defined for the bundle virtual path, I am able to do this: Notice here that besides the RenderSection magic strings (something I am working on in another side project), all of the bundles are now based on const strings. If I need to change the virtual path, I only have to do it in one place. Nifty!

    Get Started!
    We are now ready to roll! As I said in the previous section, I use the site.less file as a central hub for my styles. As seen here, I have a reset.css file, which is a simple CSS reset. Next, I have created a file for managing all my color variables - colors.less. Here, you can see some of the standards I started to use, in this case for color variables. I define all color variables with the @col prefix. Currently, I am going for verbose variable names. The next file imported is my font.less file, which defines the typeface information for the site: simple enough - a couple of imports for fonts from Google, and then declaring variables for use throughout LESS. I also set up the heading sizes, margins, etc. You can also see my current standardization for font declaration strings - @font. Next, I pull in a mixins.less file that I grabbed from the Twitter Bootstrap library, which gives some useful parameterized mixins such as border-radius, gradient, box-shadow, etc. The common.less file just contains items that I will be defining that can be used across all my LESS files - kind of like my own mixins or font helpers. Finally, I have my layout.less file, which contains all of my definitions for general site layout - width, main/sidebar widths, footer layout, etc. That's it! For the rest of my one-off definitions/corrections, I am currently putting them into the site.less file beneath my original imports.

    Note
    Probably my favorite side effect of using the LESS handler/translator while bundling is that it also does a CSS checkup when rendering… See, when your web.config is set to debug, bundling will output the URL to the direct .less file, not the bundle, and the HTTP handler intercepts the call, compiles the LESS, and returns the result. If there is an error in your LESS code, the CSS file can be returned empty, or may have the error output as a comment on the first couple of lines. If you have the web.config set to not debug, then if there is an error in your code, you will end up with the usual ASP.Net exception page (unless you catch the exception, of course), with information regarding the failure of the conversion, such as a brace mismatch, an undefined variable, etc. I find it nifty.

    Conclusion
    This is really just the beginning. LESS is very powerful and exciting! My next post will show an actual working example of why LESS is so powerful with its functions and variables… At least I hope it will! As for now, if you have any questions, comments, or suggestions on my current practice, I would love to hear them! Feel free to drop a comment or shoot me an email using the contact page. In the meantime, I plan on posting the final post in this series tomorrow or the day after, with my side project, as well as a whole base ASP.Net MVC4 templated project with LESS added to it, so that you can check out the layout I have in this post. Until next time…

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data intensive iOS app that is not using CoreData, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time. Then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:
    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth.
    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when the data syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.
    One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices. I just need to deal with the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
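    As an illustration of the split being described, here is a minimal sketch in C# (the bit widths and names are assumptions for illustration; the same shift-and-mask arithmetic ports directly to a 64-bit long long):

        static class KeyScheme
        {
            const int DeviceBits = 5;                 // high bits reserved for the device id (up to 32 devices)
            const int CounterBits = 64 - DeviceBits;  // 59 bits left for the running per-device counter

            public static ulong MakeKey(uint deviceId, ulong counter) =>
                // Mask the counter so it can never spill into the device-id bits
                ((ulong)deviceId << CounterBits) | (counter & ((1UL << CounterBits) - 1));

            public static uint DeviceIdOf(ulong key) => (uint)(key >> CounterBits);
        }

    A device that knows its id can keep the existing time-seeded, increment-by-one counter unchanged; keys from different devices can then never collide because their high bits differ.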

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (eg. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.
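    As a minimal sketch of the middle ground (all names here are hypothetical), a thin accessor over the stored document keeps the verbatim passthrough for the 9-in-10 case while giving the rare server-side reads a single place that knows the implicit schema:

        using Newtonsoft.Json.Linq;

        class OrderDocument
        {
            private readonly JObject _doc;
            public OrderDocument(JObject doc) { _doc = doc; }

            // The only place on the server that spells out the implicit schema;
            // a schema change means editing these accessors, not every call site.
            public decimal Total => _doc.Value<decimal>("total");
            public string CustomerId => (string)_doc.SelectToken("customer.id");

            // The common case: pass the stored form straight through to the client
            public string ToClientJson() => _doc.ToString();
        }

    This isn't a full business object - it adds no state of its own - but it centralizes the refactoring burden the question worries about without re-encoding the data.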

    Read the article

  • How to deal with a fellow programmer who likes to delegate tasks, with a lack of any support from the boss [closed]

    - by Rudy
    I have a problem with my fellow programmer. We are currently working together on a small project that needs to be shipped every 2 weeks. She has a tendency to ask for help with every issue she is facing, whether it's a compile error, an algorithm problem, or even a sync/merge issue that she caused herself. She does not even bother to check Google or try to find out for herself. I can be asked to help her 5-10 times a day. Every day her husband keeps calling her (4-6 times a day), and most of the code she has delivered is actually incorrect. Today she framed me for sending the wrong delivery product. She went home after lunch on the delivery day without telling the PM or the other team members, and the code she committed does not work at all. It's not even tested. I had no choice but to roll back her code and clean it up, just for the sake of being able to run the product. I have warned her about her defective code for almost 3 iterations. She said that when she is not around, I should be able to test her module for her. I snapped, yelled that I am not her slave, and reported it directly to my boss. However, my boss is not a person who can manage, or who cares about software quality. The most important thing to my boss is delivery of the product, whether it is tested or not. He can even ask us to deliver something to the client that has not even been tested by QA, on the next day. Most of our suggestions are not followed by him. He even asked me to apologize to her because I snapped. I am tired of the whole situation. This kind of thing keeps repeating. I have enough savings to survive for 6 months, and the idea of resigning keeps haunting me. There is nothing else that can be learned in my current job, and I have been in better environments than this. What should I do about the situation?

    Read the article

  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract - think of a scenario where a consumer persists its own representation of your entity, and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag is different from the consumer's version. If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so an entity with the same state always generates the same ETag. I have an implementation for a generic ETag generator on GitHub here: EntityTagger code sample. The essence is simple - we get the entity, serialize it and build a hash from the serialized value. Any changes to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

        EntityTagger.SetETag(user, x => x.ETag);

    The implementation comes in at 80 lines of code, all pretty straightforward:

        var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
        var originalETag = eTagProperty.GetValue(entity, null);
        try
        {
            ResetETag(entity, eTagPropertyAccessor);
            string json;
            var serializer = new DataContractJsonSerializer(entity.GetType());
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, entity);
                json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
            }
            var guid = GetDeterministicGuid(json);
            eTagProperty.SetValue(entity, guid.ToString(), null);
            //...

    There are a couple of helper methods to check if the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with the exact same structure but different names, then instances which have the same content will have the same ETag. You may want that behaviour, but change to the XML DataContractSerializer if you think that will be an issue. If you can persist the ETag somewhere, it will save you server processing to load up the entity, but that will only apply to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).
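    The GetDeterministicGuid() helper is referenced but not shown in this excerpt; a plausible minimal implementation (an assumption, not necessarily the sample's exact code) hashes the JSON and uses the digest as the GUID's raw bytes:

        using System.Security.Cryptography;
        using System.Text;

        private static Guid GetDeterministicGuid(string json)
        {
            using (var md5 = MD5.Create())
            {
                // An MD5 digest is exactly 16 bytes - the size of a Guid - so the
                // same serialized string always produces the same Guid (and ETag).
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(json));
                return new Guid(hash);
            }
        }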

    Read the article

  • Protobuf design patterns

    - by Monster Truck
    I am evaluating Google Protocol Buffers for a Java based service (but am expecting language agnostic patterns). I have two questions. The first is a broad general question: What patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*) etc. There is very little information of this sort on the Google Protobuf Help pages and public blogs, while there is a ton of information for established protocols such as XML. I also have specific questions about the following two different patterns:
    1. Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service - which is basically the default approach, I guess.
    2. Do the same, but also include hand crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity):

        public V toProtobufMessage() {
            V.Builder builder = V.newBuilder();
            for (Item item : getItemList()) {
                builder.addItem(item);
            }
            return builder.setAmountPayable(getAmountPayable()).
                           setShippingAddress(getShippingAddress()).
                           build();
        }

        public static T fromProtobufMessage(V message_) {
            return new T(message_.getShippingAddress(),
                         message_.getItemList(),
                         message_.getAmountPayable());
        }

    One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with .proto files. Does anyone have better techniques or further critiques on either of the two approaches?
    *By encapsulating a repeated field I mean messages such as this one:

        message ItemList {
            repeated Item item = 1;
        }
        message CustomerInvoice {
            required ShippingAddress address = 1;
            required ItemList items = 2;
            required double amountPayable = 3;
        }

    instead of messages such as this one:

        message CustomerInvoice {
            required ShippingAddress address = 1;
            repeated Item item = 2;
            required double amountPayable = 3;
        }

    I like the latter but am happy to hear arguments against it.

    Read the article

  • Troubleshooting RAC Node Reboots

    - by Allen Gao
    This article discusses the main causes of node reboots in an Oracle RAC cluster and the basic approach to diagnosing them; it applies to versions 10gR2 and 11gR1. First, the CRS daemon processes that can trigger a node reboot:

    1. ocssd: provides node monitoring and group management, and is one of the core CRS daemons. Node monitoring is based on two kinds of heartbeat - the network heartbeat and the disk heartbeat. If either heartbeat fails beyond its threshold, the node is evicted from the cluster and reboots itself to protect data integrity; node kill escalation (introduced in 11gR1) ensures the eviction completes within a bounded time (reboot time, 3 seconds by default).
    Network heartbeat: ocssd.bin sends a network heartbeat to the other nodes over the private interconnect every second. If heartbeats are missed for longer than misscount (30 seconds by default; 600 seconds when vendor clusterware is used), the cluster splits into sub-clusters and a reconfiguration is triggered; to avoid split brain, only one sub-cluster survives - in a 2-node cluster, the node with the lower node number - and the evicted node reboots.
    Disk heartbeat: ocssd.bin writes a disk heartbeat to each voting file every second. If a node cannot access the voting files for longer than disktimeout (200 seconds by default), it is evicted and reboots. CRS requires that [N/2]+1 voting files remain accessible, where N is the number of configured voting files.

    2. oclsomon: monitors the ocssd process itself; if ocssd.bin hangs, it reboots the node.

    3. oprocd: exists on Linux and Unix platforms; it protects the cluster against operating system scheduling problems, such as a system hang or a large time change, and reboots the node when they occur. It is spawned by the init.cssd script.

    To diagnose a node reboot, the following information is needed:
    1. Operating system logs
    2. <CRS home>/log/<node name>/cssd/ocssd.log
    3. oprocd.log (/etc/oracle/oprocd/*.log.* or /var/opt/oracle/oprocd/*.log.*)
    4. <CRS home>/log/<node name>/cssd/oclsomon/oclsomon.log
    5. Oracle OSWatcher output

    Next, the common reboot scenarios:

    1. Reboots initiated by ocssd.
    If the network heartbeat was lost, ocssd.log contains messages like the ones below; review the private network (cluster interconnect) and the OSWatcher output (traceroute, etc.) for the time of the reboot:

    [CSSD]2012-03-02 23:56:18.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 50% heartbeat fatal, eviction in 14.494 seconds
    [CSSD]2012-03-02 23:56:25.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 75% heartbeat fatal, eviction in 7.494 seconds
    [CSSD]2012-03-02 23:56:32.749 [3086] >WARNING: clssnmPollingThread: node <node_name> at 90% heartbeat fatal, eviction in 0.494 seconds
    [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE: clssnmPollingThread: Eviction started for node <node_name>, flags 0x040d, state 3, wt4c 0
    [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE: clssnmDiscHelper: <node_name>, node(4) connection failed, con (1128a5530), probe(0)
    [CSSD]2012-03-02 23:56:33.243 [3086] >TRACE: clssnmDiscHelper: node 4 clean up, con (1128a5530), init state 5, cur state 5
    [CSSD]2012-03-02 23:56:33.243 [3600] >TRACE: clssnmDoSyncUpdate: Initiating sync 196446491
    [CSSD]2012-03-02 23:56:33.243 [3600] >TRACE: clssnmDoSyncUpdate: diskTimeout set to (27000)ms

    Note: the rebooted node may not have had time to flush its own log, so also review the ocssd.log of the surviving nodes.
    If the disk heartbeat was lost (problems accessing the voting files), ocssd.log contains messages like the ones below; review I/O to the voting files and the OSWatcher output (iostat, etc.) for the time of the reboot:

    2010-08-13 18:34:37.423: [ CSSD][150477728]clssnmvDiskOpen: Opening /dev/sdb8
    2010-08-13 18:34:37.423: [ CLSF][150477728]Opened hdl:0xf4336530 for dev:/dev/sdb8:
    2010-08-13 18:34:37.429: [ SKGFD][150477728]ERROR: -9(Error 27072, OS Error (Linux Error: 5: Input/output error Additional information: 4 Additional information: 720913 Additional information: -1))
    2010-08-13 18:34:37.429: [ CSSD][150477728](:CSSNM00060: )clssnmvReadBlocks: read failed at offset 17 of /dev/sdb8
    2010-08-13 18:34:38.205: [ CSSD][4110736288](:CSSNM00058: )clssnmvDiskCheck: No I/O completions for 200880 ms for voting file /dev/sdb8)
    2010-08-13 18:34:38.206: [ CSSD][4110736288](:CSSNM00018: )clssnmvDiskCheck: Aborting, 0 of 1 configured voting disks available, need 1
    2010-08-13 18:34:38.206: [ CSSD][4110736288]###################################
    2010-08-13 18:34:38.206: [ CSSD][4110736288]clssscExit: CSSD aborting from thread clssnmvDiskPingMonitorThread
    2010-08-13 18:34:38.206: [ CSSD][4110736288]###################################

    2. Reboots initiated by oclsomon.
    Check oclsomon.log; messages there usually mean the ocssd process was hung, for example because of realtime (RT) scheduling problems or an operating system resource (e.g. CPU) shortage. Review the OSWatcher output (vmstat, top, etc.) for the time of the reboot.

    3. Reboots initiated by oprocd.
    Check the oprocd log, for example:

    Dec 21 16:15:30.369857 | LASTGASP | AlarmHandler: timeout(2312 msec) exceeds interval(1000 msec)+margin(500 msec). Rebooting NOW.

    Because oprocd reboots the node when the system time changes by more than its margin, make sure ntp (or any other time synchronization software) adjusts the clock gradually, and consider setting diagwait=13 to enlarge the margin; note that changing this setting requires stopping CRS on all nodes. If oprocd initiated the reboot, also review the OSWatcher output (vmstat, top, etc.) to check whether the operating system was hung or starved of resources.

    For more information, refer to the following MOS notes:
    Note 265769.1: Troubleshooting 10g and 11.1 Clusterware Reboots
    Note 1050693.1: Troubleshooting 11.2 Clusterware Node Evictions (Reboots)

    Read the article

  • Want to use something like Citrix XenClient, Free Alternative?

    - by Chris
    I'm looking to go into IT, general office server management, and it looks like XenClient would be an awesome tool to use. If I understand it right, you would store a central image of the OS you want to deploy (in an iso file) on the main server, then use XenClient to pull that image down to the client, which then boots the OS inside of a virtual machine. Does it download the image of the OS and store it locally (like cloning the VM onto the client)? I'd love to find a free (possibly open source?) alternative to this. I keep hearing about KVM in Linux and PXE booting a minimalistic OS to use remote KVMs… Would that be what I'm looking for? Ideally, I'd like a system:
    - that allows me to manage one central image for multiple clients (virtualized hardware)
    - that can easily push a new VM onto the client for easy updating
    - that is able to keep files in sync (but that might be a samba / Active Directory job)
    Would those things be possible with some kind of free alternative? Some guidance would be greatly appreciated.

    Read the article

  • Windows CE 6.0 time setting in registry being overridden

    - by JaminSince83
    I have asked this question on Stack Overflow, but it's probably better suited here. I have a Motorola MC3190 mobile barcode scanning device with Windows CE 6.0. I want to get the device to sync its date/time on boot with our domain controller, using a registry file that I have created. I have used the registry file below to get close to what I require:

        REG 1
        REGEDIT4

        [HKEY_LOCAL_MACHINE\Services\TIMESVC]
        "UserProcGroup"=dword:00000002
        "Flags"=dword:00000010
        "multicastperiod"=dword:36EE80
        "threshold"=dword:5265C00
        "recoveryrefresh"=dword:36EE80
        "refresh"=dword:5265C00
        "Context"=dword:0
        "Autoupdate"=dword:1
        "server"="NAMEOFMYSERVER"
        "ServerRole"=dword:0
        "Trustlocalclock"=dword:0
        "Dll"="timesvc.dll"
        "Keep"=dword:1
        "Prefix"="NTP"
        "Index"=dword:0

        [HKEY_LOCAL_MACHINE\nls]
        "DefaultLCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\nls\overrides]
        "LCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\Time]
        @="UTC"
        "TimeZoneInformation"=hex:\
          00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
          00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\Time Zones]
        @="UTC"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Clock]
        "AutoDST"=dword:00000000

    Now it gets the correct date and shows the time zone correctly, however the time is always 5 hours behind, on Eastern Standard Time, which is really annoying. I have researched heavily into this, and this question has been asked before here. As you will see, I have copied what it suggests, but it doesn't work. Something is overriding the time, and I don't understand enough about it to resolve this. I cannot find any other setting to get it to set the time correctly. Any help would be greatly appreciated.
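    For reference, the TimeZoneInformation blob mirrors the Win32 TIME_ZONE_INFORMATION structure, and its first four bytes are the bias in minutes from UTC, stored little-endian. A quick C# sketch (an illustration, not taken from the question) decodes that leading DWORD:

        using System;

        // The first DWORD of the TIME_ZONE_INFORMATION blob is the Bias in
        // minutes (UTC = local time + bias). The registry file above sets it
        // to 0, i.e. UTC; EST would be 300 (UTC-5), so whatever puts the clock
        // on EST must be rewriting this value after the .reg file is applied.
        byte[] blob = { 0x00, 0x00, 0x00, 0x00 /* , ... remaining struct bytes */ };
        int biasMinutes = BitConverter.ToInt32(blob, 0);
        Console.WriteLine("Bias: {0} minutes from UTC", biasMinutes);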

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

    Getting Started
    To start, I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

    Add a Web API Controller Class
    Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.
    Figure 1: This is how you create a new Controller Class in Visual Studio
    Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I'll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength, as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

    Hooking up Routing in Global.asax
    To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
    Using an extension method on ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

        using System;
        using System.Web.Routing;
        using System.Web.Http;

        namespace AspNetWebApi
        {
            public class Global : System.Web.HttpApplication
            {
                protected void Application_Start(object sender, EventArgs e)
                {
                    RouteTable.Routes.MapHttpRoute(
                        name: "AlbumVerbs",
                        routeTemplate: "albums/{title}",
                        defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
                    );
                }
            }
        }

    This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters - controller and action - that determine the controller to call and the method to activate, respectively.

    HTTP Verb Routing
    Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller, and a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against the route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with the initial creation of files, your project should look like Figure 2.
    Figure 2: The initial project has the new API Controller and Album model

    Creating a small Album Model
    Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.
        public class Song
        {
            public string AlbumId { get; set; }
            [Required, StringLength(80)]
            public string SongName { get; set; }
            [StringLength(5)]
            public string SongLength { get; set; }
        }

        public class Album
        {
            public string Id { get; set; }
            [Required, StringLength(80)]
            public string AlbumName { get; set; }
            [StringLength(80)]
            public string Artist { get; set; }
            public int YearReleased { get; set; }
            public DateTime Entered { get; set; }
            [StringLength(150)]
            public string AlbumImageUrl { get; set; }
            [StringLength(200)]
            public string AmazonUrl { get; set; }
            public virtual List<Song> Songs { get; set; }

            public Album()
            {
                Songs = new List<Song>();
                Entered = DateTime.Now;
                // Poor man's unique Id off GUID hash
                Id = Guid.NewGuid().GetHashCode().ToString("x");
            }

            public void AddSong(string songName, string songLength = null)
            {
                this.Songs.Add(new Song()
                {
                    AlbumId = this.Id,
                    SongName = songName,
                    SongLength = songLength
                });
            }
        }

    Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

        public static class AlbumData
        {
            // sample data - static list
            public static List<Album> Current = CreateSampleAlbumData();

            /// <summary>
            /// Create some sample data
            /// </summary>
            /// <returns></returns>
            public static List<Album> CreateSampleAlbumData() { … }
        }

    You can check out the full code for the data generation online.

    Creating an AlbumApiController
    Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb, as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods, to retrieve a list of albums and a single album by its title.

        public class AlbumApiController : ApiController
        {
            public IEnumerable<Album> GetAlbums()
            {
                var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
                return albums;
            }

            public Album GetAlbum(string title)
            {
                var album = AlbumData.Current
                    .SingleOrDefault(alb => alb.AlbumName.Contains(title));
                return album;
            }
        }

    To access the first two requests, you can use the following URLs in your browser:
    http://localhost/aspnetWebApi/albums
    http://localhost/aspnetWebApi/albums/Dirty%20Deeds
    Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter, mapped as optional in the route.
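    As a quick smoke test outside the browser, the same two endpoints can be called from C# with HttpClient (a sketch; the base URL matches the article's examples, and .Result is used only to keep the snippet short):

        using System;
        using System.Net.Http;

        var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") };

        // Both calls are dispatched to GetAlbums()/GetAlbum(title) purely by the
        // GET verb and the albums/{title} route - no action name in the URL.
        string allAlbums = client.GetStringAsync("albums").Result;
        string oneAlbum = client.GetStringAsync("albums/Dirty%20Deeds").Result;

        Console.WriteLine(oneAlbum);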
    Content Negotiation
    When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.
    Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.
    Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:
    Accept: text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8
    IE9 asks for:
    Accept: text/html, application/xhtml+xml, */*
    Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types and so returns an XML response. IE9 does not include an Accept header type that works on Web API by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tool: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don't have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code - Web API automatically performs the deserialization from the content.

    Accessing Web API JSON Data with jQuery
    A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

        $.getJSON("albums/", function (albums) {
            // make knockout template visible
            $(".album").show();
            // create view object and attach array
            var view = { albums: albums };
            ko.applyBindings(view);
        });

    Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).
    Figure 4: The Album Display sample uses JSON data loaded from Web API.
    The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data - it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

        $(".albumlink").live("click", function () {
            var id = $(this).data("id"); // title
            $.getJSON("albums/" + id, function (album) {
                ko.applyBindings(album, $("#divAlbumDialog")[0]);
                $("#divAlbumDialog").show();
            });
        });

    Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

    Explicitly Overriding Output Format
    When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response - including those Controller methods that return static results - into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
    To do this, I can adjust the output format and headers as shown in Listing 4.

        public HttpResponseMessage GetAlbums()
        {
            var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

            // Create a new HttpResponse with Json Formatter explicitly
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new ObjectContent<IEnumerable<Album>>(
                albums, new JsonMediaTypeFormatter());

            // Get Default Formatter based on Content Negotiation
            //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

            resp.Headers.ConnectionClose = true;
            resp.Headers.CacheControl = new CacheControlHeaderValue();
            resp.Headers.CacheControl.Public = true;
            return resp;
        }

    This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:

        var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    This provides you an HttpResponse object that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponse object, you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponse than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

    Non-Serialized Results
    The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

        [HttpGet]
        public HttpResponseMessage AlbumArt(string title)
        {
            var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
            if (album == null)
            {
                var resp = Request.CreateResponse<ApiMessageError>(
                    HttpStatusCode.NotFound,
                    new ApiMessageError("Album not found"));
                return resp;
            }

            // kinda silly - we would normally serve this directly
            // but hey - it's a demo.
            var http = new WebClient();
            var imageData = http.DownloadData(album.AlbumImageUrl);

            // create response and return
            var result = new HttpResponseMessage(HttpStatusCode.OK);
            result.Content = new ByteArrayContent(imageData);
            result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
            return result;
        }

    The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs - such as a resource that can't be found or a validation error - you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image).
    Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

        return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found")
        );

    If the album can be found, the image will be returned. The image is downloaded into a byte[] array, and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. It also makes it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

    Routing Again
    Ok, let's get back to the image example. Using the original HTTP Verb routing we have set up, there's no good way to serve the image. In order to return my album art image, I'd like to use a URL like this:
    http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image
    In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action based routing into an HTTP Verb routing controller - you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller - AlbumRpcApiController - to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
    For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/rpc/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumRpcApi",
                action = "GetAlbums"
            }
        );

    Now I can use either of the following URLs to access the image:
    Custom route (albums/{title}/image): http://localhost/aspnetWebApi/albums/PowerAge/image
    Action route (albums/rpc/{action}/{title}): http://localhost/aspnetWebApi/albums/rpc/albumart/PowerAge

    Sending Data to the Server
    To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

        public HttpResponseMessage PostAlbum(Album album)
        {
            if (!this.ModelState.IsValid)
            {
                // my custom error class
                var error = new ApiMessageError() { message = "Model is invalid" };

                // add errors into our client error model for client
                foreach (var prop in ModelState.Values)
                {
                    var modelError = prop.Errors.FirstOrDefault();
                    if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                        error.errors.Add(modelError.ErrorMessage);
                    else
                        error.errors.Add(modelError.Exception.Message);
                }
                return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
            }

            // update song id which isn't provided
            foreach (var song in album.Songs)
                song.AlbumId = album.Id;

            // see if album exists already
            var matchedAlbum = AlbumData.Current
                .SingleOrDefault(alb => alb.Id == album.Id ||
                                        alb.AlbumName == album.AlbumName);
            if (matchedAlbum == null)
                AlbumData.Current.Add(album);
            else
                matchedAlbum = album;

            // return a string to show that the value got here
            var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
            resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                             Encoding.UTF8, "text/plain");
            return resp;
        }

    The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP Status Code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added.
    The client jQuery code to call the POST operation is shown in Listing 7.

        var id = new Date().getTime().toString();
        var album = {
            "Id": id,
            "AlbumName": "Power Age",
            "Artist": "AC/DC",
            "YearReleased": 1977,
            "Entered": "2002-03-11T18:24:43.5580794-10:00",
            "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
            "AmazonUrl": "http://www.amazon.com/…",
            "Songs": [
                { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
                { "SongName": "Downpayment Blues", "SongLength": 4.22 },
                { "SongName": "Riff Raff", "SongLength": 2.42 }
            ]
        }

        $.ajax({
            url: "albums/",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            processData: false,
            beforeSend: function (xhr) {
                // not required since JSON is default output
                xhr.setRequestHeader("Accept", "application/json");
            },
            success: function (result) {
                // reload list of albums
                page.loadAlbums();
            },
            error: function (xhr, status, p3, p4) {
                var err = "Error";
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by de-serializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

        public HttpResponseMessage PutAlbum(Album album)
        {
            return PostAlbum(album);
        }

    To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

    Model Binding with UrlEncoded POST Variables
    In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to use plain UrlEncoded POST values to send to the server. Web API supports model binding that works similar (but not the same) as MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

        $.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

    Although the code looks very similar to the client code we used before passing JSON, here the data passed is URL encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
    Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values.

    Dynamic Access to POST Data
    There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have POST UrlEncoded values, you can access them dynamically using a FormDataCollection:

        [HttpPost]
        public string PostAlbum(FormDataCollection form)
        {
            return string.Format("{0} - released {1}",
                form.Get("AlbumName"), form.Get("YearReleased"));
        }

    The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when running Web API hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:
    Passing multiple POST parameters to Web API Controller Methods
    Mapping UrlEncoded POST Values in ASP.NET Web API

    Handling Delete Operations
    Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

        public HttpResponseMessage DeleteAlbum(string title)
        {
            var matchedAlbum = AlbumData.Current
                .Where(alb => alb.AlbumName == title)
                .SingleOrDefault();
            if (matchedAlbum == null)
                return new HttpResponseMessage(HttpStatusCode.NotFound);

            AlbumData.Current.Remove(matchedAlbum);
            return new HttpResponseMessage(HttpStatusCode.NoContent);
        }

    To call this action method using jQuery, you can use:

        $(".removeimage").live("click", function () {
            var $el = $(this).parent(".album");
            var txt = $el.find("a").text();
            $.ajax({
                url: "albums/" + encodeURIComponent(txt),
                type: "Delete",
                success: function (result) {
                    $el.fadeOut().remove();
                },
                error: jqError
            });
        });

    Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.

    Routing Conflicts
    In all requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1.
HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn't really fit – the return of an image where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:[HttpGet] public IQueryable<Album> GetSortableAlbums() { var albums = AlbumData.Current; // generally this should be done only on actual queryable results (EF etc.) // done here because we're running with a static list, but otherwise it might be slow return albums.AsQueryable(); } If you compile this code and now try to access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAlbums" } ); As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums Error Handling I've already done some minimal error handling in the examples. For example, in Listing 6 I detected some known-error scenarios like model validation failing or a resource not being found and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this:[HttpGet] public void ThrowException() { throw new UnauthorizedAccessException("Unauthorized Access Sucka"); } You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowException The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:GlobalConfiguration .Configuration .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never; If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action. 
[HttpGet] public void ThrowError() { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.BadRequest, new ApiMessageError("Your code stinks!")); throw new HttpResponseException(resp); } Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses any installed exception filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to the HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:private void ThrowSafeException(string message, HttpStatusCode statusCode = HttpStatusCode.BadRequest) { var errResponse = Request.CreateResponse<ApiMessageError>(statusCode, new ApiMessageError() { message = message }); throw new HttpResponseException(errResponse); } You can then use it to output any captured errors from code:[HttpGet] public void ThrowErrorSafe() { try { List<string> list = null; list.Add("Rick"); } catch (Exception ex) { ThrowSafeException(ex.Message); } }   Exception Filters Another, more global solution is to create an exception filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.public class UnhandledExceptionFilter : ExceptionFilterAttribute { public override void OnException(HttpActionExecutedContext context) { HttpStatusCode status = HttpStatusCode.InternalServerError; var exType = context.Exception.GetType(); if (exType == typeof(UnauthorizedAccessException)) status = HttpStatusCode.Unauthorized; else if (exType == typeof(ArgumentException)) status = HttpStatusCode.NotFound; var apiError = new ApiMessageError() { message = context.Exception.Message }; // create a new response and attach our ApiError object // which now gets returned on ANY exception result var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError); context.Response = errorResponse; base.OnException(context); } } Exception filter attributes can be assigned to an ApiController class like this:[UnhandledExceptionFilter] public class AlbumRpcApiController : ApiController or you can globally assign one to all controllers by adding it to the HTTP configuration's Filters collection:GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter()); The latter is a great way to get global error trapping so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can expect to see, on failure, a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up. 
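On the client side, that consistency means an AJAX error callback can simply parse the JSON error object. A minimal jQuery sketch, assuming the ApiMessageError shape used above:
error: function (xhr) {
    var msg = "Call failed";
    try {
        // the exception filter serializes { "message": "..." }
        msg = JSON.parse(xhr.responseText).message;
    }
    catch (e) { /* not JSON (e.g. an IIS HTML error page), keep the generic text */ }
    alert(msg);
}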
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are merely one of the hooks that let you plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future. Summary Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features behind its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and close attention to making HTTP semantics discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API. Related Resources Where does ASP.NET Web API fit? Sample Source Code on GitHub Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Creating a JSONP Formatter for ASP.NET Web API Removing the XML Formatter from ASP.NET Web API Applications © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API

    Read the article

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following: 1) in .\Control\Bootstrap.ini, changed DeployRoot to \%wdsserver%\DeploymentShare$ 2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server. 3) In Unattend.xml files located: .\Control**, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/ I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, everything else is being pulled off the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do in order to solve this problem? I've grepped all the files below the DeploymentShare to look for instances of the hostname of the WDS server at the remote site (the source of the files), but I found none. Here are the guides I referred to: http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/
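    For reference, the DeployRoot change in step 1 lives in the deployment share's Bootstrap.ini and typically ends up looking something like the following sketch (%WDSServer% is a standard MDT property that resolves to the responding WDS server):
    [Settings]
    Priority=Default

    [Default]
    DeployRoot=\\%WDSServer%\DeploymentShare$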

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said the pagecache eats all memory even with swapping turned off. What is happening? How to avoid this?

    Read the article

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using mdadm's RAID-1 of 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 if both drives are present, work as a regular volume (degraded RAID-1) if the external HDD is unplugged, and resync quickly when I plug the external HDD in again. Questions: Is it a good idea? Will a write-intent bitmap be enough for this task, or do I need something else? Should I consider doing it at the filesystem level (3b. if yes, how?). Basic requirements are: Quick resync when I re-add the external drive (provided I haven't changed that partition). More or less consistent data on the removed drive if I remove it outside of a write/resync operation. If I remove the drive during a resync I expect the data to be somewhat inconsistent, but I expect quick resync completion when I re-add it again. E.g. I want the remaining drive to track what has changed (there can be a lot of changes) and to sync back only those parts that need it.
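    A minimal sketch of the array being described, with an internal write-intent bitmap so that re-adding the external drive resyncs only the changed blocks (the device names here are placeholders):
    # create the mirror with an internal write-intent bitmap
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sda2 /dev/sdc1
    # before unplugging the external drive, remove it cleanly
    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    # after plugging it back in, re-add it; the bitmap keeps the resync short
    mdadm /dev/md0 --re-add /dev/sdc1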

    Read the article

  • Yum Update Failing mod_ssl and glibc_devel

    - by Kerry
    Any ideas on how to get this to not fail? # yum update Freeing read locks for locker 0x82: 4189/140342084876032 Freeing read locks for locker 0x84: 4189/140342084876032 Freeing read locks for locker 0x85: 4189/140342084876032 Freeing read locks for locker 0x86: 4189/140342084876032 Freeing read locks for locker 0x87: 4189/140342084876032 Freeing read locks for locker 0x9a: 4189/140342084876032 Freeing read locks for locker 0x9c: 4189/140342084876032 Freeing read locks for locker 0x9d: 4189/140342084876032 Freeing read locks for locker 0x9e: 4189/140342084876032 Freeing read locks for locker 0x9f: 4189/140342084876032 Freeing read locks for locker 0xa0: 4189/140342084876032 Freeing read locks for locker 0xa1: 4189/140342084876032 Freeing read locks for locker 0xa2: 4189/140342084876032 Freeing read locks for locker 0xa3: 4189/140342084876032 Freeing read locks for locker 0xa4: 4189/140342084876032 Freeing read locks for locker 0xa5: 4189/140342084876032 Freeing read locks for locker 0xa6: 4189/140342084876032 Freeing read locks for locker 0xa7: 4189/140342084876032 Freeing read locks for locker 0xa8: 4189/140342084876032 Freeing read locks for locker 0xa9: 4189/140342084876032 Freeing read locks for locker 0xaa: 4189/140342084876032 Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.hmc.edu * epel: mirrors.kernel.org * extras: centos.mirror.freedomvoice.com * updates: mirrors.sonic.net Setting up Update Process Resolving Dependencies There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them. The program yum-complete-transaction is found in the yum-utils package. --> Running transaction check ---> Package device-mapper-persistent-data.x86_64 0:0.2.8-2.el6 will be updated ---> Package device-mapper-persistent-data.x86_64 0:0.2.8-4.el6_5 will be an update ---> Package glibc-headers.x86_64 0:2.12-1.132.el6 will be updated --> Processing Dependency: glibc-headers = 2.12-1.132.el6 for package: glibc-devel-2.12-1.132.el6.x86_64 ---> Package glibc-headers.x86_64 0:2.12-1.132.el6_5.2 will be an update ---> Package httpd.x86_64 0:2.2.15-29.el6.centos will be updated --> Processing Dependency: httpd = 2.2.15-29.el6.centos for package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 ---> Package httpd.x86_64 0:2.2.15-30.el6.centos will be an update ---> Package kernel.x86_64 0:2.6.32-431.17.1.el6 will be installed ---> Package kernel-devel.x86_64 0:2.6.32-431.17.1.el6 will be installed ---> Package selinux-policy-targeted.noarch 0:3.7.19-231.el6_5.1 will be updated ---> Package selinux-policy-targeted.noarch 0:3.7.19-231.el6_5.3 will be an update --> Finished Dependency Resolution Error: Package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 (@base) Requires: httpd = 2.2.15-29.el6.centos Removing: httpd-2.2.15-29.el6.centos.x86_64 (@base) httpd = 2.2.15-29.el6.centos Updated By: httpd-2.2.15-30.el6.centos.x86_64 (updates) httpd = 2.2.15-30.el6.centos Error: Package: glibc-devel-2.12-1.132.el6.x86_64 (@base) Requires: glibc-headers = 2.12-1.132.el6 Removing: glibc-headers-2.12-1.132.el6.x86_64 (@base) glibc-headers = 2.12-1.132.el6 Updated By: glibc-headers-2.12-1.132.el6_5.2.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.2 Available: glibc-headers-2.12-1.132.el6_5.1.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.1 You could try using --skip-broken to work around the problem ** Found 34 pre-existing rpmdb problem(s), 'yum check' output follows: audit-2.2-4.el6_5.x86_64 is a duplicate with audit-2.2-2.el6.x86_64 
audit-libs-2.2-4.el6_5.x86_64 is a duplicate with audit-libs-2.2-2.el6.x86_64 curl-7.19.7-37.el6_5.3.x86_64 is a duplicate with curl-7.19.7-37.el6_4.x86_64 device-mapper-multipath-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-0.4.9-72.el6_5.1.x86_64 device-mapper-multipath-libs-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-libs-0.4.9-72.el6_5.1.x86_64 2:ethtool-3.5-1.4.el6_5.x86_64 is a duplicate with 2:ethtool-3.5-1.2.el6_5.x86_64 glibc-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-2.12-1.132.el6.x86_64 glibc-common-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-devel-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 has missing requires of glibc-headers = ('0', '2.12', '1.132.el6_5.2') gnutls-2.8.5-14.el6_5.x86_64 is a duplicate with gnutls-2.8.5-13.el6_5.x86_64 httpd-2.2.15-29.el6.centos.x86_64 has missing requires of httpd-tools = ('0', '2.2.15', '29.el6.centos') httpd-manual-2.2.15-30.el6.centos.noarch has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') iproute-2.6.32-32.el6_5.x86_64 is a duplicate with iproute-2.6.32-31.el6.x86_64 kernel-firmware-2.6.32-431.17.1.el6.noarch is a duplicate with kernel-firmware-2.6.32-431.11.2.el6.noarch kernel-headers-2.6.32-431.17.1.el6.x86_64 is a duplicate with kernel-headers-2.6.32-431.11.2.el6.x86_64 kpartx-0.4.9-72.el6_5.2.x86_64 is a duplicate with kpartx-0.4.9-72.el6_5.1.x86_64 krb5-libs-1.10.3-15.el6_5.1.x86_64 is a duplicate with krb5-libs-1.10.3-10.el6_4.6.x86_64 libblkid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libblkid-2.17.2-12.14.el6.x86_64 libcurl-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-7.19.7-37.el6_4.x86_64 libcurl-devel-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-devel-7.19.7-37.el6_4.x86_64 libtasn1-2.3-6.el6_5.x86_64 is a duplicate with libtasn1-2.3-3.el6_2.1.x86_64 libuuid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libuuid-2.17.2-12.14.el6.x86_64 libxml2-2.7.6-14.el6_5.1.x86_64 is a duplicate with libxml2-2.7.6-14.el6.x86_64 mdadm-3.2.6-7.el6_5.2.x86_64 is a duplicate with mdadm-3.2.6-7.el6.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 is a duplicate with 1:mod_ssl-2.2.15-29.el6.centos.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') nss-softokn-3.14.3-10.el6_5.x86_64 is a duplicate with nss-softokn-3.14.3-9.el6.x86_64 openssl-1.0.1e-16.el6_5.7.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.4.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.7.x86_64 openssl-devel-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-devel-1.0.1e-16.el6_5.7.x86_64 selinux-policy-3.7.19-231.el6_5.3.noarch is a duplicate with selinux-policy-3.7.19-231.el6_5.1.noarch tzdata-2014d-1.el6.noarch is a duplicate with tzdata-2014b-1.el6.noarch util-linux-ng-2.17.2-12.14.el6_5.x86_64 is a duplicate with util-linux-ng-2.17.2-12.14.el6.x86_64 UPDATE I installed and ran yum-complete-transaction as requested, it finished some things and suggested I run package-cleanup --problems, which yielded this: package-cleanup --problems Loaded plugins: fastestmirror Package httpd-manual-2.2.15-30.el6.centos.noarch requires httpd = ('0', '2.2.15', '30.el6.centos') Package httpd-2.2.15-29.el6.centos.x86_64 requires httpd-tools = ('0', '2.2.15', '29.el6.centos') Package mod_ssl-2.2.15-30.el6.centos.x86_64 requires httpd = ('0', '2.2.15', '30.el6.centos') Package 
glibc-devel-2.12-1.132.el6_5.2.x86_64 requires glibc-headers = ('0', '2.12', '1.132.el6_5.2') I'm definitely not a sys-admin, what would be the next step? UPDATE 2 I ran yum distro-sync: # yum distro-sync Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.hmc.edu * epel: mirrors.kernel.org * extras: centos.mirror.freedomvoice.com * updates: mirrors.sonic.net Setting up Distribution Synchronization Process Resolving Dependencies --> Running transaction check ---> Package glibc-headers.x86_64 0:2.12-1.132.el6 will be updated --> Processing Dependency: glibc-headers = 2.12-1.132.el6 for package: glibc-devel-2.12-1.132.el6.x86_64 ---> Package glibc-headers.x86_64 0:2.12-1.132.el6_5.2 will be an update ---> Package httpd.x86_64 0:2.2.15-29.el6.centos will be updated --> Processing Dependency: httpd = 2.2.15-29.el6.centos for package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 ---> Package httpd.x86_64 0:2.2.15-30.el6.centos will be an update --> Finished Dependency Resolution Error: Package: 1:mod_ssl-2.2.15-29.el6.centos.x86_64 (@base) Requires: httpd = 2.2.15-29.el6.centos Removing: httpd-2.2.15-29.el6.centos.x86_64 (@base) httpd = 2.2.15-29.el6.centos Updated By: httpd-2.2.15-30.el6.centos.x86_64 (updates) httpd = 2.2.15-30.el6.centos Error: Package: glibc-devel-2.12-1.132.el6.x86_64 (@base) Requires: glibc-headers = 2.12-1.132.el6 Removing: glibc-headers-2.12-1.132.el6.x86_64 (@base) glibc-headers = 2.12-1.132.el6 Updated By: glibc-headers-2.12-1.132.el6_5.2.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.2 Available: glibc-headers-2.12-1.132.el6_5.1.x86_64 (updates) glibc-headers = 2.12-1.132.el6_5.1 You could try using --skip-broken to work around the problem ** Found 34 pre-existing rpmdb problem(s), 'yum check' output follows: audit-2.2-4.el6_5.x86_64 is a duplicate with audit-2.2-2.el6.x86_64 audit-libs-2.2-4.el6_5.x86_64 is a duplicate with audit-libs-2.2-2.el6.x86_64 curl-7.19.7-37.el6_5.3.x86_64 is a duplicate with curl-7.19.7-37.el6_4.x86_64 device-mapper-multipath-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-0.4.9-72.el6_5.1.x86_64 device-mapper-multipath-libs-0.4.9-72.el6_5.2.x86_64 is a duplicate with device-mapper-multipath-libs-0.4.9-72.el6_5.1.x86_64 2:ethtool-3.5-1.4.el6_5.x86_64 is a duplicate with 2:ethtool-3.5-1.2.el6_5.x86_64 glibc-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-2.12-1.132.el6.x86_64 glibc-common-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 is a duplicate with glibc-devel-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6_5.2.x86_64 has missing requires of glibc-headers = ('0', '2.12', '1.132.el6_5.2') gnutls-2.8.5-14.el6_5.x86_64 is a duplicate with gnutls-2.8.5-13.el6_5.x86_64 httpd-2.2.15-29.el6.centos.x86_64 has missing requires of httpd-tools = ('0', '2.2.15', '29.el6.centos') httpd-manual-2.2.15-30.el6.centos.noarch has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') iproute-2.6.32-32.el6_5.x86_64 is a duplicate with iproute-2.6.32-31.el6.x86_64 kernel-firmware-2.6.32-431.17.1.el6.noarch is a duplicate with kernel-firmware-2.6.32-431.11.2.el6.noarch kernel-headers-2.6.32-431.17.1.el6.x86_64 is a duplicate with kernel-headers-2.6.32-431.11.2.el6.x86_64 kpartx-0.4.9-72.el6_5.2.x86_64 is a duplicate with kpartx-0.4.9-72.el6_5.1.x86_64 krb5-libs-1.10.3-15.el6_5.1.x86_64 is a duplicate with krb5-libs-1.10.3-10.el6_4.6.x86_64 libblkid-2.17.2-12.14.el6_5.x86_64 is a duplicate with 
libblkid-2.17.2-12.14.el6.x86_64 libcurl-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-7.19.7-37.el6_4.x86_64 libcurl-devel-7.19.7-37.el6_5.3.x86_64 is a duplicate with libcurl-devel-7.19.7-37.el6_4.x86_64 libtasn1-2.3-6.el6_5.x86_64 is a duplicate with libtasn1-2.3-3.el6_2.1.x86_64 libuuid-2.17.2-12.14.el6_5.x86_64 is a duplicate with libuuid-2.17.2-12.14.el6.x86_64 libxml2-2.7.6-14.el6_5.1.x86_64 is a duplicate with libxml2-2.7.6-14.el6.x86_64 mdadm-3.2.6-7.el6_5.2.x86_64 is a duplicate with mdadm-3.2.6-7.el6.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 is a duplicate with 1:mod_ssl-2.2.15-29.el6.centos.x86_64 1:mod_ssl-2.2.15-30.el6.centos.x86_64 has missing requires of httpd = ('0', '2.2.15', '30.el6.centos') nss-softokn-3.14.3-10.el6_5.x86_64 is a duplicate with nss-softokn-3.14.3-9.el6.x86_64 openssl-1.0.1e-16.el6_5.7.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.4.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-1.0.1e-16.el6_5.7.x86_64 openssl-devel-1.0.1e-16.el6_5.14.x86_64 is a duplicate with openssl-devel-1.0.1e-16.el6_5.7.x86_64 selinux-policy-3.7.19-231.el6_5.3.noarch is a duplicate with selinux-policy-3.7.19-231.el6_5.1.noarch tzdata-2014d-1.el6.noarch is a duplicate with tzdata-2014b-1.el6.noarch util-linux-ng-2.17.2-12.14.el6_5.x86_64 is a duplicate with util-linux-ng-2.17.2-12.14.el6.x86_64
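    For context, the commonly suggested cleanup sequence for this kind of duplicate-rpmdb state is roughly the following (a sketch, not a guaranteed fix; package-cleanup ships with yum-utils):
    yum-complete-transaction --cleanup-only   # clear the leftover transaction journal
    package-cleanup --cleandupes              # remove the older half of each duplicate pair
    yum update                                # retry once the rpmdb is consistent again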

    Read the article

  • Adding an user to samba

    - by JustMaximumPower
    I'm trying to set up some samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max), but I cannot add any new user. Every time I try to add a new user, they cannot use the shares. The error is likely something basic to the concept of samba, but please don't just tell me to read the docs - I've been trying that for about 2 weeks now. I've set up the server with my user max, who can mount transfer and the share max. Then I added the user simon with sudo adduser --no-create-home --disabled-login --shell /bin/false simon because the user should not be able to ssh into the machine. I did a sudo smbpasswd -a simon, set a (samba) password for simon, and added a share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons. ---- output of testparm: ------- Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[printers]" Processing section "[print$]" Processing section "[max]" Processing section "[simons]" Processing section "[transfer]" Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions [global] server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers [max] comment = Privater share von Max path = /media/Main/max read only = No create mask = 0700 [simons] comment = Privater share von Simon path = /media/Main/simon read only = No create mask = 0700 [transfer] comment = Transferlaufwerk path = /media/Main/transfer read only = No create mask = 0755 ---- The files in /media/Main: ------ drwxrwxr-x 17 max max 4096 Oct 4 19:13 max/ drwx------ 5 simon max 4096 Aug 4 15:18 simon/ drwxrwxr-x 7 max transferusers 258048 Oct 1 22:55 transfer/
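    One way to narrow down whether the failure is on the samba side or the filesystem side is to test locally with the names from the question (a sketch; smbclient prompts for simon's samba password):
    smbclient //localhost/simons -U simon          # exercises the samba account and share definition
    sudo usermod -aG transferusers simon           # confirm the group membership actually exists
    ls -ld /media/Main/simon /media/Main/transfer  # check the permissions samba will enforce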

    Read the article

  • Vagrant synced folders aren't case sensitive

    - by lvmisooners
    For our web stack, we are moving from a Windows Server to CentOS. To facilitate development, we're utilizing Vagrant to run CentOS VMs locally. We're using Vagrant's Synced Folders feature to allow devs to use their favorite IDEs on their host machine, but we're finding that one key feature is missing from this setup: file system case sensitivity. The synced folder inside the VM apparently takes on the properties of the host's file system, so if I'm developing from a Windows machine, or even OSX, the file system isn't case sensitive. This is a big issue, as our production servers will be pure CentOS, and their file system will be case sensitive. Case sensitivity is one of the main reasons we wanted to have a local VM; we want to prevent "It works on my machine!" Some workarounds we've considered or tried: Use lsyncd to sync from the vagrant share to a location within the VM that is case sensitive (updating files on the host doesn't seem to generate the events in the VM that lsyncd listens to). Make a case-sensitive partition on the host (doesn't work for Windows). Use samba (this may be an option, but we haven't vetted it yet). Is there a better way? Note that we have developers using Windows, OS X, and Ubuntu, and the solution needs to work everywhere.
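    For what it's worth, the lsyncd-style workaround above (one-way copying into the guest's own case-sensitive filesystem) can also be expressed directly in the Vagrantfile via the rsync synced-folder type, sketched below; this requires a Vagrant version that supports type: "rsync", and the box name is a placeholder:
    Vagrant.configure("2") do |config|
      config.vm.box = "centos65"   # placeholder box name
      # rsync copies files onto the guest's own (case-sensitive) filesystem
      # instead of mounting the host filesystem into the VM
      config.vm.synced_folder ".", "/vagrant",
        type: "rsync",
        rsync__exclude: ".git/"
    end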

    Read the article

  • Backup Gmail using Mail.app and IMAP without redundancy

    - by Cawas
    I don't care for actually using Mail.app; I mostly use the gmail interface, and Mail.app just for offline access, for quickly reading and eventually replying. Everything is working fine - I think I've followed every guide out there... Here's a great one. But I could find nothing about avoiding redundancy. Well, I can manually do that either by using POP or by checking most of my labels out of IMAP. But I do use a lot of labels, and I often label messages with more than 1 label. And I want them in Mail.app. Is there any way to make it keep just 1 copy of repeated messages? Maybe there's a message id or checksum that could be used... If there isn't a way to do it, be assured I still prefer having the extra messages and "wasting" space rather than not having any. edit: I've come across many solutions for finding duplicate files, but they just delete the files. That just makes things worse: Mail will just sync it all again. I've realized it's probably better to keep two accounts set up, POP for backup and IMAP for everything else with "All Mail" removed from it. That's because if the "All Mail" on the server is deleted for any reason, my local "All Mail" will also get deleted, while POP will keep all files regardless of the server. This doesn't solve the redundancy issue at all, but it doesn't create any new issues either, and I can even use search properly, without duplicated results, if I search just the POP account. So it helps optimize a little bit. But I still think the best way to solve this issue would be having something such as aamann's Mail Scripts tweaked to hardlink the duplicates rather than delete them, and optimized to not need to scan everything every time. I'm trying to contact him to see what we can do. At any rate, I'm still looking for an answer!

    Read the article

  • Google Apps for Domains, Multiple Domains

    - by belliez
    I have a primary Google Apps for Domains account which I use for my personal email, calendar, docs etc., and it is great. I also receive my POP3 company email via Settings - Get mail from other accounts in my account. Due to spam I want to make use of gmail's servers for my company email and have two options: [1] Add my second domain as a domain alias [2] Create a new Apps for Domains account If I do [1] above, do I access (send and receive) my company email as if it were a separate account, or is it merged into my primary domain? I want the two separated. If I perform [2], can I share my contacts / calendar between the two? I also have Act! contact manager which syncs to my primary domain, and it is getting messy now with personal and work contacts being changed / sync'd to my Act CM software. I want to try to separate my personal and work contacts (but make the work ones available in my primary domain). Hope this makes sense! Your suggestions are gratefully accepted. Thank you

    Read the article

  • SATA drives or chipset throwing DRDY ERR and ICRC ABRT

    - by Matt
    I have an SD-VIA-1A2S PCI card with 2 sata ports (and one ATA-133 that isn't used). Two new Western Digital Caviar Green drives (WD10EARS 1TB) throw repeated errors in kern.log (removed date/time/host info for brevity): [ 7.376475] ata2.00: exception Emask 0x12 SAct 0x0 SErr 0x1000500 action 0x6 [ 7.376480] ata2.00: BMDMA stat 0x5 [ 7.376483] ata2: SError: { UnrecovData Proto TrStaTrns } [ 7.376489] ata2.00: cmd c8/00:40:20:00:00/00:00:00:00:00/e0 tag 0 dma 32768 in [ 7.376490] res 51/84:2f:20:00:00/00:00:00:00:00/e0 Emask 0x12 (ATA bus error) [ 7.376493] ata2.00: status: { DRDY ERR } [ 7.376495] ata2.00: error: { ICRC ABRT } [ 7.376504] ata2: hard resetting link I'm using Ubuntu 9.04 - 2.6.28-18-generic, though I have tried live cds of Ubuntu 9.10, Fedora 12 and OpenSUSE 11.2 - all running various 2.6.31 kernels - and all received the same error. Based on testing these drives and this card in two other machines and combos of connecting the drives directly to the motherboard or the add-in card, I'm relatively convinced that it's the VIA chipset that is the problem. Another computer that also has an onboard VIA SATA chipset (like the add-in card) produces the same errors when the drives are directly on that motherboard. I have been able to verify that the drives are perfectly good, and I tried everything I can think of in terms of swapping cables, psu isn't overloaded, etc. The error happens on boot once or twice, after using fdisk on the drive once or twice, and constantly when attempting to sync a new mdadm raid 1 array created on the two drives. Any thoughts on where to go from here - driver/kernel wise? I'm completely open to buying a new PCI add-in card if someone can recommend one with 2 internal sata ports that works well in Debian/Ubuntu. Thanks!

    Read the article

  • raid md device is not removed from memory, how to overcome this problem

    - by santhosha
    I created a RAID 10 array (md11) and then removed two member devices from it one by one. After that, when I went to edit the mounted contents, the mount stopped responding. When I then tried to remove the remaining devices, mdadm reported "device or resource busy" (the array is not removed from memory). Trying to terminate the processes holding it also did not work. I have watched it for 4 days: the resync stays at 8.0% and never changes. cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec mdadm -D /dev/md11 /dev/md11: Version : 00.90.03 Creation Time : Sun Jan 16 16:20:01 2011 Raid Level : raid10 Array Size : 286743936 (273.46 GiB 293.63 GB) Device Size : 143371968 (136.73 GiB 146.81 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 11 Persistence : Superblock is persistent Update Time : Sun Jan 16 16:56:07 2011 State : active, degraded, resyncing Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K Rebuild Status : 8% complete UUID : 5e124ea4:79a01181:dc4110d3:a48576ea Events : 0.23 Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 4 8 145 2 faulty spare rebuilding /dev/sdj1 3 8 65 3 active sync /dev/sde1 umount /dev/md11 umount: /dev/md11: not mounted mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 kill -9 2128 kill -9 5018 kill -9 27605 kill -9 30562 kill -3 30591 mdadm -S /dev/md11 mdadm: fail to stop array /dev/md11: Device or resource busy lsof /dev/md11 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mount 2128 root 3r BLK 9,11 4058 /dev/md11 mount 5018 root 3r BLK 9,11 4058 /dev/md11 mdadm 27605 root 3r BLK 9,11 4058 /dev/md11 mount 30562 root 3r BLK 9,11 4058 /dev/md11 badblocks 30591 root 3r BLK 9,11 4058 /dev/md11 cat /proc/mdstat Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10] md11 : active raid10 sde1[3] sdj1[4] 286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0] [=...................] resync = 8.0% (23210368/286743936) finish=289392.6min speed=15K/sec

    Read the article

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4gb. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, but they unfortunately break down when I try to scale things up beyond 4gb. There are several other tools to do this (unetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted as FAT32, which unfortunately cannot hold a single file larger than 4gb. This is unfortunate because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts at verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "umount" commands before rebooting, which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16gb)?
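    For reference, the ext2-plus-extlinux procedure being attempted usually boils down to something like this sketch (the device name and the mbr.bin path are placeholders that vary by distro, and an extlinux.conf pointing at the kernel and initrd still has to be created):
    mkfs.ext2 /dev/sdX1                          # one large ext2 partition
    mount /dev/sdX1 /mnt/usb
    cp -a /media/iso/. /mnt/usb/                 # copy the live CD contents
    extlinux --install /mnt/usb                  # install the extlinux bootloader into the filesystem
    dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX bs=440 count=1   # write the syslinux MBR
    sync
    umount /mnt/usb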

    Read the article

  • Errors when attempting to update source files, Server 2012R2 (errors 80073701 and 14081)

    - by jeremy
    I have a Windows Server 2012R2 machine that I installed with Server Core, and then decided that I wanted to switch to GUI. I'll make the long story short: I ran windows updates, and now the source files are older/out of sync with the operating system, and I need to update the source files. Here are a couple of articles that outline how this is supposed to work: http://blog.coretech.dk/kaj/why-i-cant-convert-my-windows-server-2012-r2-core-to-gui/ http://blogs.technet.com/b/joscon/archive/2012/11/14/how-to-update-local-source-media-to-add-roles-and-features.aspx I have followed these instructions, but the updates are not successfully updating the source. I get errors like: "An error occurred - Package_for_KB29671203 Error: 0x80073701, Error: 14081, The referenced assembly could not be found." or "add-windowspackage failed. error code = 0x80073701, add-windowspackage: the referenced assembly could not be found" I've extensively searched for help on those error codes related to Server 2012 and windows updates, but my google-fu is failing me. I am using windows update packages found in c:\Windows\SoftwareDistribution\Download How can I get these updates to bring my source files up to current? Thanks!
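    For context, the procedure in the linked articles updates the local source media with DISM and looks roughly like this (a sketch; the WIM path, image index, and package name are placeholders):
    Dism /Mount-Wim /WimFile:D:\Sources\install.wim /Index:2 /MountDir:C:\mount
    Dism /Image:C:\mount /Add-Package /PackagePath:C:\Windows\SoftwareDistribution\Download\example.cab
    Dism /Unmount-Wim /MountDir:C:\mount /Commit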

    Read the article

  • Recursive move utility on Unix?

    - by Thomas Vander Stichele
    Sometimes I have two trees that used to have the same content, but have grown out of sync (because I moved disks around or whatever). A good example is a tree where I mirror upstream packages from Fedora. I want to merge those two trees again by moving all of the files from tree1 into tree2. Usually I do this with: rsync -arv tree1/* tree2 Then delete tree1. However, this takes an awful lot of time and disk space, and it would be much easier to be able to do: mv -r tree1/* tree2 In other words, a recursive move. It would be faster because, first of all, it would not even copy, just move the inodes, and second, I wouldn't need a delete at the end. Does this exist? As a test case, consider the following sequence of commands: $ mkdir -p a/b $ touch a/b/c1 $ rsync -arv a/ a2 sending incremental file list created directory ./ b/ b/c1 sent 173 bytes received 57 bytes 460.00 bytes/sec total size is 0 speedup is 0.00 $ touch a/b/c2 What command would now have the effect of moving a/b/c2 to a2/b/c2 and then deleting the a subtree (since everything in it is already in the destination tree)?
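    For what it's worth, on a single filesystem the effect can be approximated with a hard-link copy followed by a delete, which relinks inodes instead of duplicating file data (a sketch, assuming GNU cp and that overwriting the files the trees have in common is acceptable):
    cp -rlf tree1/. tree2/   # -l creates hard links instead of copying data; -f replaces entries that already exist
    rm -rf tree1             # drop the old directory entries; the inodes live on in tree2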

    Read the article

  • VMware databursts to vCenter

    - by Erwin Blonk
    [edit Nov. 16th 2009: Thank you for the responses. I'm no longer on this project, so the problem is out of my hands. Be sure I will return with other problems :-) ] I have a strange occurrence of regular data bursts over my WAN links from 2 sites to a site with vCenter. What happens is that a vCenter server is managing local ESX 3.5 servers as well as 2 ESX servers at 2 sites across a WAN link. Each server sends approximately 3MB worth of TLS data (less than 10% of the time it varies higher or lower) every 15 minutes (with a margin of 2 minutes). So far, I've not been able to single out a process that causes it; I looked through all applications on each site. It seems to originate from one server on each site. Although it may be coincidence and therefore not relevant, I found that one server, with very few exceptions, does a burst at 00:00 on the hour. The other 3 bursts during the hour are a bit off the 15-minute mark, but back at the top of the hour you can sync your watch on it. The other server follows 5 minutes after that with no such precision. But, as said, it never differs by more than 2 minutes. Servers are ESX 3.5, vCenter is 2.5.

    Read the article

< Previous Page | 447 448 449 450 451 452 453 454 455 456 457 458  | Next Page >