Search Results

Search found 4875 results on 195 pages for 'john hunt'.

  • How to stream authenticated content with MediaPlayer on Android

    - by 102790073222983779908
    I've seen quite a few posts asking this question on SO, but there doesn't seem to be a definitive answer (or at least an answer I like!). I've got content protected behind basic auth (username/password) -- I can download it fine using the various HTTP download classes, but for the life of me I can't sort out how to tell MediaPlayer to stream it (and provide the authentication). I saw one post that suggested it wasn't possible, since MediaPlayer is all native code and doesn't use things like the Authenticator. There are plenty of examples of how to first download to a cached copy and then play that back, but that sort of sucks (and the files may be hundreds of MBs). I saw at least one proposal to download it in smallish chunks and then start and stop the playback (redirecting to the new file), but that sort of sucks too, since there would (I presume) be a stutter (I haven't tried it, though). The best idea I have at this point is to start downloading to a cache file and then, when it's 'full enough', start up playback while I continue to fill the file... I hope that this works (but again, I haven't tried it). Am I missing something obvious? It's so painful to have all the various pieces almost working, and I had half convinced myself that there had to be a way to natively stream protected content (or have it take an already established and qualified InputStream), but it appears no joy. BTW, I'm a Mac/iPhone guy and a newb at Android, so I'm still fighting a bit of Java learning... Excuse me if I'm missing something obvious. -john
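    One approach worth sketching (not from the original thread, and it assumes a device running API level 14 or later, where MediaPlayer.setDataSource() gained an overload that accepts per-request HTTP headers): pass the Basic auth credentials along with the stream URL, so no caching or local proxy is needed. The class and method names below are placeholders.

        // Hedged sketch: stream Basic-auth protected media via the
        // header-aware setDataSource() overload (API level 14+).
        import android.content.Context;
        import android.media.MediaPlayer;
        import android.net.Uri;
        import android.util.Base64;

        import java.util.HashMap;
        import java.util.Map;

        public class AuthStreamPlayer {
            public static MediaPlayer play(Context ctx, String url, String user, String pass)
                    throws Exception {
                String cred = Base64.encodeToString((user + ":" + pass).getBytes("UTF-8"),
                        Base64.NO_WRAP);
                Map<String, String> headers = new HashMap<String, String>();
                headers.put("Authorization", "Basic " + cred);

                MediaPlayer mp = new MediaPlayer();
                mp.setDataSource(ctx, Uri.parse(url), headers); // headers ride along with each request
                mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                    public void onPrepared(MediaPlayer player) {
                        player.start();
                    }
                });
                mp.prepareAsync(); // buffers off the UI thread, then fires onPrepared
                return mp;
            }
        }

    On older API levels, the cache-and-play-when-full-enough approach described above really was the usual workaround.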

    Read the article

  • Not getting an array as result (calling a webservice by AJAX-JSON)

    - by Pasargad
    I'm trying to get the result of my web service as an array and then loop over the result to fetch all of the data. What I have done so far: in my web service, when I return the result I use return json_encode($newFiles); and the result is the following: "[{\"path\":\"c:\\\\my_images\\\\123.jpg\",\"ID\":\"123\",\"FName\":\"John\",\"LName\":\"Brown\",\"dept\":\"Hr\"}]" Then in my web application I call the REST web service with the following code in the RestService class:

    public function getNewImages($time) {
        $url = $this->rest_url['MyService'] . "?action=getAllNewPhotos&accessKey=" . $this->rest_key['MyService'] . "&lastcheck=" . $time;
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $data = curl_exec($ch);
        if ($data) {
            return json_decode($data);
        } else {
            return null;
        }
    }

    and then in my controller I have the following code:

    public function getNewImgs($time="2011-11-03 14:35:08") {
        $newImgs = $this->restservice->getNewImages($time);
        echo json_encode($newImgs);
    }

    and I'm calling this controller method by AJAX:

    $("#searchNewImgManually").click(function(e) {
        e.preventDefault();
        $.ajax({
            type: "POST",
            async: true,
            datatype: "json",
            url: "<?PHP echo base_url("myProjectController/getNewImgs"); ?>",
            success: function(imgsResults) {
                alert(imgsResults[0]);
            }
        });
    });

    but instead of giving me the first object it just gives me a quotation mark (the first character of the result): " Why is that? I'm passing JSON, and in the AJAX call I set the datatype to "json"! Please let me know if you need more clarification! Thanks :)
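    Two things stand out here (educated guesses, not from the original thread). First, the service output quoted above is a JSON string that itself contains JSON (note the outer quotes and the escaped inner quotes), which suggests it is encoded twice somewhere, so json_decode($data) yields a string rather than an array. Second, jQuery's option is spelled dataType; the all-lowercase datatype is silently ignored, so the response stays an unparsed string, and indexing a string returns its first character: that quotation mark. A sketch of the corrected pieces:

        // PHP side (RestService::getNewImages) -- tolerate double-encoded input,
        // though fixing the service to encode once would be better:
        //   $decoded = json_decode($data);
        //   if (is_string($decoded)) { $decoded = json_decode($decoded); }
        //   return $decoded;

        // JavaScript side -- note the camelCase dataType:
        $("#searchNewImgManually").click(function (e) {
            e.preventDefault();
            $.ajax({
                type: "POST",
                dataType: "json", // "datatype" is not a jQuery option
                url: "<?php echo base_url('myProjectController/getNewImgs'); ?>",
                success: function (imgsResults) {
                    // imgsResults is now a parsed array of objects
                    alert(imgsResults[0].FName + " " + imgsResults[0].LName);
                }
            });
        });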

    Read the article

  • Basic SQL Query, I am newbie

    - by user3530547
    I just started my database and query class on Monday. We met on Monday and just went over the syllabus, and on Wednesday the network at school was down so we couldn't even do the PowerPoint lecture. Right now I am working on my first homework assignment and I am almost finished, but I am having trouble with one question. Here it is:

    Write a SELECT statement that returns one column from the Customers table named FullName that joins the LastName and FirstName columns. Format the column with the last name, a comma, a space, and the first name, like this: Doe, John. Sort the result set by last name in ascending sequence. Return only the contacts whose last name begins with letters from M to Z.

    Here is what I have so far:

    USE md0577283
    SELECT FirstName, LastName
    FROM Customers
    ORDER BY LastName, FirstName

    My question is: how do I format it as LastName, FirstName like the professor wants, and how do I only select names M-Z? If someone could point me in the right direction I would greatly appreciate it. Thank you. PS With all due respect, I didn't ask for the answer, I asked for a nudge in the right direction, so why the down votes, guys?
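    A nudge rather than the full answer, assuming SQL Server syntax (where + concatenates strings), since that matches the query shown: alias the concatenated expression with AS, and filter on the last name in a WHERE clause. Something along these lines:

        SELECT LastName + ', ' + FirstName AS FullName
        FROM Customers
        WHERE LastName >= 'M'   -- anything sorting at or after 'M' covers M through Z
        ORDER BY LastName ASC;

    An equivalent filter is WHERE LEFT(LastName, 1) BETWEEN 'M' AND 'Z', which reads closer to the assignment's wording.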

    Read the article

  • Grouping by property value and writing group members

    - by Will S
    I need to group the following list by the department value, but am having trouble with the LINQ syntax. Here's my list of objects:

    var people = new List<Person> {
        new Person { name = "John",  department = new List<fields> { new fields { name = "department", value = "IT" } } },
        new Person { name = "Sally", department = new List<fields> { new fields { name = "department", value = "IT" } } },
        new Person { name = "Bob",   department = new List<fields> { new fields { name = "department", value = "Finance" } } },
        new Person { name = "Wanda", department = new List<fields> { new fields { name = "department", value = "Finance" } } },
    };

    I've toyed around with grouping. This is as far as I've got:

    var query = from p in people
                from field in p.department
                where field.name == "department"
                group p by field.value into departments
                select new { Department = departments.Key, Name = departments };

    So I can iterate over the groups, but I'm not sure how to list the Person names:

    foreach (var department in query) {
        Console.WriteLine("Department: {0}", department.Department);
        foreach (var foo in department.Department) {
            // ??
        }
    }

    Any ideas on what to do better, or how to list the names of the people in the relevant departments?
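    The missing piece is that each group is itself the sequence of matching Person objects, so the inner loop can enumerate it directly. A sketch reusing the types from the question (System.Linq assumed):

        var query = from p in people
                    from field in p.department
                    where field.name == "department"
                    group p by field.value into departments
                    select new { Department = departments.Key, People = departments };

        foreach (var department in query)
        {
            Console.WriteLine("Department: {0}", department.Department);
            foreach (var person in department.People) // the group enumerates its members
            {
                Console.WriteLine("  {0}", person.name);
            }
        }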

    Read the article

  • Questions about "sets"

    - by James
    I have a test tomorrow that I am revising for, and the lecturer has supplied some sample questions with no answers. I was hoping I could get some help with a couple of them. I've written what I think the answer is under each one.
    1. What is the type of the set {1, 2, 3}? My answer: integer/number.
    2. What is the type of the set {{1}, {2}, {3}}? My answer: integer/number (unsure what putting each number in {} does?).
    3. What is the type of the set {{1}, {2}, {3}, empty}? My answer: integer/number.
    4. What is the type of the set {1, {2}, 3}? Is it well typed? My answer: integer/number.
    5. What is the type of the set {1, 2, john}? Is it well typed? My answer: unsure for a mixed set; taking a complete guess of void or empty.
    Any help will be much appreciated.
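    For revision purposes, here is how these typically come out in a typed set theory (the convention used with Z notation, where ℙ X is the type "set of X"); hedged, since the course's notation may differ:

        {1, 2, 3}           : ℙ ℤ     (a set of integers)
        {{1}, {2}, {3}}     : ℙ (ℙ ℤ) (a set of sets of integers; the braces make each element a set)
        {{1}, {2}, {3}, ∅}  : ℙ (ℙ ℤ) (well typed: ∅ can be typed as the empty set of integers)
        {1, {2}, 3}         : not well typed (it mixes elements of type ℤ and ℙ ℤ)
        {1, 2, john}        : not well typed unless john is declared to be an integer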

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

    What is the CODE Keyboard?

    The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those who didn't start using computers until the mid-to-late 1990s have most likely always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

    Setting Up the CODE Keyboard

    Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool you may be unfamiliar with: a key puller. We'll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
    After you've secured the cable and adjusted it to your liking, there is one more task before plugging the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some DIP switches inside. The DIP switches are there to switch hardware functions for various operating systems and keyboard layouts, and to enable/disable the function keys. By toggling the DIP switches you can change the keyboard from QWERTY mode to Dvorak or Colemak mode, the two most popular alternative keyboard layouts. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 DIP switch: you can disable the Caps Lock key; goodbye, accidentally pressing Caps when you mean to press Shift. You can review the entire DIP switch configuration chart here. The quick start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key, as typically found on laptop keyboards). After adjusting the DIP switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

    Design, Layout, and Backlighting

    The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to their spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11 and F12 keys, the Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft SideWinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the switches the keys are attached to are mounted to a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice, even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It's rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

    Examining The Keys

    This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard and we've looked at its general construction, but what about the actual keys? There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to press that key all the way down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design of the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn't to say that the keyboard doesn't have a nice audible key press sound when the key is fully depressed, just that the mechanism doesn't create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any words-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You'd have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages, to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards, and even gaming keyboards typically only have some sort of rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don't use the PS/2 adapter and use the native USB, you still get 6-key rollover (and Ctrl, Alt, and Shift don't count towards the 6), so realistically you still won't be able to out-type the computer, as even the most finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance that you're a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

    The Good, The Bad, and the Verdict

    We've put the CODE keyboard through its paces: we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

    The Good:
    - The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
    - The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
    - The DIP switch system makes it very easy for users on different systems, and with different keyboard layout needs, to switch between operating systems and keyboard layouts. If you're investing a chunk of change in a keyboard, it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing.
    - The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
    - You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
    - The weight of the unit, combined with the extra-thick rubber feet, keeps it planted exactly where you place it on the desk.

    The Bad:
    - While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards.
    - People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

    The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
    Whether you're a programmer, transcriptionist, or just somebody who wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high, and the typing experience so enjoyable, that you're easily getting ten times the value you'd get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome keyboard), there's no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business-as-usual stuff, and because I've been trying to understand the role by doing it. I think the time has now come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul-destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:
    - Smart
    - Gets things done (thanks for these two, Joel)
    - Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)
    *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.
    We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:
    - Excellent coder
    For test engineers, the non-exhaustive meaning of "gets things done" is:
    - Good at finding problems in software
    - Competent coder
    Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:
    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for a first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the lookout for cultural fit red flags as well.
    5. If the first interview goes well, we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.
    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.
    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.
    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world but can't cut a line of code, they're not going to work out.
    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week, in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this: The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around it, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the Down Tools Week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis, to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since that would eliminate hardly anyone who was awful but would definitely eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.) No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN! OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:
    - How many people apply? (I wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    - How many of them submit a good assessment? (More/less than at present?)
    - How much overhead is there for us in dealing with these assessments compared to now?
    - What are the success and failure rates at each interview stage compared to now?
    - How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:
    - It self-selects for people who really want to work at Red Gate, which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    - Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    - Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    - In general, only the better candidates will complete and submit the assessment.
    - Marking assessments is much less work, so we'll be able to deal with any increase that we see (hopefully we will see one).
    There are obviously other questions as well:
    - Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism?
    - How do we assess candidates' education and experience? What about their ability to communicate in writing?
    - Do we still want them to submit a CV afterwards if they pass the assessment?
    - Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    - How does this affect our relationship with the recruitment agencies we might use to hire for these roles?
    So, what's the objective for next week's Down Tools Week? Pretty simple, really: we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!
    *I'm going to play dirty by offering them beer and chocolate during meetings.
    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely
    The other day I was agonising about the new-university/good-degree-grade versus poor-A-level-results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site: I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"
    Technorati Tags: recruitment, hr, developers, testers, red gate, cv, resume, cover letter, assessment, sea change

    Read the article

  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web service in .NET to be middleware between EWS 2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS 2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .NET should be easier than anything else... Right?!) My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough.) I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated. The types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was:

    ExchangeService svc = new ExchangeService();
    svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword);
    svc.AutodiscoverUrl(AutoDiscoverEmailAddress);

    For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection, and while it worked, I tried to build on it and add a method to create calendar items, which I copied from here:

    static void CreateAppointment(ExchangeServiceBinding esb)
    {
        // Create the appointment.
        CalendarItemType appointment = new CalendarItemType();
        ...
    }

    Right away, I'm confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"), so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method will compile. I found this blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types that were defined in the original assembly, Microsoft.Exchange.WebServices.dll (that came with the sample code), overlapped with types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse. I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something is clearly wrong here. So I went back on the hunt. I found instructions on adding service references and web references (i.e. the extra steps it takes in VS2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all the previous assemblies I had been trying, and I added a service reference for https://my.exchange-server.com/ews/services.wsdl Now I'm down to just 1 error and 1 warning. Warning: The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty. This is in reference to a change that was made to web.config when I added the service reference, and I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though: Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?)
    This is in reference to the function I was using to create the EWS connection, called by each of the web methods:

    private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword)
    {
        ExchangeService svc = new ExchangeService();
        svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword);
        svc.AutodiscoverUrl(AutoDiscoverEmailAddress);
        return svc;
    }

    This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available. (Nor is ExchangeServiceBinding; that was the first thing I checked.) At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl -- but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch... But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seems to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far. Help me, Obi-Wan, you're my only hope!
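    For what it's worth, ExchangeService is the entry point of the EWS Managed API, which lives in Microsoft.Exchange.WebServices.dll (the "MEWS" assembly from the sample code) under the Microsoft.Exchange.WebServices.Data namespace; it is an alternative to, not a product of, the service reference generated from services.wsdl. A minimal sketch, assuming that assembly is referenced:

        using System;
        using Microsoft.Exchange.WebServices.Data;

        class EwsDemo
        {
            static ExchangeService Connect(string autodiscoverEmail, string authEmail, string password)
            {
                ExchangeService svc = new ExchangeService(ExchangeVersion.Exchange2010);
                svc.Credentials = new WebCredentials(authEmail, password);
                svc.AutodiscoverUrl(autodiscoverEmail);
                return svc;
            }

            static void CreateAppointment(ExchangeService svc)
            {
                // The Managed API uses Appointment, not the proxy's CalendarItemType.
                Appointment appointment = new Appointment(svc);
                appointment.Subject = "Test";
                appointment.Start = DateTime.Now.AddHours(1);
                appointment.End = appointment.Start.AddHours(1);
                appointment.Save(); // creates the item in the default calendar
            }
        }

    Mixing the Managed API with a generated proxy in one project is what produces the duplicate-type conflicts described above, so picking one of the two approaches is usually the way out.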

    Read the article

  • Lync 2010, Kamailio, & Trixbox 2.6.23 (Asterisk 1.4)

    - by slashp
    I'm having an issue trying to connect Lync 2010 phone calls with our trixbox PBX. I've gotten to the point where Kamailio seems to be functioning properly and acting as a bridge between TCP traffic (from Lync) & UDP traffic (to the trixbox, as Asterisk 1.4 does not support SIP over TCP). Our Lync box IP: 10.100.10.41. Our Kamailio box IP: 10.100.10.44. Our trixbox IP: 10.100.10.2. The issue I'm running into is as follows when enabling SIP debugging for the Kamailio box:

    <--- SIP read from 10.100.10.44:5060 --->
    PRACK sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41 SIP/2.0
    FROM: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
    TO: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
    CSEQ: 24 PRACK
    CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
    MAX-FORWARDS: 70
    Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
    VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
    CONTACT: <sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41>
    CONTENT-LENGTH: 0
    USER-AGENT: RTCC/4.0.0.0 MediationServer
    RAck: 1 23 INVITE
    <------------->
    --- (12 headers 0 lines) ---
    Sending to 10.100.10.44 : 5060 (NAT)
    <--- Transmitting (NAT) to 10.100.10.44:5060 --->
    SIP/2.0 481 Call leg/transaction does not exist
    Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d;received=10.100.10.44
    Via: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
    From: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
    To: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
    Call-ID: 192daae6-00e1-4140-bddd-0394b35d475b
    CSeq: 24 PRACK
    User-Agent: Asterisk PBX
    Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
    Supported: replaces
    Content-Length: 0
    <------------>
    trixbox1*CLI>
    <--- SIP read from 10.100.10.44:5060 --->
    ACK sip:[email protected];user=phone SIP/2.0
    FROM: "John Jones"<sip:9121;[email protected];user=phone>;tag=4852bab430;epid=CF2380792B
    TO: <sip:[email protected];user=phone>;tag=3684a6a24e;epid=CF2380792B
    CSEQ: 23 ACK
    CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
    MAX-FORWARDS: 70
    Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
    VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK79a21c
    CONTENT-LENGTH: 0

    My SIP trunk on the trixbox looks like this:

    [from-lync]
    exten => _+4XXX!,1,Noop(Stripping + from start of number)
    exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1})

    Though I am still having no luck getting the + stripped or the call to go through. Any ideas would be greatly appreciated. Thank you! -slashp
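    Two hedged observations, neither from the original thread. First, Asterisk's Goto() treats a two-argument call as extension,priority, so Goto(from-internal,${EXTEN:1}) is parsed as a jump to extension "from-internal" rather than as a context change; an explicit priority fixes that:

        [from-lync]
        ; strip the leading + and jump to the from-internal context at priority 1
        exten => _+4XXX!,1,Noop(Stripping + from start of number)
        exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1},1)

    Second, the 481 arriving in response to a PRACK is suggestive: Asterisk 1.4's chan_sip has no 100rel/PRACK support, so the reliable provisional responses the Lync Mediation Server is using (note the RAck header) can never complete against it. Disabling 100rel on the Lync/Kamailio side, or having Kamailio absorb the PRACK leg, may be needed in addition to the dialplan tweak.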

    Read the article

  • Filtering Security Logs by User and Logon Type

    - by Trido
    I have been asked to find out when a user has logged on to the system in the last week. Now, the audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon) it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get this information. Is it possible inside of the Event Viewer, or do you need to use an external tool to parse it to this level? I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html, which seemed to be part of what I needed. I modified it slightly to only give me the last 7 days' worth. Below is the XML I tried.

    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]]</Select>
        <Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
        <Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
      </Query>
    </QueryList>

    It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this?

    EDIT: Thanks to the suggestions of Lucky Luke I have been making progress. The below is my current query, although, as I will explain, it isn't returning any results.

    <QueryList>
      <Query Id="0" Path="Security">
        <Select Path="Security">
          *[System[(EventID='4624')] and
            System[TimeCreated[timediff(@SystemTime) &lt;= 604800000]] and
            EventData[Data[@Name='TargetUserName']='john.doe'] and
            EventData[Data[@Name='LogonType']='2']
          ]
        </Select>
      </Query>
    </QueryList>

    As I mentioned, it isn't returning any results, so I have been messing with it a bit. I can get it to produce the results correctly until I add in the LogonType line. After that, it returns no results. Any idea why this might be?

    EDIT 2: I updated the LogonType line to the following:

    EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')]

    This should capture workstation logons as well as workstation unlocks, but I still get nothing. I then modify it to search for other logon types, like 3 or 8, which it finds plenty of. This leads me to believe that the query works correctly, but for some reason there are no entries in the Event Logs with a Logon Type equalling 2, and this makes no sense to me. Is it possible to turn this off?
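    For reference, the working pieces can be combined into a single Select; separate Select elements are ORed together, which is why the three-Select version behaved oddly. A sketch with a placeholder username, using a per-element comparison for each logon type:

        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">
              *[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]
                and EventData[Data[@Name='TargetUserName']='john.doe']
                and EventData[Data[@Name='LogonType']='2' or Data[@Name='LogonType']='7']]
            </Select>
          </Query>
        </QueryList>

    As for the missing type 2 events: on many setups console logons surface as type 7 (unlock) or type 11 (cached interactive) instead, so widening the filter as above, or adding '11', is often the practical fix.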

    Read the article

  • Am I crazy? (How) should I create a jQuery content editor?

    - by Brendon Muir
    Ok, so I created a CMS mainly aimed at primary schools. It's getting fairly popular in New Zealand, but the one thing I hate with a passion is the largely bad quality of in-browser WYSIWYG editors. I've been using KTML (made by InterAKT, which was purchased by Adobe a few years ago). In my opinion this editor does a lot of great things (image editing/management, thumbnailing, and pretty good content editing). Unfortunately time has had its nasty way with this product, and new browsers are beginning to break features and generally degrade the performance of this tool. It's also quite scary basing my livelihood on a defunct product! I've been hunting, in fact I regularly hunt around, to see if anything has changed in the WYSIWYG arena. The closest thing I've seen that excites me is the WYSIHAT framework, but they've decided to ignore a pretty relevant editing paradigm, which I'm going to outline below. This is the idea for my proposed editor, and I don't know of any existing products that can do this properly. Right, so the traditional model for editing, let's say, a Page in a CMS is to log into a 'back end' and click edit on the page. This will then load another screen with the editor in it and perhaps a few other fields. More advanced CMSs will maybe have several editing boxes for different portions of the page. Anyway, the big problem with this way of doing things is that the user is editing a document outside of the final context it will appear in. In the simplest terms, this means the page template. Many things can be wrong, e.g. the width of the editing area might be different to the width of the actual template area. The height is nearly always fixed, because existing editors always seem to use IFRAMEs for backward compatibility. And there are plenty of other beefs which I'm sure you're quite aware of if you're in this development area. Here's my editor utopia: You click 'Edit Page', and the actual page (with its actual template) displays. Portions of the page have been marked as editable via a class name. You click on one of these areas (in my case it'd just be the big 'body' area in the middle of the template) and an editing bar drops down from the top of the screen with all your standard controls (bold, italic, insert image, etc...). Iframes are never used; instead we rely on setting contentEditable to true on the DIVs in question. Firefox 2 and IE6 can go away, let's move on. You can edit the page knowing exactly how it will look when you save it. Because all the styles for this template are loaded, your headings will look correct, everything will be just dandy. Is this such a radical concept? Why are we still content with TinyMCE and that other editor that is too embarrassing to use because it sounds like a swear word!? Let's face the facts: I'm a JavaScript novice. I did once play around in this area using the JavaScript Anthology from SitePoint as a guide. It was quite a cool learning experience, but they of course used the IFRAME to make their lives easier. I tried to go a different route and just use contentEditable, and I even tried to sidestep the native content editing routines (execCommand) and instead wrote my own. They kind of worked, but there were always issues. Now we have jQuery, and a few libraries that abstract things like IE's lack of Range support. I'm wondering, am I crazy, or is it actually a good idea to try and build an editor around this editing paradigm, using jQuery and relevant plugins to make the job easier? My actual questions:
    - Where would you start?
    - What plugins do you know of that would help the most?
    - Is it worth it, or is there a magical project that already exists that I should join in on?
    - What are the biggest hurdles to overcome in a project like this?
    - Am I crazy?
    I hope this question has been posted on the right board. I figured it is a technical question, as I'm wanting to know specific hurdles and pitfalls to watch out for, and also whether it is technically feasible with today's technology. Looking forward to hearing people's thoughts and opinions.
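    As a starting point, the core of the no-iframe paradigm is genuinely small; a sketch (the IDs, class name, and save URL are made up for illustration):

        // Minimal sketch: edit marked regions in place, no iframe involved.
        $(function () {
            $(".editable").attr("contenteditable", "true");

            $("#bold-button").mousedown(function (e) {
                e.preventDefault(); // keep the selection inside the editable region
                document.execCommand("bold", false, null);
            });

            $("#save-button").click(function () {
                var html = $(".editable").first().html();
                $.post("/pages/save", { body: html }); // hand the edited HTML back to the CMS
            });
        });

    The hard parts, as the question anticipates, are cross-browser Range/selection handling and normalising the HTML that execCommand produces, which is exactly where a library layer earns its keep.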

    Read the article

  • Looking for a Software to harden Windows machines

    - by MosheH
    I'm a network administrator of a small/medium network. I'm looking for software (free or not) which can harden Windows computers (XP and Win7) for the purpose of hardening standalone desktop computers (not in a domain network). Note: the computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions that I need to apply, so it is not practical to set them up manually (one by one). Basically, what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example:
    - User John can only open one application and nothing else -- he doesn't see any icons on the desktop or start menu, except for the one or two applications which I want to allow. He can't right-click on the desktop, the task-bar icons are not shown, there are no folder options, etc...
    - User Mary can open a specific application and copy data to one folder on the D drive.
    - User Dan has access to all drives but cannot install software, and so on...
    So far, I've found only the following desktop restriction solutions, but they all seem to miss one or more features:
    1. Faronics WINSelect - The application seems to answer most of our needs except one feature which is very important to us but seems to be missing from WINSelect: "restriction per profile". WINSelect only allows restrictions which are applied system-wide. If I have multiple user accounts on the system and want to apply different restrictions for each user, I can't.
    2. Deskman - Same thing, no restriction per profile.
    3. Desktop Security Rx - Not relevant, no Win7 support.
    The only software I've found which offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive. It's worth mentioning that I'm not looking for "Internet kiosk software", although it shares some features with what I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface. So if anybody knows hardening software which allows setting up user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all

    Read the article

  • =?UTF-8?B??= in Emails sent via php mail problem

    - by Camran
    I have a website, and in the "Contact" section I have a form which users may fill in to contact me. The form is a simple form whose action is a PHP page. The PHP code:

    $to = "[email protected]";
    $name = $_POST['name'];       // sender name
    $email = $_POST['email'];     // sender email
    $tel = $_POST['tel'];         // sender tel
    $subject = $_POST['subject']; // subject CHOSEN FROM DROPLIST, ALL TESTED
    $text = $_POST['text'];       // message from sender
    $text .= "\n\nTel:" . $tel;   // added to the message so the sender's telephone number shows at the bottom
    $headers = "MIME-Version: 1.0"."\n";
    $headers .= "Content-type: text/plain; charset=UTF-8"."\n";
    $headers .= "From: $name <$email>"."\n";
    mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    Could somebody please tell me why this works most of the time, but sometimes I receive email with no text and the subject line showing =?UTF-8?B??= I use Outlook Express, and I have read this http://stackoverflow.com/questions/454833/system-net-mail-and-utf-8bxxxxx-headers but it didn't help. The problem is not in Outlook, because when I log in to the actual mail program where I fetch the POP3 emails from, the email looks the same. When I right-click in Outlook and choose "message source", the bad messages have no "From" information. E.g., a good message looks like this:

    Subject: =?UTF-8?B?w5Z2cmlndA==?=
    MIME-Version: 1.0
    Content-type: text/plain; charset=UTF-8
    From: John Doe

    However, the problem ones look like this:

    Subject: =?UTF-8?B??=
    MIME-Version: 1.0
    Content-type: text/plain; charset=UTF-8
    From:

    As if the information has been lost somewhere. You should also know that I have a VPS, which I manage myself. I use Postfix as the email server, if that's got anything to do with it. But then again, why does it work sometimes? Also, another thing I have noticed is that sometimes special characters are not shown correctly (by both Outlook and the webmail). For instance, the Swedish name "Björkman" is shown as Björkman, but again, only sometimes. I hope somebody knows something about this problem, because it is very hard to track down, for me at least. If you need more input let me know.
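    One observation, a guess from the symptom: =?UTF-8?B??= is exactly what this code produces when $subject arrives empty (base64 of an empty string is empty), so the encoder is fine and the input sometimes isn't. A defensive sketch, reusing $to and $text from the handler above; it also uses the CRLF separators RFC 2822 expects and encodes the From display name, which is a plausible cause of the mangled "Björkman":

        $subject = isset($_POST['subject']) ? trim($_POST['subject']) : '';
        $name    = isset($_POST['name'])    ? trim($_POST['name'])    : '';
        $email   = isset($_POST['email'])   ? $_POST['email']         : '';

        if ($subject === '' || !filter_var($email, FILTER_VALIDATE_EMAIL)) {
            die('Missing subject or invalid sender address.'); // would otherwise send =?UTF-8?B??=
        }

        $encodedSubject = '=?UTF-8?B?' . base64_encode($subject) . '?=';
        $encodedName    = '=?UTF-8?B?' . base64_encode($name) . '?=';

        $headers  = "MIME-Version: 1.0\r\n";                       // RFC 2822 wants CRLF line endings
        $headers .= "Content-type: text/plain; charset=UTF-8\r\n";
        $headers .= "From: $encodedName <$email>\r\n";

        mail($to, $encodedSubject, $text, $headers);

    Logging the raw POST whenever the guard trips should show whether the form is being submitted empty (bots hitting the endpoint directly are a common culprit).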

    Read the article

  • How to loop through all illustrator files in a folder (CS6)

    - by Julian
    I have written some JavaScript to save .ai files to two separate locations with different resolutions, one of them being cropped to a reduced-size artboard. (Courtesy of John Otterud / Articmill for the main part.) There are other variables in the script that I am not using at present, but I want to leave the functionality there for a later date/additional layers to export/other resolutions etc. I can't get it to loop through all files in a folder. I cannot find the script that works - or insert it at the right place. I can get as far as selecting the folder, and I suppose creating an array, but after that what next? This is the create-array part of the script:

    // JavaScript Document
    // Set up variables
    var destDoc, sourceDoc, sourceFolder, newLayer;
    // Select the source folder.
    sourceFolder = Folder.selectDialog('Select the folder with Illustrator files that you want to merge into one', '~');
    destDoc = app.documents.add();
    // If a valid folder is selected
    if (sourceFolder != null) {
        files = new Array();
        // Get all files matching the pattern
        files = sourceFolder.getFiles();

    I have inserted this at the beginning of the main script (probably where I am going wrong, because I can select the folder but then nothing more):

    #target illustrator
    var docRef = app.activeDocument;
    with (docRef) {
        if (layers[i].name = 'HEADER') {
            layers[i].name = '#' + activeDocument.name;
            save()
        }
    }

    // *** Export Layers as PNG files (in multiple resolutions) ***
    var subFolderName = "For_PLMA";
    var subFolderTwoName = "For_VLP";
    var saveInMultipleResolutions = true;
    // ...
    // Note: only use one character!
    var exportLayersStartingWith = "%";
    var exportLayersWithArtboardClippingStartingWith = "#";
    // ...
    var normalResolutionFileAppend = "_VLP";
    var highResolutionFileAppend = "_PLMA";
    // ...
    var normalResolutionScale = 100;
    var highResolutionScale = 200;
    var veryhighResolutionScale = 300;

    // *** Start of script ***
    var doc = app.activeDocument;
    // Make sure we have saved the document
    if (doc.path != "") {

    Then the rest of the export script runs on from there.
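    The missing piece is a loop over the result of getFiles(), opening each document in turn; a sketch (the "*.ai" mask and the per-document body are placeholders). Note also that layers[i].name = 'HEADER' assigns rather than compares; == is almost certainly what was intended there.

        // Hedged sketch: iterate every Illustrator file in the chosen folder.
        if (sourceFolder != null) {
            var files = sourceFolder.getFiles("*.ai"); // mask filters out non-.ai items
            for (var i = 0; i < files.length; i++) {
                if (files[i] instanceof File) {        // skip any sub-folders
                    var doc = app.open(files[i]);
                    // ... per-document work here: rename the HEADER layer, run the export, etc. ...
                    doc.save();
                    doc.close(SaveOptions.DONOTSAVECHANGES); // changes were saved above
                }
            }
        }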

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
I started by trying to store strings in sqlite using python, and got the message:

sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

Ok, I switched to Unicode strings. Then I started getting the message:

sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós'

when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós'. note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update.

Summary of answers

Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like:

str.decode('source_encoding').encode('desired_encoding')

or, if the str is already in unicode:

str.encode('desired_encoding')

For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python:

1. The encoding of the string you want to work with, and the encoding you want to get it to.
2. The system encoding.
3. The console encoding.
4. The encoding of the source file.

Elaboration:

(1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows.
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on.

(2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added:

# sitecustomize.py
# this file can be anywhere in your Python path,
# but it usually goes in ${pythondir}/lib/site-packages/
import sys
sys.setdefaultencoding('utf_8')

This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding.

(3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working.

(4) If you want to have some test strings, e.g. test_str = "ó", in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file:

# -*- coding: utf_8 -*-

If you don't have this information, python attempts to parse your code as ascii by default, and so:

SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run:

'?' = original char <type 'str'> repr(char)='\xf3'
'?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data
'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
'?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

Now I will change the system and console encoding to latin_1, and I get this output for the same input:

'ó' = original char <type 'str'> repr(char)='\xf3'
'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
'?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8.

'?' = original char <type 'str'> repr(char)='\xf3'
'?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
'?' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
'?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information than this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get started with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what.

Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows.

#!/usr/bin/env python
# -*- coding: utf_8 -*-

import os
import sys

def encodingDemo(str):
    validStrings = ()
    try:
        print "str =", str, "{0} repr(str) = {1}".format(type(str), repr(str))
        validStrings += ((str, ""),)
    except UnicodeEncodeError as ude:
        print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
        print ude
    try:
        x = unicode(str)
        print "unicode(str) = ", x
        validStrings += ((x, " decoded into unicode by the default system encoding"),)
    except UnicodeDecodeError as ude:
        print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string."
        print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()),
        print ude
    except UnicodeEncodeError as uee:
        print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
        print uee
    try:
        x = str.decode('latin_1')
        print "str.decode('latin_1') =", x
        validStrings += ((x, " decoded with latin_1 into unicode"),)
        try:
            print "str.decode('latin_1').encode('utf_8') =", str.decode('latin_1').encode('utf_8')
            validStrings += ((x, " decoded with latin_1 into unicode and encoded into utf_8"),)
        except UnicodeDecodeError as ude:
            print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t",
            print ude
    except UnicodeDecodeError as ude:
        print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t",
        print ude
    except UnicodeEncodeError as uee:
        print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
        print uee
    try:
        x = str.decode('utf_8')
        print "str.decode('utf_8') =", x
        validStrings += ((x, " decoded with utf_8 into unicode"),)
        try:
            print "str.decode('utf_8').encode('latin_1') =", str.decode('utf_8').encode('latin_1')
        except UnicodeDecodeError as ude:
            print "str.decode('utf_8').encode('latin_1') didn't work. The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t",
            validStrings += ((x, " decoded with utf_8 into unicode and encoded into latin_1"),)
            print ude
    except UnicodeDecodeError as ude:
        print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t",
        print ude
    except UnicodeEncodeError as uee:
        print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", uee

    print
    print "Printing information about each character in the original string."
    for char in str:
        try:
            print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char))
        except UnicodeDecodeError as ude:
            print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude)
        except UnicodeEncodeError as uee:
            print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee)
            print uee
        try:
            x = unicode(char)
            print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x))
        except UnicodeDecodeError as ude:
            print "\t'?' = unicode(char) ERROR: {0}".format(ude)
        except UnicodeEncodeError as uee:
            print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
        try:
            x = char.decode('latin_1')
            print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x))
        except UnicodeDecodeError as ude:
            print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude)
        except UnicodeEncodeError as uee:
            print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
        try:
            x = char.decode('utf_8')
            print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x))
        except UnicodeDecodeError as ude:
            print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude)
        except UnicodeEncodeError as uee:
            print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)
        print

x = 'ó'
encodingDemo(x)

Much thanks for the answers below and especially to @John Machin for answering so thoroughly.
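For the impatient, here is a minimal sketch of the whole round trip with sqlite that the summary above describes (decode bytes to unicode on the way in, let sqlite store and return unicode). It assumes Python 2.x and the standard sqlite3 module; the to_unicode helper is illustrative, not part of any library:

# -*- coding: utf_8 -*-
import sqlite3

def to_unicode(s):
    # Illustrative helper: decode bytes to unicode, trying utf_8 first.
    # latin_1 maps every possible byte value, so the fallback never raises.
    if isinstance(s, unicode):
        return s
    try:
        return s.decode('utf_8')
    except UnicodeDecodeError:
        return s.decode('latin_1')

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE tracks (tag_artist TEXT)")

latin_bytes = 'Sigur R\xf3s'  # '\xf3' is 'ó' in latin_1
conn.execute("INSERT INTO tracks VALUES (?)", (to_unicode(latin_bytes),))

row = conn.execute("SELECT tag_artist FROM tracks").fetchone()
print repr(row[0])  # u'Sigur R\xf3s' -- unicode in, unicode out

Because everything is decoded before it reaches the database, the ProgrammingError from the top of the question never fires, and no text_factory workaround is needed; sqlite3 hands back unicode objects by default.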

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

Requirements
- Allow Remote Desktop Connections in the guest OS.
- The network adapter type must allow communication with the host machine (e.g. use an “Internal” virtual adapter.)
- If running Server 2008 R2 on the guest, network discovery mode must be turned on.
- If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running:
  - DNS Client
  - Function Discovery Resource Publication
  - SSDP Discovery
  - UPnP Device Host

My Environment
A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I’ve found this setup to work well except when it comes to the display window for my VMs.

The Issue
Ever since I began running Hyper-V I haven’t been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn’t connect via RDP.

First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let’s dig into the requirements above.

Allow RDP Connection
This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from “Don’t allow…” to whichever “Allow connections…” setting suits your needs. I chose the less secure option as this is just my dev laptop.

Network Adapter Type
When I originally configured my VMs I configured each to use 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs. The connection for the ethernet adapter is an “External” adapter and thus doesn’t connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, “Internal”, which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs.

Turn On Network Discovery
Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here.) One of the settings that controls if a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it open up the Network and Sharing Center. Click “Change Advanced Sharing Settings” on the left.
On the following screen select “Turn on network discovery” for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn’t save. If this is the case then you most likely don’t have the supporting services running (as was my case.)

Network Discovery Supporting Services
There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here.) In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the Sharing settings screen and found that Network Discovery was turned on. I’m not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on. Either way the end result was a success.
- DNS Client
- Function Discovery Resource Publication
- SSDP Discovery
- UPnP Device Host

Before and After Results
The first image is the smaller square-shaped viewing window used by the Hyper-V native connection. The second is the full-screen RDP connection in all its widescreen glory.

Conclusion
Over the past few months I’ve found Hyper-V to be very useful for virtualizing my development environments, but I’ve also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn’t been able to get right for the longest time. Now that I’ve resolved these issues I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off feel free to let me know.

-Frog Out

Links
Turning on Network Discovery http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx
Services required for Network Discovery http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84

    Read the article

  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
MvcContrib contains a feature called Portable Areas which I’ve recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature, but once you start building robust portable areas, you’ll also want to be able to access other external files like css and javascript. After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I’d like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies, and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we’d want the same case-sensitivity handling to be taken into account for any embedded resource, so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios:

- External image files that are referenced in an <img> tag
- External files referenced like css or JavaScript files
- Image files referenced inside css files

Embedded Resources Walkthrough

This post describes a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial “Quick Links” widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt in if you want to use it. To do this, you simply “register” it by providing a route in your area registration that uses it like this:

1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
2:     new { controller = "EmbeddedResource", action = "Index" },
3:     new string[] { "MvcContrib.PortableAreas" });

First, notice that I can specify any route I want (e.g., “quicklinks/resources/…”). Second, notice that I need to include the “MvcContrib.PortableAreas” namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources has now been merged. Therefore, the call to:

1: RegisterTheViewsInTheEmmeddedViewEngine(GetType());

has now been removed (breaking change). It has been replaced with:

1: RegisterAreaEmbeddedResources();

Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this: I’ve got a css file in a folder called “Content” as well as a couple of image files in a folder called “images”. To reference these in my aspx/ascx code, all I have to do is this:

1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" />
2: <img src="<%= Url.Resource("images.globe.png") %>" />

This results in the following HTML markup:

1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" />
2: <img src="/quicklinks/resource/images.globe.png" />

The Url.Resource() method is now included in MvcContrib as well. Make sure you import the “MvcContrib” namespace in your views.
Next, I have the following html to render the quick links:

1: <ul class="links">
2:     <li><a href="http://www.google.com">Google</a></li>
3:     <li><a href="http://www.bing.com">Bing</a></li>
4:     <li><a href="http://www.yahoo.com">Yahoo</a></li>
5: </ul>

Notice the <ul> tag has a class called “links”. This is defined inside my QuickLinks.css file and looks like this:

1: ul.links li
2: {
3:     background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat;
4:     padding-left: 20px;
5:     margin-bottom: 4px;
6: }

On line 3 we’re able to refer to the url for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes? This point was raised by John Nelson in this post. This has been taken into account as well. For example, suppose you want your physical url to look like this:

1: <img src="/quicklinks/images/globe.png" />

instead of the corresponding URL shown above (i.e., “/quicklinks/resources/images.globe.png”). You can do this easily by specifying another route for it which includes a “resourcePath” parameter that is pre-pended. Here is the complete code for the area registration with the custom route for the images shown on lines 9-11:

1: public class QuickLinksRegistration : PortableAreaRegistration
2: {
3:     public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
4:     {
5:         context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
6:             new { controller = "EmbeddedResource", action = "Index" },
7:             new string[] { "MvcContrib.PortableAreas" });
8:
9:         context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}",
10:             new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
11:             new string[] { "MvcContrib.PortableAreas" });
12:
13:         context.MapRoute("quicklink", "quicklinks/{controller}/{action}",
14:             new {controller = "links", action = "index"});
15:
16:         this.RegisterAreaEmbeddedResources();
17:     }
18:
19:     public override string AreaName
20:     {
21:         get
22:         {
23:             return "QuickLinks";
24:         }
25:     }
26: }

The Quick Links portable area results in the following requests (including custom route formats): The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now. Portable Areas open up exciting new possibilities for MVC development!

    Read the article

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

ASP.NET
ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources. Lots of useful pointers.
Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here.
10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites. Highly recommended reading.
Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article). You can use this to significantly improve the load-time of your pages on the client.
Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET. A very useful link to bookmark. Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips.
Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership.

ASP.NET with jQuery
An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project.
Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them.
Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery.
Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it.
Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx).

ASP.NET MVC
ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps.
ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3. This makes it easy to post JSON payloads to MVC action methods.
Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications.
Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate credit card numbers are in the correct format with ASP.NET MVC.

Silverlight
Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter. You can watch all of the talks online. More details on my keynote and Silverlight 5 announcements can be found here.
31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).
Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit.

Visual Studio
JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio.
HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010.
Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Vishal Joshi blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.
Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010.

Hope this helps,

Scott

    Read the article

  • XNA Notes 007

    - by George Clingerman
Every week I keep wondering if there’s going to be enough activity in the community to keep doing these notes on a weekly basis, and every week I’m reminded of just how awesome and active the XNA community is. There are engines being made, tutorials being created, games being crafted. There’s information being shared, questions being answered, and then there’s another whole community around the Xbox LIVE Indie Games themselves. It’s really incredible just to watch all that’s going on, and I’m glad I’m playing a small part in all of this. So here’s what I noticed happening in the XNA community last week. If there are things I’m missing, always feel free to let me know. I love learning about new corners of the XNA community that I wasn’t aware of or have just been missing!

XNA Developers:
Uditha Bandara held XNA Game Development Workshops at Singapore Universities http://uditha.wordpress.com/2011/02/18/xna-game-development-workshops-at-singapore-universities-event-update/
Binary Tweed talks about Indie City and gives his opinion on the false promise of digital distribution http://www.develop-online.net/news/37053/OPINION-The-false-promise-of-digital-distribution
Kris Steele posts his Trivia or Die postmortem http://www.krissteele.net/blogdetails.aspx?id=246
@MadNinjaSkills (James Johnston) posts his feelings on testing for XBLIG http://www.ezmuze.co.uk/101
Simon (@DDReaper) posts hints and tips for XNA developers to help get the size of their projects down http://twitter.com/#!/DDReaper/status/38279440924545024 http://xna-uk.net/blogs/darkgenesis/archive/2011/02/17/look-at-the-size-of-that-thing.aspx
Michael B. McLaughlin, proving why he should be an XNA MVP, posts the list of commonly used value types in XNA games http://geekswithblogs.net/mikebmcl/archive/2011/02/17/list-of-commonly-used-value-types-in-xna-games.aspx http://twitter.com/#!/mikebmcl/status/38166541354811392
Paul Powell (@ITSligoPaul) posts about a common sprite batch as a game service http://itspaulsblog.blogspot.com/2011/02/xna-common-sprite-batch-as-game-service.html
@SigilXNA (John Defenbaugh) posts his new level editor video for the sequel to Opac’s Journey http://twitter.com/SigilXNA/statuses/36548174373982209 http://twitter.com/#!/SigilXNA/status/36548174373982209 http://youtu.be/QHbmxB_2AW8
@jwatte updates kW Animation for XNA 4.0 http://www.enchantedage.com/xna-animation
@DSebJ posts Blender to SunBurn http://twitter.com/#!/DSebJ/status/36564920224976896 http://dsebj.evolvingsoftware.com/?p=187
Ads and WP7 Games - @mechaghost shares his revenue data for his ad-based games http://www.occasionalgamer.com/2011/02/09/ads-and-wp7-games/

Xbox LIVE Indie Games (XBLIG):
Steven Hurdle posts day 100 of his quest to find a fantastic XBLIG purchase every day http://writingsofmassdeduction.com/2011/02/17/day-100-radiangames-ballistic/
Xbox 360 Indie Game Buying Guide - 12 games for $60, including several Xbox LIVE Indie games! (although if the XNA community was asked we could have recommended 60 games for $60...) http://www.indiegamemag.com/xbox360-indie-games-buying-guide/
The best selling Xbox LIVE Indie games of 2010 http://www.1up.com/news/xbox-live-most-popular-games
I’d buy that for a dollar! - the California Literary Review points out a few gems on the XBLIG marketplace (and other places) where you can game on the cheap.
http://calitreview.com/14125
Armless Octopus Episode 39 - The Indie Gem Octocast http://www.armlessoctopus.com/2011/02/17/armless-octocast-episode-39-the-indie-gem-octocast/
Ska Studios posts a plethora of updates http://www.ska-studios.com/2011/02/11/good-morning-gato-49/ http://www.ska-studios.com/2011/02/14/vampire-smile-valentines/ http://www.ska-studios.com/2011/02/16/the-dishwasher-vs-finds-a-home/
Kotaku posts about the Xbox LIVE Indie Game that makes you go Pew Pew Pew Pew Pew Pew http://kotaku.com/#!5760632/the-game-that-makes-you-go-pew-pew-pew-pew-pew-pew-pew
GameMarx continues to be active and doing a ton for the XBLIG community: reviews and Top 5 indie games of the week 2/4-2/10 http://www.gamemarx.com/video/the-show/22/ep-9-february-11-2010.aspx and a new podcast, Xbox Indie New Releases http://twitter.com/#!/gamemarx/status/36888849107910656 http://www.gamemarx.com/news/2011/02/13/a-new-podcast-xbox-indie-new-releases.aspx
@MasterBlud uploads Indocalypse XBLIG Collections #2 http://www.youtube.com/watch?v=uzCZSv075mc&feature=youtu.be&a http://twitter.com/#!/MasterBlud/status/37100029697064960
Just Press Start interviews Michael Hicks from MichaelArts, 18-year-old creator of Honor in Vengeance http://justpressstart.net/?p=465
Achievement Locked interviews Kris Steele of FunInfused Games http://xboxindies.wordpress.com/2011/02/11/interview-fun-infused-games/

XNA Game Development:
XNA-UK launches their XAP test service to help the XNA community http://xna-uk.net/blogs/news/archive/2011/02/18/xna-uk-xap-test-service-now-live.aspx
Transmute shows off a video of the standard character editor http://www.youtube.com/watch?v=qqH6gErG948&feature=youtu.be
Microsoft Tech Student introduces their first tech student of the month. Meet Daniel Van Tassel from the University of Utah and learn how he created an Xbox LIVE Indie Game using XNA Studio http://blogs.msdn.com/b/techstudent/archive/2010/12/22/introducing-our-first-tech-student-of-the-month-daniel-van-tassel.aspx
XNA for Silverlight Developers Part 3 - Animation (transforms) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx
XNA for Silverlight Developers Part 4 - Animation (frame based) http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-4-Animation-frame-based.aspx
@suhinini tweets about an XNA Sprite Font generation tool http://twitter.com/#!/suhinini/status/36841370131890176 http://www.nubik.com/SpriteFont/
XNATouch 1.5 is out and, in its words, is faster, simpler, more reliable and has the XNA 4.0 API http://monogame.codeplex.com/releases/view/60815
IndieCity is hosting marketing workshops for Indie Developers (UK and US) http://forums.create.msdn.com/forums/p/75197/457654.aspx#457654
New York Students - Learn XNA and Silverlight for Xbox 360 and Windows Phone 7 http://forums.create.msdn.com/forums/p/72753/456964.aspx#456964 http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/learn-to-build-your-own-games-for-xbox-360-and-windows-phone-7.aspx http://blogs.msdn.com/b/andrewparsons/archive/2011/01/13/build-a-game-in-48-hours-win-a-kinect-or-windows-phone-7.aspx
Extra Credits: Videogame Music http://www.escapistmagazine.com/videos/view/extra-credits/2019-Videogame-Music
Steve Pavlina posts an article with useful information for all XNA/XBLIG developers http://www.stevepavlina.com/blog/2011/02/completion-vs-perfection/

    Read the article

  • Deploy ASP.NET Web Applications with Web Deployment Projects

    - by Ben Griswold
One may quickly build and deploy an ASP.NET web application via the Publish option in Visual Studio. This option works great for most simple deployment scenarios, but it won’t always cut it. Let’s say you need to automate your deployments. Or you have environment-specific configuration settings. Or you need to execute pre/post build operations when you do your builds. If so, you should consider using Web Deployment Projects.

The Web Deployment Project type doesn’t come out-of-the-box with Visual Studio 2008. You’ll need to download Visual Studio® 2008 Web Deployment Projects – RTW and install it if you want to follow along with this tutorial.

I’ve created a shiny new ASP.NET MVC project. Web Deployment Projects work with websites, web applications and MVC projects, so feel free to go with any web project type you’d like. Once your web application is in place, it’s time to add the Web Deployment project. You can hunt and peck around the File > New > New Project… dialog as long as you’d like, but you aren’t going to find what you need. Instead, select the web project and then choose the “Add Web Deployment Project…” option hiding behind the Build menu.

I prefer to name my projects based on the environment in which I plan to deploy. In this case, I’ll be rolling to the QA machine. Don’t expect too much to happen at this point. A seemingly empty project with a funny icon will be added to your solution. That’s it.

I want to take a minute and talk about configuration settings before we continue. Some of the common settings which might change from environment to environment are appSettings, connectionStrings and mailSettings. Here’s a look at my updated web.config:

<appSettings>
  <add key="MvcApplication293.Url" value="http://localhost:50596/" />
</appSettings>

<connectionStrings>
  <add name="ApplicationServices"
       connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
       providerName="System.Data.SqlClient"/>
</connectionStrings>

<system.net>
  <mailSettings>
    <smtp from="[email protected]">
      <network host="server.com" userName="username" password="password" port="587" defaultCredentials="false"/>
    </smtp>
  </mailSettings>
</system.net>

I want to update these values prior to deploying to the QA environment. There are variations to this approach, but I like to maintain environment-specific settings for each of the web.config sections in the Config/[Environment] project folders. I’ve provided a screenshot of the QA environment settings below.

It may be obvious what one should include in each of the three files. Basically, each is a copy of the associated web.config section with updated setting values. For example, the AppSettings.config file may include a reference to the QA web url, the Db.config would include the QA database server and login information, and the SmtpSettings.config would include a QA SMTP server and user information.
<?xml version="1.0" encoding="utf-8" ?>
<appSettings>
  <add key="MvcApplication293.Url" value="http://qa.MvcApplication293.com/" />
</appSettings>

AppSettings.config

<?xml version="1.0" encoding="utf-8" ?>
<connectionStrings>
  <add name="ApplicationServices"
       connectionString="server=QAServer;integrated security=SSPI;database=MvcApplication293"
       providerName="System.Data.SqlClient"/>
</connectionStrings>

Db.config

<?xml version="1.0" encoding="utf-8" ?>
<smtp from="[email protected]">
  <network host="qaserver.com" userName="qausername" password="qapassword" port="587" defaultCredentials="false"/>
</smtp>

SmtpSettings.config

I think our web project is ready to deploy. Now it’s time to concentrate on the Web Deployment Project itself. Right-click on the project file and open the Property Pages.

The first thing to call out is the Configuration dropdown. I only deploy a project which is built in Release Mode, so I only set up the Web Deployment Project for this mode. (This is when you change the Configuration selection to “Release.”) I typically keep the Output Folder default value – .\Release\. When the application is built, all artifacts will be dropped in the .\Release\ folder relative to the Web Deployment Project root. The final option may be up for some debate. I like to roll out updatable websites, so I select the “Allow this precompiled site to be updatable” option. I really do like to follow standard SDLC processes when I release my software, but there are those times when you just have to make a hotfix to production, and I like to keep this option open if need be. If you are strongly opposed to this idea, please, by all means, don’t check the box.

The next tab is boring. I don’t like to deploy a crazy number of DLLs, so I merge all outputs to a single assembly. Again, you may have another option, and feel free to change this selection if you so wish. If you follow my lead, take care when choosing a single assembly name. The Assembly Name cannot be the same as the website or any other project in your solution, otherwise you’ll receive a circular reference build error. In other words, I can’t name the assembly MvcApplication293 or my output window would start yelling at me.

Remember when we called out our QA configuration files? Click on the Deployment tab and you’ll see how we’re going to use them. Notice the Web.config file section replacements value. All this does is swap the called-out web.config sections with the content of the Config\QA\* files. You can reduce or extend this list as you deem fit. Did you see the “Use external configuration source file” option? You know how you can point any of your web.config sections to an external file via the configSource attribute? This option allows you to leverage that technique: instead of replacing the content of the sections, you will replace the configSource attribute value.

<appSettings configSource="Config\QA\AppSettings.config" />

Go ahead and Apply your changes. I’d like to take a look at the project file we just updated. Right-click on the Web Deployment Project and select “Open Project File.”

One of the first configuration blocks reflects core Release build settings. There are a couple of points I’d like to call out here:

DebugSymbols=false ensures the compilation debug attribute in your web.config is flipped to false as part of the build process. There’s some crummy (more likely old) documentation which implies you need a ToggleDebugCompilation task to make this happen. Nope.
Just make sure DebugSymbols is set to false. EnableUpdateable implies a single dll for the web application rather than a dll for each object and an empty view file. I think updatable applications are cleaner and include the benefit (or risk, based on your perspective) that portions of the application can be updated directly on the server. I called this out earlier but I wanted to reiterate.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugSymbols>false</DebugSymbols>
    <OutputPath>.\Release</OutputPath>
    <EnableUpdateable>true</EnableUpdateable>
    <UseMerge>true</UseMerge>
    <SingleAssemblyName>MvcApplication293</SingleAssemblyName>
    <DeleteAppCodeCompiledFiles>true</DeleteAppCodeCompiledFiles>
    <UseWebConfigReplacement>true</UseWebConfigReplacement>
    <ValidateWebConfigReplacement>true</ValidateWebConfigReplacement>
    <DeleteAppDataFolder>true</DeleteAppDataFolder>
</PropertyGroup>

The next section is self-explanatory. The content merely reflects the replacement values you provided via the Property Pages.

<ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
    <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
      <Section>appSettings</Section>
    </WebConfigReplacementFiles>
    <WebConfigReplacementFiles Include="Config\QA\Db.config">
      <Section>connectionStrings</Section>
    </WebConfigReplacementFiles>
    <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
      <Section>system.net/mailSettings/smtp</Section>
    </WebConfigReplacementFiles>
</ItemGroup>

You’ll want to extend the ItemGroup section to include the files you wish to exclude from the build. The sample ExcludeFromBuild nodes exclude all obj, svn, csproj, user, pdb artifacts from the build. Even though these files aren’t included in your web project, you’ll need to exclude them or they’ll show up along with the required deployment artifacts.

<ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
    <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">
      <Section>appSettings</Section>
    </WebConfigReplacementFiles>
    <WebConfigReplacementFiles Include="Config\QA\Db.config">
      <Section>connectionStrings</Section>
    </WebConfigReplacementFiles>
    <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">
      <Section>system.net/mailSettings/smtp</Section>
    </WebConfigReplacementFiles>
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\obj\**\*.*" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*.*" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.csproj" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.user" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\bin\*.pdb" />
    <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\Notes.txt" />
</ItemGroup>

Pre/post build and pre/post merge tasks are added to the final code block. By default, your project file should look like the following – a completely commented-out section.

<!-- To modify your build process, add your task inside one of
     the targets below and uncomment it. Other similar extension
     points exist, see Microsoft.WebDeployment.targets.
That's it for setup. Save the project file, flip the solution to Release mode, and build. If there's an issue, consult the Output window for details. If all went well, you will find your deployment artifacts in your Web Deployment Project folder: both the source code and the published application will be there. Inside the Release folder you will find your "published files," and you'll notice the Config folder is nowhere to be found. In the Source folder, all project files are present, with the exception of the items which were excluded from the build.

I'll wrap up this tutorial by calling out a little Web Deployment pet peeve of mine: there doesn't appear to be a way to add an existing web deployment project to a solution. The best I can come up with is to create a new web deployment project and then copy and paste the contents of the existing project file into the new project file. It's not a big deal, but it bugs me.

Download the Solution

    Read the article

  • WPF Databinding - Not your father's databinding, Part 1-3

    - by Shervin Shakibi
    As promised, here is my advanced databinding presentation from the South Florida Code Camp and also the Orlando Code Camp. You can find the demo files here: http://ssccinc.com/wpfdatabinding.zip. Here is a quick description of the first demos; there will be two other blog postings in the next few days getting into more advanced databinding topics.

    Example00

    Here we have three controls. The first, a TextBox, is named mySourceElement. The second, a TextBlock, is bound to mySourceElement with Path=Text:

    <Binding ElementName="mySourceElement" Path="Text" />

    The third, another TextBlock, is also bound to the Text property, but we use an inline binding:

    <TextBlock Text="{Binding ElementName=mySourceElement,Path=Text }" Grid.Row="2" />

    Here is the entire XAML:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <TextBox Name="mySourceElement" Grid.Row="0"
                 TextChanged="mySourceElement_TextChanged">Hello Orlando</TextBox>
        <TextBlock Grid.Row="1">
            <TextBlock.Text>
                <Binding ElementName="mySourceElement" Path="Text" />
            </TextBlock.Text>
        </TextBlock>
        <TextBlock Text="{Binding ElementName=mySourceElement,Path=Text }" Grid.Row="2" />
    </Grid>
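    Note that the Example00 XAML wires up a TextChanged handler that isn't shown in the post. A minimal stand-in (an assumption on my part; the real demo code is in the zip above, and the window class name here is made up) would simply be an empty handler in the code-behind, since the bindings update on their own:

    using System.Windows;
    using System.Windows.Controls;

    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
        }

        // Referenced by TextChanged="mySourceElement_TextChanged" in the XAML.
        // Nothing is required here; the element-to-element bindings
        // propagate the new text by themselves.
        private void mySourceElement_TextChanged(object sender, TextChangedEventArgs e)
        {
        }
    }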
    Example01

    We have a slider control and two elements driven by its Value: a TextBox whose Text property is bound to the slider's Value, and a TextBlock whose FontSize is in turn bound to the TextBox's Text:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="40px" />
            <RowDefinition Height="40px" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <Slider Name="fontSizeSlider" Minimum="5" Maximum="100"
                Value="10" Grid.Row="0" />
        <TextBox Name="SizeTextBox"
                 Text="{Binding ElementName=fontSizeSlider, Path=Value}" Grid.Row="1"/>
        <TextBlock Text="Example 01"
                   FontSize="{Binding ElementName=SizeTextBox, Path=Text}" Grid.Row="2"/>
    </Grid>

    Example02

    Very much like the previous example, but it also has a font dropdown:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="20px" />
            <RowDefinition Height="40px" />
            <RowDefinition Height="40px" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <ComboBox Name="FontNameList" SelectedIndex="0" Grid.Row="0">
            <ComboBoxItem Content="Arial" />
            <ComboBoxItem Content="Calibri" />
            <ComboBoxItem Content="Times New Roman" />
            <ComboBoxItem Content="Verdana" />
        </ComboBox>
        <Slider Name="fontSizeSlider" Minimum="5" Maximum="100" Value="10" Grid.Row="1" />
        <TextBox Name="SizeTextBox"
                 Text="{Binding ElementName=fontSizeSlider, Path=Value}" Grid.Row="2"/>
        <TextBlock Text="Example 01" FontFamily="{Binding ElementName=FontNameList, Path=Text}"
                   FontSize="{Binding ElementName=SizeTextBox, Path=Text}" Grid.Row="3"/>
    </Grid>

    Example03

    In this example we bind to an object defined in Employee.cs. Notice we added a namespace directive to our XAML, a clr-namespace mapping for our Employee class:

    xmlns:local="clr-namespace:Example03"

    In our Window resources we create an instance of our object:

    <Window.Resources>
        <local:Employee x:Key="MyEmployee" EmployeeNumber="145"
                        FirstName="John"
                        LastName="Doe"
                        Department="Product Development"
                        Title="QA Manager" />
    </Window.Resources>

    Then we bind our container's DataContext to that instance:

    <Grid DataContext="{StaticResource MyEmployee}">
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="130px" />
            <ColumnDefinition Width="178*" />
        </Grid.ColumnDefinitions>
    </Grid>

    And finally we have labels and textboxes that bind to the properties of that object:

    <Label Grid.Row="0" Grid.Column="0">Employee Number</Label>
    <TextBox Grid.Row="0" Grid.Column="1" Text="{Binding Path=EmployeeNumber}"></TextBox>
    <Label Grid.Row="1" Grid.Column="0">First Name</Label>
    <TextBox Grid.Row="1" Grid.Column="1" Text="{Binding Path=FirstName}"></TextBox>
    <Label Grid.Row="2" Grid.Column="0">Last Name</Label>
    <TextBox Grid.Row="2" Grid.Column="1" Text="{Binding Path=LastName}" />
    <Label Grid.Row="3" Grid.Column="0">Title</Label>
    <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding Path=Title}"></TextBox>
    <Label Grid.Row="4" Grid.Column="0">Department</Label>
    <TextBox Grid.Row="4" Grid.Column="1" Text="{Binding Path=Department}" />
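    The post doesn't reproduce Employee.cs, but the bindings above pin down its shape. A minimal sketch follows; the property names are taken from the XAML, while the INotifyPropertyChanged plumbing is my addition, since plain properties won't push programmatic changes back to the UI:

    using System.ComponentModel;

    namespace Example03
    {
        public class Employee : INotifyPropertyChanged
        {
            private string employeeNumber, firstName, lastName, department, title;

            // Each setter raises PropertyChanged so bound controls refresh
            // when the object is updated from code.
            public string EmployeeNumber
            {
                get { return employeeNumber; }
                set { employeeNumber = value; OnPropertyChanged("EmployeeNumber"); }
            }
            public string FirstName
            {
                get { return firstName; }
                set { firstName = value; OnPropertyChanged("FirstName"); }
            }
            public string LastName
            {
                get { return lastName; }
                set { lastName = value; OnPropertyChanged("LastName"); }
            }
            public string Department
            {
                get { return department; }
                set { department = value; OnPropertyChanged("Department"); }
            }
            public string Title
            {
                get { return title; }
                set { title = value; OnPropertyChanged("Title"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
        }
    }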

    Read the article

  • XNA Notes 006

    - by George Clingerman
    If you used to think the XNA community was small and inactive, hopefully these XNA Notes are opening your eyes. And I honestly feel like I'm still only catching the tail end of everything that's going on. It's a large and active community, and you can be so mired down in one part of it that you miss all sorts of cool stuff another part is doing. XNA is many things to a lot of people, and that makes for a lot of really awesome things going on. So here's what I saw going on this last week!

    Time Critical XNA News:
    XNA Team - Peer Review now closes for XNA 3.1 games http://blogs.msdn.com/b/xna/archive/2011/02/08/peer-review-pipeline-closed-for-new-xna-gs-3-1-games-or-updates-on-app-hub.aspx http://twitter.com/XNACommunity/statuses/34649816529256448
    The XNA Team posts about a meet-up with Microsoft for Creators going to GDC, March 3rd at the Lobby Bar http://on.fb.me/fZungJ

    XNA Team:
    @mklucher is busy playing bubblegum on WP7, made by a member of the XNA team (although reportedly made in Silverlight? Crazy! ;) ) http://twitter.com/mklucher/statuses/34645662737895426 http://bubblegum.me
    Shawn Hargreaves posts multiple posts (is this a sign that something new is coming from the XNA team? Usually when Shawn has time to post, something has just wrapped up...):
    Random Shuffle http://blogs.msdn.com/b/shawnhar/archive/2011/02/09/random-shuffle.aspx
    Doing the right thing: resume, rewind or skip ahead http://blogs.msdn.com/b/shawnhar/archive/2011/02/10/doing-the-right-thing-resume-rewind-or-skip-ahead.aspx

    XNA Developers:
    Andrew Russell was on .NET Rocks recently talking with Carl and Richard about developing games for Xbox, iPhone and Android http://www.dotnetrocks.com/default.aspx?ShowNum=635
    Eric W. releases the Fishing Girl source code into the wild http://ericw.ca/blog/posts/fishing-girl-now-open-source/ http://forums.create.msdn.com/forums/p/74642/454512.aspx#454512
    BinaryTweedDeej reminds the XNA community that Indie City wants you involved http://twitter.com/BinaryTweedDeej/statuses/34596114028044288 http://www.indiecity.com
    Mike McLaughlin (@mikebmcl) releases his first two XNA articles on the TechNet wiki http://social.technet.microsoft.com/wiki/contents/articles/xna-framework-overview.aspx http://social.technet.microsoft.com/wiki/contents/articles/content-pipeline-overview.aspx
    John Watte plays around with the Content Pipeline and music visualization, exploring just what can be done http://www.enchantedage.com/xna-content-pipeline-fft-song-analysis http://www.enchantedage.com/fft-in-xna-content-pipeline-for-beat-detection-for-the-win
    Simon Stevens writes up his talk on vector collision physics http://www.simonpstevens.com/News/VectorCollisionPhysics
    @domipheus puts together an XNA task manager http://www.flickr.com/photos/domipheus/5405603197/
    MadNinjaSkillz releases his fork of Nick's Easy Storage component on CodePlex http://twitter.com/MadNinjaSkillz/statuses/34739039068229634 http://ezstorage.codeplex.com
    @ActiveNick was interviewed by Rob Cameron and discusses Windows Phone 7, Bing Maps and XNA http://twitter.com/ActiveNick/statuses/35348548526546944 http://msdn.microsoft.com/en-us/cc537546
    Radiangames (Luke Schneider) posts about converting his games from XNA to Unity http://radiangames.com/?p=592
    UberMonkey (@ElementCy) posts about a new project in the works, CubeTest, a Minecraft-style terrain http://www.ubergamermonkey.com/personal-projects/new-project-in-the-works/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Ubergamermonkey+%28UberGamerMonkey%29

    Xbox LIVE Indie Games (XBLIG):
    VideoGamer Rob reviews Bonded Realities http://videogamerrob.wordpress.com/2011/02/05/xblig-review-bonded-realities/
    XBLIG Round Up on Gamergeddon http://www.gamergeddon.com/2011/02/06/xbox-indie-game-round-up-february-6th/
    Are gamers still rating Indie Games after the Xbox Dashboard update? http://www.gamemarx.com/news/2011/02/06/are-gamers-still-rating-indie-games-after-the-xbox-dashboard-update.aspx
    Joystiq - Xbox Live Indie Gems: Corrupted http://www.joystiq.com/2011/02/04/xbox-live-indie-gems-corrupted/
    Raymond Matthews of DarkStarMatryx reviews (Almost) Total Mayhem and Aban Hawkins & the 1000 Spikes http://www.darkstarmatryx.com/?p=225 http://www.darkstarmatryx.com/?p=229
    8 Bit Horse reviews Aban Hawkins & the 1000 Spikes http://8bithorse.blogspot.com/2011/01/aban-hawkins-1000-spikes-xbl-indie.html
    2010 wrap-up for FunInfused Games http://www.krissteele.net/blogdetails.aspx?id=245
    NeoGAF roundup of January's XBLIGs http://www.neogaf.com/forum/showthread.php?t=420528
    Armless Octopus interviews Michael Ventnor, creator of Bonded Realities http://www.armlessoctopus.com/2011/02/07/interview-michael-ventnor-of-red-crest-studios/
    @recharge_media posts about the new city music for Woodvale in Sin Rising http://rechargemedia.com/2011/02/08/new-city-theme-woodvale/
    @DrMisty posts some footage of YoYoYo in action http://www.mstargames.co.uk/mistryblogmain/54-yoyoyoblogs/184-video-update.html
    Xona Games - Decimation X3 on Reviews on the Run http://video.citytv.com/video/detail/782443063001.000000/reviews-on-the-run--february-8-2011/g4/
    @benkane gives an early peek at his action RPG coming to XBLIG http://www.youtube.com/watch?v=bDF_PrvtwU8
    Rock, Paper Shotgun talks to Zeboyd Games about bringing Cthulhu Saves the World to PC http://www.rockpapershotgun.com/2011/02/11/summoning-cthulhu-natter-with-zeboyd/
    Xbox LIVE Indieverse interviews the creator of Bonded Realities http://xbl-indieverse.blogspot.com/2011/02/xbl-indieverse-interview-red-crest.html

    XNA Game Development:
    Dream-In-Code posts about an upcoming XNA challenge/coding contest http://www.dreamincode.net/forums/blog/1385/entry-3192-xna-challengecontest/
    Sgt. Conker covers Fishing Girl and the IndieFreaks Game Framework release http://www.sgtconker.com/2011/02/fishing-girl-did-not-sell-a-single-copy/ http://www.sgtconker.com/2011/02/indiefreaks-game-framework-v0-2-0-0/
    @slyprid releases Transmute v0.40a with lots of new features and fixes http://twitter.com/slyprid/statuses/34125423067533312 http://twitter.com/slyprid/statuses/35326876243337216 http://forgottenstarstudios.com/
    Jeff Brown writes an XNA 4.0 tutorial on saving/loading on the Xbox 360 http://www.robotfootgames.com/xna-tutorials/92-xna-tutorial-savingloading-on-xbox-360-40
    XNA for Silverlight Developers: Part 3 - Animation http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+xna-connection-twitter-specific-stream+%28XNA+Connection%27s+Twitter+specific+stream%29
    The news from Nokia is definitely something XNA developers will want to keep their eye on http://blogs.forum.nokia.com/blog/nokia-developer-news/2011/02/11/letter-to-developers?sf1066337=1

    Read the article

  • RSS Feeds currently on Simple-Talk

    - by Andrew Clarke
    There are a number of news-feeds for the Simple-Talk site, but for some reason they are well hidden. Whilst we set about reorganizing them, I thought it would be a good idea to list some of the more important ones. The most important one for almost all purposes is the Homepage RSS feed, which represents the blogs and articles that are placed on the homepage.

    Main Site Feed representing the Homepage

    ...which is good for most purposes, but it won't always have all the blogs, or it may occasionally miss an article. If you aren't interested in all the content, you can just use the RSS feeds that are more relevant to your interests. (We'll be increasing these categories soon.)

    The newsfeed for SQL articles
    The .NET section newsfeed
    The newsfeed for Red Gate books
    The newsfeed for Opinion articles
    The SysAdmin section newsfeed

    If you want a more refined feed, you can pick and choose from the feeds for each category so as to make up your custom news-feed.

    In the SQL section: SQL Training, Learn SQL Server, Database Administration, TSQL Programming, SQL Server Performance, Backup and Recovery, SQL Tools, SSIS, SSRS (Reporting Services)
    In .NET there are: ASP.NET, Windows Forms, .NET Framework, .NET Performance, Visual Studio, .NET Tools
    In Sysadmin there are: Exchange, General, Virtualisation, Unified Messaging, Powershell
    In Opinion there are: Geek of the Week, Opinion Pieces
    In Books there are: .NET Books, SQL Books, SysAdmin Books

    And all the blogs have feeds. So although you can get all the blogs from here...

    Main Blog Feed

    ...you can also get individual RSS feeds: AdamRG's Blog, Alex.Davies's Blog, AliceE's Blog, Andrew Clarke's Blog, Andrew Hunter's Blog, Bart Read's Blog, Ben Adderson's Blog, BobCram's Blog, bradmcgehee's Blog, Brian Donahue's Blog, Charles Brown's Blog, Chris Massey's Blog, CliveT's Blog, Damon's Blog, David Atkinson's Blog, David Connell's Blog, Dr Dionysus's Blog, drsql's Blog, FatherJack's Blog, Flibble's Blog, Gareth Marlow's Blog, Helen Joyce's Blog, James's Blog, Jason Crease's Blog, John Magnabosco's Blog, Laila's Blog, Lionel's Blog, Matt Lee's Blog, mikef's Blog, Neil Davidson's Blog, Nigel Morse's Blog, Phil Factor's Blog, red@work's Blog, reka.burmeister's Blog, Richard Mitchell's Blog, RobbieT's Blog, RobertChipperfield's Blog, Rodney's Blog, Roger Hart's Blog, Simon Cooper's Blog, Simon Galbraith's Blog, TheFutureOfMonitoring's Blog, Tim Ford's Blog, Tom Crossman's Blog, Tony Davis's Blog

    As well as these blogs, you also have the forums: SQL Server for Beginners Forum, Programming SQL Server Forum, Administering SQL Server Forum, .NET Framework Forum, Windows Forms Forum, ASP.NET Forum, ADO.NET Forum

    Read the article
