Search Results

Search found 13645 results on 546 pages for 'bugz r us'.


  • How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header?

    - by Daniel Coffman
    I am developing a web page that needs to display, in an iframe, a report served by another company's SharePoint server. They are fine with this. The page we're trying to render in the iframe is returning an X-Frame-Options: SAMEORIGIN response header, which causes the browser (at least IE8) to refuse to render the content in a frame. First, is this something they can control, or is it something SharePoint just does by default? If I ask them to turn this off, could they even do it? Second, can I do something to tell the browser to ignore this HTTP header and just render the frame anyway?
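
    For reference, you can confirm exactly what the server is sending with a quick header check from the command line (the URL here is a placeholder):

        curl -I https://their-sharepoint-server/report.aspx
        # look in the response for a line like:
        #   X-Frame-Options: SAMEORIGIN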

    Read the article

  • "must be convertible to System.Web.UI.Page" using custom base page in Visual Studio 2010

    - by Payton Byrd
    I have a HUGE problem. We just converted our large project to a Visual Studio 2010 solution, but maintained .NET 3.5 targets. This seemed to go swimmingly - almost too easy. Then I encountered the problem: when we add a new ASP.NET tag to a page, the designer class is not being updated. I looked around and noticed that the type specified in the page's Inherits attribute was underlined in red. Hovering over it gives the error "must be convertible to System.Web.UI.Page". Obviously the designer isn't resolving the page type correctly, and it's because we are using a custom base page, just as we had been with no problems in VS 2008. Has anyone else encountered this problem? If so, what's the solution? This is a show-stopper for us with VS 2010 (and lots of egg on our faces for moving to it in the first place).
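
    For context, a minimal sketch of the pattern involved (names are hypothetical) - the designer has to be able to resolve the type named in the Inherits attribute down to System.Web.UI.Page:

        // the custom base page the whole project derives from
        public class MyBasePage : System.Web.UI.Page
        {
        }

        // code-behind of an individual page
        public partial class SomePage : MyBasePage
        {
        }

        <%-- the page directive whose Inherits type the designer validates --%>
        <%@ Page Language="C#" CodeBehind="SomePage.aspx.cs" Inherits="MyApp.SomePage" %>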

    Read the article

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
    Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures, ranging from Scottish Gaelic and Frisian to Hungarian, Japanese, and Canadian English. We will be releasing this plugin to the community as open-source.

    You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here.

    Understanding Globalization

    The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture. You can also use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated. In the post's Arabic example, the year is displayed as 1431 because the date has been converted to use the Arabic calendar.

    Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures, the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically.

    Getting dates right can be especially tricky. Different cultures have different calendars, such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization plugin includes methods for converting dates between all of these different calendars.

    Using Language Tags

    The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identify cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed of one or more subtags separated by hyphens. For example:

        Language Tag    Language Name (in English)
        en-AU           English (Australia)
        en-BZ           English (Belize)
        en-CA           English (Canada)
        id              Indonesian
        zh-CHS          Chinese (Simplified) Legacy
        zu              isiZulu

    Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here: http://rishida.net/utils/subtags/

    The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files, because it includes both minified and un-minified versions of each file. For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English (Australia), jQuery.glob.id.js for Indonesian, and jQuery.glob.zh-CHS.js for Chinese (Simplified) Legacy.

    Example: Setting a Particular Culture

    Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page marks the areas we want to format with span tags: the product price, the date the product is available, and the units of the product in stock.

    To use the jQuery Globalization plugin, we'll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language. In this case, I've statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag "de-DE" is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values with client-side JavaScript, as the sketch below illustrates.

    The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing "c" causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob
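
    A rough sketch of the page described above (the post's actual markup and script appear in its screenshots; file and element names here are my own):

        <span id="product-price"></span>
        <span id="product-available"></span>
        <span id="product-inStock"></span>

        <script src="jquery.js" type="text/javascript"></script>
        <script src="jquery.glob.js" type="text/javascript"></script>
        <script src="jQuery.glob.de-DE.js" type="text/javascript"></script>
        <script type="text/javascript">
            $.preferCulture("de-DE");  // use German formatting conventions from here on
            $("#product-price").html($.format(1234.56, "c"));        // currency: "1.234,56 €"
            $("#product-available").html($.format(new Date(), "D")); // long date, German names
            $("#product-inStock").html($.format(10000, "n"));        // grouped number
        </script>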
    When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used as the number separator. You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download.

    Example: Enabling a User to Dynamically Select a Culture

    In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let's now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture.

    In the page's HTML, all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We'll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we'll include the jQuery.glob.all.js file. This file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin. At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for the particular cultures you intend to support instead of the combined jQuery.glob.all.js. In the next sample I'll show how to dynamically load just the language files you need.

    Next, we'll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures. Finally, we'll write jQuery code that grabs every span element with a data-date attribute and formats the date, as sketched below. The jQuery Globalization plugin's parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin's format() method is used to format the date. The "D" format specifier causes the date to be formatted using the long date format.
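
    A sketch of the pieces just described (markup and IDs are my own; the post's actual code is shown in screenshots):

        <select id="culture-selector"></select>
        <span data-date="01/31/2011">01/31/2011</span>

        <script type="text/javascript">
            // populate the dropdown from every culture that has been loaded
            $.each($.cultures, function (name, culture) {
                $("#culture-selector").append(
                    $("<option/>").val(name).text(name));
            });

            $("#culture-selector").change(function () {
                $.preferCulture($(this).val());
                // grab every span with a data-date attribute and re-format it
                $("span[data-date]").each(function () {
                    // note: parseDate honors the current culture; a real page would
                    // parse the attribute before switching cultures
                    var date = $.parseDate($(this).attr("data-date"));
                    $(this).text($.format(date, "D")); // "D" = long date format
                });
            });
        </script>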
    And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects. You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download.

    Example: Loading Globalization Files Dynamically

    As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages. The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded, then it is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead, you only need to load the files that you need, and you don't need to load the files more than once. A sketch follows.
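
    A sketch of that on-demand loading (element IDs and the file path are my own; $.getScript, $.preferCulture, and globalizePage are the pieces the post names):

        <script type="text/javascript">
            var loadedCultures = {};

            $("#language-selector").change(function () {
                var lang = $(this).val();          // e.g. "fr-FR"
                if (loadedCultures[lang]) {
                    globalizePage(lang);           // already loaded, reuse it
                } else {
                    // fetch jQuery.glob.<lang>.js once, then globalize
                    $.getScript("globinfo/jQuery.glob." + lang + ".js", function () {
                        loadedCultures[lang] = true;
                        globalizePage(lang);
                    });
                }
            });

            function globalizePage(lang) {
                $.preferCulture(lang);
                // ...format the dates/numbers in the page as in the previous example...
            }
        </script>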
    The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach.

    Example: Setting the User Preferred Language Automatically

    Many websites detect a user's preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps:

    1. Select the menu option Tools, Internet Options.
    2. Select the General tab.
    3. Click the Languages button in the Appearance section.
    4. Click the Add button to add a new language to the list of languages.
    5. Move your preferred language to the top of the list.

    Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header:

        Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3

    Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript. Microsoft Internet Explorer and Mozilla Firefox support a bevy of language-related properties exposed by the window.navigator object, such as window.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. They don't enable you to retrieve the language that the user set as his or her preferred language.

    The only reliable way to get a user's preferred language (the value of the Accept-Language header) is to write server code. For example, an ASP.NET page can take advantage of the server-side Request.UserLanguages property to assign the user's preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript). In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone's preferred culture is fr-FR (French in France), then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page, or the culture information won't be available. The "6_AcceptLanguages.aspx" sample in this samples download demonstrates how to implement this approach.

    If the culture information for the user's preferred language is not included in the page, then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available, then the $.preferCulture() method falls back to the default culture (English).

    Example: Using the Globalization Plugin with the jQuery UI DatePicker

    One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture, the headers for the days of the week are displayed using Indonesian day name abbreviations, and the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website, or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file.

    Summary

    I'm excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we've released. We've really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year. We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu

    Read the article

  • Using telerik radGrid - how to set the Date format for autogenerated column in edit mode

    - by Mark Breen
    Hello All, Using VS2008 and Telerik radGrid version 2010.1.519.35, I have about 50 DNN modules using the Telerik radGrid, and I need to display my dates in dd/mm/yy format. It is possible to do this easily in view mode, but when I switch to edit mode, it is more of a struggle. I can write a snippet of code to reformat the displayed date values to dd/mm/yy, but for inserts the user must enter mm/dd/yy. In other words, I need to change the culture of the form to en-GB. In my DotNetNuke app, I have made a change to the web.config, but it still assumes the en-US format. I am not sure whether I need to set this at the web.config level, the page level, or at the column within the control. I have been struggling with this for a month or more, and any help would be appreciated.

    Thanks, Mark Breen
    Ireland
    BMW R80GS 1987
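
    For reference, the standard ASP.NET way to force en-GB at the application level is the globalization element in web.config; whether DNN and the grid's edit mode honor it is part of the question, but this is the usual starting point:

        <system.web>
          <globalization culture="en-GB" uiCulture="en-GB" />
        </system.web>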

    Read the article

  • CSS Dropdown Menu issues

    - by Simon Hume
    Can anyone help with a small problem? I've got a nice simple CSS dropdown menu: http://www.cinderellahair.co.uk/new/CSSDropdown.html The problem I have is that when you roll over a menu item whose children are wider than the content, it pushes the whole menu right. Aside from shortening the child menu links, is there any tweak I can make to my CSS to stop this happening?

    CSS code:

        /* General */
        #cssdropdown, #cssdropdown ul { list-style: none; }
        #cssdropdown, #cssdropdown * { padding: 0; margin: 0; }
        #cssdropdown { padding: 43px 0px 0px 0px; }

        /* Head links */
        #cssdropdown li.headlink { margin: 0px 40px 0px -1px; float: left; background-color: #e9e9e9; }
        #cssdropdown li.headlink a { display: block; padding: 0px 0px 0px 5px; text-decoration: none; }
        #cssdropdown li.headlink a:hover { text-decoration: underline; }

        /* Child lists and links */
        #cssdropdown li.headlink ul { display: none; text-align: left; padding: 10px 0px 0px 0px; font-size: 12px; float: left; }
        #cssdropdown li.headlink:hover ul { display: block; }
        #cssdropdown li.headlink ul li a { padding: 5px; height: 17px; }
        #cssdropdown li.headlink ul li a:hover { background-color: #333; }

        /* Pretty styling */
        body { font-family: Georgia, "Times New Roman", Times, serif; font-size: 16px; }
        #cssdropdown a { color: grey; }
        #cssdropdown ul li a:hover { text-decoration: none; }
        #cssdropdown li.headlink { background-color: white; }
        #cssdropdown li.headlink ul { padding-bottom: 10px; }

    HTML:

        <ul id="cssdropdown">
          <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/index.php">HOME</a></li>
          <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/gallery/gallery.php">GALLERY</a>
            <ul>
              <li><a href="http://amazon.com/">CELEBRITY</a></li>
              <li><a href="http://ebay.com/">BEFORE &amp; AFTER</a></li>
              <li><a href="http://craigslist.com/">HAIR TYPES</a></li>
            </ul>
          </li>
          <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/about-cinderella-hair-extensions/about-us.php">ABOUT US</a>
            <ul>
              <li><a href="http://amazon.com/">WHY CHOOSE CINDERELLA</a></li>
              <li><a href="http://ebay.com/">TESTIMONIALS</a></li>
              <li><a href="http://craigslist.com/">MINI VIDEO CLIPS</a></li>
              <li><a href="http://craigslist.com/">OUR HAIR PRODUCTS</a></li>
            </ul>
          </li>
          <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/news-and-offers/news.php">NEWS &amp; OFFERS</a>
            <ul>
              <li><a href="http://amazon.com/">VERA WANG FREE GIVEAWAY</a></li>
              <li><a href="http://ebay.com/">CINDERELLA ON TV</a></li>
              <li><a href="http://craigslist.com/">CINDERELLA IN THE PRESS</a></li>
              <li><a href="http://craigslist.com/">CINDRELLA NEWSLETTERS</a></li>
            </ul>
          </li>
          <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/cinderella-salon/salon-finder.php">SALON FINDER</a></li>
        </ul>

    JS code:

        $(document).ready(function(){
            $('#cssdropdown li.headlink').hover(
                function() { $('ul', this).css('display', 'block'); },
                function() { $('ul', this).css('display', 'none'); });
        });

    Full code is on the link above, just view source.
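
    A likely cause: the child ul is displayed in normal flow inside the floated li, so a wide submenu stretches its parent and shoves the other head links over. A common fix (a sketch, untested against this exact page) is to take the child lists out of the flow:

        #cssdropdown li.headlink { position: relative; }
        #cssdropdown li.headlink ul {
            position: absolute;   /* no longer affects the parent li's width */
            top: 100%;            /* drops down just below the head link */
            left: 0;
            width: 200px;         /* hypothetical width, wide enough for the longest child link */
        }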

    Read the article

  • Multiselect combobox with ExtJs

    - by Justin Johnson
    How do you implement a multiselect combobox as part of an Ext.FormPanel using ExtJS? I've been looking, but can't seem to find a solution that is compatible with the latest version of ExtJS (this question is similar, but doesn't have a working/current solution). This is what I have so far, but it's a single select:

        new Ext.FormPanel({
            labelAlign: 'top',
            frame: true,
            width: 800,
            items: [{
                layout: 'column',
                items: [{
                    columnWidth: 1,
                    layout: 'form',
                    items: [{
                        xtype: 'combo',
                        fieldLabel: 'Countries',
                        name: 'c[]',
                        anchor: '95%',
                        allowBlank: false,
                        typeAhead: true,
                        triggerAction: 'all',
                        lazyRender: true,
                        mode: 'local',
                        store: new Ext.data.ArrayStore({
                            id: 0,
                            fields: ['myId', 'displayText'],
                            data: [
                                ["CA", 'Canada'],
                                ["US", 'United States'],
                                ["JP", 'Japan']
                            ]
                        }),
                        valueField: 'myId',
                        displayField: 'displayText'
                    }]
                }]
            }]
        }).render(document.body);

    I didn't see any parameters in the documentation that suggest this is supported. I also found this and this, but I could only get them working with Ext 2.
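
    For what it's worth, the ExtJS 3 distribution ships a user extension, Ext.ux.form.MultiSelect, in its examples/ux folder that is often used for this. A sketch of the field config, assuming that extension's files are included on the page (config option names may vary between versions):

        {
            xtype: 'multiselect',
            fieldLabel: 'Countries',
            name: 'c[]',
            anchor: '95%',
            store: new Ext.data.ArrayStore({
                fields: ['myId', 'displayText'],
                data: [["CA", 'Canada'], ["US", 'United States'], ["JP", 'Japan']]
            }),
            valueField: 'myId',
            displayField: 'displayText'
        }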

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There's nothing quite as satisfying as the smooth and crisp action of a well-built keyboard. If you're tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through its paces.

    What is the CODE Keyboard?

    The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood's focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words:

        The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I've owned and used at least six different expensive mechanical keyboards, but I wasn't satisfied with any of them, either: they didn't have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That's why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard.

    Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can't argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you're a typist of a certain vintage there's a good chance you've never actually typed on a really nice keyboard. Those that didn't start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We're not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

    Setting Up the CODE Keyboard

    Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven't had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you'll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We'll return to the key puller in a moment.

    Unlike the majority of keyboards on the market, the cord isn't permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we're staring at the underside of the keyboard, check out those beefy rubber feet. By peripheral standards they're huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well.
    After you've secured the cable and adjusted it to your liking, there is one more task before you plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you'll find a small recess in the plastic with some DIP switches inside. The DIP switches are there to switch hardware functions for various operating systems and keyboard layouts, and to enable/disable function keys. By toggling the DIP switches you can change the keyboard from QWERTY mode to Dvorak mode or Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for the Command/Option keys). One of our favorite little toggles is the SW3 DIP switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire DIP switch configuration chart here.

    The quick-start for Windows users is simple: double check that all the switches are in the off position, and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the DIP switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

    Design, Layout, and Backlighting

    The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right-hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard, from the shape of the keys to the spacing and position, is as classic as it comes. You won't have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller-than-normal backspace key or a PgUp/PgDn pair in an unconventional location.

    Just because the keyboard is very conventional in layout, however, doesn't mean you'll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, and Pause buttons and the 2x6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren't sure what we'd think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly.

    Keyboard backlighting is a largely hit-or-miss undertaking, but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches themselves are mounted to a steel plate painted white. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice, even illumination in between the keys.

    Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It's rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it.
    Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won't budge a millimeter during normal use.

    Examining the Keys

    This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We've looked at the layout of the keyboard, and we've looked at the general construction of it, but what about the actual keys?

    There are a wide variety of keyboard construction techniques, but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards, and a little bit of conductive material on the inside of the dome's apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as "bottoming out". In other words, to register the "b" key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue.

    The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically, the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes, this is a mechanical keyboard, but no, your neighbors won't think you're firing off a machine gun), as they lack the audible click found in most Cherry switches. This isn't to say that the keyboard doesn't have a nice audible key press sound when the key is fully depressed, but the mechanism doesn't create a loud click sound when triggered.

    One of the great features of the Cherry MX Clear is a tactile "bump" that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke, and it provides a welcome speed boost. Even if you're not trying to break any word-per-minute records, that little bump when pressing the key is satisfying.

    The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber-dome membrane keyboards are typically rated for 5-10 million contacts, whereas the Cherry mechanical switches are rated for 50 million contacts. You'd have to write the next War and Peace, follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard.

    So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove. Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard.
    There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke.

    The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won't find on non-mechanical keyboards, and even gaming keyboards typically only have any sort of key rollover on high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys, as the third one doesn't register. PS/2 keyboards allow for unlimited rollover (in other words, you can't out-type the keyboard, as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don't use the PS/2 adapter and use the native USB, you still get 6-key rollover (and Ctrl, Alt, and Shift don't count towards the 6), so realistically you still won't be able to out-type the computer, as even the more finger-twisting keyboard combos and high-speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn't matter if you're a slow hunt-and-peck typist, but if you've read this far into a keyboard review there's a good chance that you're a serious typist, and that kind of quality construction and high-number key rollover is a fantastic feature.

    The Good, The Bad, and the Verdict

    We've put the CODE keyboard through its paces: we've played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

    The Good:

    - The construction is rock solid. In an emergency, we're confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard).
    - The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offers a really nice middle ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard, without sacrificing quality. Touch typists will love the subtle tactile bump feedback.
    - The DIP switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating systems and keyboard layouts. If you're investing a chunk of change in a keyboard, it's nice to know you can take it with you to a different operating system or "upgrade" it to a new layout if you decide to take up Dvorak-style typing.
    - The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys.
    - You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout).
    - The weight of the unit combined with the extra-thick rubber feet keeps it planted exactly where you place it on the desk.

    The Bad:

    - While you're getting your money's worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards.
    - People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

    The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists.
    Whether you're a programmer, transcriptionist, or just somebody that wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock-solid typing experience. Yes, $150 isn't pocket change, but the quality of the CODE keyboard is so high, and the typing experience is so enjoyable, that you're easily getting ten times the value you'd get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you're still getting more for your money, as other mechanical keyboards don't come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system and keyboard layout switching. If it's in your budget to upgrade your keyboard (especially if you've been slogging along with a low-end rubber-dome keyboard), there's no good reason not to pick up a CODE keyboard.

    Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • WPF DataBinding, CollectionViewSource, INotifyPropertyChanged

    - by plotnick
    The first time I tried to do something in WPF, I was puzzled by WPF data binding. Then I thoroughly studied this example on MSDN: http://msdn.microsoft.com/en-us/library/ms771319(v=VS.90).aspx Now I understand quite well how to use the master-detail paradigm for a form that takes data from one source (one table) for both the master and detail parts. I mean, for example, I have a grid with data, and below the grid I have a few fields showing the detailed data of the current row. But how do you do it if the detailed data comes from a different but related table? For example: you have a table 'Users' with columns 'id' and 'Name'. You also have another table 'Pictures' with columns like 'id', 'Filename', 'UserId'. And now, using the master-detail paradigm, you have to build a form, and every time you choose a row in the master you should get all the associated pictures in the details. What is the right way to do it? Could you please show me an example?
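
    A sketch of the usual WPF pattern for this (entity and property names taken from the question, with a Pictures navigation property assumed on User): bind a second CollectionViewSource to the current master item, and the details follow the selection automatically:

        <Window.Resources>
            <CollectionViewSource x:Key="UsersView" Source="{Binding Users}" />
            <!-- Path walks into the *current* user, so this view tracks the master's selection -->
            <CollectionViewSource x:Key="PicturesView"
                                  Source="{Binding Source={StaticResource UsersView}, Path=Pictures}" />
        </Window.Resources>

        <ListView ItemsSource="{Binding Source={StaticResource UsersView}}"
                  IsSynchronizedWithCurrentItem="True" />
        <ListBox ItemsSource="{Binding Source={StaticResource PicturesView}}"
                 DisplayMemberPath="Filename" />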

    Read the article

  • Using External GPS Device w/ Android via Bluetooth

    - by jeremynealbrown
    Hello, I am writing an Android 2.0 application that is going to be used in a somewhat remote location. There will be no internet connection available, and it will be used outside of the U.S. I am considering the option of using an external Nokia or similar GPS device as a receiver/transmitter and connecting to it for use in my application via the Bluetooth API. I am looking for opinions on whether or not this is even possible, whether anyone has attempted it, or whether someone has a suggestion for some other external device or approach that would be more fitting. Link to a possible external GPS device: http://europe.nokia.com/find-products/accessories/all-accessories/navigation/gps-modules/nokia-gps-module-ld-3w Thanks in advance...
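
    On the feasibility question: Android 2.0's Bluetooth API can open an RFCOMM (Serial Port Profile) link, and most Bluetooth GPS pucks simply stream NMEA sentences over that serial channel for you to parse yourself. A rough sketch, with discovery/pairing and NMEA parsing omitted:

        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.UUID;

        public class GpsReader {
            // well-known UUID for the Serial Port Profile
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

            public void readNmea(BluetoothDevice device) throws IOException {
                BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
                socket.connect();
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String sentence;
                while ((sentence = reader.readLine()) != null) {
                    // e.g. "$GPGGA,123519,4807.038,N,01131.000,E,..." - parse lat/lon here
                }
            }
        }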

    Read the article

  • WPF, PreviewKeyDown event and underscore char

    - by commanderz
    Hi, I'm catching key hits with the PreviewKeyDown event in my WPF component. I need to distinguish the characters being typed: letters vs. numbers vs. underscore vs. anything else. Letters and numbers work fine (just convert the Key property of the KeyEventArgs object to a string and work with character 0 of that string), but this won't work for the underscore. Its ToString value depends on the localized keyboard settings (it shows up as "OemMinus" on an EN/US keyboard and "OemQuestion" on a CZ/QWERTY keyboard). So how can I RELIABLY find out if the typed char is an underscore in the PreviewKeyDown event? Thanks for any help
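
    One layout-independent angle (a sketch of an alternative, not necessarily the only answer): the PreviewTextInput event reports the composed text rather than the physical key, so an underscore arrives as "_" regardless of keyboard layout:

        // fires with the actual character produced by the keystroke
        private void OnPreviewTextInput(object sender, TextCompositionEventArgs e)
        {
            if (e.Text == "_")
            {
                // underscore was typed, whatever layout produced it
            }
        }

    PreviewKeyDown would still be needed for non-text keys such as arrows or Delete.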

    Read the article

  • iPhone - How do you make a resizable rectangle for cropping images?

    - by 0SX
    Hi everyone, I'm having trouble making a resizable rectangle for cropping my images. I'm trying to achieve something like this picture: http://img192.imageshack.us/img192/8930/customcropbox.jpg Well, the only problem is I have no clue where to actually start. I need some advice on how I can achieve this cropping effect. What documentation should I read up on? Core Graphics or Quartz 2D? Both? I've been coding for the iPhone since its release date, but I've never actually used Core Graphics and the like. Any help or advice would be much appreciated. I'll throw my code up here as I progress to show how it's done when I achieve it. :-) Also, this rectangular box is movable across the screen in a UIImageView, which just makes it more interesting. Thanks for the help, and I look forward to achieving this goal.

    Read the article

  • WCF Binding Created In Code

    - by Daniel
    Hello, I need to create a WCF service whose constructor takes a parameter. I'm following this: http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/8f18aed8-8e34-48ea-b8be-6c29ac3b4f41

    The first issue is that I don't know how to register the custom behavior "MyServiceBehavior" in the Web.config of the ASP.NET MVC app that will host the service. As far as I know, behaviors must be declared in the <behaviors> section of the config. How can I add a reference there to my behavior class from the service assembly?

    The second issue is that the example creates a local host, but how can I add the headers used in the constructor when I use a service reference, which will already create the web service instance, right?

    Regards, Daniel Skowronski
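
    For the first question, a custom behavior typically gets into config through a behavior extension element; a sketch of what that wiring might look like in the hosting app's Web.config (the element class name is hypothetical - it would derive from BehaviorExtensionElement and return your MyServiceBehavior):

        <system.serviceModel>
          <extensions>
            <behaviorExtensions>
              <add name="myServiceBehavior"
                   type="MyServices.MyServiceBehaviorElement, MyServices" />
            </behaviorExtensions>
          </extensions>
          <behaviors>
            <serviceBehaviors>
              <behavior name="withMyBehavior">
                <myServiceBehavior />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>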

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to mapping an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

    - Part 1 – Table per Hierarchy (TPH)
    - Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete type is perhaps the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason for that is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table. The SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
    Here is the complete object model, as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it's worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with the ToTable method to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration, and there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same primary key value by the database (i.e. BillingDetailId = 1 on both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at TPC's SQL schema, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) is needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, will solve the problem, but yet another solution is to completely switch off identity on the primary key property. As a result, we need to take responsibility for providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result by using the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With the Object Model

    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well.
    After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails:

        var query = from b in context.BillingDetails
                    select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements at the beginning of the generated query are merely there to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    The generated query uses a FROM-clause subquery to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class given the data from a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad out nonexistent columns and fill them with NULL. This query will perform well, since we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL queries generated by TPT, where a join is required between the base and subclass tables.
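
    The original screenshots of the generated SQL aren't reproduced here, but a sketch of the shape the paragraphs above describe looks roughly like this (column list abbreviated, the marker column name is mine, and this is not EF's verbatim output):

        SELECT BillingDetailId, Owner, Number,
               BankName, Swift,
               CAST(NULL AS int)      AS CardType,
               CAST(NULL AS nvarchar) AS ExpiryMonth,
               CAST(NULL AS nvarchar) AS ExpiryYear,
               0 AS TypeMarker   -- literal EF reads to pick the CLR type
        FROM BankAccounts
        UNION ALL
        SELECT BillingDetailId, Owner, Number,
               CAST(NULL AS nvarchar) AS BankName,
               CAST(NULL AS nvarchar) AS Swift,
               CardType, ExpiryMonth, ExpiryYear,
               1 AS TypeMarker
        FROM CreditCards
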
    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single best strategy that fits all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

    - If you don't require polymorphic associations or queries, lean toward TPC - in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
    - If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
    - If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies, as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References

    - ADO.NET team blog
    - Java Persistence with Hibernate book

    Read the article

  • Why is it 8 here? Understanding buffer overflow

    - by Mask
    void function(int a, int b, int c) {
        char buffer1[5];
        char buffer2[10];
        int *ret;

        ret = buffer1 + 12;
        (*ret) += 8;   // why is it 8??
    }

    void main() {
        int x;

        x = 0;
        function(1,2,3);
        x = 1;
        printf("%d\n",x);
    }

    The above demo is from here: http://insecure.org/stf/smashstack.html But it's not working here:

        D:\test>gcc -Wall -Wextra hw.cpp && a.exe
        hw.cpp: In function `void function(int, int, int)':
        hw.cpp:6: warning: unused variable 'buffer2'
        hw.cpp: At global scope:
        hw.cpp:4: warning: unused parameter 'a'
        hw.cpp:4: warning: unused parameter 'b'
        hw.cpp:4: warning: unused parameter 'c'
        1

    And I don't understand why it's 8, though the author thinks: "A little math tells us the distance is 8 bytes."

    Read the article

  • JSR-299 CDI / Weld vs. Google Guice

    - by deamon
    Weld, the JSR-299 Contexts and Dependency Injection reference implementation, considers itself a kind of successor to Spring and Guice:

        CDI was influenced by a number of existing Java frameworks, including Seam, Guice and Spring. However, CDI has its own, very distinct, character: more typesafe than Seam, more stateful and less XML-centric than Spring, more web and enterprise-application capable than Guice. But it couldn't have been any of these without inspiration from the frameworks mentioned and lots of collaboration and hard work by the JSR-299 Expert Group (EG).

        http://docs.jboss.org/weld/reference/latest/en-US/html/1.html

    What makes Weld more capable for enterprise applications compared to Guice? Are there any advantages or disadvantages compared to Guice? What do you think about Guice AOP compared to Weld interceptors? What about performance?

    Read the article

  • How to use "SelectMany" with DataServiceQuery<>

    - by sako73
    I have the following DataServiceQuery running against an ADO.NET Data Service (with the update installed to make it run like .NET 4):

        DataServiceQuery<Account> q = (_gsc.Users
            .Where(c => c.UserId == myId)
            .SelectMany(c => c.ConsumerXref)
            .Select(x => x.Account)
            .Where(a => a.AccountName == "My Account" && a.IsActive)
            .Select(a => a)) as DataServiceQuery<Account>;

    When I run it, I get an exception:

        Cannot specify query options (orderby, where, take, skip) on single resource

    As far as I can tell, I need to use a version of "SelectMany" that includes an additional lambda expression (http://msdn.microsoft.com/en-us/library/bb549040.aspx), but I am not able to get this to work correctly. Could someone show me how to properly structure the "SelectMany" call? Thank you for any help.
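
    For reference, the overload from that MSDN link takes a second lambda (a result selector) that projects each (source, collection element) pair. A sketch of the query reshaped to use it - whether the ADO.NET Data Services LINQ provider will translate this particular composition is exactly the open question:

        DataServiceQuery<Account> q = (_gsc.Users
            .Where(c => c.UserId == myId)
            .SelectMany(c => c.ConsumerXref,
                        (c, xref) => xref.Account)   // result selector replaces the Select
            .Where(a => a.AccountName == "My Account" && a.IsActive))
            as DataServiceQuery<Account>;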

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that isn't directly linked anywhere, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtless leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services: click on the Data link on the main page and select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix, including the free tier, which provides 2 megabytes of translations a month. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register and: Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret is auto-created and becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI; I couldn't find a direct link from anywhere on the site back to the page where I can examine my developer application keys. 
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: you need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses OAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the OAuth token that you can then use with the translate API call. The bearer token has a short 10-minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object - see the sketch below). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts them to the OAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64-encoded string which can be used as a bearer token in the translate API call. 
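Here's a small sketch of that caching idea, written for this post rather than taken from it (the method and cache key names are mine); it keeps the token in the ASP.NET cache for safely under the 10-minute expiry:

    // Assumes System.Web; wraps GetBingAuthToken() from the listing above.
    public string GetCachedBingAuthToken(string clientId, string clientSecret)
    {
        const string cacheKey = "BingAuthToken";
        var token = HttpRuntime.Cache[cacheKey] as string;
        if (token == null)
        {
            token = GetBingAuthToken(clientId, clientSecret);
            if (token != null)
                HttpRuntime.Cache.Insert(cacheKey, token,
                    null,                               // no cache dependencies
                    DateTime.UtcNow.AddMinutes(9),      // expire before the token does
                    System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return token;
    }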
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a ClientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth token on its own. In my case I'm using this inside a Web application, and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes, and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available, and this one is the simplest one but probably also the most common one, translating a single string. 
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs. As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method source on GitHub; Translating with Google Translate without Google API Keys; Creating a data-driven ASP.NET Resource Provider. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • ASP.NET MVC FileContentResult IE 8.0 hangs on download

    - by marc.d
    Some of my users are experiencing problems when they try to download a report: the download just hangs at 0%, and restarting IE usually fixes the problem. Why does this happen? I am using ASP.NET MVC (v1), and my action looks like this: <Authorize()> _ <AcceptVerbs(HttpVerbs.Get)> _ Function RenderReport(ByVal guid As Guid, ByVal anonym As Boolean) As FileContentResult ... Dim mimeType As String = String.Empty Dim renderedBytes() As Byte = EmployeePresentation.Render(guid, mimeType, Server.MapPath("~/Reports/..."), anonym) Return File(renderedBytes, mimeType, filename) End Function The filename is US-ASCII encoded, the file size is usually around 300 KB, and the mimeType is application/pdf. TIA
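    A hedged workaround to try, not a confirmed diagnosis: IE8 is known to abort downloads over SSL when the response carries no-cache headers, so relaxing cacheability for just this action sometimes unblocks exactly this hang. The header call is the standard System.Web API; the sketch is in C# for brevity, with RenderReportBytes standing in for the EmployeePresentation.Render call above:

        public FileContentResult RenderReport(Guid guid, bool anonym)
        {
            // Let IE write the response to its cache so the download
            // can complete over SSL (no-cache blocks IE8 file saves).
            Response.Cache.SetCacheability(HttpCacheability.Private);

            string mimeType = "application/pdf";
            string filename = "report.pdf";                          // illustrative
            byte[] renderedBytes = RenderReportBytes(guid, anonym);  // stand-in
            return File(renderedBytes, mimeType, filename);
        }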

    Read the article

  • Calling AuditQuerySystemPolicy() (advapi32.dll) from C# returns "The parameter is incorrect"

    - by JCCyC
    The sequence is as follows: Open a policy handle with LsaOpenPolicy() (not shown) Call LsaQueryInformationPolicy() to get the number of categories; For each category: Call AuditLookupCategoryGuidFromCategoryId() to turn the enum value into a GUID; Call AuditEnumerateSubCategories() to get a list of the GUIDs of all subcategories; Call AuditQuerySystemPolicy() to get the audit policies for the subcategories. All of these work and return expected, sensible values except the last. Calling AuditQuerySystemPolicy() gets me a "The parameter is incorrect" error. I'm thinking there must be some subtle unmarshaling problem. I'm probably misinterpreting what exactly AuditEnumerateSubCategories() returns, but I'm stumped. You'll see (commented out) that I tried to dereference the return pointer from AuditEnumerateSubCategories() as a pointer. Doing or not doing that gives the same result. Code: #region LSA types public enum POLICY_INFORMATION_CLASS { PolicyAuditLogInformation = 1, PolicyAuditEventsInformation, PolicyPrimaryDomainInformation, PolicyPdAccountInformation, PolicyAccountDomainInformation, PolicyLsaServerRoleInformation, PolicyReplicaSourceInformation, PolicyDefaultQuotaInformation, PolicyModificationInformation, PolicyAuditFullSetInformation, PolicyAuditFullQueryInformation, PolicyDnsDomainInformation } public enum POLICY_AUDIT_EVENT_TYPE { AuditCategorySystem, AuditCategoryLogon, AuditCategoryObjectAccess, AuditCategoryPrivilegeUse, AuditCategoryDetailedTracking, AuditCategoryPolicyChange, AuditCategoryAccountManagement, AuditCategoryDirectoryServiceAccess, AuditCategoryAccountLogon } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct POLICY_AUDIT_EVENTS_INFO { public bool AuditingMode; public IntPtr EventAuditingOptions; public UInt32 MaximumAuditEventCount; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct GUID { public UInt32 Data1; public UInt16 Data2; public UInt16 Data3; public Byte Data4a; public Byte Data4b; public Byte Data4c; public Byte Data4d; public Byte Data4e; public Byte Data4f; public Byte Data4g; public Byte Data4h; public override string ToString() { return Data1.ToString("x8") + "-" + Data2.ToString("x4") + "-" + Data3.ToString("x4") + "-" + Data4a.ToString("x2") + Data4b.ToString("x2") + "-" + Data4c.ToString("x2") + Data4d.ToString("x2") + Data4e.ToString("x2") + Data4f.ToString("x2") + Data4g.ToString("x2") + Data4h.ToString("x2"); } } #endregion #region LSA Imports [DllImport("kernel32.dll")] extern static int GetLastError(); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern UInt32 LsaNtStatusToWinError( long Status); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaOpenPolicy( ref LSA_UNICODE_STRING SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, Int32 DesiredAccess, out IntPtr PolicyHandle ); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaClose(IntPtr PolicyHandle); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaFreeMemory(IntPtr Buffer); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern void AuditFree(IntPtr Buffer); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern long LsaQueryInformationPolicy( IntPtr PolicyHandle, POLICY_INFORMATION_CLASS InformationClass, out IntPtr Buffer); 
[DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditLookupCategoryGuidFromCategoryId( POLICY_AUDIT_EVENT_TYPE AuditCategoryId, IntPtr pAuditCategoryGuid); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditEnumerateSubCategories( IntPtr pAuditCategoryGuid, bool bRetrieveAllSubCategories, out IntPtr ppAuditSubCategoriesArray, out ulong pCountReturned); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditQuerySystemPolicy( IntPtr pSubCategoryGuids, ulong PolicyCount, out IntPtr ppAuditPolicy); #endregion Dictionary<string, UInt32> retList = new Dictionary<string, UInt32>(); long lretVal; uint retVal; IntPtr pAuditEventsInfo; lretVal = LsaQueryInformationPolicy(policyHandle, POLICY_INFORMATION_CLASS.PolicyAuditEventsInformation, out pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) { LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception((int)retVal); } POLICY_AUDIT_EVENTS_INFO myAuditEventsInfo = new POLICY_AUDIT_EVENTS_INFO(); myAuditEventsInfo = (POLICY_AUDIT_EVENTS_INFO)Marshal.PtrToStructure(pAuditEventsInfo, myAuditEventsInfo.GetType()); IntPtr subCats = IntPtr.Zero; ulong nSubCats = 0; for (int audCat = 0; audCat < myAuditEventsInfo.MaximumAuditEventCount; audCat++) { GUID audCatGuid = new GUID(); if (!AuditLookupCategoryGuidFromCategoryId((POLICY_AUDIT_EVENT_TYPE)audCat, new IntPtr(&audCatGuid))) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } if (!AuditEnumerateSubCategories(new IntPtr(&audCatGuid), true, out subCats, out nSubCats)) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } // Dereference the first pointer-to-pointer to point to the first subcategory // subCats = (IntPtr)Marshal.PtrToStructure(subCats, subCats.GetType()); if (nSubCats > 0) { IntPtr audPolicies = IntPtr.Zero; if (!AuditQuerySystemPolicy(subCats, nSubCats, out audPolicies)) { int causingError = GetLastError(); if (subCats != IntPtr.Zero) AuditFree(subCats); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } AUDIT_POLICY_INFORMATION myAudPol = new AUDIT_POLICY_INFORMATION(); for (ulong audSubCat = 0; audSubCat < nSubCats; audSubCat++) { // Process audPolicies[audSubCat], turn GUIDs into names, fill retList. 
// http://msdn.microsoft.com/en-us/library/aa373931%28VS.85%29.aspx // http://msdn.microsoft.com/en-us/library/bb648638%28VS.85%29.aspx IntPtr itemAddr = IntPtr.Zero; IntPtr itemAddrAddr = new IntPtr(audPolicies.ToInt64() + (long)(audSubCat * (ulong)Marshal.SizeOf(itemAddr))); itemAddr = (IntPtr)Marshal.PtrToStructure(itemAddrAddr, itemAddr.GetType()); myAudPol = (AUDIT_POLICY_INFORMATION)Marshal.PtrToStructure(itemAddr, myAudPol.GetType()); retList[myAudPol.AuditSubCategoryGuid.ToString()] = myAudPol.AuditingInformation; } if (audPolicies != IntPtr.Zero) AuditFree(audPolicies); } if (subCats != IntPtr.Zero) AuditFree(subCats); subCats = IntPtr.Zero; nSubCats = 0; } lretVal = LsaFreeMemory(pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal); lretVal = LsaClose(policyHandle); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal);
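One hedged guess at the "parameter is incorrect" error, offered as a sketch rather than a verified fix: the native audit functions take ULONG counts, which are 32 bits wide, while the declarations above use C#'s ulong, which is 64 bits, so the count and array size can get mangled in the transition. Re-declaring the counts as uint keeps the shapes aligned (these are the same real advapi32 entry points, which also return the 1-byte BOOLEAN type):

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.U1)]               // native BOOLEAN is 1 byte
    public static extern bool AuditEnumerateSubCategories(
        IntPtr pAuditCategoryGuid,
        bool bRetrieveAllSubCategories,
        out IntPtr ppAuditSubCategoriesArray,
        out uint pCountReturned);                        // native ULONG, not C# ulong

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.U1)]
    public static extern bool AuditQuerySystemPolicy(
        IntPtr pSubCategoryGuids,
        uint PolicyCount,                                // native ULONG, not C# ulong
        out IntPtr ppAuditPolicy);

With the out parameter already dereferenced for you, ppAuditSubCategoriesArray points at a contiguous array of GUID structures (not an array of pointers), so it can be handed to AuditQuerySystemPolicy as-is - consistent with the observation that the commented-out extra dereference changed nothing.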

    Read the article

  • sharp architecture, FluentNHibernate, automapper, DTO, 1:m persistence question

    - by csetzkorn
    Let us say we have a class A which has a reference to another class B (1:m): public class A { public virtual B B { get; set; } } I reflect this using FluentNHibernate within Sharp Architecture and also manage to ‘initialise’ A and its B via a DTO and AutoMapper. The DTO contains A’s values and B (just B’s id value/A’s foreign key initialised). I was hoping that I can persist A by ‘just’ using its 'out of the box' repository, without requiring B’s repository, using: SaveOrUpdate(A); (A has been mapped using AutoMapper and contains B with its Id initialised.) Was my assumption too naive? Can I achieve this somehow (without ever requiring B’s repository)? Or do I have to use A’s and B’s repositories in, for example, the controller or some other service layer? Thanks. Best wishes, Christian
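    A hedged sketch of one common answer, assuming the NHibernate ISession is reachable (variable names are illustrative): ISession.Load returns a lazy proxy for an entity from just its id, without hitting the database, so A can be saved with a valid reference to B and no B repository at all:

        var a = new A
        {
            // ... scalar properties mapped from the DTO by AutoMapper ...
            B = session.Load<B>(dto.BId)   // proxy only; no SELECT is issued here
        };
        aRepository.SaveOrUpdate(a);       // persists A with the foreign key to B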

    Read the article

  • NATUPnP IStaticPortMappingCollection::Add returns HRESULT 0x80040214

    - by dauphic
    I'm attempting to use Microsoft's NATUPnP library to create a port mapping. Unfortunately, I'm unable to. My router supports UPnP, it is enabled, and I can create mappings with other (pre-built) applications. I can also read existing mappings. When I call the Add function, it fails and returns HRESULT 0x80040214 (which is undocumented). I have absolutely no idea what might be going on. IStaticPortMapping* newMapping = NULL; hr = portMappings->Add(27015, L"TCP", 27015, L"MYCOMPUTER", VARIANT_TRUE, L"TestMapping", &newMapping); You can see the reference for this function at http://msdn.microsoft.com/en-us/library/aa366148%28v=VS.85%29.aspx. The portMappings object is, of course, valid; I use it earlier in the code to enumerate over the existing mappings. If anyone has experience with this and might know what my problem is, I'd appreciate any help.
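    Not a confirmed diagnosis, but a common trap with IStaticPortMappingCollection::Add is the internal-client argument: many routers accept only a dotted IPv4 address there, not a machine name, and reject the mapping otherwise. A quick, hedged way to test that theory - shown here through the NATUPNPLib COM interop in C#, with a placeholder for your machine's LAN address:

        // Add reference: COM -> "NATUPnP 1.0 Type Library" (NATUPNPLib).
        var nat = new NATUPNPLib.UPnPNATClass();
        var mappings = nat.StaticPortMappingCollection;   // null if UPnP unavailable
        if (mappings != null)
        {
            // Same arguments as the C++ call, but an IP instead of "MYCOMPUTER".
            mappings.Add(27015, "TCP", 27015, "192.168.1.50", true, "TestMapping");
        }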

    Read the article

  • Expire Output Cache ASP.Net MVC

    - by Max Fraser
    I am using the standard OutputCache directive in my MVC app, which works great, but I need to force it to be dumped at certain times. How do I achieve this? The page that gets cached is built from a very simple route {Controller}/{PageName} - so most pages are something like this: /Pages/About-Us Here is the output cache directive at the top of my .aspx view page, just to be clear: <%@ OutputCache Duration="100" VaryByParam="None" %> So in another action on the same controller, where content is updated, I need to dump this cache - or even all of it; it's a very small app, so it's not a big deal to dump all cached items.
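    A minimal sketch of the usual answer, using the real System.Web API HttpResponse.RemoveOutputCacheItem (the action name and redirect are illustrative). Called from the updating action, it evicts the cached output for one routed path:

        public ActionResult Update(string pageName)
        {
            // ... persist the content changes ...

            // Evict the cached page; the path must match the routed URL.
            System.Web.HttpResponse.RemoveOutputCacheItem("/Pages/" + pageName);
            return RedirectToAction("Index");
        }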

    Read the article

  • Message pump in C# Windows service

    - by Pickles
    Hello, I have a Windows Service written in C# that handles all of our external hardware I/O for a kiosk application. One of our new devices is a USB device that comes with an API in a native DLL. I have a proper P/Invoke wrapper class created. However, this API must be initialized with an HWnd to a windows application because it uses the message pump to raise asynchronous events. Besides putting in a request to the hardware manufacturer to provide us with an API that does not depend on a Windows message pump, is there any way to manually instantiate a message pump in a new thread in my Windows Service that I can pass into this API? Do I actually have to create a full Application class, or is there a lower level .NET class that encapsulates a message pump? Thanks
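    A hedged sketch of the usual pattern, with illustrative names: spin up a dedicated STA thread that creates a hidden form (purely for its HWND) and runs a Windows Forms message loop. The parameterless Application.Run is the lowest-level pump .NET exposes out of the box - no full "application" beyond that is needed:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        class MessagePump
        {
            public IntPtr Hwnd { get; private set; }

            public void Start()
            {
                var ready = new ManualResetEvent(false);
                var thread = new Thread(() =>
                {
                    var form = new Form();   // never shown; exists for its HWND
                    Hwnd = form.Handle;      // forces handle creation on this thread
                    ready.Set();
                    Application.Run();       // pumps messages until Application.Exit()
                });
                thread.SetApartmentState(ApartmentState.STA);
                thread.IsBackground = true;
                thread.Start();
                ready.WaitOne();             // Hwnd is valid past this point
            }
        }

    The service's OnStart would call Start() and hand Hwnd to the vendor API's init call; OnStop could call Application.Exit() to end the loop.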

    Read the article

  • Java Generics, JPA 2, J2EE, JSF 2, GWT, Ajax, Oracle's Java Strategies, Flex, iPhone, Agile ALM, Gra

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Java Conference and Workshops has announced the complete program of over 50 sessions on the present and future of the Java language and VM, how they are evolving to meet the community's ever-changing needs, and some of the cutting-edge tools, technologies & techniques used for building robust enterprise Java applications today. The GIDS.Java track at Great Indian Developer Summit takes place 22 and 23 April 2010 at the Indian Institute of Science in Bangalore. As one of the longest-running independent developer conferences in India, GIDS.Java at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 22 and 23 April 2010, GIDS.Java offers a multi-track conference, workshops, an expo show floor, and networking opportunities. The first keynote at GIDS.Java, "Pointy Haired Bosses and Pragmatic Programmers", is led by Dr. Venkat Subramaniam. He speaks about how each of us has a professional responsibility to be objective and make decisions that will help us and our teams be productive and deliver results. Venkat will pick on some fallacies, lay down facts, and discuss how to stay professional and objective in our daily efforts. The second keynote of the day explains the practical features that make the Cloud so interesting, and why everyone should start using it in their everyday life. Simone Brunozzi, Amazon Web Services Technology Evangelist, will mix technical examples and business details with a lot of Italian humor to ensure the audience enjoys this talk without a single line of code. The third keynote of the day gives an exciting overview of directions in the Java space for Oracle, featuring concrete signs of Oracle's heavy investment, a clear, concise strategy overview, and deep dives into some of the most interesting pieces of technology being developed in the Java Platform Group today, such as Java EE, JDK 7, JavaFX, and exciting new visual tools. Featuring demos by a Java evangelism team star, Simon Ritter, this talk takes you top to bottom in Java technology. Featured talks at GIDS.Java include: Good, Bad, and Ugly of Java Generics, Venkat Subramaniam Pure Java Ajax: An Overview of GWT 2.0, Marty Hall How JPA 2.0 Makes a Good Thing Even Better, Mike Keith Building Enterprise RIAs with Adobe Flex and Java, Sujit Reddy G Integrated Ajax Support in JSF 2.0, Marty Hall Design Patterns in Java and Groovy, Venkat Subramaniam A Gentle Introduction to iPhone and Obj-C for Java Developers, Matthew McCullough Cloud Computing: Azure for Java Developers, Janakiram MSV Ajax Support in the Prototype JavaScript Library, Marty Hall First steps to IT Heaven Through the Cloud. 
Part III: .Java, Simone Brunozzi Building Web 2.0 User Interfaces for Web Service Models using JSF, Frank Nimphius and Jobinesh P Acceptance Test Driven Development, John Tobin and Mohammed Mohsinali Architecting Your Java Applications for the Cloud, Praveen Srivatsa Effective Java, Venkat Subramaniam The Amazing Groovy Weight-loss Plan, Scott Davis Enterprise Modeling - from Conceptual Planning to Technical Blueprints, J Sripad Java Collections Renaissance, Donald Raab and Vlad Zakharov Power 7 and IBM J9VM, Himanshu Goyal A Whistle-stop Tour of Maven 3.0, Matthew McCullough Mass Volume Opportunities for Java Developers, Jouko Nuottila Emerging Technology Complex Event Processing, Duvvuri Srinivas Agile ALM for Distributed Development, Karthi Swaminathan Dim Sum Grails - A Sampler of Practical Non Database-Driven Grails Applications, Scott Davis Diagnosing Performance Bottlenecks in J2EE, Deepak Kaul Business Driven Identity Management, Suneet Agera Combining Java EE with OSGi using Eclipse Gemini, Mike Keith Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: User Experience Evaluation Model Walkthrough, Sanna Häiväläinen Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle, Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. It features three co-located conferences - GIDS.NET, GIDS.Web, GIDS.Java - and an exclusive day of in-depth tutorials, GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that deep dive into technical skills and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and products; engaging networking events; and interaction with the best and brightest speakers from around the world. For further information on GIDS 2010, please visit the summit on the web: http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Giving a child window focus in IE8

    - by Andrew K
    I'm trying to launch a popup window from a Javascript function and ensure it has focus using the following call: window.open(popupUrl, popupName, "...").focus(); It works in every other browser, but IE8 leaves the new window in the background with the flashing orange taskbar notification. Apparently this is a feature of IE8: http://msdn.microsoft.com/en-us/library/ms536425%28VS.85%29.aspx It says that I should be able to focus the window by making a focus() call originating from the new page, but that doesn't seem to work either. I've tried inserting window.focus() in script tags in the page and the body's onload but it has no effect. Is there something I'm missing about making a focus() call as the page loads, or another way to launch a popup that IE8 won't hide?

    Read the article

< Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >