Search Results


  • Experience the iPad UI On Your PC

    - by Matthew Guay
    Want to test drive the iPad without heading over to an Apple store? Here’s a way you can experience some of the iPad UI straight from your browser!

    The iPad is the latest gadget from Apple to wow the tech world, and people even waited in line all night to be among the first to get their hands on one. Thanks to a simple JavaScript trick, however, you can get a feel for some of its new features without leaving your computer. This won’t let you try out everything on the iPad, but it will let you see how the new lists and pop-over menus work, just as they do in the new apps.

    Test drive the iPad’s UI from your browser

    Normally, the Apple iPhone developer library online looks like a standard webpage. On the iPad, though, it looks and feels like a full-blown native iPad app. With a nifty JavaScript trick from boredzo.org you can use this same interface on your PC. Since the iPad uses the Safari browser, we ran this test in Safari for Windows. If you don’t already have it installed, you can download it from Apple (link below) and set it up as normal.

    Now, open Safari and browse to Apple’s developer page at: http://www.developer.apple.com

    Next, enter the following in the address bar and press Enter:

    javascript:localStorage.setItem('debugSawtooth', 'true')

    Finally, click this link to go to the iPhone OS documentation: http://developer.apple.com/iphone/library/iPad/

    After a short delay, it should open in full iPad style! The left menu works just like the menus on the iPad, complete with transitions. It feels entirely like a native application rather than a webpage. To scroll through text, click and pull up or down, similar to the way you would on a touch screen. Some pages even include a pop-over menu like many of the new iPad apps use.

    Note that the page is rendered for the size of your browser, and if you resize your window the page will not resize with it. Simply press F5 to reload the page, and it will resize to fit the new window size. If you resize your window to be tall and narrow, like the iPad in portrait mode, the webpage will change and the left menu will be replaced by a drop-down menu, just as it would if you rotated the iPad.

    This works in Chrome as well, since it, like Safari, is based on WebKit. However, it didn’t work in our tests on Firefox or other browsers.

    We’ve previously covered how you can experience some of the iPhone’s UI with the online iPhone user guide. Check it out if you haven’t yet: View Mobile Websites in Windows with Safari 4 Developer Tools

    Conclusion

    Although this doesn’t let you really try out all of the iPad’s interface, it at least gives you a taste of how it works. It’s exciting to see how much functionality can be packed into web apps today. And don’t forget, How-to Geek is giving away an iPad to a random fan! Head over to our Facebook page and fan How-to Geek if you haven’t already done so: Win an iPad on the How-To Geek Facebook Fan Page

    Read the article

  • It was worth the wait… Welcome Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    It certainly was worth the wait to meet Oracle GoldenGate 11gR2, because it is full of new features on multiple fronts. In fact, this release has the longest and strongest list of new features in Oracle GoldenGate’s history. The new release brings GoldenGate closer to the Oracle Database while expanding support for global implementations and heterogeneous systems. It is more secure, more flexible, and faster.

    We announced the availability of Oracle GoldenGate 11gR2 via a press release. If you haven’t seen it yet, please check it out. As covered in this announcement, there are a variety of improvements in the product:

    - Integrated Capture for Oracle Database: brings Oracle GoldenGate’s Capture process closer to the Oracle Database engine and enables support for Advanced Compression, among other benefits.
    - Enhanced Conflict Detection & Resolution: speeds and simplifies the conflict detection and resolution process for active-active deployments.
    - Globalization: Oracle GoldenGate can now be deployed for databases that use multi-byte/Unicode character sets.
    - Security and Performance Improvements: includes support for the Federal Information Processing Standard (FIPS).
    - Increased Extensibility: kick off actions based on an event record in the transaction log or in the Trail file.
    - Integration with Oracle Enterprise Manager 12c, in addition to the Oracle GoldenGate Monitor product.
    - Expanded Heterogeneity: includes capture from IBM DB2 for i on iSeries (AS/400) and delivery to PostgreSQL.

    We will explain these new features in more detail at our upcoming launch webcast: Harness the Power of the New Release of Oracle GoldenGate 11g (Sept 12, 8am/10am PT). In addition to learning more about these new features, the webcast will give you the chance to put your questions to product management in a live Q&A session. So, I hope you will not miss this opportunity to explore the new release of Oracle GoldenGate 11g and see how it can deliver enterprise-class real-time data integration solutions. I look forward to a great webcast unveiling GoldenGate’s new capabilities.

    Read the article

  • Building a project in VS that depends on a static and dynamic library

    - by fg nu
    Noob noobin'. I would appreciate some very careful handholding in setting up an example in Visual Studio 2010 Professional where I am trying to build a project which links:

    - a previously built static library, for which the VS project folder is "C:\libjohnpaul\"
    - a previously built dynamic library, for which the VS project folder is "C:\libgeorgeringo\"

    These are listed as Recipes 1.11, 1.12 and 1.13 in the C++ Cookbook. The project fails to compile for me with unresolved dependencies (see details below), and I can't figure out why.

    Project 1: Static Library

    The following are the header and source files that were compiled in this project. I was able to compile this project fine in VS2010, to the static library "libjohnpaul.lib", which lives in the folder "C:/libjohnpaul/Release/".

    // libjohnpaul/john.hpp
    #ifndef JOHN_HPP_INCLUDED
    #define JOHN_HPP_INCLUDED
    void john(); // Prints "John, "
    #endif // JOHN_HPP_INCLUDED

    // libjohnpaul/john.cpp
    #include <iostream>
    #include "john.hpp"
    void john() { std::cout << "John, "; }

    // libjohnpaul/paul.hpp
    #ifndef PAUL_HPP_INCLUDED
    #define PAUL_HPP_INCLUDED
    void paul(); // Prints "Paul, "
    #endif // PAUL_HPP_INCLUDED

    // libjohnpaul/paul.cpp
    #include <iostream>
    #include "paul.hpp"
    void paul() { std::cout << "Paul, "; }

    // libjohnpaul/johnpaul.hpp
    #ifndef JOHNPAUL_HPP_INCLUDED
    #define JOHNPAUL_HPP_INCLUDED
    void johnpaul(); // Prints "John, Paul, "
    #endif // JOHNPAUL_HPP_INCLUDED

    // libjohnpaul/johnpaul.cpp
    #include "john.hpp"
    #include "paul.hpp"
    #include "johnpaul.hpp"
    void johnpaul()
    {
        john();
        paul();
    }

    Project 2: Dynamic Library

    Here are the header and source files for the second project, which also compiled fine with VS2010; the "libgeorgeringo.dll" file lives in the directory "C:\libgeorgeringo\Debug".

    // libgeorgeringo/george.hpp
    #ifndef GEORGE_HPP_INCLUDED
    #define GEORGE_HPP_INCLUDED
    void george(); // Prints "George, "
    #endif // GEORGE_HPP_INCLUDED

    // libgeorgeringo/george.cpp
    #include <iostream>
    #include "george.hpp"
    void george() { std::cout << "George, "; }

    // libgeorgeringo/ringo.hpp
    #ifndef RINGO_HPP_INCLUDED
    #define RINGO_HPP_INCLUDED
    void ringo(); // Prints "and Ringo\n"
    #endif // RINGO_HPP_INCLUDED

    // libgeorgeringo/ringo.cpp
    #include <iostream>
    #include "ringo.hpp"
    void ringo() { std::cout << "and Ringo\n"; }

    // libgeorgeringo/georgeringo.hpp
    #ifndef GEORGERINGO_HPP_INCLUDED
    #define GEORGERINGO_HPP_INCLUDED

    // define GEORGERINGO_DLL when building libgeorgeringo.dll
    #if defined(_WIN32) && !defined(__GNUC__)
    #  ifdef GEORGERINGO_DLL
    #    define GEORGERINGO_DECL __declspec(dllexport)
    #  else
    #    define GEORGERINGO_DECL __declspec(dllimport)
    #  endif
    #endif // WIN32

    #ifndef GEORGERINGO_DECL
    #  define GEORGERINGO_DECL
    #endif

    // Prints "George, and Ringo\n"
    #ifdef __MWERKS__
    #  pragma export on
    #endif
    GEORGERINGO_DECL void georgeringo();
    #ifdef __MWERKS__
    #  pragma export off
    #endif

    #endif // GEORGERINGO_HPP_INCLUDED

    // libgeorgeringo/georgeringo.cpp
    #include "george.hpp"
    #include "ringo.hpp"
    #include "georgeringo.hpp"
    void georgeringo()
    {
        george();
        ringo();
    }

    Project 3: Executable that depends on the previous libraries

    Lastly, I try to link the previously compiled static and dynamic libraries into one project called "helloBeatlesII", which has the project directory "C:\helloBeatlesII" (note that this directory does not nest the other project directories).
The linking process that I did is described below: To the "helloBeatlesII" solution, I added the solutions "libjohnpaul" and "libgeorgeringo"; then I changed the properties of the "helloBeatlesII" project to additionally point to the include directories of the other two projects on which it depends ("C:\libgeorgeringo\libgeorgeringo" & "C:\libjohnpaul\libjohnpaul"); added "libgeorgeringo" and "libjohnpaul" to the project dependencies of the "helloBeatlesII" project; and made sure that the "helloBeatlesII" project was built last.

    Trying to compile this project gives me the following unsuccessful build:

    1>------ Build started: Project: helloBeatlesII, Configuration: Debug Win32 ------
    1>Build started 10/13/2012 5:48:32 PM.
    1>InitializeBuildStatus:
    1>  Touching "Debug\helloBeatlesII.unsuccessfulbuild".
    1>ClCompile:
    1>  helloBeatles.cpp
    1>ManifestResourceCompile:
    1>  All outputs are up-to-date.
    1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl georgeringo(void)" (?georgeringo@@YAXXZ) referenced in function _main
    1>helloBeatles.obj : error LNK2019: unresolved external symbol "void __cdecl johnpaul(void)" (?johnpaul@@YAXXZ) referenced in function _main
    1>E:\programming\cpp\vs-projects\cpp-cookbook\helloBeatlesII\Debug\helloBeatlesII.exe : fatal error LNK1120: 2 unresolved externals
    1>
    1>Build FAILED.
    1>
    1>Time Elapsed 00:00:01.34
    ========== Build: 0 succeeded, 1 failed, 2 up-to-date, 0 skipped ==========

    At this point I decided to call in the cavalry. I am new to VS2010, so in all likelihood I am missing something straightforward.
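    One likely culprit, offered as a hedged guess rather than part of the original question: the LNK2019 errors suggest the linker never received the two .lib files, and in VS2010 adding project dependencies alone does not pass libraries to the linker unless "Link Library Dependencies" is enabled. One way to test this theory is to name the libraries explicitly in helloBeatles.cpp (paths assume the build outputs described above):

    // MSVC-specific: pass the static library and the DLL's import library to the linker
    #pragma comment(lib, "C:/libjohnpaul/Release/libjohnpaul.lib")
    #pragma comment(lib, "C:/libgeorgeringo/Debug/libgeorgeringo.lib")

    The same effect can be had via Project Properties > Linker > Input > Additional Dependencies. Note that for the dynamic library you link against its import library (libgeorgeringo.lib, generated alongside the DLL), and libgeorgeringo.dll itself must sit next to the .exe or on the PATH at run time.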

    Read the article

  • Entity Framework 4, WCF &amp; Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can’t seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. Entity Framework 1 didn’t have lazy loading, so this problem didn’t surface there.

    Assume I have a Person entity and an Address entity with a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand using either the Include or the Load method:

    var people = context.People.Include("Addresses");

    or

    people.Addresses.Load();

    With lazy loading, the Address data is fetched the first time the Person.Addresses collection is accessed:

    var people = context.People.ToList();

    // only Person data is currently in memory

    foreach (var person in people)
    {
        // EF determines that no Address data has been loaded and lazy loads it
        int count = person.Addresses.Count();
    }

    Lazy loading has the useful (and sometimes not so useful) feature of fetching data when it is requested. It can make your life easier, or it can make it a big pain. So what does this have to do with WCF? One word: serialization.

    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. If I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then each Address is serialized, its Person property is accessed and serialized in turn, and then the Addresses collection is accessed again. The second time through, lazy loading doesn’t kick in, but you can see the infinite loop this process causes. This is a problem with any serialization, but I personally found it while trying to use WCF.

    The fix is to simply turn off lazy loading. This can be done per call by using the context options:

    context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly. Note, this applies if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework creates proxy classes by default that allow features like lazy loading to work with POCO. This basically creates a proxy object that is a full Entity Framework object sitting between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn’t cut it; you have to turn off proxy creation to ensure that your classes will serialize properly:

    context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn lazy loading or proxy creation on and off as needed.
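    As a minimal sketch of the pattern described above (hedged: the service method, context, and entity names here are illustrative, not from the post), a WCF service operation might disable both options before materializing the graph:

    // returns Person entities with Addresses eagerly loaded, safe to serialize
    public List<Person> GetPeople()
    {
        using (var context = new MyEntities()) // hypothetical ObjectContext
        {
            context.ContextOptions.LazyLoadingEnabled = false;   // stop lazy loads during serialization
            context.ContextOptions.ProxyCreationEnabled = false; // required when using POCO proxies
            return context.People.Include("Addresses").ToList(); // eager-load what the client needs
        }
    }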

    Read the article

  • Brainless Backups

    - by Jesse
    I’m a software developer by trade, which means to my friends and family I’m just a “computer guy”. It’s assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I can also do all of this blindly over the phone or after hearing a five-to-ten-word description of the problem over dinner ;-) In my position as CIO of my friends and family I’ve been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It’s not a great situation for anyone, and it’s always at these times that the importance of backups becomes so clear.

    Several months back a friend of mine found himself in this situation. The hard drive on his 8-year-old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn’t enough… you actually have to copy your stuff over every once in a while!

    The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you’re talking about users at home, however, the need for these backups is often understood far too late. Most typical users can’t be expected to remember to back up their data regularly, and they don’t always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here’s why I like them:

    They’re affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite’s unlimited storage plan for $50-$60 per year is the best value around.

    They’re easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I’ve never used Mozy, but I imagine it’s similarly painless to get up and running.

    Backups are automatically “off-site”: A backup that is sitting on an external hard drive right next to your computer is great, but it might not protect against flood damage, a power surge, or other disasters in that single location. These services exist “in the cloud”, so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the third party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right.

    Backups are “brainless”: What I like most about services like these is that they work “automagically” in the background, watching for files to be updated and automatically backing up those changes. There’s no need to remember to plug in that external drive and copy your data over.
Since starting to recommend these services to my friends and family I find myself wearing my “data recovery” hat far less often. The only way backups are effective for your standard computer user is if they’re completely automatic. Backups need to be brainless, or they just won’t work.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I reviewed the MDN documentation about touch events, as well as the W3C specification.

    To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document, and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document - touchstart and mousedown, in that order. Only on my Android phone does it appear that only the touchstart event fires when I touch the document.

    I did a Google search and ended up on two interesting pages. First, I found the CanIUse page for touch events: http://caniuse.com/#feat=touch

    Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat the touch in a completely different way. It boils down to this:

    - IE: simulated mouse click
    - Firefox with touch disabled: simulated mouse click
    - Firefox with touch enabled: touch event
    - Chrome: touch event and simulated mouse click
    - Android: touch event

    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just as CanIUse described: no proper touch support, just simulated mouse clicks.

    The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once; the contact area, rotation, and force of each touch; and unique identifiers for each touch so they can be tracked individually. I don't see how simulated mouse clicks can ever match the functionality described above, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?)

    What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future the browsers might start handling touch similarly, but how can I tell how they will behave short of writing code for each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?
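    One defensive pattern for the divergence listed above (a hedged sketch, not from the original question; the element id and handler names are made up) is to register both handlers, route them to shared logic, and cancel the touch event so browsers that synthesize mouse events don't fire the logic twice:

    // shared press logic for both input types
    function handlePress(x, y) {
        console.log('press at', x, y);
    }

    var el = document.getElementById('app'); // hypothetical element

    el.addEventListener('touchstart', function (e) {
        e.preventDefault(); // suppress the simulated mousedown where both are fired
        var t = e.touches[0];
        handlePress(t.clientX, t.clientY);
    });

    el.addEventListener('mousedown', function (e) {
        // mouse-only browsers (e.g. IE at the time of writing) land here
        handlePress(e.clientX, e.clientY);
    });

    Note that calling preventDefault() on touchstart also suppresses native behaviors like scrolling and zooming on that element, so it is a trade-off rather than a universal fix.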

    Read the article

  • Windows 8 Apps with HTML5 and JavaScript

    - by Stephen.Walther
    Last week, I finished writing Windows 8 Apps with HTML5 and JavaScript – Yikes! That is a long title.

    This book is all about writing apps for Windows 8 which can be added to the Windows Store. The book focuses on building apps using HTML5 and JavaScript. If you are already comfortable building websites, then building Windows Store apps is not a huge leap. I explain how you can create productivity apps, like a Task List app, and games, like a simple arcade game. I also explain how you can publish your app to the Windows Store and make money.

    To celebrate the release of Windows 8, my publisher is offering a huge 40% discount on the book until November 30, 2012. If you want to take advantage of this discount, follow the link below and enter the discount code WINDEV40 during checkout. http://www.informit.com/promotions/promotion.aspx?promo=139036&walther

    So what’s in the book? Here’s an overview of each of the chapters:

    Chapter 1 – Building Windows Store Apps: Contains a walkthrough of creating a super simple Windows app for taking pictures from your webcam, and explains how to publish your app to the Windows Store.

    Chapter 2 – WinJS Fundamentals: Provides an overview of the Windows Library for JavaScript, which is the Microsoft library for creating Windows Store apps with JavaScript.

    Chapter 3 – Observables, Bindings, and Templates: You learn how to display a list of items using a template. For example, you learn how to create a template which can be used to display a list of products.

    Chapter 4 – Using WinJS Controls: An overview of the core set of JavaScript controls included with the WinJS library. You learn how to use the Tooltip, ToggleSwitch, Rating, DatePicker, TimePicker, and FlipView controls.

    Chapter 5 – Creating Forms: This chapter explains how to take advantage of HTML5 forms to display specialized keyboards and perform form validation.

    Chapter 6 – Menus and Flyouts: You learn how to display popups, menus, and toolbars using the JavaScript controls included with the WinJS library.

    Chapter 7 – Using the ListView Control: This entire chapter is devoted to the ListView control, which is the most important control in the WinJS library. You can use the ListView control to display, sort, filter, and edit a list of items.

    Chapter 8 – Creating Data Sources: Learn how to use a ListView control to display data from the file system, a web service, and IndexedDB.

    Chapter 9 – App Events and States: This chapter explains the standard application events which are raised in a Windows Store app, such as the activated and checkpoint events. You also learn how to build apps which adapt automatically to different view states such as portrait and landscape.

    Chapter 10 – Page Fragments and Navigation: This chapter discusses two subjects: you learn how to create custom WinJS controls with Page Controls, and you learn how to build apps with multiple pages.

    Chapter 11 – Using the Live Connect API: Learn how to use Windows Live Services to authenticate users, interact with SkyDrive, and retrieve user profile information (such as a user’s birthday or profile picture).

    Chapter 12 – Graphics and Games: This chapter is devoted to building the Brain Eaters app, which is a simple arcade game. Navigate a maze and eat all of the food pellets while avoiding the brain-eating zombies to win the game. Learn how to create the game using HTML5 Canvas.

    If you want to buy the book, remember to use the magic discount code WINDEV40 and visit the following link: http://www.informit.com/promotions/promotion.aspx?promo=139036&walther
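    To give a feel for the WinJS style covered in Chapters 2 and 3, here is a minimal, hedged sketch of template binding (not taken from the book; it assumes a default.html containing a div with id="productTemplate" declared as data-win-control="WinJS.Binding.Template" and an empty div with id="productList"):

    // a bindable list of products (names and prices are made up)
    var products = new WinJS.Binding.List([
        { name: "Milk", price: 2.49 },
        { name: "Bread", price: 1.99 }
    ]);

    // activate the declarative controls, then render one template per item
    WinJS.UI.processAll().then(function () {
        var template = document.getElementById("productTemplate").winControl;
        var container = document.getElementById("productList");
        products.forEach(function (product) {
            template.render(product, container);
        });
    });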

    Read the article

  • Silverlight Cream for January 16, 2011 -- #1029

    - by Dave Campbell
    In this Issue: Michael Washington, Jesse Liberty, Deborah Kurata(-2-, -3-, -4-), Sergey Barskiy(-2-), Miroslav Nedyalkov, Jeff Prosise, and Matthias Shapiro(-2-).

    Above the Fold:
    - Silverlight: "Building a Multi-Page Silverlight LOB Application" - Deborah Kurata
    - WP7: "Windows Phone 7 [Controls] Project" - Sergey Barskiy
    - Sketchflow: "Sketchflow To Final" - Michael Washington

    From SilverlightCream.com:

    Sketchflow To Final - Check out this post by Michael Washington detailing the Sketchflow he did of his app, and how the final result tracks amazingly well.

    Windows Phone From Scratch #19 – MVVM Light Toolkit Soup To Nuts #4 - Continuing to try to catch up to Jesse Liberty is this post, number 19 in the Windows Phone series and the 4th in that series about MVVMLight, discussing binding a collection in the ViewModel to a ListBox in the view.

    Building a Multi-Page Silverlight LOB Application - Deborah Kurata has the first 4 parts up (in 2 days) of a 6-part tutorial series she's doing on building a Silverlight LOB app. The first post was an intro with links to the rest as they become available. This 2nd post is getting the app newed up and making sure you've got your head wrapped around multiple pages.

    Theming a Silverlight Application using Existing Themes - Deborah Kurata's next part is about getting started with themes in your app, specifically using the themes provided in the toolkit.

    Theming a Silverlight Application using Custom Themes - Deborah Kurata's next tutorial in the series is also about themes, but this time it's about custom themes... or rather ones customized from a 'standard' theme in this case.

    Adding a New Page to a Multi-Page Silverlight Application - Deborah Kurata's last available post in the tutorial series is this one on adding a new page to the app.

    Windows Phone 7 Project - Sergey Barskiy has a pair of posts up about a calendar control that he is building and has out on CodePlex... nice-looking control too!

    Windows Phone 7 Controls Project Update - Sergey Barskiy's second post is an update to the calendar... the biggest update being the ability to use the Toolkit context menu.

    How to Create Ad Rotator with Telerik TransitionControl and CoverFlow control for Silverlight - Miroslav Nedyalkov uses the Telerik TransitionControl and CoverFlow controls to produce a great-looking ad rotator using any ContentControl or ListBox... very nice demo on the page.

    Building Touch Interfaces for Windows Phones, Part 2 - Jeff Prosise has part 2 of his tutorial series on WP7 touch interfaces up... and he's processing touch events directly in this one.

    Fixing the ListPicker / ScrollViewer Problem in Windows Phone 7 - Matthias Shapiro has a couple of posts out that I'd missed... this one is on an issue where a ListPicker in a ScrollViewer gets hit rather than the scroll, and of course he has a work-around... but you'll need the source for the ListPicker to do it.

    Embedding a Sound File in Windows Phone 7 app (Silverlight) - The next post by Matthias Shapiro explains embedding a sound file in a WP7 app with 2 conditions: 1) it downloads with your app, and 2) it plays no matter what.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the controller or the action methods, there is no way to inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter).

    MvcExtensions supports both property injection and a fluent filter configuration API. The fluent filter configuration API has a number of benefits over regular attribute-based filter decoration. You can pass your dependencies in the constructor rather than a property. Let's say you want to create an action filter which updates the user's last activity date; you can create a filter like the following:

    public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter
    {
        public UpdateUserLastActivityAttribute(IUserService userService)
        {
            Check.Argument.IsNotNull(userService, "userService");
            UserService = userService;
        }

        public IUserService UserService { get; private set; }

        public void OnResultExecuting(ResultExecutingContext filterContext)
        {
            // Do nothing, just sleep.
        }

        public void OnResultExecuted(ResultExecutedContext filterContext)
        {
            Check.Argument.IsNotNull(filterContext, "filterContext");

            string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ?
                              filterContext.HttpContext.User.Identity.Name :
                              null;

            if (!string.IsNullOrEmpty(userName))
            {
                UserService.UpdateLastActivity(userName);
            }
        }
    }

    As you can see, it is nothing different from a regular filter except that we are passing the dependency in the constructor. Next, we have to configure which controllers/action methods this filter executes for:

    public class ConfigureFilters : ConfigureFiltersBase
    {
        protected override void Configure(IFilterRegistry registry)
        {
            registry.Register<HomeController, UpdateUserLastActivityAttribute>();
        }
    }

    You can register more than one filter for the same controller/action methods:

    registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>();

    You can register the filters for a specific action method instead of the whole controller:

    registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index());

    You can even set various properties of the filter:

    registry.Register<ControlPanelController, CustomAuthorizeAttribute>(
        attribute => { attribute.AllowedRole = Role.Administrator; });

    Fluent filter registration also reduces the number of base controllers in your application. It is very common to create a base controller, decorate it with action filters, and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers.
You can do the same with a single-line statement using fluent filter registration.

    Registering the filters for all controllers:

    registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder()
        .Add(GetType().Assembly)
        .Include(type => typeof(Controller).IsAssignableFrom(type)));

    Registering filters for selected controllers:

    registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder()
        .Add(GetType().Assembly)
        .Include(type => typeof(Controller).IsAssignableFrom(type) &&
                         (type.Name.StartsWith("Home") || type.Name.StartsWith("Post"))));

    You can also use the built-in filters in the fluent registration, for example:

    registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });

    With fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That's it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.

    Read the article

  • Unit Testing DateTime – The Crazy Way

    - by João Angelo
    We all know that unit testing code that depends on DateTime, particularly the current time provided through the static properties (Now, UtcNow and Today), is a PITA. If you ask how to unit test DateTime.Now on Stack Overflow, I'll bet you'll get two kinds of answers:

    1. Encapsulate the current time in your own interface and use a standard mocking framework;
    2. Pull out the big guns like Typemock Isolator, JustMock or Microsoft Moles/Fakes and mock the static property directly.

    Each alternative has its pros and cons, and I would have to say that I lean more toward the second approach, because the first adds a layer of abstraction just for the sake of testability. However, the second approach depends on commercial tools that not every shop wants to buy, or on the not so friendly Microsoft Moles. (Sidenote: Moles is now named Fakes and it will ship with VS 2012.)

    This tends to leave people without an acceptable and simple solution, so after reading another of these types of questions on SO, I came up with yet another alternative, one based on the first alternative presented here but which tries really hard to not get in your way with yet another layer of abstraction.

    So, without further ado, I present you the Tardis. The Tardis is a single section of conditionally compiled code that overrides the meaning of the DateTime expression inside a single class. You still get the normal coding experience of using DateTime all over the place, but in a DEBUG compilation your tests will be able to mock every static method or property of the DateTime class. An example follows, while the full Tardis code can be downloaded from GitHub:

    using System;
    using NSubstitute;
    using NUnit.Framework;
    using Tardis;

    public class Example
    {
        public Example() : this(string.Empty) { }

        public Example(string title)
        {
    #if DEBUG
            this.DateTime = DateTimeProvider.Default;
            this.Initialize(title);
        }

        internal IDateTimeProvider DateTime { get; set; }

        internal Example(string title, IDateTimeProvider provider)
        {
            this.DateTime = provider;
    #endif
            this.Initialize(title);
        }

        private void Initialize(string title)
        {
            this.Title = title;
            this.CreatedAt = DateTime.UtcNow;
        }

        private string title;

        public string Title
        {
            get { return this.title; }
            set
            {
                this.title = value;
                this.UpdatedAt = DateTime.UtcNow;
            }
        }

        public DateTime CreatedAt { get; private set; }
        public DateTime UpdatedAt { get; private set; }
    }

    public class TExample
    {
        public void T001()
        {
            // Arrange
            var tardis = Substitute.For<IDateTimeProvider>();
            tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

            // Act
            var sut = new Example("Title", tardis);

            // Assert
            Assert.That(sut.CreatedAt, Is.EqualTo(tardis.UtcNow));
        }

        public void T002()
        {
            // Arrange
            var tardis = Substitute.For<IDateTimeProvider>();
            var sut = new Example("Title", tardis);
            tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

            // Act
            sut.Title = "Updated";

            // Assert
            Assert.That(sut.UpdatedAt, Is.EqualTo(tardis.UtcNow));
        }
    }

    This approach is also suitable for other similar classes with commonly used static methods or properties, like the ConfigurationManager class.
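    As a quick illustration of that last point, here is a hedged sketch of the same provider idea applied to ConfigurationManager (the interface and class names are mine, not from the post):

    // Hypothetical analog of IDateTimeProvider for configuration access.
    public interface IConfigProvider
    {
        string AppSetting(string key);
    }

    public class ConfigProvider : IConfigProvider
    {
        public static readonly IConfigProvider Default = new ConfigProvider();

        public string AppSetting(string key)
        {
            // delegates to the real static API in production code
            return System.Configuration.ConfigurationManager.AppSettings[key];
        }
    }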

    Read the article

  • OpenGL/GLSL: Render to cube map?

    - by BobDole
    I'm trying to figure out how to render my scene to a cube map. I've been stuck on this for a bit and figured I would ask you guys for some help. I'm new to OpenGL and this is the first time I'm using an FBO.

    I currently have a working example of using a cubemap bmp file, and the samplerCube sampler type in the fragment shader is attached to GL_TEXTURE1. I'm not changing the shader code at all. I'm just no longer calling the function that was loading the cubemap bmp file, and instead trying to use the code below to render to a cubemap.

    You can see below that I'm also attaching the texture again to GL_TEXTURE1. This is so that when I set the uniform:

    glUniform1i(getUniLoc(myProg, "Cubemap"), 1);

    it can be accessed in my fragment shader via uniform samplerCube Cubemap.

    I'm calling the function below like so:

    cubeMapTexture = renderToCubeMap(150, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);

    Now, I realize in the draw loop below that I'm not changing the view direction to look down the +x, -x, +y, -y, +z, -z axes. I really just wanted to see something working first before implementing that. I figured I should at least see something on my object the way the code is now, but I'm not seeing anything - just straight black. I've made my background white; still the object is black. I've removed lighting and coloring to just sample the cubemap texture, and it's still black.

    I'm thinking the problem might be the format types when setting my texture, which are GL_RGBA8 and GL_RGBA, but I've also tried:

    GL_RGBA, GL_RGBA
    GL_RGB, GL_RGB

    I thought this would be standard since we are rendering to a texture attached to a framebuffer, but I've seen different examples that use different enum values. I've also tried binding the cube map texture in every draw call where I want to use the cube map:

    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture);

    Also, I'm not creating a depth buffer for the FBO, which I saw in most examples, because I only want the color buffer for my cube map. I actually added one to see if that was the problem and still got the same results; I could have fudged that up when I tried. Any help that can point me in the right direction would be appreciated.
GLuint renderToCubeMap(int size, GLenum InternalFormat, GLenum Format, GLenum Type)
    {
        // color cube map
        GLuint textureObject;
        int face;
        GLenum status;

        //glEnable(GL_TEXTURE_2D);
        glActiveTexture(GL_TEXTURE1);
        glGenTextures(1, &textureObject);
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureObject);
        glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        for (face = 0; face < 6; face++) {
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, InternalFormat,
                         size, size, 0, Format, Type, NULL);
        }

        // framebuffer object
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureObject, 0);
        status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        printf("%d\n", status);
        printf("%d\n", GL_FRAMEBUFFER_COMPLETE);

        glViewport(0, 0, size, size);

        for (face = 1; face < 6; face++) {
            drawSpheres();
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
        }

        // Bind 0, which means render to back buffer; as a result, fbo is unbound
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        return textureObject;
    }
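    A hedged observation on the loop above, offered as a guess rather than part of the original question: each drawSpheres() call happens before the next face is attached, so the scene lands on faces +X through +Z while the final face (GL_TEXTURE_CUBE_MAP_NEGATIVE_Z) is attached but never drawn to, and every face uses the same view. A minimal sketch of the attach-then-draw ordering (per-face view-matrix setup is omitted and assumed):

    for (face = 0; face < 6; face++) {
        /* attach the target face first... */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               textureObject, 0);
        /* ...orient the camera down the matching axis (+x, -x, +y, -y, +z, -z)... */
        glClear(GL_COLOR_BUFFER_BIT);
        /* ...then draw into it */
        drawSpheres();
    }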

    Read the article

  • My Doors - Why Standards Matter to Business

    - by Brian Dayton
    "Standards save money." "Standards accelerate projects." "Standards make better solutions."   What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person--trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc.   When "standards" come up in presentations and discussions do you: -          Nod your head politely -          Tune out and check your smart phone -          Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?"   Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen."   -          24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors -          20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry -          19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself -          5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't   We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day.   The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of: -          How standards can benefit your business -          Your IT organization's philosophy around standards -          Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through   The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there.   Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies, and configuration. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use open source, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools.

    In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use open source Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Prerequisites

    You will need to have a PXE server ready on your Ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads

    - Download the Parted Magic small distribution for PXE from the Parted Magic website.
    - Download the InfiniBand PXE Add-On package. Right click and download from here. Do not extract the contents of this file. You need to use it as is.

    Prepare PXE Server

    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot unless you have a different setup. For example:

    cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
    cd /tmp
    unzip pmagic_pxe_2012_2_27_x86_64.zip
    cd pmagic_pxe_2012_2_27_x86_64
    # ls -l
    total 12
    drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
    drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
    cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro. Simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:

    LABEL Diskless Boot With InfiniBand Support
    MENU LABEL Diskless Boot With InfiniBand Support
    KERNEL pmagic/bzImage
    APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
    TEXT HELP
    * A Linux Image which can be used to PXE Boot w/ IB tools
    ENDTEXT

    Note: Keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, you can use that specific file to add the entry above.

    Boot Computer over PXE

    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand. Just use your standard Ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB

    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

    ifconfig -a
    ibstat
    ibv_devices
    ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, your IB interfaces should have come online by now. You can proceed and assign IP addresses to them; this will enable you at the IPoIB layer.
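    For instance, a minimal sketch (the interface name ib0 and the addresses are illustrative assumptions, not from the original post):

    # assign an address to the first IPoIB interface and bring it up
    ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up

    # verify reachability of another IPoIB host on the same subnet
    ping -c 3 192.168.100.1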
Example InfiniBand Diagnostic Tools

    I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list:

    - ibstat, ibstatus, ibv_devinfo, ibv_devices
    - perfquery, smpquery
    - ibnetdiscover, iblinkinfo.pl
    - ibhosts, ibswitches, ibnodes

    Wrap Up

    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn about the design and implementation of all of the new MySQL Cluster 7.3 features, and how to get started with them, from the comfort and convenience of your own device, including:

    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of higher-performance distributed clusters

    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.

    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL.

    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes and can therefore be enforced for both the SQL and NoSQL APIs.

    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it, this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.

    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. Nope, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.

    Q. Where can I see an example of Node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.

    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!

    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ - go to the "Select Build" drop-down box.

    Q. What is the minimum network speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites, we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.

    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video, and slides from their keynote presentation.

    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features and the things you want to see!
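    For illustration, a hedged sketch of the headline feature - an online foreign key between two NDB tables (table, column, and constraint names are made up, not from the webinar):

    -- both tables must live in the NDB storage engine
    CREATE TABLE parent (
      id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDBCLUSTER;

    CREATE TABLE child (
      id INT NOT NULL PRIMARY KEY,
      parent_id INT,
      CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id)
        ON DELETE CASCADE    -- deleting a parent removes its children
        ON UPDATE RESTRICT   -- ON UPDATE CASCADE on a parent PK would be rejected
    ) ENGINE=NDBCLUSTER;

    -- adding or dropping a key later is an online operation
    ALTER TABLE child DROP FOREIGN KEY fk_child_parent;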

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information. These pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the data that was previously entered needs to be displayed. This requires Page_Load to fire, which means the page can't be cached at that point.

    A couple of weeks ago, I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if (!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

    if (!IsPostBack)
    {
        ...
        SetCacheSettings();
        ...
    }

    protected void SetCacheSettings()
    {
        Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
        Response.Cache.SetExpires(DateTime.Now.AddHours(1));
        Response.Cache.SetSlidingExpiration(true);
        Response.Cache.SetValidUntilExpires(true);
        Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
    }

    AddValidationCallback sets up an HttpCacheValidateHandler method called Validate, which runs logic when a cached page is requested. The Validate method signature is standard for this handler type.

    public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
    {
        string visited = context.Request.QueryString["v"];

        if (visited != null && "1".Equals(visited))
        {
            status = HttpValidationStatus.IgnoreThisRequest; // force a page load
        }
        else
        {
            status = HttpValidationStatus.Valid; // load from cache
        }
    }

    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.

    bool hasData = HasPageBeenVisited(url);

    if (hasData)
    {
        url += VISITED;
    }

    Response.Redirect(url);

    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.
The reason this logic is within the "Next" and "Previous" button click event methods is that 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields for a particular page, and 2) at the time the Validate method runs, either the data has not yet been deserialized from Session state or it is not available (different AppDomain?) - any time I accessed the Session state information from the Validate method, it was always empty.

    Read the article

  • Surface V2.0

    - by Dennis Vroegop
    It’s been quiet around here. And the reason for that is that it’s been quiet around Surface for a while. Now, a lot of people assume that when a product team isn’t making much noise, it must mean they’ve stopped working on their product. Remember the PDC keynote in 2010? Just because WPF wasn’t mentioned there, a lot of people got the idea that WPF was dead and abandoned in favour of Silverlight. Of course, this couldn’t be further from the truth. The same applies to Surface. While we didn’t hear much from the team in Redmond, they were busy putting together the next version of the platform. And at CES in January the world saw what they had been up to all along: Surface v2.0, as it’s commonly known.

    Of course, the product is still in development. It’s not here yet; we can’t buy one yet. However, more and more information is becoming available, and I think this is a good time to share with you what it’s all about!

    The biggest change from an organizational point of view is that Microsoft decided to stop producing the hardware themselves. Instead, they have formed a partnership with Samsung, who will manufacture the devices. This means that you as a buyer get the benefits of a large, worldwide supplier with all the services they can offer. Not that Microsoft didn’t do that before, but since Surface wasn’t a ‘big’ product it was sometimes hard to get to the right people. The new device is officially called the “Samsung SUR 40 for Microsoft Surface”, which is quite a mouthful. The software that runs the device is of course still coming from Microsoft.

    Let’s dive into the technical specs (note: all of this is preliminary, it’s still in the Alpha phase!):

    - Audio out: HDMI / stereo RCA / SPDIF / 2x 3.5mm audio out jack
    - Brightness: 300 cd/m2
    - Communications: 1Gb Ethernet / 802.11 / Bluetooth
    - Contrast ratio: 1:1000
    - CPU: AMD Athlon X2 245e, 2.9GHz dual core
    - Display resolution: Full HD 1080p, 1920x1080, 16:9 aspect ratio
    - GPU: AMD Radeon HD 6750, 1GB GDDR5
    - HDD: 320 GB, 7200 RPM
    - HDMI in / HDMI out: yes
    - I/O ports: 4 USB, SD card reader
    - Operating system: Embedded Windows 7 Professional 64-bit
    - Panel size: 40” diagonal
    - Protection glass: Gorilla Glass
    - RAM: 4 GB DDR3
    - Weight (with standard legs): 70.0 kg / 154 lbs
    - Weight (standalone): 39.5 kg / 87 lbs
    - Height (without legs): 4 inches
    - Contact points recognized: > 50
    - Cool factor: extremely

    OK, the last point is not official, but I do think it needs to be there.

    Let’s talk software. As noted, it runs Windows 7 Professional 64-bit, which means you can run Visual Studio 2010 on it. The software is going to be developed in WPF 4.0 with the additional Surface SDK 2.0. It will contain all the things you’ve seen before plus some extras. They have taken steps to align it more with the Surface Toolkit, which you can download today, so if you do things right your software should be portable between a WPF 4.0 Windows 7 multi-touch app and the Surface v2 environment. It still uses infrared to detect contacts, so in that respect nothing much has changed conceptually. We can still differentiate between a finger, a tag, or a blob. Of course, since the new platform has a much higher resolution (compared to the 1024x768 of the first version) you might need to look at your code again. I’ve seen a lot of applications on Surface that assume the old resolution, and moving that to v2 is going to be some work. To be honest: as I am under NDA I cannot disclose much about the new software besides what I have told you here, but trust me: it’s going to blow people away.
Now, the biggest question for me is: when can I get one? Until we can, have a look here: Technorati Tags: surface,samsung,WPF

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful feature-branching strategy (although the developers hated it) that meant we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum, each Scrum team works for a fixed period of time on a single sprint. You can have one or more Scrum teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It’s like feature branching, but with only ONE feature in operation at any one time, so there are no conflicts. (A minimal command-line sketch of this lifecycle follows below.)
Figure: DEV folder containing the Development branches.
I know that some folks advocate applying a label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch.
Like:
- Being able to create a release from Main that has Sprint 3 code even while Sprint 4 is being worked on.
- Being sure I can always create a stable build on request.
- Being able to guarantee a version (labels are not auditable).
- Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
- Being able to see the flow of change sets through to a safe release.
- It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone's Sprint branch but never got checked in. (We had this at the merge of Sprint 2.)
- If you are always operating in this way as a standard, it makes it easier to add more Scrum teams in the future.
- Muscle memory of this way of working.
Don't like:
- Additional DB space for the branches.
- Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
- Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: what you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint. A little bit of added complexity that you would have to do anyway with multiple teams.
- Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: we ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images, Sprint 2 has already been deleted.
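Here is that minimal sketch of the per-sprint branch lifecycle, using the tf.exe command line. The server paths are hypothetical placeholders, not SSW's actual team project layout, and this is my illustration rather than the workflow the post prescribes:

    rem Branch Main into a fresh sprint branch at the start of the sprint:
    tf branch $/MyProduct/DEV/Main $/MyProduct/DEV/Sprint4
    tf checkin /comment:"Open branch for Sprint 4"

    rem At the end of the sprint, merge the finished work back up to Main:
    tf merge /recursive $/MyProduct/DEV/Sprint4 $/MyProduct/DEV/Main
    tf checkin /comment:"Merge Sprint 4 into Main"

    rem Once merged and verified, remove the old branch (history survives):
    tf destroy /keephistory $/MyProduct/DEV/Sprint4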
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Change the Way Google Search Results Display in Firefox

    - by Asian Angel
Are you tired of the default look for search results at Google? If you want a different, more pleasing, customized look for them, then join us as we look at the GoogleMonkeyR user script. Note: user style scripts & user scripts can be added to most browsers, but we are using Firefox & the Greasemonkey extension for our example here.
Before
Here is the standard look for search results at Google… not bad, but it really does not stand out that well either.
Installing the User Script
You may be asking yourself what makes this particular user script different from others. Take a look at the list of goodies that you get access to and you will understand:
- Multiple columns of results
- Removes “Sponsored Links”
- Adds numbers to the results
- Auto-loads more results
- Removes web search dialogues
- Opens links in a new tab
- Favicons
- GooglePreview
- Self-updating
- Can be configured from a simple user dialogue
To get started, click on the webpage Install button. Once you click on the webpage Install button you will see the following window asking for confirmation to add the user script to Firefox. Click Install to complete the process.
GoogleMonkeyR in Action
Refreshing the same search page shown above shows a noticeable difference already. The light blue background makes the search results stand out a bit better. This is an improvement from before, but you will definitely want to have a look to see just how far you can go… Right-click on the Greasemonkey status bar icon, go to User Script Commands, and select GoogleMonkeyR Preferences. Once you have clicked on GoogleMonkeyR Preferences, the search page will be shaded out and you will have access to the user script’s preferences. This is where you can really make your search results unique looking! Here are the changes that we started out with… After refreshing our search results, things looked even better. A look at the entire page of results with our browser maximized and set for two columns. If you have the “Auto load more results” option enabled, new results will be added very quickly as you scroll down. Our set of search results after adding Favicons & GooglePreview images.
Conclusion
If you have been wanting a more dramatic and pleasing look for the search results at Google, then you cannot go wrong with the GoogleMonkeyR user script. Change as little or as much as you want to get that perfect look in your browser.
Link
Install the GoogleMonkeyR User Script
Download the Greasemonkey extension for Firefox (Mozilla Add-ons)
Similar Articles Productive Geek Tips
- Make Firefox Quick Search Use Google’s Beta Search Keys
- Make Firefox Built-In Search Box Use Google’s Experimental Search Keys
- Make Firefox Show Google Results for Default Address Bar Searches
- Combine Wolfram Alpha & Google Search Results in Firefox
- How To Run 4 Different Google Searches at Once In the Same Tab

    Read the article

  • Customizing UPK outputs (Part 1)

    - by [email protected]
If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization:
- Logo: Within the developer, you can change the logo for all outputs at one time.
- Player: The player output uses a style sheet that can be updated to change colors, graphics and other visual branding.
- Documentation: The print documentation uses a Word-based template that can be modified to match your corporate standards.
I'll discuss the first one today, and we'll cover the others in subsequent blogs.
Before you begin:
- If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder.
- Make a copy of the current styles. This recommendation is for backup purposes. If something goes wrong, you will have a way to recover.
- Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact.
To update the logos in all outputs:
1. From the Tools menu, choose Customize Logo.
2. Select the category if necessary.
3. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles.
4. Choose OK, and the update process begins. It may take a few minutes.
Helpful hints:
- The logo you select is used "as is" - no resizing or cropping occurs during this process.
- The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer.
- The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material.
- If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes.
I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments!
--Maria Cozzolino, Manager of Requirements & UI for UPK Product Development
PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?

    - by mmorris
I created a RAID with:

    sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
    sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2

sudo mdadm --detail --scan returns:

    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

which I appended to /etc/mdadm/mdadm.conf, see below:

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays

    # This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
    # by mkconf $Id$
    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md2 : active raid1 sdb2[0] sdc2[1]
          208629632 blocks super 1.2 [2/2] [UU]
    md1 : active raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

ls -la /dev | grep md returns:

    brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
    brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2

So I think all is good and I reboot. After the reboot, /dev/md1 is now /dev/md126 and /dev/md2 is now /dev/md127?????

sudo mdadm --detail --scan returns:

    ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdc2[1] sdb2[0]
          208629632 blocks super 1.2 [2/2] [UU]
    md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

ls -la /dev | grep md returns:

    drwxr-xr-x 2 root root 80 Oct 30 11:18 md
    brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
    brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127

All is not lost. I ran:

    sudo mdadm --stop /dev/md126
    sudo mdadm --stop /dev/md127
    sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
    sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2

and verified everything:

sudo mdadm --detail --scan returns:

    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md2 : active raid1 sdb2[0] sdc2[1]
          208629632 blocks super 1.2 [2/2] [UU]
    md1 : active raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

ls -la /dev | grep md returns:

    brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
    brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2

So once again, I think all is good and I reboot. Again, after the reboot, /dev/md1 is /dev/md126 and /dev/md2 is /dev/md127?????

sudo mdadm --detail --scan returns:

    ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdc2[1] sdb2[0]
          208629632 blocks super 1.2 [2/2] [UU]
    md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

ls -la /dev | grep md returns:

    drwxr-xr-x 2 root root 80 Oct 30 11:42 md
    brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
    brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127

What am I missing here?
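One likely explanation, offered as a hedged guess rather than a verified diagnosis of this machine: Ubuntu assembles the arrays at boot from a copy of mdadm.conf embedded in the initramfs, not from the file on the root filesystem. If that embedded copy predates the new ARRAY lines, mdadm has no names for the arrays and falls back to the md126/md127 auto-numbering. Rebuilding the initramfs after editing the config should make the names stick:

    # Rebuild the initramfs so it picks up the updated /etc/mdadm/mdadm.conf
    sudo update-initramfs -u
    sudo reboot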

    Read the article

  • ATG Live Webcast June 14: Technical Preview of EBS 12.2 Online Patching

    - by BillSawyer
Online Patching is one of the cornerstone new features in our upcoming Oracle E-Business Suite 12.2 release. This ground-breaking feature is based upon Edition-Based Redefinition, a new 11gR2 Database feature that was built to Oracle Applications division specifications to allow the E-Business Suite's database tier to be patched while the environment is running. Online Patching combines the use of Edition-Based Redefinition and new E-Business Suite technologies to allow patching to the E-Business Suite's database and application tier servers while the environment is being actively used by its end-users.
This webcast provides a detailed technical preview of:
- How this new feature works
- How it affects E-Business Suite end-users
- How it affects E-Business Suite database administrators and patching lifecycles
- How it affects developers and third-party software vendors responsible for E-Business Suite customizations and extensions
The presenter for this event is Kevin Hudson, Senior Director and one of the Online Patching architects. There will be a special extended Q&A session at the end of this presentation, given the nature of the materials and the questions that we expect from you. ATG Development staff supporting the Q&A session will include Elke Phelps, Santiago Bastidas, Max Arderius, and other ATG architects.
Date: Thursday, June 14, 2012
Time: 8:00 AM - 10:00 AM Pacific Standard Time (special 2-hour time)
Presenter: Kevin Hudson, Senior Director, Applications Technology Integration
Webcast Registration Link (preregistration is optional but encouraged)
To hear the audio feed:
Domestic Participant Dial-In Number: 877-697-8128
International Participant Dial-In Number: 706-634-9568
Dial-In Passcode: 100815
To see the presentation:
The Direct Access Web Conference details are:
Website URL: https://ouweb.webex.com
Meeting Number: 597470987
If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.
When will Oracle E-Business Suite 12.2 be released? Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. We'll post updates here as soon as they're available.

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices.
And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions!
It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

    IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
        print 'Lame'
    else
        print 'Cool'

If that statement returns true, it fails. (The print statements are just placeholders.)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything).
Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives.
The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or with different names.  You can create them using the MKLINK command in a command prompt:

    mklink /D D:\SkyDrive\Data D:\Data
    mklink /D D:\SkyDrive\Log D:\Log

This creates a symbolic link from my data and log folders to my SkyDrive folder.  Any file saved in either location will instantly appear in the other.  And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course).
So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link.  Although it doesn't actually use SkyDrive, it performs the same function.  So in effect, the following statement:

    ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'

is turned into:

    mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
To wrap this up, all you have to do is the following:
1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).
Voila! Your databases are kept in sync on multiple servers!
One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency.
I will end this with the obvious "not supported by Microsoft" and "don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases.  While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy.
The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.
The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.
This ATG Live Webcast is your chance to come learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts.
Oracle E-Business Suite Release 12.1.3 Template for Data Masking
The agenda for the Oracle E-Business Suite Template for Data Masking Pack webcast includes the following topics:
- What does data masking do in E-Business Suite environments?
- De-identify the data
- Mask sensitive data
- Maintain data validity
- How can EBS customers use data masking?
- References
Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.
Date: Thursday, June 28, 2012
Time: 8:00 AM Pacific Standard Time
Presenters: Eric Bing, Senior Director; Elke Phelps, Senior Principal Product Manager
Webcast Registration Link (preregistration is optional but encouraged)
To hear the audio feed:
Domestic Participant Dial-In Number: 877-697-8128
International Participant Dial-In Number: 706-634-9568
Additional International Dial-In Numbers Link
Dial-In Passcode: 100865
To see the presentation:
The Direct Access Web Conference details are:
Website URL: https://ouweb.webex.com
Meeting Number: 599097152
If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University.  You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.
If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • TechEd Israel 2010 may only accept speakers from sponsors

    - by RoyOsherove
A month or so ago, Microsoft Israel started sending out emails to its partners and registered event users to “Save the date!” – Microsoft TechEd Israel is coming, and it’s going to be this November! “Great news,” I thought to myself. I’d been to a couple of the MS TechEd events, as a speaker and as an attendee, and they were lovely and professionally done. Israel is an amazing place for technology and development, and TechEd hosted some big names in the world of MS software.
A couple of weeks ago, I was shocked to hear from a couple of people that Microsoft Israel plans to accept non-Microsoft TechEd speakers only from sponsors of the event. That means that, according to the amount you have paid, you get to insert one or more of your own selected speakers as part of TechEd. I’ve spent the past couple of weeks trying to gather more evidence of this, and have gotten some input from within MS about this information. It looks like that is indeed the case, though no MS rep was prepared to answer any of my emails publicly. If they approach me now I’d be happy to print their response.
What does this mean? If this is true, it means that Microsoft Israel is making a grave mistake – they are diluting the quality of the speakers for pure money factors. That means that, as a TechEd attendee who paid good money, you might be sitting down to watch nothing more than a bunch of infomercials, or sub-standard speakers – since speakers are no longer selected on quality or interest in their topic.
- They are turning the conference from a learning event into a commercially driven event.
- They are closing off the stage to the community of speakers who may not be associated with any organization willing to be a sponsor.
- They are losing speakers (such as myself) who will not want to be part of such an event. (Yes – even if my company ends up sponsoring the event, I will not take part in it. Sorry, Eli!)
- They are saying “F&$K you” to the community of MVPs, who should be the people approached first about technical talks (my guess is many MVPs wouldn’t want to talk at an event driven that way anyway).
I do hope this ends up not being true, but it looks like it is. MS Israel had already done such a thing with the Developer Days event previously held in Israel – only sponsors were allowed to insert speakers into the event. If this turns out to be true, I would urge the MS community in Israel to NOT TAKE PART IN THIS EVENT in any form (attendee, speaker, sponsor or otherwise). By taking part, you will be telling MS Israel it’s OK to piss all over the community that they are quietly suffocating anyway.
The MVP case
MS Israel has managed to screw the MVP program as well. MS MVPs (I’m one) have had a tough time here in Israel the past couple of years, ever since Yosi Taguri left the blue-badge ranks and no real community leader was left. Whoever runs things right now has their eyes and minds set elsewhere, with the software MVP community far from mind and heart. No special MVP events (except a couple of small ones this year). No real MVP leadership happens here, and the MVP MEA lead (Ruari) being on a remote line is not really what’s needed. “MVP? What’s that?” I’m sure many MS Israel employees would say. Exactly my point.
Last word
I’ve been disappointed by the MS machine for a while now, but their slowness to realize what real community means in the past couple of years really turns me off. Maybe it’s time to move on.
Maybe I shouldn’t be chasing people at MS Israel begging for a room to host the Agile Israel user group. Maybe it’s time to say a big bye bye and start looking at a life a bit more disconnected.

    Read the article

  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
Hi, I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I didn't have problems with fonts until I decided to use XeLaTeX instead of LaTeX (cf. http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, nor to properly display the XeLaTeX documentation. As I've learned thanks to the mentioned thread, XeLaTeX uses the fonts available in general in the system. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it:

    Error: Missing language pack for 'Adobe-Japan1' mapping
    Error: Unknown font tag 'F5.1'
    Error (24124): No font in show
    Error: Unknown font tag 'F5.1'

I was trying to compile a simple XeLaTeX file:

    \documentclass{article}
    \usepackage{fontspec}
    \setmainfont{Linux Libertine O}
    \begin{document}
    Hello World!
    \end{document}

without success. This is the terminal output of the compilation:

    This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian)
    restricted \write18 enabled.
    entering extended mode
    (./ex.tex
    LaTeX2e <2009/09/24>
    Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang,
    nohyphenation, polish, loaded.
    (/usr/share/texmf-texlive/tex/latex/base/article.cls
    Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
    (/usr/share/texmf-texlive/tex/latex/base/size10.clo))
    (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty
    (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty)
    (/usr/share/texmf-texlive/tex/latex/tools/calc.sty)
    (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty
    (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex
    (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex)))
    (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty
    (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def)
    (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd))
    fontspec.cfg loaded.
    (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))
    kpathsea: Invalid fontname `Linux Libertine O', contains ' '
    ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM)
    file or installed font not found.
    \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@...
    l.3 \setmainfont{Linux Libertine O}
    ?

I can't find Linux Libertine O. Searching for otf with aptitude gives as a result:

    maria@maria-laptop:/etc/fonts$ aptitude search otf
    p   emdebian-rootfs        - emdebian root filesystem support
    p   libotf-bin             - A Library for handling OpenType Font - utilities
    p   libotf-dev             - A Library for handling OpenType Font - development
    i   libotf0                - A Library for handling OpenType Font - runtime
    p   libotf0-dbg            - The libotf libraries and debugging symbols
    p   libpam-dotfile         - A PAM module which allows users to have more than one password
    p   livecd-rootfs          - construction script for the livecd rootfs
    p   makebootfat            - Utility to create a bootable FAT filesystem
    p   otf-ipaexfont          - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho)
    p   otf-ipaexfont-gothic   - Japanese OpenType font, IPAexFont (IPAexGothic)
    p   otf-ipaexfont-mincho   - Japanese OpenType font, IPAexFont (IPAexMincho)
    p   otf-ipafont            - Japanese OpenType font set, IPAfont
    p   otf-ipafont-gothic     - Japanese OpenType font set, IPA Gothic font
    p   otf-ipafont-mincho     - Japanese OpenType font set, IPA Mincho font
    p   otf-stix               - the Scientific and Technical Information eXchange fonts
    p   otf-thai-tlwg          - Thai fonts in OpenType format
    p   otf-yozvox-yozfont     - Japanese proportional Handwriting OpenType font
    p   otf2bdf                - generate BDF bitmap fonts from OpenType outline fonts
    p   robotfindskitten       - Zen Simulation of robot finding kitten

So the font in question is not just uninstalled, but not available, if I'm not wrong. Does it mean that I lack some repositories? I also tried to apply the solution from the thread "How do I reinstall default fonts?", but the result is:

    maria@maria-laptop:~$ sudo apt-get install msttcorefonts
    [sudo] password for maria:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting ttf-mscorefonts-installer instead of msttcorefonts
    ttf-mscorefonts-installer is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    maria@maria-laptop:~$

It seems this is not a usual problem for users of XeLaTeX; nobody in the mentioned thread suggested installation of anything other than TeX Live. Thanks in advance.
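A hedged suggestion (based on my recollection of the package archives of that era, not verified on this exact machine): fontspec looks the family up through fontconfig, and on Ubuntu 10.04 the Linux Libertine fonts shipped in their own package rather than with TeX Live, so installing that package system-wide should make the family visible:

    # Install the Linux Libertine OpenType fonts (package name as of Ubuntu
    # 10.04; later releases renamed it fonts-linuxlibertine)
    sudo apt-get install ttf-linux-libertine
    fc-list | grep -i libertine   # confirm fontconfig now sees the family
    xelatex ex.tex                # then recompile the test document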

    Read the article

< Previous Page | 429 430 431 432 433 434 435 436 437 438 439 440  | Next Page >