Search Results

Search found 23661 results on 947 pages for 'worse is better'.


  • Webcast: The ART of Migrating and Modernizing IBM Mainframe Applications

    - by todd.little
    Tuxedo provides an excellent platform for migrating mainframe applications to distributed systems. As the only distributed transaction processing monitor that offers quality of service comparable to or better than that of mainframe systems, Tuxedo allows customers to migrate their existing mainframe-based applications to a platform with a much lower total cost of ownership. Please join us on Thursday, April 29 at 10:00am Pacific Time for this exciting webcast covering the new Oracle Tuxedo Application Runtime for CICS and Batch 11g. Find out how easy it is to migrate your CICS and mainframe batch applications to Tuxedo.

    Read the article

  • TechEd 2012 - day 3

    - by Stefan Barrett
    The content has become more useful for me as a developer, and I've now seen two things which I think will make a big difference. First, Fakes in VS2012 allows me to stub or fake out libraries, making unit testing easier (or possible at all). Second, C++ AMP and auto: auto might get me to start using C++ again (it makes code like for-each loops much nicer and easier to write), while AMP is something I want to play with (it moves processing onto the GPU). The food got a little better, while there was less sign of the snacks.

    Read the article

  • Meta Description Tag Implies Content

    Although meta data has become less and less necessary for your on-site SEO, there is still enough of a debate to delve a little deeper into the subject. One valuable part of preparing a web page to gain better search engine rankings is the effective use of HTML-based meta tags. While these elements have no direct effect upon site positioning, they do offer website designers more control over the way a site is presented when it is returned in a search engine's results.

    Read the article

  • How might one teach OO without referencing physical real-world objects?

    - by hal10001
    I remember reading somewhere that the original concepts behind OO were about finding a better architecture for handling the messaging of data between multiple systems in a way that protected the state of that data. That is probably a poor paraphrase, but it made me wonder whether there is a way of teaching OO without the (Bike, Car, Person, etc.) object analogies, one that instead focuses on the messaging aspects. If you have articles, links, books, etc., those would be helpful.
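
    One way to make the messaging view concrete without physical-object analogies is to let a message be the only public surface of an object. Below is a minimal, hypothetical TypeScript sketch (all names invented for illustration): state stays private, and the only thing the outside world can do is send a message.

        // A message is just data; the receiving object decides how to react.
        type Message =
          | { kind: "increment"; by: number }
          | { kind: "read" };

        class Counter {
          // State is protected: nothing outside this object touches it directly.
          private value = 0;

          // The single entry point: every interaction is a message send.
          receive(msg: Message): number | undefined {
            switch (msg.kind) {
              case "increment":
                if (msg.by > 0) this.value += msg.by; // the object guards its own invariants
                return undefined;
              case "read":
                return this.value;
            }
          }
        }

        const c = new Counter();
        c.receive({ kind: "increment", by: 3 });
        console.log(c.receive({ kind: "read" })); // 3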

    Read the article

  • Webmasters Out of Control

    I've written about this topic before, and it is one of my pet hates: small business owners who pay webmasters to build web sites for them and are then never given access to the files or the "engine room" of the site, so they cannot make on-page SEO changes or, more importantly, get the site better rankings on search engines such as Google. SEO is not set-and-forget; you need to constantly make changes and massage the pages on your site.

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We here at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible. Unexpected complications, from technical glitches to human resource issues, can come up at any point in the project life cycle; however, there is a lot you can do to keep a project from going off-track. A specific timeline is a very important instrument for time management planning and for keeping your client informed of progress. Rigorous time tracking assures the client that you are committed to achieving specific project milestones on time. And the more you work on varied IT projects, the more you learn about the different aspects of a project, and the better your future estimates and timelines become.

    Make a Structure. When estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put into the project. Define scope and dependencies, and set deadlines for accomplishing them. Sometimes working in phases or modules helps you do more in less time. Use a project management tool to systematize the collaboration between team members.

    Realistic Goal Setting. One approach is to keep a buffer of a few days to absorb the delays, errors and incorrect-coding issues you are likely to hit along the way. It is entirely realistic to keep the delivery date quoted to the client later than the internal delivery timeline: if a resource is having a hard time finishing a task in the time specified, you have room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of running projects (a small sketch of this padding follows this entry).

    Keep an Insightful Approach. Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand progress reports. Learning from past experience is just as important: keep track of the actual time spent on every aspect of your projects, as this will help you create better future estimates and timelines.
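
    A minimal sketch of that padding idea in TypeScript; the function, names and numbers are invented for illustration, not taken from the article:

        // Pad each task estimate so the date quoted to the client sits safely
        // after the internal deadline.
        interface Task { name: string; estimateDays: number; }

        function scheduleWithBuffer(tasks: Task[], bufferDaysPerTask = 2) {
          const internalDays = tasks.reduce((sum, t) => sum + t.estimateDays, 0);
          // The client-facing total absorbs glitches, rework and sick days.
          const clientDays = internalDays + tasks.length * bufferDaysPerTask;
          return { internalDays, clientDays };
        }

        const plan = scheduleWithBuffer([
          { name: "design", estimateDays: 5 },
          { name: "build", estimateDays: 10 },
        ]);
        console.log(plan); // { internalDays: 15, clientDays: 19 }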

    Read the article

  • Is there a Javascript library for creating vintage photos?

    - by Nguyen Thanh Tu
    I'm working on a Canvas object in HTML5, and I am attempting to make some photos look "better". I tried VintageJS, an existing photo-retouching Javascript library, and Picozu, a web application cloning some Adobe Photoshop functionalities, but I'm still not happy. Can you help me with an algorithm or point to an existing Javascript library that would allow me to make my photos look like the following example? http://i46.photobucket.com/albums/f137/thanhtu_zx/Untitled-1.jpg
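
    As a rough starting point (not an exact match for the linked example), a basic "vintage" pass can be done directly on the canvas pixels. A hedged TypeScript sketch using the standard 2D canvas API and the classic sepia weights:

        // Applies a sepia tone to a canvas that already has the photo drawn on it.
        function applySepia(canvas: HTMLCanvasElement): void {
          const ctx = canvas.getContext("2d");
          if (!ctx) return;
          const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
          const d = img.data; // RGBA bytes
          for (let i = 0; i < d.length; i += 4) {
            const r = d[i], g = d[i + 1], b = d[i + 2];
            // Classic sepia weights; tweak them for a stronger vintage cast.
            d[i]     = Math.min(255, 0.393 * r + 0.769 * g + 0.189 * b);
            d[i + 1] = Math.min(255, 0.349 * r + 0.686 * g + 0.168 * b);
            d[i + 2] = Math.min(255, 0.272 * r + 0.534 * g + 0.131 * b);
          }
          ctx.putImageData(img, 0, 0);
        }

    Libraries like VintageJS layer effects such as vignettes and light leaks on top of this kind of per-pixel pass.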

    Read the article

  • Top 5 White Hat Techniques in SEO

    SEO techniques that help your site get a better page rank in search engines can be both white hat and black hat. White hat techniques are accepted as ethical and useful in the long term, while black hat methods forcibly manipulate page rankings and are either penalized by search engines or best avoided because they flout ethical norms. You should know the difference between the two and select a firm that uses only white hat techniques.

    Read the article

  • Discover How to Deliver Measurable Business Value from your HCM Strategy

    - by Jay Richey, HCM Product Marketing
    Join our live webcast on Wednesday, July 13 to learn how to fine-tune your HCM strategy and better utilize your Oracle HCM investment. In this session you'll learn how to access, analyze and act on information from multiple sources to ensure that all workforce decisions are focused on meeting overall business objectives.
    Date: Wednesday, July 13, 2011
    Time: 10:00 a.m. PT / 1:00 p.m. ET
    Register now!

    Read the article

  • Count a row VS Save the Row count after each update

    - by SAFAD
    I want to know whether saving a row count in a table is better than counting the rows on every request. A quick example: a visitor goes to a clan's group page, which displays the clan information and the members who have joined the group. Should the page look up all the users who joined the clan and count them, or just display a member count already saved in the table? I think the first approach cannot be manipulated, but it might cost performance. Your ideas?
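
    For illustration, a hedged TypeScript sketch of the two approaches; `db.query` and `db.transaction` stand in for whatever database client is actually in use, and the table and column names are invented:

        // Option 1: count on every page view. Always correct; costs a scan
        // unless clan_id is indexed.
        async function memberCountLive(db: any, clanId: number): Promise<number> {
          const rows = await db.query(
            "SELECT COUNT(*) AS n FROM members WHERE clan_id = ?", [clanId]);
          return rows[0].n;
        }

        // Option 2: keep a stored counter. Visitors cannot manipulate it any
        // more than the member rows themselves, but it drifts unless every
        // join and leave updates it in the same transaction as the row change.
        async function addMember(db: any, clanId: number, userId: number): Promise<void> {
          await db.transaction(async (tx: any) => {
            await tx.query(
              "INSERT INTO members (clan_id, user_id) VALUES (?, ?)", [clanId, userId]);
            await tx.query(
              "UPDATE clans SET member_count = member_count + 1 WHERE id = ?", [clanId]);
          });
        }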

    Read the article

  • What do you do when a client requires Rich Text Editing on their website?

    - by George Stocker
    As we all know by now, XSS attacks are dangerous and really easy to pull off. Various frameworks make it easy to encode HTML, like ASP.NET MVC does: <%= Html.Encode("string") %>. But what happens when your client requires that they be able to upload their content directly from a Microsoft Word document? Here's the scenario: people can copy and paste content from Microsoft Word into a WYSIWYG editor (in this case TinyMCE), and then that information is posted to a web page. The website is public, but only members of that organization will have access to post information to it. What is the best way to handle this requirement? Currently there is no checking done on what the client posts (since only "trusted" users can post), but I'm not particularly happy with that and would like to lock it down further in case an account is hacked. The platform in question is ASP.NET MVC. The only conceptual method that I'm aware of that meets these requirements is to whitelist HTML tags and let those pass through. Is there another way? If not, is the best way to store it in the database in any form, but only display it properly encoded and stripped of bad tags?
    NB: The questions differ in that he only assumes there's one way. I'm also asking the following questions:
    1. Is there a better way that doesn't rely on HTML whitelists?
    2. Is there a better way that relies on a different view engine?
    3. Is there a WYSIWYG editor that includes the ability to whitelist on the fly?
    4. Should I even worry about this, since it will only be for "private posting" (much in the same way that a private blog allows HTML from the author, but since only he can post, it's not an issue)?
    Edit #2: If suggesting a WYSIWYG editor, it must be free (as in speech, or as in beer).
    Update: All of the suggestions thus far revolve around a specific rich text editor to use. Only suggest an editor if it allows for sanitization of HTML tags and it fulfills the requirement of accepting documents pasted from an editor like Microsoft Word. There are three methods that I know of:
    1. Don't allow HTML.
    2. Allow HTML, but sanitize it.
    3. Find a rich text editor that sanitizes and allows HTML.
    The previous questions remain (1-4 above).
    Related question: Preventing Cross Site Scripting (XSS)
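
    To make the whitelist idea concrete, here is a hedged browser-side sketch in TypeScript. It illustrates the technique only; the tag set and function names are my own, and for production a maintained sanitization library is the safer choice:

        // Keep a small set of harmless tags and strip every attribute, so
        // style, href and on* event handlers never survive.
        const ALLOWED_TAGS = new Set(["P", "B", "I", "EM", "STRONG", "UL", "OL", "LI", "BR"]);

        function sanitize(dirty: string): string {
          // DOMParser parses markup without executing scripts.
          const doc = new DOMParser().parseFromString(dirty, "text/html");
          const walk = (node: Element): void => {
            for (const child of Array.from(node.children)) {
              walk(child); // sanitize descendants first
              if (child.tagName === "SCRIPT" || child.tagName === "STYLE") {
                child.remove(); // drop these entirely, contents included
              } else if (!ALLOWED_TAGS.has(child.tagName)) {
                // Unwrap the disallowed element, keeping its already-clean children.
                child.replaceWith(...Array.from(child.childNodes));
              } else {
                for (const attr of Array.from(child.attributes)) {
                  child.removeAttribute(attr.name);
                }
              }
            }
          };
          walk(doc.body);
          return doc.body.innerHTML;
        }

        // Word-paste example: the style attribute and the div wrapper vanish.
        console.log(sanitize('<div><p style="mso-pagination: none">Hi <b>there</b></p></div>'));
        // -> "<p>Hi <b>there</b></p>"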

    Read the article

  • TDD - what are the short term gains/benefits?

    - by ratkok
    Quite often the benefits of using TDD are considered 'long term' gains: the overall code will be better structured, more testable, and there will be fewer bugs reported by customers, etc. However, what are the short-term benefits of using TDD? Are there any which are actually tangible and easily measurable? Is it important to have an obvious (or even non-obvious but quantifiable) short-term benefit at all, if the long-term gains are measurable?

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy, so nobody can accuse me of being reflexively anti-regulation. That said, you have only so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too – and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective.
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem
With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
• Specific details of security vulnerabilities
• Including exploit information or technical details of the vulnerability
• Whether or not there is any mitigation available (as in a patch)
PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:
• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application
I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
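
    For reference, the shape of the post-patch advisory described above can be made concrete with a small illustrative record in TypeScript; every field and value here is invented, not an actual PCI or Oracle format:

        // A high-level advisory: CVE id, relative severity, patch pointer.
        // No exploit details, no customer lists, no forensic maps.
        interface Advisory {
          cve: string;            // e.g. "CVE-2012-0000" (hypothetical id)
          product: string;
          cvssBaseScore: number;  // CVSS scale: 0.0 (none) to 10.0 (critical)
          summary: string;        // level of risk and what the patch addresses
          patchUrl: string;
        }

        const sample: Advisory = {
          cve: "CVE-2012-0000",
          product: "ExamplePaymentApp 1.2",
          cvssBaseScore: 7.5,
          summary: "Remotely exploitable issue in the payment module; apply the patch.",
          patchUrl: "https://example.com/security/alert-2012-0000",
        };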
By Way of Explanation… Unrelated to PCI whatsoever, and by way of explaining why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos, while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.
* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Android 4 Fragments with Mono for Android

    - by Wallym
    With the release of Android 3.0, Google added support for larger displays and attention-grabbing UI designs and layouts. On a tablet screen, UI components can be used to present information better. How does Android do this? It has a technology called Fragments, and I'll look at its implementation in the currently shipping operating system, Android 4. (Let's get past all the jokes about Android and fragmentation on its device platform.) For more information on this, check out my article at Visual Studio Magazine - http://visualstudiomagazine.com/articles/2012/12/13/android-4-and-fragments.aspx

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    OK, I have a couple of questions here. First, let me give you some background information. I'm starting a project where I have a node.js server running my application, and my website running on another, normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to log in with an account, and to get an account they have to sign up for one through my website. So my website needs a database, but it's not important for it to be a database like Redis. Which leads me to my first question:
    1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases they will keep track of the same information (such as user information), and in other cases they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. Assuming Redis can be used in this situation, that leads to my next question:
    2. Since Redis is linked with JavaScript, how would I handle the security from my website users? What would stop my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe if I designed my site with a layout like this: apply.php-update.php-home.php, where after they submitted their form it would redirect them to the update page, where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know; I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation than Redis. Anyway, any direction on this matter would be greatly appreciated. :)
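
    On question 1: Redis speaks a plain TCP protocol and has clients in many languages (PHP, Python, Java, and more), so it is not tied to node.js. On question 2: the browser never talks to Redis directly; only server-side code does, so Firebug or Chrome's inspector can only alter what the browser sends over HTTP, which your server-side handlers must validate anyway. A hedged sketch using the node-redis v4 client API (key and field names invented):

        import { createClient } from "redis";

        async function main(): Promise<void> {
          const client = createClient({ url: "redis://localhost:6379" });
          await client.connect();

          // Only trusted server-side code issues these commands; the user's
          // browser submits an HTTP form, and this handler decides what to write.
          await client.hSet("user:1001", { name: "alice", clan: "42" });
          const user = await client.hGetAll("user:1001");
          console.log(user); // { name: 'alice', clan: '42' }

          await client.quit();
        }

        main().catch(console.error);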

    Read the article

  • How do I fix Google Earth 6.2.2 theme and fonts in 12.04?

    - by Joshua Robison
    Here is the problem: I would like to know how to get Google Earth using better fonts, with anti-aliasing, for English and Japanese. I don't know what these fonts are, but they are hideous and completely different from my system fonts, my LibreOffice fonts, or even Qt apps like VLC and MiniTube. I'm using 12.04 LTS 32-bit, and I almost had complete success with the 11.10 fix posted in "Google Earth and Skype theme": the theme and fonts will show up correctly for about 5 seconds before the program crashes.

    Read the article

  • How do I make the F-keys work in byobu on 12.04, for midnight commander (mc), htop, etc?

    - by Jorge Castro
    I use byobu with the tmux backend on my 12.04 server. I'd like to use the midnight commander shortcut keys with it, but the F keys don't work. I've seen some posts on the issue:
    https://bugs.launchpad.net/byobu/+bug/386363
    https://answers.launchpad.net/byobu/+question/127610
    but they are out of date and don't seem to work for newer versions of byobu. How can I either work around this or use mc in a way that works better?

    Read the article

  • Making the WPeFfort

    - by Laila
    Microsoft Visual Studio 2010 will be launched on April 12th. The basic layout looks pretty much as it did, so it is not immediately obvious on first inspection that it was completely rewritten in the Windows Presentation Foundation (WPF). The VS 2008 codebase had reached the end of its life; it was getting slow to initialize and sluggish to run, and was never going to allow for multi-monitor support or easier extensibility. It can't have been an easy decision to rewrite Visual Studio, but the gamble seems to have paid off. Although certain bugs in the betas caused some anxiety about performance, these seem to have been fixed, and the new Visual Studio is definitely faster. In rewriting the codebase, it has been possible to make obvious improvements, such as being able to run different windows on different monitors, and only being presented with the Toolbox controls and References that are appropriate to your target .NET version. There is also an IntelliTrace debugger, and IntelliSense has been improved by separating a 'Suggestion Mode' and 'Completion Mode' (with its 'Generate From...', 'Highlight References...', and 'Navigate To...' features). At the same time, there has been quite a clearout; certain features that had been tucked away in the previous versions, such as Brief or Emacs emulation support, have been dropped. (Yes, they were being used!) There are a lot of features that didn't require the rewrite, but are welcome. It is now easier to develop WPF applications (e.g. drag-and-drop databinding), and there is support for Azure. There are more, and better, templates, and the design tools are greatly improved (e.g. Expression Web, Expression Blend, WPF SketchFlow, the Silverlight designer, Document Map Margin and Inline Call Hierarchy). SharePoint is better supported, and Office apps will benefit from C#'s support of optional and named arguments, and from allowing several Office solutions within a deployment package. Most importantly, it is a vote of confidence in WPF. VS 2010 is the essential missing component that has been impeding the faster adoption of WPF. The fact that it is actually now written in WPF should reassure the doubters, and convince more developers to make the move from WinForms to WPF. In using WPF, the developers of Visual Studio have had the clout to fix some issues which had been bothering WPF developers for some time (such as blurred text). Do you see a brighter future as a result of transferring from WinForms to WPF? I'd love to know what you think. Cheers, Laila

    Read the article
