Search Results

Search found 8790 results on 352 pages for 'known hosts'.


  • SQLAuthority News – Why I am Going to Attend #SQLPASS Summit 2012 – Seattle

    - by pinaldave
    I am going to Seattle to once again attend SQLPASS this year. This will be my fourth SQLPASS. Lots of people ask me why I go to SQLPASS every year. Well, there are many different reasons for that. I go to SQLPASS because – I love it! Here are a few of the reasons I go to SQLPASS:
    Meet friends whom I have never met before
    Meet the community at large – it is fun to hang around with like-minded people
    Meet Rick Morelan – my book co-author and friend
    Attend various SQL parties – there are so many parties around – see the list below
    Explore new tools from various third-party vendors
    Meet fellow Chapter Leaders and Regional Mentors
    And of course attend SQL Server learning sessions from well-known industry experts
    The three-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees. If I were a consultant or vendor looking for better career opportunities, PASS Summit would be the perfect platform to meet potential customers and employers and show them my skills. Further, the breakfasts, lunches, and evening receptions, which are included with registration, provide even more networking opportunities. At PASS Summit, I not only gain new ideas but also draw inspiration from top professionals and experts. Learning new things about SQL Server, interacting with different kinds of professionals, and sharing issues and solutions will definitely improve my understanding and turn me into a better SQL Server professional who can leverage and optimize SQL Server to improve business. I am going – are you joining? Note: This is re-blogged with modification from my two-year-old blog posts on a similar subject. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one
    "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."
    'Shadow IT' can be the cloud's best friend | David Linthicum
    "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says Infoworld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?
    9 ways cloud will impact IT employment | ZDNet
    ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: new categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.
    Decisions, Decisions: The art, science, and politics of technology selection
    "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."
    Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center
    Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.
    Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci
    Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances.
    Thought for the Day
    "No one hates software more than software developers." — Jeff Atwood Source: SoftwareQuotes

    Read the article

  • VirtualBox 3.2 is released! A Red Letter Day?

    - by Fat Bloke
    Big news today! A new release of VirtualBox packed full of innovation and improvements. Over the next few weeks we'll take a closer look at some of these new features in a lot more depth, but today we'll whet your appetite with the headline descriptions. To start with, we should point out that this is the first Oracle-branded version, which makes today a real Red-letter day ;-)
    Oracle VM VirtualBox 3.2
    Version 3.2 moves VirtualBox forward in 3 main areas (handily, all beginning with "P"): performance, power and supported guest operating system platforms. Let's take a look:
    Performance
    New Latest Intel hardware support - Harnessing the latest in chip-level support for virtualization, VirtualBox 3.2 supports the new Intel Core i5 and i7 processors and the Intel Xeon processor 5600 Series, with support for Unrestricted Guest Execution bringing faster boot times for everything from Windows to Solaris guests;
    New Large Page support - Reducing the size and overhead of key system resources, Large Page support delivers increased performance by enabling faster lookups and shorter table creation times.
    New In-hypervisor Networking - Significant optimization of the networking subsystem has reduced context switching between guests and host, increasing network throughput by up to 25%.
    New Storage I/O subsystem - VirtualBox 3.2 offers a completely re-worked virtual disk subsystem which utilizes asynchronous I/O to achieve high performance whilst maintaining high data integrity;
    New Remote Video Acceleration - The unique built-in VirtualBox Remote Display Protocol (VRDP), which is primarily used in virtual desktop infrastructure deployments, has been enhanced to deliver video acceleration. This delivers a rich user experience coupled with reduced computational expense, which is vital when servers are running hundreds of virtual machines;
    Power
    New Page Fusion - Traditional Page Sharing techniques have suffered from long and expensive cache construction as pages are scrutinized as candidates for de-duplication. Taking a smarter approach, VirtualBox Page Fusion uses intelligence in the guest virtual machine to determine much more rapidly and accurately those pages which can be eliminated, thereby increasing the capacity or VM density of the system;
    New Memory Ballooning - Ballooning provides another method to increase VM density by allowing the memory of one guest to be recouped and made available to others;
    New Multiple Virtual Monitors - VirtualBox 3.2 now supports multi-headed virtual machines with up to 8 virtual monitors attached to a guest. Each virtual monitor can be a host window, or be mapped to the host's physical monitors;
    New Hot-plug CPUs - Modern operating systems such as Windows Server 2008 x64 Data Center Edition or the latest Linux server platforms allow CPUs to be dynamically inserted into a system to provide incremental computing power while the system is running. Version 3.2 introduces support for hot-plug vCPUs, allowing VirtualBox virtual machines to be given more power with zero downtime of the guest;
    New Virtual SAS Controller - VirtualBox 3.2 now offers a virtual SAS controller, enabling it to run the most demanding of high-end guests;
    New Online Snapshot Merging - Snapshots are powerful but can eat up disk space and need to be pruned from time to time. Historically, machines have needed to be turned off to delete or merge snapshots, but with VirtualBox 3.2 this operation can be done whilst the machines are running. This allows sophisticated system management with minimal interruption of operations;
    New OVF Enhancements - VirtualBox has supported the OVF standard for virtual machine portability for some time. Now with 3.2, VirtualBox-specific configuration data is also stored in the standard, allowing richer virtual machine definitions without compromising portability;
    New Guest Automation - The Guest Automation APIs allow host-based logic to drive operations in the guest;
    Platforms
    New USB Keyboard and Mouse - Support for more guests that require USB input devices;
    New Oracle Enterprise Linux 5.5 - Support for the latest version of Oracle's flagship Linux platform;
    New Ubuntu 10.04 ("Lucid Lynx") - Support for both the desktop and server versions of the popular Ubuntu Linux distribution;
    And as a man once said, "just one more thing" ...
    New Mac OS X (experimental) - On Apple hardware only, support for creating virtual machines that run Mac OS X.
    All in all this is a pretty powerful release packed full of innovation and speedups. So what are you waiting for? -FB

    Read the article

  • XAML RadControls Q1 2010 Official

    Q1 2010 release focuses on strengthening 3 main aspects of RadControls for Silverlight and RadControls for WPF:
    Ensuring first-class performance for all data-centric controls through various techniques
    Enhancing and polishing RadControls themes
    Providing highly advanced, enterprise-level features, especially for the data visualization controls
    We know that performance is crucial for line-of-business applications. Therefore, we always make sure that RadControls can help you achieve unmatched performance, and this has always been our number one priority. RadControls achieve unbeatable performance through UI and Data Virtualization, Data Sampling and built-in Load On Demand features. Several of the major controls in the bundles have been enhanced with UI virtualization support: Scheduler, CoverFlow and Book. As a part of Q1 2010 we also want to bring an unparalleled visual richness to your applications. To achieve that we have done a major rework of all our themes. We used a uniform templating approach across all controls, streamlined naming conventions for resources and delivered a much more consistent look of the controls along the way. The RadControls for WPF bundle has been enriched with two new controls: Map and Book. Another new control has been included in the Q1 2010 release; however, it continues to be in a CTP stage. This is the Transition control. We decided that this is the better way to proceed as we will need some more input from our community on how exactly to develop this control further. Therefore, we will be regularly blogging on the development progress so that we can clearly indicate the direction in which the control is evolving and gather your feedback on whether this is the best direction. Our charting controls for Silverlight and WPF have been advanced with major new features such as Data Sampling, Zooming and Scrolling, Automatic SmartLabels positioning, Sorting and Filtering and many more. The new built-in paging of the GridView control now allows you to page through your data, resulting in an even faster and more responsive grid that can easily handle enormously large datasets.

    Read the article

  • WebCenter Customer Spotlight: Fundação Petrobras de Seguridade Social

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Fundação Petrobras de Seguridade Social (PetroS) is a Brazilian nonprofit pension fund serving 152 private companies, with more than 145,000 plan participants and a portfolio of US$35.9 billion in equity. PetroS' business objective was to implement a robust and flexible online solution to enable the foundation to successfully compete in the pension marketplace. PetroS implemented a robust, flexible and highly available database solution based on Oracle Database 11g, Enterprise Edition and modernized the company's Web portal using Oracle WebCenter Suite. The solution enables PetroS to make an average of 5,000 online loans per month to its 110,000 qualifying participants and supports 2,000 online consultations daily.
    Company Profile
    Fundação Petrobras de Seguridade Social (Petrobras Social Security Foundation)—better known as PetroS—is a pension fund founded in the 1970s to pay supplementary retirement benefits to Petrobras employees. Later, the foundation expanded its market to include 152 private companies, including Sanasa, Repsol YPF and Alesat. The Foundation, a nonprofit organization, has more than 145,000 plan participants, US$35.9 billion in equity, and a loan portfolio of US$730 million.
    Business Challenges
    PetroS' business objectives were to implement a robust and flexible solution to enable the foundation to successfully compete in the pension marketplace without affecting its monthly payments to 60,000 beneficiaries, and to extend service delivery through the Web to better serve the fund's more than 145,000 participants.
    Solution Deployed
    PetroS worked with Oracle Consulting to implement a robust, flexible and highly available database solution based on Oracle Database 11g, Enterprise Edition and modernized the company's Web portal using Oracle WebCenter Suite, enabling PetroS to deliver more flexible services to pension plan participants.
    Business Results
    The solution enables PetroS to make an average of 5,000 online loans per month to its 110,000 qualifying participants and supports 2,000 online consultations daily.
    "The combination of Oracle Database 11g and Oracle WebCenter Suite has helped us deliver faster and better service for over 130,000 clients who participate in the 96 supplementary pension plans we currently offer. It has certainly helped to fuel our growth." – Newton Carneiro da Cunha, Diretor Administrativo, Fundação Petrobrás de Seguridade Social
    Additional Information: PetroS Customer Snapshot, Oracle WebCenter Suite, Oracle Database 11g Enterprise Edition, Oracle Real Application Clusters, Oracle Diagnostics Pack, Oracle Tuning Pack, Oracle Consulting

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up the workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.
    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:
    core is standalone
    graph depends on core (sub-module at ./libs/core)
    network depends on core (sub-module at ./libs/core)
    studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, if you do a recursive git submodule fetch, you get the following directory structure:
    studio/
    studio/libs/ (sub-module depth: 1)
    studio/libs/graph/
    studio/libs/graph/libs/ (sub-module depth: 2)
    studio/libs/graph/libs/core/
    studio/libs/network/
    studio/libs/network/libs/ (sub-module depth: 2)
    studio/libs/network/libs/core/
    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem because I'm building core twice and I potentially get two different versions of core.
    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?
    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:
    Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode.
    The resulting folder structure looks like:
    studio/
    studio/libs/ (sub-module depth: 1)
    studio/libs/core/
    studio/libs/graph/
    studio/libs/graph/libs/ (empty folder, sub-modules not fetched)
    studio/libs/network/
    studio/libs/network/libs/ (empty folder, sub-modules not fetched)
    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?

    Read the article

  • The Social Business Thought Leaders

    - by kellsey.ruppel
    Enterprise Gamification, Big Data, Social Support, Total Customer Experience, Pull Organizations, Social Business. Are these purely the latest buzzwords to enter the market or significant trends that companies should keep an eye on? Oracle recently sponsored and presented at the 5th Social Business Forum, one of the largest European events on the use of social media as a business tool and accelerator. Through the participation of dozens of practitioners, experts and customer success stories, the conference demonstrated how a perfect storm of technology, management and cultural change is pushing peer-to-peer conversations deep into business processes. It is clear that Social Business is serving as a new propellant of agility, efficiency and reactivity. According to Deloitte and MIT, what we have learned to call Social Business is considered important in the next 3 years by 86% of managers (see Social Business: What Are Companies Really Doing?, MIT Sloan Management Review and Deloitte). McKinsey further estimates the value that can be unlocked in terms of knowledge-worker productivity, consumer insights, product co-creation, improved sales, marketing and customer service at up to $1300B (see The social economy: Unlocking value and productivity through social technologies, McKinsey Global Institute). This impacts any industry, with the strongest effects seen in Media & Entertainment, Technology, Telcos and Education. For those not able to attend the Social Business Forum, and also for the many friends that joined us in Milan, we decided to keep the conversation going by extracting some golden nuggets from the perspective of five of the most well-known thought leaders in this space. Starting this week you will have the chance to view:
    John Hagel (Author of The Power of Pull and Co-Chairman, Center for the Edge at Deloitte & Touche)
    Christian Finn (Senior Director, WebCenter Evangelist at Oracle)
    Steve Denning (Author of The Radical Management and Independent Management Consulting Professional)
    Esteban Kolsky (Principal & Founder at ThinkJar)
    Ray Wang (Principal Analyst & CEO at Constellation Research)
    Stay tuned to hear:
    How pull organizations are addressing some of the deepest challenges impacting the market.
    How to integrate social into existing infrastructure and processes.
    How to apply radical management to become more agile and profitable.
    About the importance of gamification as an engagement lever.
    The first interview with John Hagel will be published tomorrow. Don't miss it and the entire series!

    Read the article

  • Apprentice Boot Camp in South Africa (Part 2)

    - by Tim Koekkoek
    By Maximilian Michel (DE), Jorge Garnacho (ES), Daniel Maull (UK), Adam Griffiths (UK), Guillermo De Las Nieves (ES), Catriona McGill (UK), Ed Dunlop (UK)
    Today we have the second part of the adventures of seven apprentices from all over Europe in South Africa!
    Kruger National Park & other experiences
    Going to the Kruger National Park was definitely an experience we will all remember for the rest of our lives. This trip, organised by Patrick Fitzgerald, owner of the Travellers Nest (where we all stayed), took us from the hustle and bustle of Joburg to experience what Africa is all about: the wild! Although the first week's training we had prior to this trip to the Kruger was going very well, we all knew this was to be a very nice break before we started the second week of training. And we were right: the animals, scenery and sights we saw were simply incredible and, like I said, something we will remember for the rest of our lives. To see lions, elephants, cheetahs, rhinos and many more in a zoo is one thing, but to see them in the wild, in their natural habitat, is very special, and I personally only realised this from the early 5 am start on the first morning in the Kruger, which was definitely worth it. Not only was it all about the safari; we ate some wonderful food, in particular on the Saturday night, when Patrick made us a traditional South African braai, which was one of my favourite meals of the whole two weeks. After the Kruger National Park we had a whole day of travelling back to Johannesburg, but even this was made to be a good day by our hosts. Despite the early start on the road, it was all worth it by the time we reached God's Window. The walk to the top was made a lot harder by all the steaks we had eaten in the first week, but the hard walk was worth it at the top, with views that stretched for miles.
    The Food
    The food in South Africa is typically meat, and in big amounts; while there we ate a lot of big beef steaks, ribs and kudu sausage. All of the meat we ate was usually cooked with a sauce such as a barbecue glaze. The restaurants we visited were:
    Upperdeck Restaurant, with live music and a great terrace to eat on; the atmosphere was good for enjoying the music and eating our food. Most of us ate spare ribs that weighed 600 kg, with barbecue sauce, which were delicious.
    Die Bosvelder Pub & Restaurant, a restaurant with a very surprising decor: the walls had many of South Africa's famous animals on them. The food was maybe the best we ate in South Africa. Our orders were Springbokvlakte Lambs' Neck Stew, beef in gravy, and steaks topped with cheese and then more meat on top! All meals were accompanied by a selection of cauliflower in white sauce, spinach and carrots.
    Pepper Chair Restaurant, where the speciality is 1.4 kg T-bone steaks, though most of us were happy to attempt the 1 kg. Cooked with barbecue sauce over the meat, it was very good! The only problem was their size, causing the meat to get cold if you did not eat it very fast! We're all waiting for our 1.0 kg T-bone steak, including our Senior Director EMEA Systems Support Germany & Switzerland, Werner Hoellrigl.
    The Godfather Restaurant, where the food was more meat in abundance. We ate great ribs, hamburgers and steaks, all accompanied with small plates of carrots and sautéed spinach. Very good.
    We had two great weeks in South Africa! If you want to join Oracle, then check http://campus.oracle.com

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product at Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts like configuration, tests or monitoring data in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of "TeleSharp". In a nutshell, TeleSharp provides the following features:
    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can tell that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them to the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types remotely in your application servers in a web browser, using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications.
    4. Centralize logging and exception management into the repository. You get different reports and a pivot-viewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.
    The central repository itself is implemented as a set of OData services that any application can easily consume using regular Http. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out on the registration process. A simpler approach to lessening email entry errors is to have the user enter their email address twice, just like how most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice, but instead enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox will be copied into the second. Using a bit of JavaScript it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!
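    The idea is straightforward even without jQuery. Below is a minimal TypeScript/DOM sketch of the approach (not the article's actual code, which uses jQuery; the element id "emailConfirm" is an assumption made for illustration):

    ```typescript
    // Block pasting (and dropping) text into the "confirm email" textbox,
    // so the user has to type the address a second time.
    const confirmBox = document.getElementById("emailConfirm") as HTMLInputElement | null;

    if (confirmBox) {
      // Cancelling the events means copied text never lands in the field.
      confirmBox.addEventListener("paste", (e: ClipboardEvent) => e.preventDefault());
      confirmBox.addEventListener("drop", (e: DragEvent) => e.preventDefault());
    }
    ```

    A determined user can still work around this (for example by disabling JavaScript), which is why the technique prevents most, not all, copy-and-paste.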

    Read the article

  • BizTalk 2010 Certification Exam

    - by Paul Petrov
    I took a shot at the new (to me) certification exam for BizTalk 2010. I was able to pass it without any preparation, just based on experience. That does not mean this exam is a very simple one. Compared to the previous one (2006 R2) it covers some new areas (like WCF) and has some demanding questions and situations to think about. But the most challenging factor is the broad feature coverage. Overall, my impression is that if BizTalk continues to grow in scope it's better to create separate exams for core functionality and extended features (like EDI, RFID, LOB adapters), because it's really hard to cover the vast array of BizTalk capabilities. As far as required knowledge and question allocation go, I think Microsoft's description is on target. There were definitely more questions on deployment, configuration and administration aspects compared to the previous exam. WCF and WCF-based adapters now play a big role and this topic was covered well too. Extended functionality is claimed at 13% of the exam; I felt there were plenty of RFID questions but not many on EDI, which is why I thought it would be useful to split the exam into two to cover all of them equally. BRE is still there, and good, because it's usually not a very known/loved feature of the package. At the end, for those who plan to get certified, my advice would be to know all of these areas of BizTalk for guaranteed passing: messaging and orchestrations, core adapters, routing, patterns; development of all artifacts and orchestrations; debugging and exception handling; packaging, deployment, tracking and administration; WCF bindings and adapters; BAM, BRE, RFID, EDI, etc. You may get by not knowing one smaller non-essential part (like I did with RFID, for example). In such a case you'd better know all the other areas very well to cover for the weak spot. If there is more than one gap in your knowledge, it's a good idea to study and prepare: MSDN, blogs, virtual labs and a good VM to play with can help when experience is not enough. So best wishes and good skill to you in passing this certification!

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword
    This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only.
    Problem
    Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:
    If any of the end points are outside the polygon, they are not in line of sight.
    If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.
    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks if the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well known solution to this problem?
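    The building block for the tests described above is a tolerant segment-intersection primitive. Here is a minimal TypeScript sketch of that piece (an illustration of the general approach, not a complete line-of-sight solution; EPS, the point type and the function names are my own):

    ```typescript
    type Pt = { x: number; y: number };

    const EPS = 1e-9; // tolerance for floating point comparisons

    // Twice the signed area of triangle abc: > 0 left turn, < 0 right turn, ~0 collinear.
    function cross(a: Pt, b: Pt, c: Pt): number {
      return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True if p lies on segment ab, within tolerance.
    function onSegment(a: Pt, b: Pt, p: Pt): boolean {
      return Math.abs(cross(a, b, p)) <= EPS &&
             p.x >= Math.min(a.x, b.x) - EPS && p.x <= Math.max(a.x, b.x) + EPS &&
             p.y >= Math.min(a.y, b.y) - EPS && p.y <= Math.max(a.y, b.y) + EPS;
    }

    // True if segments ab and cd cross at a point interior to both ("proper" crossing).
    function properCrossing(a: Pt, b: Pt, c: Pt, d: Pt): boolean {
      const d1 = cross(c, d, a), d2 = cross(c, d, b);
      const d3 = cross(a, b, c), d4 = cross(a, b, d);
      return ((d1 > EPS && d2 < -EPS) || (d1 < -EPS && d2 > EPS)) &&
             ((d3 > EPS && d4 < -EPS) || (d3 < -EPS && d4 > EPS));
    }
    ```

    Rejecting A→B whenever it properly crosses an edge covers the easy cases; the hard cases in the question (segments that merely touch a vertex, or run collinear with an edge) show up as onSegment hits with no proper crossing, and still need the extra midpoint-inside or adjacency checks discussed above.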

    Read the article

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE, used also by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB you can edit it. So you can drop your tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like ctrl-s save the OWB objects, but if you edit a file, closing the file will ask if you want to save it – check it out. Now we enter the realm of 'he who dares'…. Note the IDE doesn't know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension… .java is Java, .html is HTML, etc. With OWB, the OMB scripts are tcl, and we usually have the .tcl extension on these files. One of the things we can do to trick up the syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting. See the illustration here where, side by side, we see the file with a .java extension and a .tcl extension. Not ideal pretending to be .java, but it gets us something more useful than Notepad. We can then change the syntax highlighting such that we get Eclipse-like highlighting within the IDE from the Tools Preferences option. You then get the Eclipse-like rendering, albeit using a little tweak on the file names… Might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • Shelving – What is it – and more importantly, can it help me?

    - by Chris Skardon
    Since we shifted to TFS we've had the ability to perform what is known as 'shelving'. Shelving (whilst not a wholly new topic in the world of SCC) is new to us, and didn't exist in our previous SCC solution – SVN. Soo… what is it?
    What?
    Shelving is a way to check in but not check in your code. By shelving you submit a copy of your 'pending changes' to the SCC server (which maintains a list of the shelvesets), and once that is done you can either continue working or undo your changes, safe in the knowledge that a backup copy exists on the server. You can unshelve your code at any time and get back to the state you were in when you shelved.
    Yer, that is great but why not just check it in??
    Shelvesets don't have to build. The shelveset you put in there could be entirely broken, or it might solve every bug in the system – shelves aren't continuously integrated, so you can shelve anything.
    Hmmmm… What else?
    Shelving allows us to do some pretty cool stuff that beforehand was quite frankly a pain. For instance, gated check-ins are implemented via the shelving mechanism: when code is checked in, what you're actually doing is shelving it; the Build Controller will build the shelveset with the original code and if it succeeds, the code will be committed; if it fails – well – it's only you that has to fix the code :) Other nice features are things like the ability to share code you are working on… For example, if I was having trouble with a particular piece of code, I could shelve it, and then you (yes you) could get that shelveset and check out the problem for yourself. And if you fix it?? Well – you could check it in!
    Nice, but day-to-day shizzle?
    Let's say you've been working on your project and your project manager comes over to you and says: "Hey, errr, bad times, there is an urgent bug we need you to fix, it needs to go out now!" (for this to play out, we'll also need to assume you're currently working in the 'release' branch for another bug fix (maybe))… You could undo all your current changes (obviously you'll probably back up your code using zip or something, I imagine), fix the bug, then re-copy your backup over the top; or you could shelve and unshelve. Perhaps some other uses will awaken the shelver in you… :) Before each check-in – if you shelve, you no longer need to worry (if indeed you do) about resolving conflicts and mysteriously losing your code… Going home at night? Not checking in straight away? Why not shelve; this way – should the worst come to the worst and your local PC gives up – you can just get the shelveset onto another machine and be up and running in literally minutes…

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.
    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.
    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.
    The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output, and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this… every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device. How about this as well – in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally, it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, this utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)
    To clarify:
    * Change the script,
    * Build it using Ant,
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed - takes about 30 seconds.
    * BlackBerry device restarts - takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.
    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator. Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combining that with no worthy debugging tools turns the toolset into a joke.
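    For what "instrumenting with alert calls" ends up looking like in practice, here is a small TypeScript-flavoured sketch (purely illustrative, not RIM's or Cordova's API): a console.log stand-in that buffers messages and only shows them on demand, so you are not clicking through one alert per log line.

    ```typescript
    // Crude logging shim for environments with no console output:
    // buffer messages in memory and dump them in a single alert when asked.
    const logBuffer: string[] = [];

    function log(msg: string): void {
      logBuffer.push(`${new Date().toISOString()} ${msg}`);
    }

    function dumpLog(): void {
      // One alert for the whole session instead of one per message.
      alert(logBuffer.join("\n") || "(no log entries)");
    }

    // Usage: call log() anywhere, and wire dumpLog() to a debug button in the app.
    log("app started");
    ```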

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director, I'm especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.
    Virtual Chapters
    In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their "Summer Performance Palooza", an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment into the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.
    24 Hours of PASS
    The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now. And we will be using the GoToWebinar platform for 24HOP also.
    Business Analytics Conference
    April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world's growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I'm very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.
    Global SQL Community
    Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.
    Budgets
    The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I'm satisfied with the decisions we made and think we are investing in the right activities and programs.
    Next Up
    The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. If you are considering running for the Board, you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • Commerce, Anyway You Want It

    - by David Dorf
    I believe our industry is finally starting to realize the importance of letting consumers determine how, when, and where to interact with retailers.  Over the last few months I've seen several articles discussing the importance of removing the barriers between existing channels. Paula Rosenblum of RSR first brought the term omni-channel to my attention back in September. She stated, "omni-channel retail isn’t the merging of channels – rather, it’s the use of all possible channels (present and future) to enhance the customer experience in a profitable way." I added to her thoughts in this blog posting in which I said, "For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways." More recently Brian Walker of Gartner suggested we stop using the term multi-channel and begin thinking more about consumer touch-points. "It is time for organizations to leave their channel-oriented ways behind, and enter the era of agile commerce--optimizing their people, processes and technology to serve today's empowered, ever-connected customers across this rapidly evolving set of customer touch points." Now Jason Goldberg, better known as RetailGeek, says we should start breaking down the channel silos by re-casting the VP of E-Commerce as the VP of Digital Marketing, and change his/her focus to driving sales across all channels using digital media. This logic is based on the fact that consumers switch between channels, or touch-points as Brian prefers, as part of their larger buying process. Today's smart consumer leverages the Web, mobile, and stores to provide the best shopping experience, so retailers need to make this easier. Regardless of what we call it, the key take-away is that "multi-channel" is not only an antiquated term but also an idea whose time has passed.  Today, retailers must look at e-commerce, m-commerce, f-commerce, catalogs, and traditional store sales collectively and through the consumers' eyes.  The goal is not to drive sales through each channel but rather to just drive sales -- using whatever method the customer prefers.  There really should be just one cart.

    Read the article

  • VS2010 Launch Presentations

    Last week I was in Vegas to present at the DevConnections / VS2010 Launch event.  The show was well-attended and everybody I spoke to agreed it was educational and enjoyable.  My three talks were all on Wednesday, 14 April 2010, including one at 8am for which I was impressed to see a large turnout in attendance.
    Pragmatic ASP.NET Tips, Tricks, and Tools
    My first session was on tips, tricks, and tools for ASP.NET developers.  This is a talk I've given in past years, but which I refine every time.  I usually like to have a full session to devote to tools, and a separate talk just for tips and tricks, but for this show I was only given the one 75-minute slot, so I had to cut some materials to make things fit.  The talk went well, all the demos worked, and the attendees seemed to enjoy it, and I like giving it, so hopefully I can continue to present on this topic at future DevConnections shows. Download the ASP.NET Tips, Tricks, and Tools slides and demos.
    What's New in ASP.NET MVC 2
    My second talk of the day followed immediately after the Tips and Tricks talk, and was a brand new talk for me.  I have to throw out a thank-you to Phil for letting me see his MIX slide deck before he gave his talk, as that was a big help.  The official what's-new document online is also worth checking out if you're interested in this subject.  Download the What's New in ASP.NET MVC 2 slides and demos.
    SOLIDify Your ASP.NET MVC 2 Application
    Just because you're using ASP.NET MVC doesn't mean your code can't still end up being a big ball of mud.  This session describes a number of principles of software design that can help ensure applications remain loosely coupled and malleable even as they age and increase in features and complexity.  This was my last talk of the day and did have one minor demo failure involving a database constraint.  I've given this talk many times before, and in this case I had to fit it into a 60-minute timeslot, so I'm not sure I had quite enough time to drive home all of the concepts to everyone in the audience.  That said, I did hear a number of positive comments on how the talk went, so that's encouraging. Download the SOLIDify Your ASP.NET MVC 2 Application slides and demos.
    In my sessions, I promised to have these posted by the end of the weekend – they're going up at 10pm Sunday night (my time), 2 hours to spare!  Enjoy!

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps
    In my previous post, I talked about the value of the mobile revolution on businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader’s viewpoint. The IT leader has different concerns – around privacy, potential liability of information leakage, and intellectual property protection. These concerns and the leader’s goals create a healthy tension with the users. For example, effective device management becomes a must have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. Whereas, if you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service and I predict it's going to be key for our customers.
    A New Focus for IT Departments
    So where does that leave the IT departments? I think their futures are in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the “now” topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile you need to remove those kinds of barriers. But I don’t think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications in a sense that they’re making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect themselves in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations so users can be connected and have access to information anywhere anytime.
    More Ideas and Better Service
    What’s more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best practices, lowest-common denominator-like capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and delivering greater value to the line of business managers. IT will actually help to differentiate. Net result is a more agile workforce and business because each department is getting work done its own way.

    Read the article

  • Best Creational Pattern for loggers in a multi-threaded system?

    - by Dipan Mehta
    This is a follow-up question to my past question: Concurrency pattern of logger in multithreaded application. As suggested by others, I am putting this question separately. The learning from the last question: in a multi-threaded environment, the logger should be made thread safe and probably asynchronous (wherein messages are queued while a background thread does the writing, releasing the requesting object's thread). The logger could be a singleton, or it could be a per-group logger, which is a generalization of the above. Now, the question that arises is how the logger should be assigned to an object. There are two options I can think of:
    1. Object requesting the logger: Should each object call some global API such as get_logger()? Such an API returns "the" singleton or the group logger. However, I feel this involves assumptions about the application environment in order to implement the logger – which I think is a kind of coupling. If the same object needs to be used by another application, that new application also needs to implement such a method.
    2. Assign the logger through some known API: The other alternative is to create a kind of virtual class which is implemented by the application based on the app's own structure, and assign the logger to the object sometime in the constructor. This is the more generalized method. Unfortunately, when there are so many objects – and rather a tree of objects – passing the logger objects down to each level is quite messy.
    My question is: is there a better way to do this? If you had to pick one of the above, which approach would you pick and why? Other questions remain open about how to configure them:
    How are objects' names or IDs assigned so that they can be used for printing in the log messages (as the module names)?
    How do these objects find the appropriate properties (such as log levels and other such parameters)?
    In the first approach, the central API needs to deal with all this variety. In the second approach there needs to be additional work. Hence, I want to understand from the real experience of people how to write a logger effectively in such an environment.
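    To make the two options concrete, here is a minimal TypeScript sketch of option 2 (the logger handed in through a known interface), with an in-process queue standing in for the background writer thread described above; all the names here (Logger, AsyncLogger, NetworkComponent) are illustrative, not an established API:

    ```typescript
    interface Logger {
      log(level: "debug" | "info" | "error", moduleName: string, msg: string): void;
    }

    // Queue-based logger: callers only pay for an enqueue; writing happens later.
    class AsyncLogger implements Logger {
      private queue: string[] = [];
      private flushing = false;

      log(level: "debug" | "info" | "error", moduleName: string, msg: string): void {
        this.queue.push(`[${level}] ${moduleName}: ${msg}`);
        if (!this.flushing) {
          this.flushing = true;
          setTimeout(() => this.flush(), 0); // deferred flush, standing in for the writer thread
        }
      }

      private flush(): void {
        for (const line of this.queue) console.log(line);
        this.queue = [];
        this.flushing = false;
      }
    }

    // A component receives its logger (and its module name) at construction time,
    // so it stays decoupled from how the application wires logging up.
    class NetworkComponent {
      constructor(private logger: Logger, private name = "network") {}
      connect(host: string): void {
        this.logger.log("info", this.name, `connecting to ${host}`);
      }
    }

    const shared = new AsyncLogger();            // could be a per-group instance instead
    new NetworkComponent(shared).connect("example.org");
    ```

    Option 1 would replace the constructor argument with a global getLogger() lookup, which means less wiring but couples every component to that one application-specific entry point – exactly the trade-off the question is weighing.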

    Read the article

  • Maximum Availability with Oracle GoldenGate

    - by Irem Radzik
    Oracle Database offers a variety of built-in and optional products for maximum availability, and it is well known for its robust high availability and disaster recovery solutions. With its heterogeneous, real-time, transactional data movement capabilities, Oracle GoldenGate is a key part of the Maximum Availability Architecture for Oracle Database. This week, on Thursday, Dec. 13th, we will present a live webcast on how Oracle GoldenGate fits into the Oracle Database Maximum Availability Architecture (MAA). Joe Meeks from the Oracle Database High Availability team will discuss how Oracle GoldenGate complements other key products within MAA, such as Active Data Guard. Nick Wagner from the GoldenGate PM team will present how to upgrade to the latest Oracle Database release without any downtime. Nick will also cover two new features of Oracle GoldenGate 11gR2: Integrated Capture for Oracle Database and Automated Conflict Detection and Resolution. He will provide an in-depth review of these new features with examples. Oracle GoldenGate also offers maximum availability for non-Oracle databases, such as HP NonStop, SQL Server, DB2 (LUW, iSeries, or zSeries), and more. The same robust, reliable, real-time, bidirectional data movement capabilities apply to all supported databases. I'd like to invite you to join us on Thursday, Dec. 13th at 10am PT/1pm ET to hear from the product experts on how to use GoldenGate to maximize database availability and to ask your questions. You can find the registration link below. Webcast: Maximum Availability with Oracle GoldenGate, Thursday Dec. 13th, 10am PT/1pm ET. We look forward to another great webcast with lots of interaction with the audience.

    Read the article

  • Partners, Start your Engines

    - by Kristin Rose
    Hello speed racer - OPN here to inform you that, in case you missed it, our ISV Focused PartnerCast took place last week in Oracle’s Redwood Shores studio. Without a roadblock in sight, Oracle’s key drivers discussed topics like Optimized and Oracle’s Cloud offerings; even OPN partner MSC took it to the next level by stopping by to share with us their first-hand experience with the Oracle Exastack Optimized program. By stepping into Oracle’s “Motor Speedway”, better known as the Exastack lab, MSC was able to fine-tune, test, and optimize their application on Oracle Exadata, as well as gain outstanding expertise in several technical areas such as optimizing multithreaded applications and database tuning. By optimizing their solution, MSC has “decreased their deployment time and saw a 30 percent performance improvement in database.” Sounds like someone’s gearing up for an “Oracle Indy 500.” By achieving Oracle Exadata Optimized status, MSC is putting performance in the driver’s seat, and their customers at the front of the race, by delivering a solution that is tuned for performance, scalability, and reliability. So go ahead and let the Oracle Exastack Optimized pit crew take you to the finish line. Learn how to go from 0-60 by watching MSC’s segment of the ISV Focused PartnerCast below. Ready… Set… Optimize! The OPN Communications Team

    Read the article

  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple monitor support. Plus there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input. Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned: if you have multiple developers working on the application, the project file formats in VS2010 and earlier versions of Visual Studio differ.) This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt: The Visual Studio 2010 user experience is noticeably different from previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on projects that target different versions of the .NET Framework. This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work. Read on to learn more! And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links: One-click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2; Eight Quick Hit videos showing some of the cool new VS2010 features; VS2010 and ASP.NET 4.0 Release Announcement with some great info/links from none other than Scott Guthrie. Happy Programming!

    Read the article

  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu. My problem My native language is pt-BR and my notebook has an American keyboard. Thus, I use ' and " followed by keys like u and c to achieve things like ú, ç and ü. It all works well. However, in the case of apostrophes and quotes, that creates a problem when I use ' followed by: letters that won't accept the acute accent (´, ACUTE ACCENT, U+00B4) at all, like t; and letters that won't accept the acute in pt-BR, like r. In 1, ' with t does absolutely nothing. In 2, ' with r creates ŕ (LATIN SMALL LETTER R WITH ACUTE, U+0155), which is used, afaik, for some eastern European languages like Slovak. It isn't used in Portuguese, nor are other acute-accented consonants; Portuguese does not take diacritics on consonants at all. Question That said, is there a way to better support the Brazilian Portuguese language using an American keyboard? It is very common here --- I actually prefer the American keyboard to our own, known as “ABNT”. Desired solution I'd like to deactivate unused diacritics, so case 2 would behave just like case 1. Additionally, if possible, I'd like case 1 to behave like it does under Windows. As an example, typing ' followed by t should write 't (an apostrophe plus the letter) instead of doing nothing. About 2, in my humble opinion, doing nothing is counterproductive. I realize the behavior is reasonable according to logic ("there is no t-acute, so tell the computer to typeset an apostrophe --- ', SPACE --- instead of the acute"). But from a human, practical point of view, I think it makes more sense (to me, at least). Additional comments I believe this also applies to Spanish, French, Italian and other western European Latin languages. On the console (Ctrl+Alt+F?), case 1 is not a problem. I don't need to press space, as apostrophes are automatically added. However, there I'm unable to access the cedilla (ç). Two completely different behaviors. If it's just a matter of customizing text config files (possibly creating a custom layout or whatever), I'd be glad to share my efforts. I just need guidance on the "how-to" part. Somehow Google only points me to the "enough" people (those who cope with the situation and think that it works "well enough"). And since I have definitively migrated to Linux/Xubuntu after years, I'd like to leave it just as I like it (and I'm sure others would as well). For example, if there is some kind of scripting or definition to tell the computer to do what I described, so be it.
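
    One possible way to get the "dead acute plus t gives 't" behaviour, offered as a sketch under assumptions rather than a verified Xubuntu recipe, is a per-user compose table. The mappings below are illustrative; depending on the toolkit, you may also need to export GTK_IM_MODULE=xim (and QT_IM_MODULE=xim) in your session before applications honour ~/.XCompose.

        # ~/.XCompose -- start from the locale's default compose table
        include "%L"

        # Make dead acute followed by letters that take no acute in pt-BR
        # produce an apostrophe plus the letter, instead of doing nothing
        # (or producing an unwanted accented consonant such as r-acute).
        <dead_acute> <t> : "'t"
        <dead_acute> <r> : "'r"
        <dead_acute> <s> : "'s"

    With something like this in place, case 2 should behave like case 1, and case 1 should behave like the Windows behaviour described above, while ' followed by u or c should still produce ú and ç from the locale's default table.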

    Read the article

  • Issue 56 - Super Stylesheets Skinning in DotNetNuke 5

    May 2010. Welcome to Issue 56 of DNN Creative Magazine. In this issue we show you how to use the powerful new Super Stylesheets skinning feature in DotNetNuke 5. Super Stylesheets are ideal for both beginner and experienced skin designers; they provide skin layouts using CSS. The advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. They are also very quick to build, and you can change a skin layout in a matter of minutes rather than hours. We show you how to build a skin from the very beginning using Super Stylesheets and how to create various skin layouts, as well as multi-layouts. We also show you how to style the skin and how to add tokens such as the logo, menu, and login links, and we walk you through how to create a fully working skin from scratch. Following this, we continue the Open Web Studio tutorials; this month we demonstrate how to create an installable DotNetNuke PA module using OWS. This is an essential technique which allows you to package up the OWS applications you have created and build them into an installable zip package. The zip file is then installable as a standard DotNetNuke module, which means you can easily install your OWS applications on other DotNetNuke installations. To finish, we have part six of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to create a news carousel using RAD, jQuery, and the JCarousel plugin. This issue comes complete with 15 videos. Skinning: Super Stylesheets Skinning in DotNetNuke 5 - DNN Layouts (12 videos - 98mins); Module Development Series: How to Create an Installable DotNetNuke PA Module Using OWS (3 videos - 23mins); How to Implement a News Carousel Using DotNetMushroom RAD and jQuery. View issue 56 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources, and web design tips for working with DotNetNuke. In 56 issues we have created 578 videos!

    Read the article

< Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >